I use Arch in WSL BTW. This is not a joke, it's actually quite nice.
It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to code a script to count it. Like, it's actually possible for an AI to get this answer consistently correct these days.
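Something like this is all it would take (a minimal sketch; I'm assuming the task is the usual letter-counting question that pure LLMs famously fumble):

```python
# Hypothetical example of the kind of counting script meant above —
# assuming the question is something like "how many r's in strawberry".
text = "strawberry"
print(text.count("r"))  # computed exactly (3) instead of guessed token-by-token
```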
My bad, it's Microsoft that keeps changing its recommendations; I had it in my mind it was bad for some reason.
ICANN can pry “.local” from my cold dead hands!
The way I understand it, the users didn't necessarily realize McAfee was responsible, just that a bunch of SQLite files appeared in temp, so they might not connect the dots here anyway. Or even know McAfee is installed, considering its shady practices.
I do think we're machines; I said so previously. I don't think there is much more to it than physical attributes, but those attributes let us have this discussion. That's remarkable in its own right, and I don't see why it needs to be more. But again, all personal opinion.
I read this question a couple of times, initially assuming bad faith, and even considered ignoring it. The ability to change would be my answer, but I don't know what you actually mean.
Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense… that's a whole other kettle of fish). I'd consider all “thinking things” to be machines, but if a machine always responds to input in the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving an input that can affect its future responses, then it has potential for sentience. LLMs can certainly do continuous learning, which may give the impression of sentience (whispers: which we are longing to find and want to believe in, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.
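To make that distinction concrete, here's a toy sketch (hypothetical classes, purely illustrative, not a claim about how any real system works):

```python
# Toy illustration of the frozen/adaptive distinction above.
# One responder is an unchanging artifact; the other is permanently
# altered by every input it receives.

class FrozenResponder:
    """Same input, same output, forever — an artifact, in the terms above."""
    def respond(self, prompt: str) -> str:
        return f"echo: {prompt}"

class AdaptiveResponder:
    """Each input leaves an irreversible trace that shapes future responses."""
    def __init__(self):
        self.history = []
    def respond(self, prompt: str) -> str:
        self.history.append(prompt)  # irreversible change of internal state
        return f"echo ({len(self.history)} inputs so far): {prompt}"
```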
I’m still working on this definition, again just a personal viewpoint.
I think I see where you're coming from. The computer in the comic is a Rule 110 automaton, known to be Turing complete. It can perform complex calculations, allegedly.
I suppose it can get a bit philosophical whether an incomplete time instant is even visible from inside a simulation, because nothing moves mid-update until the full frame is complete, which limits perception.
Unless you mean continuity as in non-discrete physics, which is fair play for this specific computer, but then there is the Planck length to consider. (Edit: I am aware that discrete vs. continuous is a whole holy war on its own.)
He bases the next row of stones on the previous one, changing them by a consistent rule? It's an unorthodox computer with infinite memory. Why does that not count as a simulation? I'm not following.
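That “consistent rule” is Rule 110 from upthread; here's a minimal sketch of how little machinery it takes (variable names are mine, and the wrap-around boundary is my simplification of the comic's infinite row):

```python
# A minimal Rule 110 cellular automaton: each new row is computed from
# the previous one by the same fixed local rule — exactly the
# "next row of stones from the previous row" scheme.

RULE = 110  # bit i of this number is the output for neighborhood pattern i

def step(cells):
    """Compute the next row from the current one (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```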
It's a thing. https://en.m.wikipedia.org/wiki/Busy_waiting
Not an answer to the question, but in case performance is the goal, Torchaudio has it here.
Ah, even then it could just be a consequence of training samples usually being chronological (most often the expected resolution for conflicting instructions is “whatever you heard last”, with some exceptions when explicitly stated), so the model learns to think that way. I did find the pattern also applies to GPT trained on long articles, where you'd expect it not to, so I wanted to explain why that might be.
Or, to explain better: most training samples will be cut off at the top, so the network sort of learns to ignore that region a bit.
Yes, that's by design: the networks work on one transcript per input, and it does genuinely get cut off eventually. Usually an entire older line is purged when the token count exceeds the limit.
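Roughly this kind of logic (a sketch of the behavior as I described it, not any vendor's actual code; the whitespace “tokenizer” is a stand-in):

```python
# Sketch of whole-line transcript truncation: when the transcript
# exceeds the token budget, entire lines are dropped from the top,
# oldest first, until it fits.

def truncate_transcript(lines, max_tokens, count_tokens=lambda s: len(s.split())):
    """Drop whole lines from the start until the transcript fits the budget."""
    total = sum(count_tokens(line) for line in lines)
    while lines and total > max_tokens:
        total -= count_tokens(lines[0])
        lines = lines[1:]  # purge the oldest line entirely
    return lines

# Example: a 10-"token" budget forces the oldest line out first.
chat = ["user: hello there", "bot: hi", "user: tell me a long story please"]
print(truncate_transcript(chat, max_tokens=10))
```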
I was a curious child, and things spiralled out of control from there…
Ah, that makes sense. Most cloud providers have the whole nine yards with online hardware provisioning and imaging; I forgot you could still just rent a real machine.
Hmm, I wonder if there was some reason they didn't just extract the original certificates from the VPS, if it was actually the hosting provider. I mean, even with mitigations, they should be sitting in a temp folder somewhere; surely they could? Issuing new ones seems like a surefire way to alert the operators, unless they already used Let's Encrypt, of course.
Surprisingly, just setting the systemd flag in the WSL settings worked, though for a long time I simply didn't use systemd.
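For anyone searching later, the flag in question (assuming the standard per-distro config file rather than some GUI toggle) lives in /etc/wsl.conf inside the distro:

```
# /etc/wsl.conf
[boot]
systemd=true
```

Then run `wsl --shutdown` from Windows and reopen the distro for it to take effect.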