

It is not a leading question. The answer just happens to be meaningless.
Asking whether something is good accounts for the vast majority of human concern. Most of our rational activity is fundamentally evaluative.


Behaviorally, analog systems are not substrate dependent.
This is partly true, as I already explained at length, since the behavior of any system can be crudely modeled. It’s how LLMs work! But it’s also a non-sequitur.
Modeling what a system can do and doing what a system can do are not the same.


I explicitly explained that you can model an analog machine using a digital computer. When you make a topological map of a weather system (or a brain) or take a digital picture of a flower, you are generating a model. This is the subject of the articles you linked me.
No matter how accurate your digital model of a weather system, however, it will never produce rain. The byproduct of Turing machines (digital models) is strictly discrete.
You can model digital computers using analog computers. And the reverse is also possible. But digital systems are substrate-independent, whereas analog systems are substrate-dependent. They’re fundamentally inextricable from the stuff of which they’re made.
On the other hand, digital models aren’t made of stuff. They’re abstract. You can certainly instantiate a digital model within a physical substrate (silicon chips), the way you can print a picture of an engine on a piece of paper, but it won’t produce torque like an actual engine let alone rain like an actual weather system.
On a separate note, you reallllly need to acquaint yourself with Complexity Theory, if you actually believe our models will ever be anything other than decent estimates.
To learn more, please take a Theoretical Computer Science course.
Irreducibility isn’t a part of physics
Correct. It’s theoretical computer science. Again, analog systems are irreducible to digital ones by definition. They can only be modeled (functionally and crudely).
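A concrete illustration of why digital models of continuous systems stay estimates (the logistic map here is a generic stand-in for a chaotic system, not anything from the linked articles): two simulations whose starting points differ by one part in a billion diverge completely within a few dozen steps.

```python
# Crude digital model of a continuous process: the logistic map in its
# chaotic regime (r = 3.9). Two runs that start one part in a billion
# apart diverge completely within a few dozen iterations, which is why
# digital models of analog systems remain estimates rather than copies.
def trajectory(x0, steps=60, r=3.9):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)
max_gap = max(abs(p - q) for p, q in zip(a, b))
print(max_gap)  # large despite the vanishingly small initial difference
```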



Biological neurons are actually more digital than artificial neural nets are.
There are, broadly speaking, three types of computers: digital, analog, and quantum.
Digital means reducible to a Turing machine. Analog, which includes things like flowers and cats, means irreducible by definition. (Otherwise, they would be digital.)
Brains are analog computers (maybe with some quantum components we don’t understand).
Making a mathematical model of an analog computer is like taking a digital picture of a flower. That picture is not the same as the flower. It won’t work the same way. It will not produce nectar, for instance, or perform photosynthesis.
Everything about how a neuron works is completely undigitizable. There’s integration at the axon hillock; there are gooey vesicles full of neurotransmitters whose expression is chemically mediated, dumped into a synaptic cleft of constantly varying width and Brownian motion to activate receptors whose binding affinity isn’t even consistent. The best we can do is build mathematical models that sort of predict what happens next on average.
These crude neural maps are not themselves engaged in brain activity — the map is not the territory.
Idk where you got the idea that neurons can be digitized, but someone lied to you.
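For what it’s worth, here is the kind of crude model being described: a leaky integrate-and-fire neuron, the standard textbook digitization. All parameters below are made up for illustration; the point is what the model leaves out (vesicles, cleft geometry, receptor binding), not what it captures.

```python
# Leaky integrate-and-fire: the textbook digital caricature of a neuron.
# It reduces all the wet machinery above to one membrane variable that
# integrates input current and "spikes" at a fixed threshold.
def lif_spikes(current, dt=1.0, tau=10.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # Euler step of dv/dt = (-(v - v_rest) + i_in) / tau
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:   # a threshold crossing stands in for a spike
            spikes.append(t)
            v = v_reset
    return spikes

# Constant input yields perfectly periodic spikes -- a regularity no
# biological neuron exhibits.
print(lif_spikes([1.5] * 100))
```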


isn’t an AI replacing artists evidence it has an experience
I can only speak about the literary world, and I was quite sanguine about ChatGPT in the early days, before I learned about how LLMs actually work. Having experimented with these tools extensively, I am certain that not a single page of good fiction has ever been produced by these statistical models. Their banality is almost uncanny — unless you know how they work, in which case it makes sense.
Now to be fair, fewer than 1 in 100 people can write fiction well, and fewer than 1 in 10,000 can do it at a level I’d consider “art” (as opposed to amateur dabbling).
LLMs are limited by the mathematics of their design. They’re just tracking weighted averages about what word comes next. That’s why they’re so good at corpospeak and technical writing, and so utterly worthless and cringey at writing fiction (or “art”).
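To make “tracking weighted averages about what word comes next” concrete, here’s a toy bigram counter. Production LLMs use transformers over subword tokens, not bigram tables, but the learned object is still a conditional distribution over next tokens; the corpus below is invented.

```python
from collections import Counter, defaultdict

# Toy next-word model: tally which word follows which, then predict the
# most frequent successor. The principle -- choose the statistically
# typical continuation -- is exactly what makes such systems fluent at
# boilerplate and flat at fiction.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- the most frequent follower of "the"
```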
If a collection of cells can be creative, then an extremely large mathematical system embodied in a GPU could also, potentially, be creative.
Sure. And a hundred monkeys with typewriters could reproduce the works of Shakespeare. Like you said, the issue is how to do it consistently and not in an infinite sea of garbage, which is what would happen if you increase stochasticity in service of originality. It’s a design limitation.
I have no idea what it’s “like” to be an LLM
The same thing that it’s “like” to be a fax machine. They’re not significantly different, and you could literally program an LLM into a fax machine if you wanted to.
Anyway, leaving you with the thought that you can’t compare “a collection of cells” to digital computers for two reasons.
Cellular activity is the domain of biologists, who do not study creativity or art. We have absolutely no idea how the tiny analog machinery of multicellular organisms gives rise to consciousness.
Comparing digital stuff to analog stuff is a category error.
“If a collection of cells can be creative, why not a mathematical system in a GPU?”
“If a collection of cells can be creative, why not cheeseburgers?”
In both cases the answer is potato.


Social media converted half of Americans into Nazis in about a decade and elected Trump twice. If you don’t think the internet is dangerous, you’re not paying attention.
That said, it’s easy to keep kids off Instagram and TikTok without spying. Simply require that devices intended for children be sold without access to half the internet. Problem solved.


He’s literally a remorseless psychopath.


Consider the following question: “why did you write something sad?”
Maybe the sadness is random. (That’s depression for you.) But it doesn’t change the fact that the subjective nature of sadness fuels creative decisions. It is why characters in a novel do so and so, and why their feelings are described in a way that is original and yet eerily familiar — i.e., creatively.


randomness is a central part of a human coming up with an idea.
So, here’s how I understand this claim. Either (1) randomness is background noise cancelled out at scale, or (2) creativity just is randomness.
Under interpretation (1), we would still ask why some people are more creative than others (or why some planets are redshifted compared to others), and presumably we have more to say than “luck,” since the chance that Shakespeare wrote his plays at random is effectively zero.
Interpretation (2) suggests that creativity doesn’t exist and this whole conversation is senseless.


My point is that “weirdness” is rooted in subjectivity. Since LLMs have no subjectivity, they’re forced to rely on randomness, monkey-with-a-typewriter style, which is why their outputs are either banal or nonsensical.


if there’s no random element to human cognition
I didn’t say there’s no randomness in human cognition. I said that the originality of human ideas is not a matter of randomized thinking.
Randomness is everywhere. But it’s not the “randomness” of an artist’s thought process that accounts for the originality of their creative output (if anything, randomness is detrimental to it).
For LLMs, the opposite is true.


novelty and correctness are opposite each other in humans
So, when it comes to mental illness and creativity, despite some empirical correlations, “There is now growing evidence for the opposite association.”

However, there are inverse-U-shaped relationships between several mental characteristics and creativity. You’ll notice, though, that disinhibition rapidly becomes detrimental.


If we increase an LLM’s predictive utility it becomes less interesting, but if we make it more interesting it becomes nonsensical (since it can less accurately predict typical human outputs).
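This tradeoff is literally one dial in how such models sample. A sketch with invented token scores (the standard softmax-with-temperature used in text generation):

```python
import math

# Softmax with temperature over made-up next-token scores. Low temperature
# concentrates probability on the likeliest (most typical) token; high
# temperature flattens the distribution toward uniform noise. No setting
# buys originality without also buying incoherence.
def softmax(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [3.0, 1.0, 0.5, 0.1]  # hypothetical scores for four candidate tokens

print(softmax(logits, 0.2))  # nearly all mass on token 0: predictable
print(softmax(logits, 5.0))  # nearly uniform: "novel" but incoherent
```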
Humans, however, can be interesting without resorting to randomness, because they have subjectivity, which grants them a unique perspective that artists simply attempt (and often fail) to capture.
Anyways, however we eventually create an artificial mind, it will not be with a large language model; by now, that much is certain.


LLMs are mathematically limited to an amateur skill ceiling in creativity. Additionally, they’re fundamentally combinatorial and incapable of originality. This is why we have yet to see a single page of LLM fictional prose that doesn’t suck balls.