Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.
This whole thread is absurd.
ChatGPT has a form of intelligence, depending on your definition of intelligence. It may also be considered conscious in a very alien and undeveloped way. It is definitely not sentient.
Kind of like having the stochastic word-generating part of a brain and nothing else.
You can still shape it into something capable of intelligent and directed activity.
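The "stochastic word generating" idea above can be sketched in a few lines: at each step a language model produces scores over candidate next words and samples one at random. This is a minimal toy illustration, not how any real model is implemented; the word list and scores are made up, and `temperature` is the standard knob that controls how random the sampling is.

```python
import math
import random

def sample_next_word(logits, temperature=1.0):
    """Sample one word from a toy next-word distribution.

    `logits` maps candidate words to unnormalized scores, as a
    language-model head would produce; the values here are invented.
    """
    # Softmax with temperature: higher temperature flattens the
    # distribution, making the "stochastic word generator" more random.
    scaled = {w: s / temperature for w, s in logits.items()}
    m = max(scaled.values())
    exps = {w: math.exp(s - m) for w, s in scaled.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

logits = {"cat": 2.0, "dog": 1.5, "quasar": -1.0}
print(sample_next_word(logits, temperature=0.7))
```

Run repeatedly and "cat" comes up most often, but never exclusively; that stochasticity is the whole point of the analogy.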
People are really bad at accepting the level of nuance necessary for this topic.
It is useful and fantastic for what it already is. People are just really bad at understanding what it is.
A lot of people are deeply invested in the notion that human intelligence is unique and special and impossible to replicate. Either their personal sense of worth is bound up in that notion (see for example many of the artists who get very angry when people call AI generated images “art”) or it’s simply a threat to their jobs and economic wellbeing. The result is a powerful need to convince themselves that there’s a special something that’s missing from ChatGPT and its ilk that will “never” be replicated by machines.
It’s true that ChatGPT isn’t intelligent in the same way that human brains are intelligent. But it is intelligent, in ways that are useful. And “never” is a bad bet to make for the rest of those capabilities.
My sense in reading the article was not that the author thinks artificial general intelligence is impossible, but that we’re a lot farther away from it than recent events might lead you to believe. The whole article is about the human tendency to conflate language ability and intelligence, and the author is making the argument both that natural language does not imply understanding of meaning and that those financially invested in current “AI” benefit from the popular assumption that it does. The appearance or perception of intelligence increases the market value of AIs, even if what they’re doing is more analogous to the actions of a very sophisticated parrot.
Edit: all of which is to say, I don’t think the article is asserting that true AI is impossible, just that there’s a lot more to it than smooth language usage. I don’t think she’d say never, but probably that there’s a lot more to figure out, a good deal more than some seem to think, before we get Skynet.
ChatGPT is not intelligent. Not in the sense where we use that word anywhere else, including the animal kingdom. The transformer is an extraordinarily clever and sophisticated algorithm, though.
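For context on what that "clever and sophisticated algorithm" actually does, its core operation is scaled dot-product attention: each token's output is a weighted average of all tokens' values, weighted by query-key similarity. This is a minimal sketch with made-up 4-dimensional embeddings, not a real model.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the core of the transformer.

    Each output row is a weighted average of the rows of V, with
    weights set by how well that query matches each key.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity
    # Softmax over keys (shift by the max for numerical stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Three "tokens" with 4-dimensional embeddings (random toy numbers).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)  # self-attention: tokens attend to each other
print(out.shape)          # (3, 4)
```

Whether stacking this operation deep and training it on a trillion words amounts to "intelligence" is, of course, exactly what this thread is arguing about.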
As I said:
There isn’t just one kind of intelligence.
Based upon what, your feelings?
Based on its abilities. Based on studies that researchers have done on it. It has learned to do more than just regurgitate bits of its training material. It has learned from the patterns in it and can extrapolate new information.
Do you think this is not possible?
You are simply factually incorrect. You need to read more than just fanboy sources.
what? what part? what “fanboy sources”?
i mean, i’m a fanboy of things like Earl K. Miller’s recent presentation on thought as an emergent property.
or general belief in different neural functions in tandem allowing us to react to the environment in ‘intelligent’ ways
you can see at the end how certain neuronal events can be related to something like transformers.
at what point from amoeba to human do you consider “intelligence” to be a valid description of what is happening?
do you understand how obscure alien intelligences can be?
what are your non-fanboy “sources”?