• 0 Posts
  • 17 Comments
Joined 4 years ago
Cake day: July 8th, 2020


  • The merits are real. I do understand the deep mistrust people have of tech companies, but there’s far too much throwing the baby out with the bathwater.

    As a solo developer, LLMs are a game-changer. They’ve allowed me to make amazing progress on some of my own projects that I’ve been stuck on for ages.

    But it’s not just technical subjects that benefit from LLMs. ChatGPT has been a great travel guide for me. I uploaded a pic of some architecture in Berlin and it went into the history of it. I asked it about some damage to an old church in Spain, which turned out to be from the Spanish Civil War, when revolutionaries had been mowed down by Franco’s firing squads.

    Just today, I was getting help from an LLM with an email to a Portuguese removals company. I sent my message in English with a Portuguese translation, but the guy just replied with a single sentence in broken English:

    “Yes a can , need tho mow m3 you need delivery after e gif the price”

    The first bit is pretty obviously “Yes I can”, but I couldn’t really be sure what he was trying to say with the rest of it. So I asked ChatGPT, which responded:

    It seems he’s saying he can handle the delivery but needs to know the total volume (in cubic meters) of your items before he can provide a price. Here’s how I’d interpret it:

    “Yes, I can [do the delivery]. I need to know the [volume] in m³ for delivery, and then I’ll give you the price.”

    Thanks to LLMs, I’m able to accomplish so many things that would have previously taken multiple internet searches and way more effort.


  • I certainly am not surprised that OpenAI, Google and so on are overstating the capabilities of the products they are developing and currently selling. Obviously it’s important for the public at large to be aware that you can’t trust a company to accurately describe products it’s trying to sell you, regardless of what the product is.

    I am more interested in what academics have to say, though. I expect them to be more objective and to have more altruistic motivations than your typical marketeer. The reason I asked how you would define intelligence is really just that I find it a fascinating area of thought, and have done since long before this new wave of LLMs hit the scene. It’s also one which does not have clear answers, and different people will have different insights and perspectives.

    There are several concepts which are often blurred together: intelligence, being clever, being well educated, and consciousness. I personally consider these to be separate concepts, and while they may have some overlap, they are nevertheless all very different things. I have met many people who have very little formal education but are nonetheless very intelligent.

    As for AI and LLMs, I believe that LLMs do encapsulate some degree of genuine intelligence: they appear to somehow encode a model of the universe in their billions of parameters, and they are able to meaningfully respond to natural-language questions on almost any subject. However, an LLM is unquestionably not a conscious being.


  • You’re right that we need a clear definition of intelligence if we are to make any predictions about achieving AGI. The researchers behind this article appear to mean “human-level cognition”, which doesn’t seem to be a particularly objective or useful yardstick. To begin with, which human are we talking about? If they mean an idealised, maximally intelligent human, then I don’t think we should be surprised that we aren’t about to achieve that. The goal is not to recreate human cognition as if that were some kind of holy grail. The goal is to make intelligent systems whose results are at least as good as those a skilled, well-trained human would produce working on the same problem.

    Can I ask you how you would define intelligence? And in particular, how would you - if you would at all - differentiate intelligence from being clever, or from being well educated?


  • It models only use of language

    This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.

    If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again the developers got stopped in their tracks, because in order to understand a sentence, you need to understand the universe - or at least a particular corner of it. For example, given the sentence “The stolen painting was found by a tree”, you need to know what a tree is - specifically, that a tree is not the kind of thing that can find a painting - in order to tell whether “by” marks an agent or a location (see the sketch below).

    You can’t really use language *unless* you have a model of the universe.
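
    To make that concrete, here’s a minimal toy sketch in Python - entirely illustrative, with a made-up, hand-coded scrap of a “world model” - of how a rule-based system has to lean on world knowledge just to pick the right reading of “by”:

    ```python
    # Toy sketch: "The stolen painting was found by a tree" is ambiguous
    # between an agent reading ("a tree found it") and a location reading
    # ("it was found near a tree"). Syntax alone cannot decide between them.

    # Hand-coded scrap of a "world model": which nouns can act as agents.
    # (Hypothetical data, for illustration only.)
    CAN_ACT_AS_AGENT = {
        "detective": True,
        "tree": False,
    }

    def interpret_by_phrase(verb: str, noun: str) -> str:
        """Pick a reading for '<verb> by <noun>' using world knowledge."""
        if CAN_ACT_AS_AGENT.get(noun, False):
            return f"agent reading: the {noun} did the {verb}ing"
        return f"location reading: the {verb}ing happened near the {noun}"

    print(interpret_by_phrase("find", "tree"))       # location reading
    print(interpret_by_phrase("find", "detective"))  # agent reading
    ```

    Even that toy only works because someone hand-wrote the fact that trees can’t find things. Scaling that kind of knowledge up to cover the whole universe is exactly where the old procedural approaches hit the wall.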


  • What do you think evolved first - verbal communication or thought? Presumably we were able to think before we could speak, no? The words we have in our language are like pointers to internal concepts, and it seems to me that those internal concepts would have existed before language was a thing. The mouth-sounds, as you put it, are not the thoughts themselves, just labels for specific concepts. It might be possible, and even convenient, to think in mouth-sounds, but it’s not necessary for logical thought.