You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to keep the cheese from sliding off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

  • joe_archer@lemmy.world · 6 months ago

    It is probably the most telling demonstration of the terrible state of our current society that one of the largest corporations on earth, which got where it is today by providing accurate information, is now happy to knowingly provide incorrect, and even dangerous, information in its own name, and not give a flying fuck about it.

    • Hackworth@lemmy.world · 6 months ago

      Wikipedia got where it is today by providing accurate information. Google results have always been full of inaccurate information. Sorting through the links for reputable sources just became second nature, and then we learned to scroll past the ads before we even started sorting. The real issue with misinformation from an AI is that people treat it like it should be some infallible oracle - a point of view only half-discouraged by marketing, with a few warnings about hallucinations. LLMs are amazing; they’re just not infallible. Just like you’d check a Wikipedia source if it seemed suspect, you shouldn’t trust LLM outputs uncritically. /shrug

      • blind3rdeye@lemm.ee · 6 months ago

        Google providing links to dubious websites is not the same as Google directly providing dubious answers to questions.

        Google is generally considered a trusted company. If you search for some topic and Google spits out a bunch of links, you can generally trust that those links will be somehow related to your search - but the information you find there may or may not be reliable. That information comes from an external website, often some unknown, untrusted source - so even though Google is trusted, we know the external information might not be. The new situation is that Google is directly providing bad information itself. It isn’t linking us to some unknown, untrusted source; the supposedly trustworthy Google itself is telling us the answers to our questions.

        None of this would be a problem if people just didn’t consider Google trustworthy in the first place.

        • Hackworth@lemmy.world · 6 months ago

          I do think Perplexity does a better job. Since it cites its sources in its generated responses, you can easily check its answers. As for the general public trusting Google, the company’s fall from grace began in 2017, when the EU fined it around €2.4 billion for rigging search results in favor of its own shopping service. There’s been a steady stream of controversies since then, including the revelation that Chrome continues to track you in Incognito mode. YouTube’s predatory practices are relatively well known. I guess I’m saying that if this is what finally makes people give up on them, it’s no skin off my back. But I’m disappointed by how much their mismanagement seems to be adding to the pile of negativity surrounding AI.