Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g., listing slavery’s “positives.”

  • Stoneykins [any]@mander.xyz · 1 year ago

    There needs to be like an information campaign or something… The average person doesn’t realize these things say what they think you want to hear, and they are buying into hype and think these things are magic knowledge machines that can tell you secrets you never imagined.

    I mean, I get that the people working on the LLMs want them to be magic knowledge machines, but it is really putting the cart before the horse to let people assume they already are, and the little warnings at the bottom of the page that some stuff may be inaccurate aren’t cutting it.

    • TheRealKuni@lemmy.world · 1 year ago

      I had a friend who read to me this beautiful thing ChatGPT wrote about an idyllic world. The prompt had been something like, “write about a world where all power structures are reversed.”

      And while some of the stuff in there made sense, not all of it did. Like, “in schools, students are in charge and give lessons to the teachers” or something like that.

      But she was acting like ChatGPT was this wise thing that had delivered a beautiful way for society to work.

      I had to explain that, no, ChatGPT gave the person who made the thing she shared exactly what they asked for. It’s not a commentary on the value of that answer at all; it’s merely the answer. If you had asked ChatGPT to write about a world where all power structures were doubled instead, it would give you that.

    • fsmacolyte@lemmy.world · 1 year ago

      I mean, on the ChatGPT site there’s literally a disclaimer along the bottom saying it’s capable of saying things that aren’t true…

      • Flambo@lemmy.world · 1 year ago

        people assume they already are [magic knowledge machines], and the little warnings that some stuff at the bottom of the page are inadequate.

        You seem to have missed the disclaimer at the bottom of the comment you’re replying to, which is an excellent case in point for how ineffective those warnings are.

      • stopthatgirl7@kbin.social (OP) · 1 year ago

        Unfortunately, people are stupid and don’t pay attention to disclaimers.

        And, I might be wrong, but didn’t they only add those recently, after folks started complaining and it started making the news?

        • fsmacolyte@lemmy.world · 1 year ago

          I feel like I remember them being there since January of this year, which is when I started playing with ChatGPT, but I could be mistaken.