A judge in Washington state has blocked “AI-enhanced” video evidence from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.
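The point about “secret visual data” can be demonstrated with a toy sketch of my own (not from the article): once detail is destroyed by downsampling, no enhancer can recover it, because many different originals collapse to the same low-resolution image, so any upscaler can only guess.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img):
    # Average each non-overlapping 2x2 block into a single pixel.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

a = rng.random((8, 8))
b = a.copy()
# Swap two pixels inside one 2x2 block: the block's average is unchanged.
b[0, 0], b[0, 1] = a[0, 1], a[0, 0]

assert not np.array_equal(a, b)                   # different originals...
assert np.allclose(downsample(a), downsample(b))  # ...identical low-res images
```

Since the two distinct originals produce the exact same low-resolution image, no algorithm looking only at that image can tell which one was the “real” scene.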

  • Flying Squid@lemmy.world · 8 months ago

    normie-friendly

    Whenever people say things like this, I wonder why that person thinks they’re so much better than everyone else.

    • Hackerman_uwu@lemmy.world · 8 months ago

      Tangentially related: the more people seem to support “AI all the things,” the less they turn out to understand it.

      I work in the field. I had to explain to a CIO that his beloved “ChatPPT” was just autocomplete. He became enraged. We implemented a 2015-era chatbot instead, and he got his bonus.

      We have reached the winter of my discontent. Modern life is rubbish.

    • Bobby Turkalino@lemmy.yachts · 8 months ago

      Normie, layman… as you’ve pointed out, it’s difficult to use these words without sounding condescending (which I didn’t mean to be). The media’s use of words like “hallucinate” to describe what is ultimately linear algebra is necessary, because most people just don’t know enough math to understand the fundamentals of deep learning. That’s completely fine; people can’t know everything, and everyone has their own specialties. But any time you simplify science to make it digestible for the masses, you lose critical information in the process, which can sometimes be harmfully misleading.

      • Krauerking@lemy.lol · 8 months ago

        Or sometimes the colloquial term people have picked up is a simplified tool for getting the right point across.

        Just because it’s guessing using math doesn’t mean it isn’t, in a sense, hallucinating the additional data. That data did not exist before; the model willed it into existence, much like a hallucination. And the word makes it easy for people to quickly catch on that the output isn’t trustworthy, thanks to their previous understanding of what a hallucination is.

        Part of language is finding the right words so that people can quickly understand a topic, even if it means giving up nuance. The simplification should still lead them to the right conclusion, even in simplified form, which doesn’t always happen when there is bias. I think this one works just fine.

      • cucumberbob@programming.dev · 8 months ago

        It’s not just the media who use this term. According to this study, which I’ve had a very brief skim of, the term “hallucination” was used in the literature as early as 2000, and in Table 1 you can see hundreds of studies from various databases whose use of “hallucination” the authors then go on to analyse.

        It’s worth saying that this study is focused on showing how vague the term is, and how many different and conflicting definitions of “hallucination” there are in the literature, so I for sure agree it’s a confusing term. It’s just that it’s used by researchers as well as laypeople.

      • Hackerman_uwu@lemmy.world · 8 months ago

        LLMs (the models “hallucinate” is most often used in conjunction with) are not Deep Learning, normie.

          • Hackerman_uwu@lemmy.world · 8 months ago

            I’m not going to bother arguing with you, but for anyone reading this: the poster above is making a bad-faith semantic argument.

            In the strictest technical terms, AI, ML, and Deep Learning are distinct, and they have specific applications.

            This insufferable asshat is arguing that since they all use fuel, fire, and air, they are all engines. Which isn’t wrong, but it’s also not the argument we are having.

            @OP good day.

                • Bobby Turkalino@lemmy.yachts · 7 months ago (edited)

                  Ok but before you go, just want to make sure you know that this statement of yours is incorrect:

                  In the strictest technical terms, AI, ML, and Deep Learning are distinct, and they have specific applications

                  Actually, they are not the distinct, mutually exclusive fields you claim they are. ML is a subset of AI, and Deep Learning is a subset of ML. AI is a very broad term for programs that emulate human perception and learning. As you can see in the last intro paragraph of the AI Wikipedia page (whoa, another source! aren’t these cool?), some examples of AI tools are listed:

                  including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics

                  Some of these - mathematical optimization, formal logic, statistics, and artificial neural networks - comprise the field known as machine learning. If you’ll remember from my earlier citation about artificial neural networks, “deep learning” is when artificial neural networks have more than one hidden layer. Thus, DL is a subset of ML, which is a subset of AI (wow, sources are even cooler when there are multiple of them that you can logically chain together! knowledge is fun).
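                  That “more than one hidden layer” definition can be sketched in a few lines of numpy (an illustrative toy; the layer sizes and random weights here are made up, and real networks would be trained rather than initialized and left alone):

```python
import numpy as np

rng = np.random.default_rng(42)
relu = lambda x: np.maximum(x, 0)  # a common nonlinearity between layers

# A tiny feed-forward net: 4 inputs -> two hidden layers of 8 units -> 1 output.
# Having two hidden layers is what makes it "deep" under the definition above.
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)   # hidden layer 1
W2, b2 = rng.standard_normal((8, 8)), np.zeros(8)   # hidden layer 2
W3, b3 = rng.standard_normal((8, 1)), np.zeros(1)   # output layer

def forward(x):
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ W3 + b3

y = forward(rng.standard_normal(4))
print(y.shape)  # (1,)
```

                  Note that the forward pass really is just linear algebra (matrix multiplies) plus an elementwise nonlinearity, which is the “most people don’t know enough math” point from earlier in the thread.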

                  Anyways, good day :)