• ag_roberston_author@beehaw.org
    1 year ago

    I’m actually surprised by the comments in here. This technology is incredibly disruptive to authors. If they are correct that their intellectual property has been misused by these companies to train LLMs, then they absolutely should have the right to prevent that.

    You can be both pro-AI and pro-advancement and still respect creators’ intellectual property rights and the right not to have all content stolen by megacorporations and used to generate profits while decimating entire industries.

    • Safi Scarlett@sffa.community
      1 year ago

      I agree. This technology doesn’t exist in a vacuum. This isn’t some utopia where a human artist can just focus solely on creating their art and not worry about financial gain because their survival needs are always guaranteed to be met or whatever.

    • SinJab0n@mujico.org
      1 year ago

      Exactly this. It’s the equivalent of me taking a movie, building something on top of it, charging for it, and then being displeased when the creators demand an explanation.

    • FIash Mob #5678@beehaw.org
      1 year ago

      Eventually the bad actors are going to lose a lot of money defending their theft of people’s art in court. It was always going to end up in the legal system. These apps are even programmed to scrub watermarks and signatures. It’s deliberate theft.

    • dan@upvote.au
      1 year ago

      One of the largest communities on Lemmy is !piracy@lemmy.dbzer0.com, so I’m not really surprised that there are people who don’t care about copyright :)

      On the other hand, if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing? Are they going to sue CliffsNotes too?

      • ag_roberston_author@beehaw.org
        1 year ago

        My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.

        Training datasets should be ethically sourced through opt-in programs, as some companies, such as Adobe, are already doing.

        • dan@upvote.au
          1 year ago

          My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.

          How can one prove that their content is being used to train the LLM though, rather than something that’s derivative of their content like reviews of it?

          • Storksforlegs@beehaw.org
            1 year ago

            There is already plenty of evidence that they have scraped copyrighted art and photographs for their datasets.

      • Chahk@beehaw.org
        1 year ago

        if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing?

        Said human presumably would have to purchase or borrow a book in order to read it, which earns the author some percentage of the profits. If giant corps want to use the books to train their LLMs, it’s only fair that they’d have to negotiate with the publishers much like libraries do.

        • dan@upvote.au
          1 year ago

          Said human presumably would have to purchase or borrow a book in order to read it

          Borrowing a book from a library doesn’t earn the author additional profit each time it’s lent out, I don’t think. My local library just buys books off Amazon.

          What if I read the CliffsNotes and write my own summary based on that? What if I read someone else’s summary and reword it? I think that’s closer to what ChatGPT is doing - I really don’t think it’s being fed entire copyrighted books as training data. There’s no actual proof that LibGen or ZLib is being used to train it.

          • jursed@beehaw.org
            1 year ago

            Authors do get money from libraries that buy their books, and in some places they even get paid based on how often a book is checked out.

  • Storksforlegs@beehaw.org
    1 year ago

    People keep taking issue with this article’s use of “summarizing” and linking to Wikipedia… Summaries of copyrighted work are obviously not illegal.

    This article is oversimplified and does a crummy job of explaining the problem. Ars Technica does a much better job explaining.

    The fact that the AI can summarize these works in detail is proof that they were trained on copyrighted material without permission (which is not fair use). Sarah Silverman is obviously not going to be hurt financially by this, but there are hundreds of thousands of authors who definitely will be affected. They have every right to sue.

  • world_hopper@lemmy.ml
    1 year ago

    A lot of these comments are missing a large point which is that, if the claim is true, the books are being pirated and then effectively used for a commercial application.

    So the authors are losing money through this process and did not give their permission for their work to be used in a commercial way.

    The decision of this case will be wildly important for the development of AI.

  • nothacking@discuss.tchncs.de
    1 year ago

    if a user prompts ChatGPT to summarize a copyrighted book, it will do so.

    So will a human. Let’s stop extending copyright law. Also, how do you know it read the book, and not a summary of it, of which there are loads on the Internet?

    • SpaceToast@mander.xyz
      1 year ago

      This is why I am pro AI art. It’s no different than a human taking inspiration from other work.

      Nobody comes up with anything truly original. It’s all inspired by someone before them.

      • AndrewZabar@beehaw.org
        1 year ago

        I don’t know how anyone is pro-AI anything, other than the pigs making money from it. Only bad can come of it. And it will.

        • SpaceToast@mander.xyz
          1 year ago

          I don’t know how anyone can be anti AI.

          It’s just a tool. To say that only bad can come of it is a bold claim that doesn’t make any sense.

          Can you provide an example?

  • Moonrise2473@feddit.it
    1 year ago

    It seems very improbable that they scraped a pirate website with forced registration and tight daily download limits (10 books a day max?) to get content that’s often mislabeled and not presented in a homogeneous way.

    More likely it’s just using the excerpt from Amazon (which is much easier to access via the paid API) as a prompt and building on it.

    • luciole (he/him)@beehaw.org
      1 year ago

      There’s been ongoing suspicions that pirated content was used to train popular LLMs simply because popular datasets used for training LLMs do include such content. The Washington Post did an article about it.

      Google’s C4 dataset, used for research, included illegal websites. What remains to be seen is whether it was cleaned up before training Bard as we know it today. OpenAI has revealed nothing about its dataset.

            • Dominic@beehaw.org
              1 year ago

              For now, we’re special.

              LLMs are far more training data-intensive, hardware-intensive, and energy-intensive than a human brain. They’re still very much a brute-force method of getting computers to work with language.

    • SinJab0n@mujico.org
      1 year ago

      Dude, tell me: why do you think they have been doing this only with books and art, but not music?

      That’s because music actually has people protecting their assets. You can have your opinion about it, but that’s the only reason they haven’t abused companies’ and people’s work in music.

      It’s not reading; it’s the equivalent of me taking a movie, building something on top of it, charging for it, and then being displeased when the creators demand an explanation.

      • Dominic@beehaw.org
        1 year ago

        There are a few reasons why music models haven’t exploded the way that large-language models and generative image models have. Maybe the strength of the copyright-holders is part of it, but I think that the technical issues are a bigger obstacle right now.

        • Generative models are extremely data-inefficient. The Internet is loaded with text and images, but there isn’t as much music.

        • Language and vision are the two problems that machine learning researchers have been obsessed with for decades. They built up “good” datasets for these problems and “good” benchmarks for models. They also did a lot of work on figuring out how to encode these types of data to make them easier for machine learning models. (I’m particularly thinking of all of the research done on word embeddings, which are still pivotal to large language models.)

        Even still, there are fairly impressive models for generative music.
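The word embeddings mentioned above can be illustrated with a toy example. The vectors and similarity scores below are invented purely for illustration (real embeddings have hundreds of dimensions and are learned from data), but the core idea holds: words map to vectors whose geometry encodes similarity.

```python
# Toy word embeddings: each word maps to a small vector.
# These 3-d vectors are made up for illustration; real models
# learn much larger vectors from massive text corpora.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related words end up closer together than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low  (~0.30)
```

This encoding is what lets a model treat "king" and "queen" as related even though the strings share no characters, which is part of why text was such a tractable domain for machine learning.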

    • HughJanus@lemmy.ml
      1 year ago

      This is what I never understood about the whole AI training debate.

      When a human creates an artwork, they don’t do it in a vacuum. They’ve had a lifetime of inspiration from artwork they’ve discovered, which inspires them to create something wholly new. AI does the same thing.

      • Dominic@beehaw.org
        1 year ago

        AIs are trained for the equivalent of thousands of human lifetimes (if not more). There’s no precedent for anything like this.
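A rough back-of-envelope check supports the scale argument. All of the figures below are illustrative assumptions, not official numbers: a brisk reading speed, a casual daily reading habit, and a training corpus on the order of what has been reported for large models.

```python
# Back-of-envelope: LLM training data vs. a human reading lifetime.
# Every figure here is a rough, illustrative assumption.
words_per_minute = 250                   # brisk adult reading speed
hours_per_day = 0.5                      # a casual daily reader
reading_years = 70

words_per_lifetime = int(words_per_minute * 60 * hours_per_day * 365 * reading_years)
training_corpus_words = 300_000_000_000  # order of magnitude reported for GPT-3-era models

lifetimes = training_corpus_words / words_per_lifetime
print(f"One lifetime of reading: {words_per_lifetime:,} words")
print(f"Training corpus is roughly {lifetimes:,.0f} reading lifetimes")
```

Even under generous assumptions for the human, the corpus works out to over a thousand reading lifetimes, which is the disparity the comment is pointing at.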

      • luciole (he/him)@beehaw.org
        1 year ago

        The AIs we are talking about are large language models. They take human work as input and produce facsimiles. They are owned by individuals or companies that have no permission to exploit intellectual property tied to other people’s livelihoods in order to copy it.

        LLMs are not sentient, they don’t have inspiration, they are not creative and therefore do not create in the sense an artist would. They are an elaborate mathematical equation.

        “Training” an AI has nothing to do with training an actual living being. It’s just tuning: adjusting an algorithm incrementally until the operator is satisfied with the result. I think it’s defensible to equate this form of extraction with plagiarism.
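The "tuning" described above can be sketched as a minimal gradient-descent loop. This is a deliberately tiny illustration, not how production LLMs are implemented: a single parameter is nudged repeatedly until the output is close enough to a target, i.e. until "the operator is satisfied."

```python
# Minimal illustration of "training as tuning": repeatedly nudge one
# parameter w until the model's output matches a target value.
# Real LLMs apply the same principle to billions of parameters.

def model(w, x):
    return w * x  # the simplest possible "model"

target = 6.0          # we want model(w, 2.0) == 6.0, i.e. w == 3.0
w = 0.0               # start from an arbitrary parameter value
learning_rate = 0.1

for step in range(100):
    prediction = model(w, 2.0)
    error = prediction - target
    if abs(error) < 1e-6:               # "operator is satisfied"
        break
    # Gradient of the squared error (error**2) w.r.t. w is 2 * error * x.
    w -= learning_rate * 2 * error * 2.0

print(round(w, 4))  # converges to 3.0
```

Nothing in the loop "understands" anything; it is an optimization procedure driven entirely by the error signal, which is the point the comment is making.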