• Vince@lemmy.world · +43/-12 · 3 months ago

    Ok, dumb question time. I’m assuming no one has any significant issues, legal or otherwise, with a person studying all Van Gogh paintings, learning how to reproduce them, and using that knowledge to create new, derivative works and even selling them.

    But when this is done with software, it seems wrong. I can’t quite articulate why though. Is it because it takes much less effort? Anyone can press a button and do something that would presumably take the person from the example above years or decades to do? What if the person was somehow super talented and could do it in a week or a day?

    • tyler@programming.dev · +40/-3 · 3 months ago
      1. Because it’s not human. We distinguish ourselves in everything; that’s why we think we’re special. The same applies to inventions, e.g. why monkeys can’t hold a patent.
      2. Time. New “products”, whether art, engineering, or science, all take time for humans to make. So value is created with time, because time creates scarcity and demand.
      3. Talent. Due to the time factor, talent and practice are desired traits in a human. You mention that a talented human can do something in just a few days that might take someone else years, but it might only take them a few days because they spent years learning.
      4. Perfection. Striving for perfection is a human experience. A robot doing something perfectly isn’t impressive; a human doing something perfectly is amazing. Even the most amateur creator can strive for perfection.

      Think about paintings vs prints. Paintings are much more valuable because they aren’t created as quickly as the prints are. Even the most amateur artwork is more valuable as a physical creation rather than a copy, like a child’s crayon drawing.

      This even applies to digital art, because the first instance of something is the most difficult thing to create; everything after that is just a copy. Yes, this applies to some current gen-AI tech, but very soon that will no longer be the case.

      This change, from humans asking for something and having other humans create it to humans asking for something and having computers create it, is a loss of our humanity, of what makes us human.

    • kibiz0r@midwest.social · +19 · edited · 3 months ago

      If you’re looking for a universally-applicable moral framework, join the thousands of years of philosophers striving for the same.

      If you’re just looking for an explanation that allows you to put one foot in front of the other…

      Laws exist for us to spell out the kind of society we’d like to live in. Generally, we prefer that individuals be able to participate in cultural conversations and offer their own viewpoint. And generally, we prefer that groups of people don’t accumulate massive amounts of power over other groups of people.

      Dedicating your life to copying another artist’s style is participating in a cultural conversation, and you won’t be able to keep yourself from infusing your own lived experience into your work of copying the artist. If only through the details you focus on getting exactly right, the slight mistakes that repeat themselves or morph over the course of your career, the pieces you prioritize replicating over and over again. It says something about who you are, and that’s worth appreciating.

      Now, if you’re trying to pass those off as originals and not your own tributes, then you’re deceiving people and that’s a problem because you’re damaging the cultural conversation by lying about the elements you’re putting into it. Even so, sometimes that’s an interesting artistic enterprise in itself. Such as when artists pretend to be someone else. Warhol was a fan of this. His whole career revolved around messing with concepts of authenticity in art.

      As for power, you don’t gain that much leverage over another artist by simply copying their work. And if you riff on it to upstage them, you’re just inviting them to do the same to you in turn.

      But if you can do that mechanically and quickly, so that you have an instant, nearly free response to any creative twist they put out to undermine your attempts to upstage them, then you’re in a position of great power. The more the original artist produces, the stronger your advantage over them becomes. The more they try, the harder it is for them to win.

      We don’t generally like when someone has accumulated tons of power, especially when they subsequently use that power to prevent others from being able to compete.

      Edit: I’d also caution against trying to make an objective test for whether a particular act of copying is “okay”. This invites two things:

      1. Artists can’t help but question what’s acceptable and play around with it. They will deliberately transgress in order to make a point, and you’ll be forced to admit that your objective test is worthless.

      2. Tech companies are relentlessly horny for this kind of objective legal framework, because they want to be able to algorithmically approach the line and fill its border to fractal levels of granularity without technically crossing the line. RealPage, DoorDash, Uber, Amazon, OpenAI all want “illegal” to be as precisely and quantitatively defined as possible, so that they can optimize for “barely legal”.

    • aStonedSanta@lemm.ee · +15/-5 · 3 months ago

      They are copying your intellectual property and digitizing its knowledge. It’s a bit different because it’s PERMANENT. With humans, knowledge can be lost, forgotten, or ignored. With these LLMs, that’s not an option. The skill factor is also a big issue, imo: it’s very easy to set up an LLM to make AI imagery nowadays.

        • ForgotAboutDre@lemmy.world · +7/-4 · 3 months ago

          They are copying. These LLMs are a product of their input, and solely a product of their input. It’s why they’ll often directly output their training data. Training on more data reduces this effect, which is why all these companies are stealing data while getting aggressive about stopping others from stealing theirs.
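          The “product of their input” claim is easiest to see in a toy stand-in for an LLM: a bigram Markov chain (my own illustrative simplification; real transformer LLMs generalize far more, but the pressure toward replaying training data on small corpora is the same in spirit). Trained on a tiny corpus, every word pair the model can emit is lifted verbatim from the training text:

```python
import random

def train_bigrams(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, n):
    """Random-walk the table: every step replays a pair seen in training."""
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:  # dead end: the last word never had a successor
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the starry night swirls above the quiet town below the hills"
table = train_bigrams(corpus)
sample = generate(table, "the", 8)
# Every adjacent word pair in `sample` necessarily occurs in `corpus`:
# with tiny training data the model can only recombine what it ingested.
print(sample)
```

          With a much larger corpus, each word has many possible successors, so generated sequences stop matching any single training text, which mirrors the “more data reduces verbatim output” point above.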

        • aStonedSanta@lemm.ee · +4/-2 · 3 months ago

          Proof? I am fairly certain I am correct, but I will gladly admit fault. This whole LLM thing is new to me too.

    • MinFapper@startrek.website · +11/-2 · edited · 3 months ago

      So, before the invention of the camera, the most valuable and most popular creative skill was replicating people on canvas as realistically as possible. Yes, we remember famous exceptions like Picasso, but by sheer number of paintings the most common were portraits of rich people.

      After cameras took that job away, prevailing art became more abstract and “creative”. But that still pissed off a lot of people who had spent a very long time honing a skill that was no longer in demand.

      What we’re seeing is a similar shift. I think future generations of artists will value color theory, composition, etc. over specific brush-stroke techniques. AI will make art much more accessible once enough time has passed for AI-assisted art to be considered art. Make no mistake: it will always be people who actually create the art; AI will just reduce or remove the grunt work so they can focus more on creativity.

      Now, whether billion-dollar corporations deserve to exploit the labor of millions of people is a whole separate conversation, but tl;dr: they don’t, and they’re going to anyway, because there is little to stop them in current economic/governance models.

    • Cornelius_Wangenheim@lemmy.world · +18/-10 · edited · 3 months ago

      Artists who rip off other great works are still developing their talent and skills, which they can then go on to use to make original works. The machine will never produce anything original; it is only capable of mixing together things it has seen in its training set.

      There is a very real danger of AI eviscerating artists’ ability to make a living, leaving very few people with the financial means to practice their craft day in and day out and resulting in a dearth of good original art.

      • Dkarma@lemmy.world · +16/-10 · 3 months ago

        The machine will never produce anything original. It is only capable of mixing together things it has seen in its training set.

        This is patently false and shows you don’t know a single thing about how AI works.

    • FooBarrington@lemmy.world · +7/-2 · 3 months ago

      There’s a simple argument: when a human studies Van Gogh and develops their own style based on it, it’s only a single person with very limited output (they can only paint so much in a single day).

      With AI you can train a model on Van Gogh and similar paintings, and infinitely replicate this knowledge. The output is almost unlimited.

      This means that the skills of every single human artist are suddenly worth less, and the possessions of the rich are suddenly worth more. Wealth concentration is poison for a society, especially when we are still reliant on jobs for survival.

      AI is problematic as long as it shifts power and wealth away from workers.

      • saplyng@lemmy.world · +3/-2 · 3 months ago

        Just as an interesting “what if” scenario: a human making the effort to imitate Van Gogh’s style is okay, and the problem with the AI model is that it can spit out endless results from endless sources.

        What if I made a robot and put the Van Gogh painting AI in it, never releasing it elsewhere? The robot can visualize countless iterations of the piece it wants to make, but its only way to share them is to actually paint them, much the same way a human must.

        Does this scenario devalue human effort? Is it an acceptable use of AI? If so, does that mean the underlying issue with AI isn’t that it exists in the first place, but that its distribution is what makes it devalue humanity?

        *This isn’t a “gotcha”, I just want a little discussion!

        • FooBarrington@lemmy.world · +2 · 3 months ago

          It’s an interesting question! From my point of view, “devaluing human effort” (from an artistic perspective) doesn’t really matter - humans will still be creating new and interesting art. I’m solely concerned about the shift in economic power/leverage, as this is what materially affects artists.

          This means that if your robot creates paintings with an output rate comparable to a human artist, I don’t really see anything wrong with it. The issue arises once you’re surpassing the limits of the individual, as this is where the power starts to shift.

          As an aside, I’m still incredibly fascinated by the capabilities and development of current AI systems. We’ve created almost universal approximators that exhibit complex behavior which was pretty much unthinkable 15-20 years ago (in the sense that it was expected to take much longer to achieve current results). Sadly, like any other invention, this incredible technology is being abused by capitalists and populists for profit and gain at the expense of everyone else.
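          The “almost universal approximators” remark has a tiny concrete instance (a sketch of my own, not something from this thread): a single hidden layer of two ReLU units already reproduces the non-smooth function |x| exactly, and the universal-approximation results say that widening such a layer lets a net approximate any continuous function on a bounded interval arbitrarily well:

```python
def relu(z):
    """Rectified linear unit, the standard hidden-layer nonlinearity."""
    return max(0.0, z)

def tiny_net(x):
    """One hidden layer, two ReLU units: computes |x| exactly.
    Sums of shifted/scaled ReLUs can match any continuous
    piecewise-linear curve, which is the core of the ReLU
    universal-approximation argument."""
    h1 = relu(1.0 * x)   # active for x > 0
    h2 = relu(-1.0 * x)  # active for x < 0
    return 1.0 * h1 + 1.0 * h2

for x in (-3.0, -0.5, 0.0, 2.0):
    assert tiny_net(x) == abs(x)
```

          Absolute value is about the simplest function a linear model cannot represent, which is why it makes a clean minimal demonstration.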

    • taaz@biglemmowski.win · +4 · 3 months ago

      I’m guessing the closest counter-argument would be how close it comes to outright copying the original work?

      • Vince@lemmy.world · +3/-2 · 3 months ago

        I’m more trying to figure out why it’s generally acceptable when a human does it vs when a machine does it.

        I don’t know for sure, but I think they would be able to adjust settings so that it looks nothing like any original work, but still have the same style, as I’ve seen people do.

    • Dkarma@lemmy.world · +8/-5 · 3 months ago

      Easier than that:

      Google has been doing this for years for their search engine and no one said a thing. Why do you care now that it’s a different program scanning your media?

    • wewbull@feddit.uk · +10/-8 · 3 months ago

      Dumb question: why do you feel you need to defend billion dollar companies getting even richer off somebody else’s work?

      Also Van Gogh’s works are public domain now.

      • Vince@lemmy.world · +7/-3 · 3 months ago

        I’m not defending any companies, just thinking out loud, but I suppose I can see how it reads that way.

        I was just asking myself why it feels wrong when a machine does it vs when a human does it. By your argument, would it be ok if some poor nobody invented and is using this technology vs a billion dollar company? Is that why it feels wrong?

        • tjsauce@lemmy.world · +4/-1 · 3 months ago

          The issue isn’t the final, individual art pieces, it’s the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.

          • TheRealKuni@lemmy.world · +1 · 3 months ago

            The issue isn’t the final, individual art pieces, it’s the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.

            This is literally what people said about photography.

            And they were right, painting became less prolific as photography became available to the masses. People generally don’t get their portrait painted.

            But people also generally don’t go to photo studios to have their picture taken, either, and those used to be in every shopping mall. But now we all have camera phones that adjust lighting and color and focus for us, and we can send a sufficiently decent picture off to be printed and mailed back to us. For those who want it done professionally that option is available and will be higher quality, just like portrait painting is still available, but technology has shrunk those client pools.

            Technology always changes job markets. Generative AI will, just as others have done. People will lose careers they thought were stable, and it will be awful, but this isn’t anything unique to generative AI.

            The only constant is that things change.

        • wewbull@feddit.uk · +2 · 3 months ago

          A generative AI’s only purpose is to generate “works”, so its only purpose in consuming “works” is to use them as reference. It exists to produce derivative works. Therefore the person feeding the original work into the machine is the one making the choice about how that work will be used.

          A human can consume a “work” for no other reason than to admire it, be entertained by it, be educated by it, or have an emotion evoked, and only finally to produce another work based on it. Here the consumer of the work is the one deciding how it will be used. They are the ones responsible.

    • primrosepathspeedrun@lemmy.world · +3/-2 · 3 months ago

      tl;dr: copyright law has always been nonsense designed to protect corporations and fuck over artists+consumers

      but now corpo daddy and corpo mommy are fighting, and we need to take sides.

      and it’s revealing that copyright law never existed to protect artists, and will continue not to, but MUCH more obviously, and all the cucks who whined about free culture violating the law are reaping what they fucking sowed.

    • SkyNTP@lemmy.ml · +4/-3 · 3 months ago

      Generative AI is incapable of contributing new material, because generative AI does not sense the world through a unique perspective. So the comparison to creators who incorporate prior artists’ work is a false one. Artists are allowed to incorporate other artists’ work in the same way that scientists cite others’ work without it being plagiarism.

      In art, in science, we stand on the shoulders of giants. AI models do not stand on the shoulders of giants. AI models just replicate the giants. Society has been fooled to think otherwise.

      • TheRealKuni@lemmy.world · +1 · 3 months ago

        Generative AI is a tool. It is neither a creator nor an artist, any more than paintbrushes or cameras are. The problem arises not with the tool itself but with how it is used. The creativity must come from the user, just like the way Procreate or GIMP or even photography works.

        The skill factor is certainly lower than other forms of artistic expression, but that is true of photography vs painting as well.

        I am not trying to say all uses of generative AI are art, any more than every photograph is art. But that doesn’t mean it cannot be a tool to create art, part of the workflow as utilized by someone with a vision, willing to take the time to get the end product they want.

        Generative AI doesn’t stand on the shoulders of giants, but neither does a camera.

    • Dark_Dragon@lemmy.dbzer0.com · +1 · 3 months ago

      So try doing Disney-style animation, with similar characters and a similar storyline, and start profiting from it. Let’s see whether Disney the corporation will remain silent or sue you into oblivion.

        • Dark_Dragon@lemmy.dbzer0.com · +1/-1 · 3 months ago

          I don’t hate him. It’s just that when a corporation steals an individual’s idea or data, it’s “for research” and such. If it’s the other way around, we as individuals have to face a lawsuit.

          So I hope they sue Nvidia and the other big corporations that are harvesting our data for AI.

          • blazera@lemmy.world · +4/-2 · 3 months ago

            That’s the thing: nothing’s being stolen. Beauty and the Beast didn’t up and disappear because Bluth and Fox Studios made Anastasia. There are style similarities, but it is undeniably its own work. Don’t even think about the style sharing going on in the thousands of anime out there.

    • bluestribute@lemmy.world · +3/-5 · 3 months ago

      If someone studies Van Gogh and reproduces images, they’re still not making Van Gogh - they’re making their art inspired by Van Gogh. It still has their quirks and qualms and history behind the brush making it unique. If a computer studies Van Gogh and reproduces those images, it’s reproducing Van Gogh. It has no quirks or qualms or history. It’s just making Van Gogh as if Van Gogh was making Van Gogh.

    • FiskFisk33@startrek.website · +1/-4 · 3 months ago

      Agreed.

      What if Banksy sued anyone who shared or archived photos of his wall art? That wouldn’t make sense.