I’ve heard a bunch of explanations, but most of them seem emotional and aggressive, and while I respect that this is an emotional subject, I can’t really understand opinions that boil down to “theft” and get aggressive about it.

While there are plenty of models that were trained on copyrighted material without consent (which is piracy rather than theft, though close enough when it’s done to small businesses or individuals), is there an argument against models that were trained legally? And if so, is it anything beyond the claim that AI art is lifeless?

  • WoodScientist@lemmy.world
    1 day ago

    The key, in my mind, is that this technology cannot work independently. A bucket excavator can replace the work of many people digging by hand, and it truly replaces them: some hand labor is still required in any excavation, but the machine itself operates just fine without the workers it displaces.

    But AI image generators? They are only possible because of the work of artists. They are trained directly on artists’ work. Even worse, the continued viability of these models requires the never-ending contribution of humans. When image generators are trained on the output of other image generators, things rapidly degenerate into literal static. It’s making a copy of a copy. If all art comes to be made by generators, then the only fresh data to train future models will be the output of other generators, and the whole thing collapses like a snake devouring its own tail.
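
    To make the copy-of-a-copy decay concrete, here’s a toy sketch in Python: fit a simple Gaussian model to some data, sample from the fit, and retrain on the samples. The Gaussian and the sample sizes are illustrative assumptions, a crude stand-in for retraining a generator on its own output, not how image models actually work:

        import random
        import statistics

        random.seed(42)  # reproducible toy run

        def fit(data):
            # "Train": estimate the mean and spread of the data.
            return statistics.mean(data), statistics.stdev(data)

        def generate(mu, sigma, n):
            # "Generate": draw n outputs from the fitted model.
            return [random.gauss(mu, sigma) for _ in range(n)]

        data = generate(0.0, 1.0, 20)  # generation 0: "human" data
        for gen in range(1, 101):
            mu, sigma = fit(data)           # retrain on the previous generation
            data = generate(mu, sigma, 20)  # its output becomes the next dataset
            if gen % 20 == 0:
                print(f"gen {gen:3d}: spread = {sigma:.3f}")
        # Small samples keep losing the tails of the distribution, so the
        # fitted spread tends to shrink toward zero over many generations.

    Larger samples only slow the decay; the drift toward a collapsed, low-diversity model comes from training on your own output.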

    This is also the crucial difference between how image generators and actual artists work. Proponents of AI will say these models simply use the same learning process that humans do: the image generator trains on pre-existing art, and so does a human artist.

    But we can see the flaw in this: real artists do not suffer generational decay. Human artists have trained on the work of other artists in a chain unbroken since before the rise of civilization. Yes, artists can learn technique and gain inspiration from the work of other artists, but humans are capable of true independent creation. Image generators, OTOH, are just blindly copying and summarizing the work of others. They have no actual sense of what art is, what makes it good, or what gives it soul. They don’t even have a sense of what makes an image comprehensible. They’re just playing a big blind correlation game of inputs and outputs. And so, if you train one model off another model’s output, it decays like making a copy of a copy.

    This is a crucial difference between AI “art” and human art. Human art is an original creation. As such, new art can be endlessly created. AI “art” can only blindly copy. So unless the AI can get continual references from actual real human art, it quickly diverges into uselessness.

    The ditch digger replaced by an excavator has no real means to legally object. They were paid for their previous jobs, and are simply no longer needed. But real human artists and AI? This software is going to be a never-ending vampire on their creative output. It has only been created by stealing their past work, and it will only remain viable if it can continue to steal their work indefinitely into the future.

    • MTK@lemmy.worldOP
      6 hours ago

      Wow, thank you, I think this is the first argument that clicked for me.

      But it does raise two questions for me:

      • If the technology ever gets to a point where it does not degenerate into static when fed its own output, would it then be more like an excavator?
      • What if this is the start of a future (understandably a bad start) where artists get paid to train AI models? Kind of like an engineer who designs a factory.
      • Fushuan [he/him]@lemm.ee
        2 hours ago

        About your first point: think of it like inbreeding; you need fresh genes in the pool or defects accumulate.

        A generative model will produce some relevant results and some irrelevant ones; it’s the job of humans to curate them.

        However, the more content the LLM generates, the more of it ends up on the web and thus becomes part of its training data.

        Imagine that 95% of results are accurate. Of the inaccurate ones, suppose only 1% slips past fact-checking and gets released onto the internet, where other humans will complain, but it gets used as LLM input regardless. So the next round’s input might be 99% accurate, and only 95% of what the model produces from it will be accurate in turn. Each round compounds the loss.

        It’s literally a geometric sequence that reaches very inaccurate values very fast:

            a_1 = 1
            a_n = 0.95 * a_(n-1)  =>  a_n = 0.95^(n-1)
        

        You can mitigate it by not training on generated data, but as long as AI content replaces genuine content, especially images, models will keep training on their own output and will degenerate fast.
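
        To put numbers on that sequence, here’s a minimal sketch (the 0.95 retention factor is the same illustrative assumption as above, not a measured value):

            # Accuracy after n generations if each round keeps 95% of the last:
            accuracy = 1.0
            for gen in range(1, 16):
                print(f"generation {gen:2d}: accuracy = {accuracy:.3f}")
                accuracy *= 0.95
            # By generation 15 accuracy has already fallen below 0.5
            # (0.95 ** 14 ≈ 0.49).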

        About the second point: you can pay artists to train image models, sure, but that’s less clear for text-based generative models, which depend on expert input to give relevant responses. The same goes for voice models: no amount of money would be enough for a voice actor, because selling their voice would effectively destroy their future jobs and thus their future income.

      • WoodScientist@lemmy.world
        6 hours ago

        • Can it ever get to the point where it wouldn’t be vulnerable to this? Maybe. But it would require an entirely different architecture from anything any contemporary AI company is working on. All of today’s transformer-based models are vulnerable to it.

        • That would be fine. It’s what they should have done to train these models in the first place. Instead, they were too cheap to do so and chose to build companies on IP theft. If they had hired their own artists to create training data, I would still lament the commodification and corporatization of art, but that’s been happening since long before OpenAI.

        • MTK@lemmy.worldOP
          6 hours ago

          Thank you, out of all of these replies I feel like you really hit the nail on the head for me.