I’ve heard a bunch of explanations, but most of them seem emotional and aggressive. While I respect that this is an emotional subject, I can’t really engage with opinions that boil down to “theft” and stop there.

While there are plenty of models that were trained on copyrighted material without consent (which is piracy rather than theft, though close enough when it harms small businesses or individuals), is there an argument against models that were legally trained? And if so, is it something beyond the claim that AI art is lifeless?

  • WoodScientist@lemmy.world · 14 hours ago
    • Can it ever get to the point where it wouldn’t be vulnerable to this? Maybe. But it would require an entirely different AI architecture than anything that any contemporary AI company is working on. All of these transformer-based LLMs are vulnerable to this.

    • That would be fine. That’s what they should have done to train these models in the first place. Instead, they’re all built on IP theft: they were too cheap to commission training data and chose to build companies on theft instead. If they hired their own artists to create training data, I would certainly lament the commodification and corporatization of art, but that’s something that’s been happening since long before OpenAI.

    • MTK@lemmy.world (OP) · 13 hours ago

      Thank you. Out of all of these replies, I feel like you really hit the nail on the head for me.