I heard a bunch of explanations but most of them seem emotional and aggressive, and while I respect that this is an emotional subject, I can’t really understand opinions that boil down to “theft” and are aggressive about it.

While there are plenty of models that were trained on copyrighted material without consent (which is piracy, not theft, but close enough when we're talking about small businesses or individuals), is there an argument against models that were legally trained? And if so, is it something beyond the saying that AI art is lifeless?

  • Susaga@sh.itjust.works · 1 day ago

    AI feels like a Lovecraftian horror to me. It’s trying to look authentic, but it’s wrong on a fundamental level. Nothing’s the right shape, nothing’s the right texture, nothing’s consistent, nothing belongs together… But somehow, nobody else has noticed what should be blatantly obvious! And when you try to point it out, you get a hivemind responding that it’s good actually, and you’re just a luddite.

    But let’s assume AI stops being awful in a technical sense. It’s still awful in a moral sense.

    Artists are poor. That’s a well known sentiment you see a lot and, given how many times I see commission postings, it’s pretty accurate. That artist needs to work to live, and that work is creating art.

    AI is deliberately depriving these artists of work in order to give the AI’s owner a quick, low-quality substitute. In some cases, it will copy an artist’s style, so you’re deliberately targeting a specific artist because they’re good at their job. And it’s using the artist’s own work in order to replace them.

    • oce 🐆@jlai.lu · 1 day ago

      Isn’t this point also valid for any kind of automation? Machines removed work from manual workers, and software engineers have been removing work from manual and office workers since the field began, way before LLMs. The point that artists actually love their work could also be made for other people whose work has been automated before.
      I think the real issue is that automation should benefit everyone equally, and not only its owners.

      • WoodScientist@lemmy.world · 19 hours ago

        The key in my mind is that this technology cannot work independently. A bucket excavator can replace the work of many people digging by hand, and it truly replaces them: some hand labor is still required on any excavation, but the machine itself operates just fine without the workers it displaced.

        But AI image generators? They are only possible because of the work of artists. They are directly trained on artists’ work. Even worse, the continued existence of these models requires the never-ending contribution of humans. When AI image generators are trained on the output of AI image generators, things rapidly degenerate into literal static. It’s making a copy of a copy. If all art becomes machine-made, then the only recent data to train future models on will be the output of other models, and the whole thing collapses like a snake devouring its own tail.
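        The “copy of a copy” decay described above has a measurable toy analogue, sometimes called model collapse. This is a minimal sketch, not a real image model: a hypothetical Gaussian “generator” is repeatedly refit to its own samples, and the finite-sample fitting bias plus random drift steadily drains the variance out of each new generation:

        ```python
        import random
        import statistics

        def fit_and_sample(data, n):
            # "Train" a toy generative model: fit a Gaussian to the data (MLE),
            # then "generate" n fresh samples from the fitted model.
            mu = statistics.fmean(data)
            sigma = statistics.pstdev(data)  # MLE std, slightly biased low
            return [random.gauss(mu, sigma) for _ in range(n)]

        random.seed(0)
        n = 50
        data = [random.gauss(0, 1) for _ in range(n)]  # generation 0: "human" data
        stds = [statistics.pstdev(data)]
        for generation in range(200):
            # Each generation is trained ONLY on the previous generation's output.
            data = fit_and_sample(data, n)
            stds.append(statistics.pstdev(data))

        print(f"std at gen 0: {stds[0]:.3f}, std at gen 200: {stds[-1]:.3f}")
        ```

        In a typical run the spread of the samples shrinks by an order of magnitude over the generations, which is the statistical skeleton of the forum argument: without fresh outside data, each retraining step narrows what the model can produce.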

        This is also the crucial difference between how image generators and actual artists work. Some will say that these models simply use the same learning process humans do: the image generator trains on pre-existing art, and so does a human artist, proponents of AI will say.

        But we can see the flaw in this: real artists do not suffer generational decay. Human artists have trained off the work of other artists, in a chain unbroken since before the rise of civilization. Yes, artists can learn technique and gain inspiration from the work of other artists, but humans are capable of true independent creation. Image generators OTOH are just blindly copying and summarizing the work of others. They have no actual sense of what art is, what makes it good, or what gives it soul. They don’t even have a sense of what makes an image comprehensible. They’re just playing a big blind correlation game of inputs and outputs. And so, if you train one AI off another AI’s output, it decays like making a copy of a copy.

        This is a crucial difference between AI “art” and human art. Human art is an original creation. As such, new art can be endlessly created. AI “art” can only blindly copy. So unless the AI can get continual references from actual real human art, it quickly diverges into uselessness.

        The ditch digger replaced by an excavator has no real means to legally object. They were paid for their previous jobs, and are simply no longer needed. But real human artists and AI? This software is going to be a never-ending vampire on their creative output. It has only been created by stealing their past work, and it will only remain viable if it can continue to steal their work indefinitely into the future.

      • Susaga@sh.itjust.works · 1 day ago

        Technically, yes, but I would argue that this is worse.

        An excavator saves you days of digging a single hole. An assembly line saves you from having to precisely construct a toy. A printer saves you from having to precisely duplicate a sheet of paper. All of this is monotonous and soul-destroying work that people are happy they don’t need to do.

        But you still need to decide where to dig the hole. You still need to design the toy. You still need to fill in the first sheet of paper. All of the work left over is more creatively fulfilling.

        We are now attempting to automate creativity.