I’ve heard a bunch of explanations, but most of them seem emotional and aggressive. While I respect that this is an emotional subject, I can’t really understand opinions that boil down to “theft” and are aggressive about it.

While there are plenty of models that were trained on copyrighted material without consent (which is piracy, not theft, but close enough when we’re talking about small businesses or individuals), is there an argument against models that were legally trained? And if so, is it anything beyond the saying that AI art is lifeless?

  • Fushuan [he/him]@lemm.ee · 9 hours ago

    About your first point: think of it like inbreeding, you need fresh genes in the pool or defects accumulate.

    A generative model will generate some relevant results and some irrelevant ones; it’s the job of humans to curate them.

    However, the more content the LLM generates, the more of it ends up on the web and thus becomes part of its training data.

    Imagine that 95% of the model’s results are accurate. Of the rest, say 1% doesn’t get fact checked and gets released onto the internet, where other humans will complain about it, but it becomes LLM training data regardless. So the next training input is about 99% accurate, and the model trained on it will only be 95% accurate on that, roughly 94% overall, and the loss compounds every generation.

    It’s literally a sequence that reaches very inaccurate values very fast:

    f(1) = 1
    f(n) = f(n-1) * 0.95

    which works out to f(n) = 0.95^(n-1), so after about 14 generations you’re already down to roughly 50%.

    You can mitigate it by not training on generated data, but as long as AI content replaces genuine content, especially with images, AI will end up training on its own output and will degenerate fast.
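
    A minimal sketch of that decay in Python, assuming the flat 0.95 retention rate from the sequence above (the rate is an illustrative assumption, not a measured value):

        # Model-collapse sketch: accuracy decays geometrically when each
        # generation trains on the previous generation's output.
        RETENTION = 0.95  # assumed fraction of accuracy kept per generation

        def accuracy(generation: int) -> float:
            """f(n) = 0.95^(n-1), starting from f(1) = 1."""
            return RETENTION ** (generation - 1)

        for n in (1, 5, 14, 30, 50):
            print(f"generation {n:2d}: accuracy = {accuracy(n):.2f}")
        # generation  1: accuracy = 1.00
        # generation  5: accuracy = 0.81
        # generation 14: accuracy = 0.51
        # generation 30: accuracy = 0.23
        # generation 50: accuracy = 0.08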

    About the second point: you can pay artists to train models, sure, but it’s less clear with text-based generative models that depend on expert input to give relevant responses. The same goes for voice models: no amount of money would be enough for a voice actor, because selling their voice for training would effectively destroy their future jobs and thus their future income.