• Natanael@slrpnk.net · 5 months ago

    The problem here is that we don’t have real AI.

    We have fancier generative machine learning, and despite the claims it does not in fact generalize that well from most inputs: a lot of recurring samples end up embedded in the model itself and can be replicated (there are papers on this, such as work on sample recovery / training data extraction attacks).
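
    To make the extraction point concrete, here is a minimal sketch (not from the papers themselves) of the basic idea: prompt a model with the start of a passage it may have seen in training and check whether it reproduces the rest verbatim. The model name and the example passage are placeholders; real extraction work targets much larger models and searches over many candidate prefixes.

    ```python
    # Sketch of a training-data extraction check, assuming the
    # Hugging Face transformers library and a placeholder model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Split a suspected training passage into a prompt and a held-back tail.
    passage = ("It was the best of times, it was the worst of times, "
               "it was the age of wisdom")
    prompt, expected_tail = passage[:40], passage[40:]

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=False,  # greedy decoding tends to surface memorized text
    )
    continuation = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )

    # If the held-back tail shows up in the continuation, the passage is
    # likely memorized rather than "generalized".
    print("memorized?", expected_tail.strip() in continuation)
    ```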

    These models embed genre tropes heavily and replicate existing biases and patterns far too strongly to truly claim nothing is being copied; the copying is more of a remix situation than an accidental recreation.

    Elements of the originals are there, and many features can often be attributed to the original authors (especially since the models often learn to mimic the style of individual authors, which means they embed information about the features of individual authors' copyrighted works and how to replicate them).

    While it’s not a 1:1 replication in most instances, it frequently gets close enough that a human doing it would be sued.

    This photographer lost in court for recreating the features of another work too closely:

    https://www.copyrightuser.org/educate/episode-1-case-file-1/