• Asifall@lemmy.world · 1 year ago

    Not really. If you read the paper, what they’re doing is creating an image that looks like a dog and is labeled as a dog, but is very close to the model’s version of a cat in feature space. This means manual review of the training set won’t help.
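    To make that concrete, here’s a minimal sketch of the general feature-collision idea. It is not the paper’s exact method; the choice of feature extractor, the step count, and the pixel budget below are all illustrative assumptions.

    ```python
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Illustrative sketch only: nudge a dog image so a model's feature
    # extractor sees something cat-like, while the pixels barely change.
    extractor = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    extractor.fc = torch.nn.Identity()  # take penultimate-layer features
    extractor.eval()

    def poison(dog_img, cat_img, steps=200, lr=0.01, eps=8 / 255):
        """dog_img/cat_img: float tensors of shape (3, H, W) in [0, 1]."""
        with torch.no_grad():
            target_feat = extractor(cat_img.unsqueeze(0))  # "cat" features
        delta = torch.zeros_like(dog_img, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            feat = extractor((dog_img + delta).unsqueeze(0))
            loss = F.mse_loss(feat, target_feat)  # pull features toward "cat"
            opt.zero_grad()
            loss.backward()
            opt.step()
            # keep the change small enough to be invisible to a human reviewer
            delta.data.clamp_(-eps, eps)
        return (dog_img + delta).detach().clamp(0, 1)
    ```

    A real attack would likely swap the hard pixel clamp for a perceptual penalty, but the clamp is the simplest way to keep the dog looking like a dog while its features drift toward the cat.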

      • ubermeisters@lemmy.world · 1 year ago

      What are the implications for the non-AI viewer? I have to assume these changes aren’t perceptible to humans, but I find that to be a stretch too. I don’t see artists being willing to let an AI manipulate their art just so another AI can’t recreate it.

        • Asifall@lemmy.world · 1 year ago

        I don’t think the idea is to protect specific images; it’s to create enough of these poisoned images that training your model on random free images pulled off the internet becomes risky.