A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.
[…]
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows.
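The article describes perturbations that are invisible to humans but meaningful to a model. As a minimal sketch of the general idea only (Nightshade's actual method optimizes the perturbation against a model's feature extractor; this just illustrates the tiny pixel budget involved), imagine shifting each channel by at most a couple of levels in a structured pattern:

```python
# Minimal sketch of an "invisible" perturbation: shift each pixel channel
# by at most EPS levels (imperceptible to humans) in a structured pattern
# a model could still pick up statistically. This is NOT Nightshade's
# actual algorithm -- real poisoning attacks optimize the perturbation
# against a feature extractor; this only illustrates the pixel budget.

EPS = 2  # maximum change per channel, out of 0-255

def perturb(image, pattern):
    """Apply a +/-EPS perturbation; image and pattern are nested lists."""
    out = []
    for img_row, pat_row in zip(image, pattern):
        row = []
        for (r, g, b), sign in zip(img_row, pat_row):
            # clamp so every value stays a valid 8-bit channel
            row.append(tuple(min(255, max(0, c + sign * EPS)) for c in (r, g, b)))
        out.append(row)
    return out

image = [[(120, 64, 200), (121, 64, 201)],
         [(119, 63, 199), (122, 65, 202)]]
pattern = [[+1, -1], [-1, +1]]  # checkerboard: invisible, yet detectable
poisoned = perturb(image, pattern)
```

A two-level shift per channel is far below what a human eye notices, which is why the article can call the changes invisible.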

  • TheWiseAlaundo@lemmy.whynotdrs.org · 27 points · 1 year ago

    Lol… I just read the paper, and Dr. Zhao effectively just wrote a research paper on why it’s legally OK to use images to train AI. Hear me out…

    He changes the ‘style’ of input images to corrupt the ability of image generators to mimic them, and even shows that the supermajority of artists can’t even tell when this happens with his program, Glaze… Style is explicitly not copyrightable in US case law, so he just provided evidence that the way OpenAI and others use data to generate images is transformative, which would legally mean it falls under fair use.

    No idea if this would actually get argued in court, but it certainly doesn’t support the idea that these image generators are stealing actual artwork.

  • RVMWSN@lemmy.ml · 19 points · 1 year ago

    I generally don’t believe in intellectual property; I think it creates artificial scarcity and limits creativity. Of course, the real tragedies in this field have to do with medicine and other serious business.

    But still, artists claiming ownership of their style of painting is fundamentally no different. Why can’t I paint in your style? Do you really own it? Are you suggesting you didn’t base your idea mostly on the work of others, and that no one in turn can take your idea, be inspired by it, and do with it as they please? Do my means have to be a pencil? Why can’t my means be a computer, why not an algorithm?

    Limitations, limitations, limitations. We need to reform our system and make the public domain the standard for ideas (in all their forms). Society doesn’t treat artists properly; I am well aware of that. Creative minds are often troubled because they fall outside norms. There are many tragic examples. Money-wise, too, many artists don’t get enough credit for their contributions to society, but making every idea a restricted area is not the solution.

    People should support the artists they like on a voluntary basis. Pirate the album but go to concerts; pirate the artwork but donate to the artist. And if that doesn’t make you enough money, that’s very unfortunate. But make no mistake: that’s how almost all artists live. Only the top 0.something% actually make enough money by selling their work, and that’s usually the percentile that’s best at marketing their art; in other words, it’s usually the industry. The others already depend on donations or other sources of income.

    We can surely keep art alive while still removing all these artificial limitations. Copying is not, was not, and never will be in any way similar to stealing. Let freedom rule. Join your local pirate party.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 13 points · 1 year ago

    I remember in the early 2010s reading an article like this one on openai.com talking about the dangers of using AI for image search engines to moderate against unwanted content. At the time the concern was CSAM salted to prevent its detection (along with other content salted with CSAM to generate false positives).

    My guess is that since we’re still training AI with pools of data-entry people who tag pictures with what they appear to show, the AI ends up reading more into images than its human trainers do (the proverbial man inside the Mechanical Turk).

    This is going to be an interesting technology war.

  • lloram239@feddit.de · 8 points · 1 year ago

    “New snake oil to give artists a false sense of security” - The last of these tools I tried had absolutely zero effect on the AI, which is not exactly surprising given that there are hundreds of different ways to make use of image data, as well as lots of completely different models. You’ll never cover all of that with some pixel twisting.

  • BellaDonna@mujico.org · 2 points · 1 year ago

    What a dumb solution to a problem that doesn’t need a solution. The problem isn’t AI; it’s the lack of understanding of the tech that has people thinking AI is theft.

    • the_q@lemmy.world · 2 points · 1 year ago

      Is it not theft? These “AI” are trained on other people’s work, often without their knowledge or permission.

      • BellaDonna@mujico.org · 2 points · 1 year ago

        This is why I think people don’t know what they are talking about.

        You can look at a picture from an artist without it being considered theft, so are your memories and impressions theft? That’s what training data does: it teaches the AI what something looks like, from many samples. It’s literally what your brain does; the way you see multiple dogs and learn what a dog looks like is the same way an AI trains pattern recognition.

        It’s completely reasonable and desirable to have AI consume all available images, regardless of copyright, just as your eyes and brain do. Training on data is no more theft than going to a museum and looking at art is.

        This take that this is bad is completely unhinged and indicates people don’t understand AI.

        • the_q@lemmy.world · 2 points · 1 year ago

          I’d be careful with claiming who does and does not understand things.

          First of all, a person can’t go to a museum, see a piece of art, then go home and reproduce that art or style. Given enough time, sure, they might be able to learn to replicate the style. Those who are particularly good at reproduction might even become forgers, which is a crime.

          Second, these LLMs aren’t AI. They can’t think the way a living being can; they only regurgitate information. They’re glorified search engines, in a way.

          Lastly, I can assume that you aren’t a creative person. You probably type in some prompt to an image generator and think “I made this”. It’s easier for someone like you to overlook issues because they don’t affect you, because you lack depth, which I know is hard to accept. Maybe one day you’ll gain some insight into your own lack of understanding… But I doubt it.

          • BellaDonna@mujico.org · 1 point · 1 year ago

            I used to be a musician, I also used to paint. I think my thought processes are no more complex than most computers, and I genuinely don’t believe human creativity is special even a little bit, like consciousness, it’s a subjective illusion.

            I do not believe in things like copyright, or intellectual property, or even ownership of these things, I think these things should be collectively owned by society.

            I don’t disagree with you from lack of experience, I disagree from fundamentally different ideological underpinnings.

            I believe there is nothing special about human perception and experience, and I can see the ways that technology maps near perfectly to the way we think. AI shouldn’t be limited, it should replace us.

            • the_q@lemmy.world · 1 point · 1 year ago

              Okie dokie, doc. If you think the human brain isn’t “special” then I don’t know what to tell you.

              Also, you can’t know how we think when we as a species don’t know, but you being the smartest person in the room is clearly very important to you so I’ll leave you to it!

  • zwaetschgeraeuber@lemmy.world · 1 point · 1 year ago

    this is so dumb and clearly won’t work at all. that’s not remotely how AI trains on images.

    you would be able to get around this tool by just doing the NFT thing: screenshot the image, and boom, the code in the picture is erased.
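The screenshot-and-re-encode claim can be sketched concretely. A plain 2x2 average downsample (a crude stand-in for screenshotting and recompressing) cancels a high-frequency checkerboard perturbation exactly, because the signs cancel within each block. This assumes the perturbation lives purely in high frequencies; whether real poisoning like Nightshade is actually this fragile is an empirical question.

```python
# Sketch of the "just screenshot it" argument: averaging 2x2 blocks
# (a crude stand-in for screenshot + recompress) wipes out a +/-2
# checkerboard perturbation, since the signs cancel inside each block.
# Real poisoning perturbations are not necessarily this fragile.

def downsample_2x(gray):
    """Average each 2x2 block of a grayscale image (list of lists)."""
    h, w = len(gray), len(gray[0])
    return [[(gray[y][x] + gray[y][x+1] + gray[y+1][x] + gray[y+1][x+1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

clean = [[100] * 4 for _ in range(4)]
# checkerboard perturbation: +2 where (x+y) is even, -2 where odd
poisoned = [[100 + (2 if (x + y) % 2 == 0 else -2) for x in range(4)]
            for y in range(4)]

print(downsample_2x(poisoned))  # every block averages back to exactly 100.0
```

After downsampling, the poisoned image is indistinguishable from the clean one; anything that survives such resampling has to live in lower spatial frequencies, where it is harder to keep invisible.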

    • FaceDeer@kbin.social · 1 point · 1 year ago

      There are trivial workarounds for Glaze, which this is based on, so I wouldn’t be surprised.

    • hh93@lemm.ee · 0 points · 1 year ago

      The problem is identifying it. If it’s necessary to preprocess every image used for training instead of just feeding it into the model, that already makes training much more resource-intensive.

    • Meowoem@sh.itjust.works · 0 points · 1 year ago

      It doesn’t even need a work around, it’s not going to affect anything when training a model.

      It might make style transfer harder when using them as reference images on some models, but even that’s fairly doubtful; it’s just noise on an image, and everything is already full of all sorts of different types of noise.

  • ayaya@lemdro.id · 0 points · 1 year ago

    Obviously this is using some bug and/or weakness in the existing training process, so couldn’t they just patch the mechanism being exploited?

    Or at the very least you could take a bunch of images, purposely poison them, and now you have a set of poisoned images and their non-poisoned counterparts, allowing you to train another model to undo it.

    Sure, you’ve set up a speed bump, but this is hardly a solution.
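The paired-dataset idea above can be sketched directly: if the poisoning tool is public, anyone can generate (clean, poisoned) pairs and fit an inverse. In this toy version the "model" is just the average per-pixel residual over the pairs; a real cleaner would be a learned denoiser, but the data setup is the same.

```python
# Sketch of the de-poisoning idea: poison images yourself to get
# (clean, poisoned) training pairs, then learn the inverse. Here the
# "model" is simply the average residual (poisoned - clean); a real
# attack would train a denoising network on such pairs instead.

def make_pairs(cleans, poison):
    """Build (clean, poisoned) training pairs from a known poisoning function."""
    return [(img, poison(img)) for img in cleans]

def fit_residual(pairs):
    """Average per-pixel residual (poisoned - clean) across all pairs."""
    n = len(pairs)
    length = len(pairs[0][0])
    return [sum(p[i] - c[i] for c, p in pairs) / n for i in range(length)]

def clean_up(poisoned, residual):
    """Subtract the estimated residual to approximate the clean image."""
    return [p - r for p, r in zip(poisoned, residual)]

# toy 1-D "images" and a fixed additive perturbation standing in for the poison
poison = lambda img: [v + d for v, d in zip(img, [2, -2, 2, -2])]
pairs = make_pairs([[10, 20, 30, 40], [50, 60, 70, 80]], poison)
residual = fit_residual(pairs)
print(clean_up(poison([5, 5, 5, 5]), residual))  # recovers [5.0, 5.0, 5.0, 5.0]
```

This only works this cleanly because the toy poison is a fixed additive pattern; an image-dependent perturbation would need a learned model rather than a single averaged residual, which is exactly what the comment proposes.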

    • MxM111@kbin.social · 0 points · 1 year ago

      Obviously, with so many different AIs, this cannot come down to a single factor (a bug).

      If you have no problem looking at the image, then the AI wouldn’t either. After all, both you and the AI are neural networks.

      • driving_crooner@lemmy.eco.br · 1 point · 1 year ago

        An AI doesn’t see images the way we do. An AI sees a matrix of RGB values and the relationships they have with each other, and creates a statistical model of the color value of each pixel for a given prompt.

        • lloram239@feddit.de · 0 points · 1 year ago

          That’s not quite how it works. The pixels are just the first layer. Those get broken down into edges. The edges get broken down into shapes. The shapes get broken down into features like eyes, noses, etc. Those get broken down into faces. And so on. It’s hierarchical feature detection, which also happens to be what the human brain does.

          The actual “drawing” the AI does is quite a bit different, however. The diffusion process starts with random noise and gradually denoises it until an image emerges. While humans can approach painting that way, they rarely do.
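The "pixels get broken down into edges" first step in the hierarchy described above can be shown with one convolution. A Sobel-style vertical-edge kernel responds strongly at a dark/bright boundary and not at all in flat regions; deeper layers then compose such responses into shapes and parts, which this sketch does not attempt.

```python
# Sketch of the "pixels -> edges" first layer: convolving with a
# vertical-edge kernel responds only at the boundary between dark and
# bright regions. Deeper layers compose such responses into shapes and
# object parts; this shows only the very first step of the hierarchy.

KERNEL = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]  # Sobel x: detects vertical edges

def conv3x3(image, kernel):
    """Valid cross-correlation of a grayscale image with a 3x3 kernel."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            row.append(sum(kernel[j][i] * image[y + j][x + i]
                           for j in range(3) for i in range(3)))
        out.append(row)
    return out

# 4x6 image: dark left half, bright right half -> one vertical edge
image = [[0, 0, 0, 255, 255, 255]] * 4
print(conv3x3(image, KERNEL))  # nonzero responses only around the edge
```

The flat regions produce zero response while the columns straddling the boundary light up, which is the raw material later layers assemble into eyes, noses, and faces.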

  • guyrocket@kbin.social · 0 points · 1 year ago

    Invisible changes to pixels sound like pure BS to me. I’m sure others know more about it than I do, but I thought pixels were very simple things.

    • theneverfox@pawb.social · 1 point · 1 year ago

      Pixels are very simple things: literally three to five 3-digit numbers.

      But pixels mean little to a generative AI - it’s all about the relationships between pixels. All AI are high-dimensional shapes right now… If you break up the shape strategically, it’ll poison the image.

      Will this poison pill work? Probably, for at least a while…

  • gsa@kbin.social · 0 points · 1 year ago

    Sorry “Artists” but I’m still working around your silly tools and generating beautiful AI Art 🥹