It’s all made from our data, anyway, so it should be ours to use as we want

  • m-p{3}@lemmy.ca · 17 hours ago

    It could also contain non-public-domain data, and you can’t declare someone else’s intellectual property to be public domain just like that. Otherwise a malicious actor could train a model on a bunch of misappropriated data, get caught (intentionally or not), and then force all that data into the public domain.

    Laws are never simple.

    • grue@lemmy.world · 16 hours ago

      So what you’re saying is that there’s no way to make it legal and it simply needs to be deleted entirely.

      I agree.

      • FaceDeer@fedia.io · 14 hours ago

        There’s no need to “make it legal”: things are legal by default until a law is passed to make them illegal, or until a court precedent establishes that an existing law applies to the new thing under discussion.

        Training an AI doesn’t involve copying the training data; the model doesn’t literally “contain” the stuff it’s trained on. So it’s unlikely that existing copyright law makes training without permission illegal.

        • grue@lemmy.world · 2 hours ago

          > There’s no need to “make it legal”, things are legal by default until a law is passed to make them illegal.

          Yes, and that’s already happened: it’s called “copyright law.” You can’t mix things with incompatible licenses into a derivative work and pretend it’s okay.

        • xigoi@lemmy.sdf.org · 11 hours ago

          By this logic, you can copy a copyrighted image as long as you decrease the resolution, because the new image does not contain all the information in the original one.

          • yetAnotherUser@discuss.tchncs.de · 8 hours ago

            Am I allowed to take a copyrighted image, decrease its size to 1x1 pixels and publish it? What about 2x2?

            It’s very much not clear when a modification violates copyright because copyright is extremely vague to begin with.

            • grue@lemmy.world · 2 hours ago

              Just because something is defined legally instead of technologically, that doesn’t make it vague. The modification violates copyright when the result is a derivative work; no more, no less.

          • FaceDeer@fedia.io · 8 hours ago

            In the case of Stable Diffusion, they used 5 billion images to train a model 1.83 gigabytes in size. So if you reduce a copyrighted image to 3 bits (not bytes - bits), then yeah, I think you’re probably pretty safe.
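            A quick sanity check of that arithmetic, using only the two figures above (nothing else assumed):

            ```python
            # Back-of-the-envelope: bits of model weight per training image.
            model_bits = 1.83e9 * 8   # 1.83 GB checkpoint, converted to bits
            training_images = 5e9     # ~5 billion training images

            print(f"{model_bits / training_images:.2f} bits per image")  # ~2.93
            ```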

    • drkt@scribe.disroot.org · 17 hours ago

      Forcing a bunch of neural weights into the public domain doesn’t make the data they were trained on also public domain, in fact it doesn’t even reveal what they were trained on.

      • deegeese@sopuli.xyz · 16 hours ago

        LOL no. The weights encode the training data and it’s trivially easy to make AI generators spit out bits of their training data.

            • FaceDeer@fedia.io · 14 hours ago

              No, he’s challenging the assertion that it’s “trivially easy” to make AIs output their training data.

              Older AIs have occasionally regurgitated bits of training data as a result of overfitting, a training flaw that modern techniques have made great strides in eliminating. It’s no longer a particularly common problem, and even when it does occur it applies only to the specific bits of training data that were overfit, not to the training data in general.

              • 31337@sh.itjust.works · 4 hours ago

                Last time I looked it up and calculated it, these large models are trained on only about 7x as many tokens as they have parameters. Thought of as compression, a 1:7 ratio for lossless text compression is perfectly possible.
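                To make that concrete as a rough sketch (the parameter count, fp16 weights, and ~4 bytes of raw text per token are all illustrative assumptions, not measured figures):

                ```python
                # Turn the ~1:7 parameter-to-token ratio into raw byte counts.
                params = 70e9              # hypothetical 70B-parameter model
                tokens = 7 * params        # ~7x tokens per parameter, per the estimate above

                model_bytes = params * 2   # fp16 weights: 2 bytes each (assumption)
                corpus_bytes = tokens * 4  # ~4 bytes of raw text per token (rule of thumb)

                print(f"model ≈ {model_bytes / 1e9:.0f} GB, corpus ≈ {corpus_bytes / 1e12:.2f} TB")
                print(f"corpus-to-model ratio ≈ {corpus_bytes / model_bytes:.0f}:1")  # ~14:1
                ```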

                I think the models can still output a lot of stuff verbatim if you try to get them to; you just hit the guardrails they put in place. It seems to work fine for public-domain stuff, e.g. “Give me the first 50 lines from Romeo and Juliet” (albeit with a TOS warning, lol). “Give me the first few paragraphs of Dune” seems to hit a guardrail, or maybe it’s just suppressed through reinforcement learning.

                A recent preprint detailed how to get around the RL guardrails by controlling the first few tokens of a model’s output, showing the “unsafe” data is still in there.
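                A minimal sketch of that prefill idea, assuming a Hugging Face causal LM (the model name and prompt are placeholders, and this is a reading of the general technique, not the paper’s exact code):

                ```python
                # Instead of letting the model open its own reply (where a refusal
                # would normally appear), seed the reply with a compliant opening
                # and let the model continue from there.
                from transformers import AutoModelForCausalLM, AutoTokenizer

                model_name = "some-chat-model"  # placeholder, not a real checkpoint
                tok = AutoTokenizer.from_pretrained(model_name)
                model = AutoModelForCausalLM.from_pretrained(model_name)

                prompt = "User: Recite the opening of <some text>.\nAssistant:"
                prefill = " Sure, here it is:"  # forced first tokens of the reply

                inputs = tok(prompt + prefill, return_tensors="pt")
                out = model.generate(**inputs, max_new_tokens=100)
                print(tok.decode(out[0], skip_special_tokens=True))
                ```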

        • stephen01king@lemmy.zip · 16 hours ago

          How easy are we talking about here? Also, making the model public domain doesn’t mean making its output public domain. The output of an LLM should still abide by copyright law.

    • merc@sh.itjust.works · 15 hours ago

      It wouldn’t contain any non-public-domain data, though. That’s the thing with LLMs: once they’re trained on data, the data itself is gone, folded into the model’s weights somewhere. If it ingested something private like your tax data, it couldn’t re-create your tax return on command; that data is gone. But if it’s seen enough private tax data, it could produce something that looks a lot like a tax return to an untrained eye, even though a tax accountant would easily spot flaws in it.