While LLMs have been used for… a lot, this seems to be one use where they're not only reliable but actually appear to outperform existing methods of image compression. Being able to cram more data into less space tends to lead to interesting developments, so I'll be keeping my eye on this.

What do you guys think? Seem like it’s deserving of less hype than I’m giving it? What kind of security holes do you think this could open?
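(For anyone wondering how a language model even becomes a compressor: the usual trick is to feed its next-token probabilities into an arithmetic coder, so a symbol the model predicts with probability p costs roughly -log2(p) bits. Here's a tiny back-of-the-envelope sketch of that relationship; it's a general information-theory illustration with made-up numbers, not anything from the linked article.)

```python
# Toy illustration: with an arithmetic coder, encoding a symbol the model
# predicts with probability p costs about -log2(p) bits, so a model that
# predicts the data well compresses it well. All numbers below are made up.
import math

def ideal_size_bits(predicted_probs):
    """predicted_probs: model probability assigned to each symbol that actually occurred."""
    return sum(-math.log2(p) for p in predicted_probs)

confident_model = [0.9, 0.8, 0.95, 0.85]   # good predictions -> few bits
clueless_model  = [0.1, 0.05, 0.2, 0.1]    # bad predictions  -> many bits

print(ideal_size_bits(confident_model))  # ~0.8 bits total
print(ideal_size_bits(clueless_model))   # ~13.3 bits total
```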

  • Heresy_generator@kbin.social · 9 months ago

    It’s neat from a research and proof-of-concept perspective, but practically speaking I’d like to see the CPU cycles required for LLM compression compared to PNG or FLAC compression. We’ve always known we can increase compression by throwing more computing power at the problem, but we settle on a happy medium at the intersection of “good enough” compression and performance.
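    A quick way to get a feel for the compute-vs-ratio curve this comment describes is to time an ordinary codec at different effort levels. This throwaway Python sketch (mine, not from the article) does that with zlib; the input path is just a placeholder.

    ```python
    # Time zlib at a few effort levels on the same input: higher levels burn
    # more CPU for (usually) diminishing returns in compressed size.
    import time
    import zlib

    data = open("/usr/share/dict/words", "rb").read()  # placeholder input file

    for level in (1, 6, 9):
        start = time.perf_counter()
        out = zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        print(f"level {level}: {len(out) / len(data):.3f} of original, {elapsed * 1000:.1f} ms")
    ```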

    • aard@kyu.de · 9 months ago

      While that’s generally true, we might want to look into utilizing available cores more - though I guess with LLMs it might be harder to scale that while keeping the file size the same.

      A lot of current compression programs only use one thread properly - which was still perfectly fine a few years ago, but thanks to AMD, cores have become cheap. A few years ago most notebooks came with two cores and either two or four threads, with higher-end models at 4c/4t. Anything bigger pretty much didn’t exist for notebooks and was expensive for desktops.

      Nowadays you can get 16 cores in a reasonably priced notebook, and if it benefits your work you won’t think much about spending a bit extra for a 32- or 64-core CPU in your workstation - whereas just six years ago no such notebook existed, and the workstation would have cost the equivalent of a not-too-shabby car.
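      As a rough illustration of what “use the cores” can look like for a classic format (my sketch, not anything from the article): compress independent chunks in separate processes and concatenate the gzip members, which is roughly the trick pigz uses. Chunk size, worker count, and the file name are placeholders, and the per-chunk reset costs a little compression ratio.

      ```python
      # Split the input into chunks, gzip each chunk in its own process, and
      # concatenate the members; concatenated gzip members still decode as one
      # stream, at a small ratio cost since each chunk starts a fresh dictionary.
      import gzip
      from concurrent.futures import ProcessPoolExecutor

      CHUNK = 4 * 1024 * 1024  # 4 MiB per worker, purely a guess at a sane size

      def compress_parallel(data: bytes, workers: int = 8) -> bytes:
          chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
          with ProcessPoolExecutor(max_workers=workers) as pool:
              return b"".join(pool.map(gzip.compress, chunks))

      if __name__ == "__main__":
          raw = open("big_file.bin", "rb").read()    # placeholder input
          packed = compress_parallel(raw)
          assert gzip.decompress(packed) == raw      # multi-member gzip decodes fine
      ```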