• Gianni R@lemmy.ml · 26 points · 1 year ago

    Data compression. Something about “making less data out of… the same data” is really mind-blowing, and the math is sick.

    • Fallenwout@lemmy.world · 8 points · edited · 1 year ago

      It is not that complicated. To take a simple example with strings: AAAABBBABABAB takes up 13 characters, but write (compress) it as 4A3B3AB and it takes up only 7 characters, shrinking it by almost half.

      Now double it to AAAABBBABABABAAAABBBABABAB with 26 characters and write it as 2(4A3B3AB): at 10 characters it takes only about 38% of the space.

      Compression algorithms just look for those repetitive patterns.

      Take those letters and imagine them as colored pixels of a picture, and you have picture compression.
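
      Not part of the original comment, but here is a minimal run-length-encoding sketch in Python along the lines of the example above (the function name is just for illustration):

      ```python
      from itertools import groupby

      def rle_encode(text: str) -> str:
          # Collapse each run of identical characters into "<count><char>",
          # e.g. "AAAA" -> "4A". Runs of length 1 keep just the character.
          parts = []
          for char, run in groupby(text):
              count = len(list(run))
              parts.append(f"{count}{char}" if count > 1 else char)
          return "".join(parts)

      original = "AAAABBBABABAB"
      encoded = rle_encode(original)
      print(original, len(original))  # AAAABBBABABAB 13
      print(encoded, len(encoded))    # 4A3BABABAB 10
      ```

      Note that plain run-length encoding only catches runs of a single repeated character, so it lands at 10 characters rather than the hand-compressed 4A3B3AB, which also exploits the repeating AB pair; dictionary-style compressors (LZ77 and friends) pick up that kind of repetition as well.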

      • quinkin@lemmy.world · 7 points · 1 year ago

        Once you get into audio, images, and video, it revolves a lot around converting temporal and/or positional data into the frequency domain rather than simple token replacement.
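
        Not part of the original comment, but a tiny sketch of that idea, assuming a naive DCT-II written out with numpy: for a smooth block of samples, almost all of the energy lands in the first few frequency coefficients, and the near-zero rest can be stored coarsely or dropped.

        ```python
        import numpy as np

        def dct2(block: np.ndarray) -> np.ndarray:
            # Naive DCT-II: project the samples onto cosine basis functions.
            n = len(block)
            k = np.arange(n).reshape(-1, 1)   # frequency index
            i = np.arange(n).reshape(1, -1)   # sample index
            basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
            return basis @ block

        # A smooth 8-sample block, e.g. one row of a gentle gradient in an image.
        samples = np.linspace(10.0, 17.0, 8)
        print(np.round(dct2(samples), 2))
        # The first coefficient or two dominate; the rest are tiny, which is
        # what JPEG/MP3-style codecs exploit when they quantize and discard.
        ```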

      • MrFunnyMoustache@lemmy.ml · 2 points · 1 year ago

        Wait, doesn’t your first example go from 13 characters of a two-symbol alphabet to 7 characters of a twelve-symbol alphabet (the digits 0–9 plus the two values A and B)?

        That would make the “compressed” result something like 110111010111011101110011, which is longer than the original message when both are written out in binary…
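
        Not from the thread, but the bit counting behind that objection, sketched out (assuming 1 bit per symbol for the two-letter original and a whole number of bits per symbol for the bigger alphabet):

        ```python
        import math

        original = "AAAABBBABABAB"  # alphabet {A, B} -> 1 bit per symbol
        compressed = "4A3B3AB"      # alphabet {0-9, A, B} -> 12 symbols

        bits_original = len(original) * math.ceil(math.log2(2))       # 13 * 1 = 13 bits
        bits_compressed = len(compressed) * math.ceil(math.log2(12))  # 7 * 4 = 28 bits

        print(bits_original, bits_compressed)  # 13 28: the "compressed" form costs more bits
        ```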

          • MrFunnyMoustache@lemmy.ml · 2 points · 1 year ago

            Fair enough. The general idea is correct; I just found that example rather jarring… It is generally more difficult to compress an already small amount of data anyway.
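
            Not from the thread, but a quick way to see that, assuming Python's zlib is a fair stand-in for a general-purpose compressor: on an input this short the format overhead outweighs any savings, while the same pattern repeated many times shrinks enormously.

            ```python
            import zlib

            short = b"AAAABBBABABAB"
            repeated = short * 1000  # the same pattern, repeated many times

            print(len(short), len(zlib.compress(short)))        # the short "compressed" output is likely larger
            print(len(repeated), len(zlib.compress(repeated)))  # the repeated one shrinks dramatically
            ```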