• lemmyvore@feddit.nl

    And then you have to put a filesystem on it, which has its own metadata – file attributes, folder/file names, and so on. If you use NTFS, it reserves 12.5% of the volume for the MFT by default, so now you’re down to 11.8 GiB. 😛
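
    Working that figure backwards (just a sketch – the starting capacity isn’t quoted here, so it’s inferred from the 11.8 GiB result):

    ```python
    # If 12.5% goes to metadata and 11.8 GiB is what's left,
    # the pre-NTFS figure must have been about 11.8 / 0.875 ≈ 13.5 GiB.
    remaining = 13.5 * (1 - 0.125)
    print(f"{remaining:.1f} GiB")  # -> 11.8 GiB
    ```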

    • FaceDeer@kbin.social

      As an amusing side note, I once came across a joke compression program that could compress any data down to zero bytes. It did this by creating directories filled with zero-sized files whose filenames contained the actual data of the file in question.

      If you right-clicked on the folder and asked the OS how big it was, it’d report 0 bytes. But of course all that data still had to be stored somewhere, in the metadata of the filesystem.
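
      A minimal sketch of how such a “compressor” could work (a guess at the joke program’s approach, not its actual code; the chunk size and hex encoding are my assumptions, chosen to keep the filenames legal):

      ```python
      import os
      import pathlib

      def compress(data: bytes, dest: str, chunk: int = 100) -> None:
          """'Compress' data into zero-byte files whose names carry the payload."""
          out = pathlib.Path(dest)
          out.mkdir(parents=True, exist_ok=True)
          for i in range(0, len(data), chunk):
              # Hex-encode so arbitrary bytes become legal filename characters;
              # the zero-padded index prefix preserves chunk order.
              name = f"{i // chunk:08d}_{data[i:i + chunk].hex()}"
              (out / name).touch()  # every file is 0 bytes

      def decompress(src: str) -> bytes:
          """Recover the payload by decoding the sorted filenames."""
          return b"".join(
              bytes.fromhex(n.split("_", 1)[1]) for n in sorted(os.listdir(src))
          )

      payload = b"all of this lives in directory metadata, not in any file"
      compress(payload, "zero_byte_archive")
      assert decompress("zero_byte_archive") == payload
      ```

      Every file reports 0 bytes, so a naive “sum of file sizes” is zero – but the directory’s own entry list grows with the data.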

      • sugar_in_your_tea@sh.itjust.works

        That’s part of why I use du on Linux instead of ls -l to figure out file/directory usage (df covers the partition level). du reports the blocks actually allocated on disk, including the directory entries themselves, whereas ls -l only shows each file’s apparent size and ignores metadata like the list of filenames in the directory.
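
        A rough illustration of that difference (a Unix-only sketch; st_blocks is counted in 512-byte units per POSIX, and the directory name is just the example from the comment above):

        ```python
        import os

        def sizes(path: str) -> tuple[int, int]:
            """Apparent size (what ls -l shows) vs. allocated size (what du counts)."""
            st = os.stat(path)
            return st.st_size, st.st_blocks * 512  # st_blocks is in 512-byte units

        # The directory's entry list itself occupies blocks on disk,
        # even when every file inside it is 0 bytes.
        apparent, allocated = sizes("zero_byte_archive")
        print(f"apparent: {apparent} B, allocated: {allocated} B")
        ```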