I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
It’s a scam by HDD makers to sell less storage for more money.
Did you read the blog post? It’s not a scam. HDD vendors might profit from “bigger numbers” but using the units they do is objectively the only sensible and correct option. It’s like saying that the weather report is in Fahrenheit because in Celsius the numbers would be lower and feel somehow colder 🤣
If it were about bigger numbers, why don’t HDD manufacturers just use terabit instead of terabyte? The “bigger number” argument is not a good one.
Because it’s much easier to mistake a number for a somewhat close one than for one that is orders of magnitude different…
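Rough numbers to illustrate (a quick Python sketch; the “2 TB drive” is just a made-up example):

    drive_bytes = 2 * 10**12          # a drive sold as "2 TB" (decimal terabytes)
    print(drive_bytes * 8 / 10**12)   # 16.0  terabits  -- obviously a different kind of number
    print(drive_bytes / 2**40)        # ~1.82 tebibytes -- close enough to "2" to be mistaken for it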
I’ll try to read the article later, but the reality is that HDD manufacturers could help customers disambiguate; doing so would hurt their bottom line, so they don’t.
Videogame companies literally did use “megabit” when the truth was “128 KiB”, because it sounded better. Actual computer companies were still listing powers-of-two numbers, because their buyers had more to invest and cared about accuracy.
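The arithmetic behind that, assuming the “megabit” on those cartridge boxes meant 2^20 bits (a quick Python sketch):

    bits = 2**20               # "1 megabit" of cartridge ROM, read as a binary megabit
    byte_count = bits // 8     # 131,072 bytes
    print(byte_count // 1024)  # 128 -> the same chip described as 128 KiB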
You say “sensible”, but it’s lying for profit.
that’s what it was initially, reporting decimal ‘megabytes’ for hdd capacity. lawsuits and settlements followed.
the dust settled and what we have now is disclaimers on storage products (from the legal settlements) and they continue to use ‘decimal’ measurements…
and we also have a different set of prefixes for ‘binary’ units of measurement (a standards body trying to address the confusion): kibi, mebi, gibi, tebi, pebi, exbi; they are not widely used yet… the ‘old’ prefixes now officially mean decimal but are still commonly used for binary.
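to make the gap concrete, a quick sketch (plain python; the ‘1 TB drive’ is just an example) of why a drive sold as 1 TB shows up as roughly 931 ‘GB’ in an OS that actually counts in GiB:

    TB, GiB, TiB = 10**12, 2**30, 2**40   # decimal terabyte, binary gibibyte / tebibyte
    drive_bytes = 1 * TB                  # a drive sold as "1 TB"
    print(drive_bytes / GiB)              # ~931.32 -- what many OSes label "GB"
    print(drive_bytes / TiB)              # ~0.91   -- the same capacity in TiB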