The year was 2006, and the 80 GB HDD in my Dell OptiPlex 790 was full of podcasts, stolen music, and episodes of Doctor Who…
ITT people trying to be edgy but I’m going to say invading Russia in the winter.
Stealing other people’s cultural heritage is their cultural heritage
If anything, the regulations that mandate Rogers/Bell wholesale bandwidth on their networks help startup ISPs. Gimme more of that.
Especially given the lack of ways to really differentiate your product, it was bound to become increasingly commodified, ending up with a few producers who manage to operate efficiently while the rest go under.
Honestly I’d kinda be glad if, when I go to the store, I’m not met with 65 completely identical options and have to explain to the pot sommelier that I just would like some pot please, and that the 16 creative adjectives that have been affixed to the front of the word “preroll” are largely inconsequential to me.
Honestly, the way I always look at it is: take the lifetime cost and divide it by the yearly cost, and if I think the product/license deal will exist for that long (and that I’ll use it for that long), it’s worth it; otherwise it’s not. Like, I have lifetime Plex, and frankly I don’t expect them to exist forever, but I like the premium features and I’ve had the lifetime pass long enough that I’ve saved money.
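As a toy sketch of that breakeven rule (the prices here are made up for illustration, not anyone’s actual pricing):

```python
# Hypothetical numbers purely for illustration.
lifetime_cost = 120.00  # one-time "lifetime" price
yearly_cost = 30.00     # annual subscription price

# Years of subscribing it takes before lifetime comes out ahead.
breakeven_years = lifetime_cost / yearly_cost
print(f"Lifetime pays off after {breakeven_years:g} years")  # -> 4 years
```

If I expect both the service and my use of it to outlast that number, lifetime wins.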
Not mine, but my partner’s machine (which I built and largely maintain for her) is a custom Debian install on a ZFS root using ZFSBootMenu and running a custom minimal i3 desktop environment.
Honestly, if you’re doing regular backups and your ZFS system isn’t being used for business, you’re probably fine. Yes, you’re at increased risk of a second disk failure during a resilver, but even if that happens you’re just forced to restore from your backups; it’s not the complete destruction of your data.
You can also mitigate the risk of disk failure during a resilver somewhat by ensuring that your disks are of different ages. Much of the increased risk comes from the fact that disks of the same brand, age, and/or batch/factory tend to die of old age around the same time, so when one disk fails, the others might be soon to follow, especially during the relatively intense process of resilvering.
Otherwise, with the number of disks you have, you’re likely better off going with mirrors rather than RAIDZ at all. You’ll see increased performance, especially on writes, and you’re not losing any space with a 3-way mirror versus a 3-disk RAIDZ2 array anyway (both give you one disk’s worth of usable capacity).
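A toy illustration of that capacity math, assuming identical disks (counting in whole-disk units rather than TB):

```python
def usable_disks(n_disks: int, layout: str, parity: int = 0) -> int:
    """Usable capacity in whole-disk units, assuming identical disks."""
    if layout == "mirror":
        return 1                 # every disk holds a full copy of the data
    if layout == "raidz":
        return n_disks - parity  # parity disks don't add usable capacity
    raise ValueError(f"unknown layout: {layout}")

print(usable_disks(3, "mirror"))           # 3-way mirror  -> 1
print(usable_disks(3, "raidz", parity=2))  # 3-disk RAIDZ2 -> 1
```

Same usable space either way, but the mirror gets you better read/write performance and simpler resilvers.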
The ZFS pool design guidelines are very conservative, which is a good thing because data loss can be catastrophic, but those guidelines were developed for pools much larger than yours and with fundamentally irreplaceable data in mind, like user-generated data for a business, as opposed to a personal media server.
Also, in general, backups are more important than redundancy, so it’s good you’re doing that already. RAID is about maintaining uptime; data security is all about backups. Personally, I’d focus first on a solid 3-2-1 backup plan (three copies of your data, on two different types of media, with one copy off-site) rather than worrying too much about trying to mitigate your current array suffering catastrophic failure.
Another option is to avoid the installer entirely and install from a live environment using chroot and whatever your distro’s installation bootstrap tool is (debootstrap for Debian, pacstrap for Arch, and so on). I started using this approach to install Debian on ZFS root, and it’s become my go-to method for installing most distros, since it gives you the most control over the resulting OS. It often takes some distro-specific knowledge, but that’s also a valuable learning opportunity.
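For the Debian case, the rough shape of it looks something like this; a minimal sketch, assuming you’re root in a live environment and the target filesystem is already mounted at /mnt (the suite, mirror URL, and shell here are just the usual defaults):

```python
import subprocess

def run(cmd):
    """Run a command and fail loudly if it does."""
    subprocess.run(cmd, check=True)

# Bootstrap a base Debian system into the mounted target.
run(["debootstrap", "stable", "/mnt", "https://deb.debian.org/debian"])

# Bind-mount the pseudo-filesystems the chroot will need.
for fs in ("proc", "sys", "dev"):
    run(["mount", "--rbind", f"/{fs}", f"/mnt/{fs}"])

# Enter the new system to finish up: kernel, bootloader, users, etc.
run(["chroot", "/mnt", "/bin/bash"])
```

From inside the chroot you then install a kernel and bootloader and do the rest of your base configuration.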
I tend to agree - I have no love for Microsoft, but I’m also willing to admit when they’ve got some good tech.
Especially with ChatGPT, you don’t really need to be that good at it, just good enough to read the script over and know how to execute it.
Would they make it worse than watching ads?
Hey… it sorts properly alphabetically
Yeah, basically all a “distribution” is is a selection of software and configurations, which they distribute (hence the name) as a bundle. It can definitely be daunting to learn all of this at once as a newcomer, but on the other side of that coin, I’ve seen many people begin their Linux journey on a “beginner friendly” distribution, come to see that distro’s configs as the defaults, and need to unlearn/relearn many habits as they progress. I also think that people who are immersed in the Linux world often don’t have a great perspective on what is or isn’t confusing for a new user, and end up obfuscating things with other things that are just as complicated, if not more so.
This is true, but I don’t know if you’d be counted as a seeder on that list if you don’t have the full torrent.
While I find that I agree with his takes like, 55% of the time, I do agree that Debian and Arch are basically the S-tier distros. So many of the others are basically just opinionated Debian or Arch, and while those can be useful when you’re getting started, I’ve found that for the long haul you’re better off figuring out how to configure the base distribution with the elements of the opinionated ones that you like, rather than using those distros themselves. Also, RIP CentOS. I would have put it in a high tier before the RHELmageddon (not top tier, mind you, but it had a well-defined use case and was great for that purpose).
We are in contact with the team there to understand why this incident occurred.
I can tell you right now why this incident occurred, it’s because of all the rats.
I’m personally a big fan of OpenAudible. It’s not free, but it’s not crazy expensive, and it does all the work for you: you sign into your Audible account in the app, and it will pull your library, download each book, decrypt it, and convert it to the format of your choice (I usually do M4B). I’ve been using it for years, and it makes downloading your Audible library on an ongoing basis a breeze.
So two things about this:
Tailscale doesn’t actually route your traffic through Tailscale’s servers; it uses its coordination servers to establish a direct WireGuard connection between your nodes (falling back to an encrypted DERP relay only when a direct connection can’t be punched through). You can use Headscale and monitor the traffic on the client and server sides to confirm this is the case. Headscale is just a FOSS implementation of that coordination server, and you point the Tailscale client there instead with tailscale up --login-server <your Headscale URL>.
Doesn’t renting a $3 VPS and routing your traffic through that expose many of the same vulnerabilities regarding a 3rd party potentially having access to your VPN traffic, namely the VPS provider?
For what it’s worth, I generally think that the Headscale route is the most privacy- and data-sovereignty-preserving option, but I do think it’s worth differentiating between something like Nord, where the traffic is actually routed through the provider’s servers, and Tailscale, where the traffic remains on your infrastructure.
Yeah, it was 2006 and that was how you got the MP3 files onto your iPod Nano. This was back when “mobile internet” consisted of “m.website.com” links that loaded, at dial-up speeds, a page with no style sheet that was designed to be navigated with a D-pad.