The two aren’t even in the same league. Don’t get me wrong, I’m a big open source advocate, but VirtualBox is horrible to use and it’s not what OP asked for.
It’s very much still needed and heavily utilised in the enterprise world. Volume size is usually the lowest priority when it comes to arrays; redundancy and IOPS (input/output operations per second, i.e. how much concurrent work the storage can service) are typically the priority. The exception here would be backup and archive storage, where IOPS matters less and volume size matters more.
As far as replacing sectors goes, I’ve never heard of this and I might just be ignorant on the subject, but as far as I know you can’t “replace” a bad sector, only mark it as bad and stop using it, and whatever was there before is gone. This has existed since the HDD days. It’s also why we use RAID: redundancy (parity or mirroring) across disks to protect data.
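To make the parity point concrete, here’s a toy sketch in plain Python (nothing RAID-specific, just the XOR idea that parity-based levels are built on) showing how an array can rebuild a failed disk’s contents from the survivors:

```python
# Toy illustration of XOR parity: three "data disks" and one parity "disk".
data_disks = [b"\x10\x22\x35", b"\x4f\x00\x7a", b"\x91\x3c\x05"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*data_disks))

# Simulate losing disk 1: XOR the surviving disks with the parity to rebuild it.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(data_disks[0], data_disks[2], parity))
assert rebuilt == data_disks[1]  # the lost data comes back bit-for-bit
```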
Generally production storage will be in RAID-10, and backup/archive storage in RAID-6 or in some cases RAID-60, though personally I’m not a fan of it.
You’d also consider how many disks are in the volume, because there’s a sweet spot: too many disks means a higher likelihood of total array failure due to simultaneous disk failures (and more data lost if it happens), but too few and you won’t get good redundancy, capacity or performance either (depending on RAID level).
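To put rough numbers on the capacity side of that trade-off, here’s some back-of-the-envelope maths (a sketch only; it ignores hot spares, formatting overhead and vendor quirks):

```python
# Usable capacity for common RAID levels, given N disks of a fixed size.
def usable_tb(level: str, disks: int, size_tb: float) -> float:
    if level == "RAID-10":
        return (disks // 2) * size_tb   # half the disks are mirror copies
    if level == "RAID-6":
        return (disks - 2) * size_tb    # two disks' worth of parity
    raise ValueError(level)

for disks in (4, 8, 16):
    print(f"{disks} x 10TB:",
          {lvl: usable_tb(lvl, disks, 10.0) for lvl in ("RAID-10", "RAID-6")})
# More disks = more usable space, but also more chances of concurrent failures.
```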
The biggest change I see in RAID these days is moving away from hardware RAID cards and into software-based solutions like Microsoft Storage Spaces, md, ZFS and similar. These all have their own way of doing things and some can even synchronise the data with other hosts.
Hope this helps!
Where my Download Accelerator Plus gang at
Is there a faster way to switch profile than going into the settings? Sounds like you’ve got a much better way than what I’ve been doing
Putting his whole Sisyphussy into it
Sorry, I meant TIL about it being considered stable, haha. I’ve known about Fedora since I used it back when it was introduced to replace the free Red Hat Linux.
As for Steam, I don’t recall how I installed it, sorry! I just recall significant grief getting it going (again, perhaps a skill issue) but had no big roadblocks using OpenSUSE.
TIL about Fedora; last I knew it was a rolling, bleeding-edge OS. Clearly lots of movement in the Red Hat camp.
As for gaming, drivers were not the problem for me; getting games to run with ease was. On OpenSUSE, I just install Steam, enable Proton and basically go at that point. On Red Hat this was non-trivial. Could be a skill issue, but I had a better time getting going with OpenSUSE TW.
Sort of, OpenSUSE Tumbleweed. I started on OpenSUSE Leap but had issues getting things like GPU and Steam working. Red Hat was also a non-starter because of the lack of gaming functionality.
TW works great for gaming, and the enterprise features I care about (like domain joining) work out of the box. It’s certainly harder to set up than something more geared towards home use (typically one of the various downstreams of Debian or Arch), but that doesn’t bother me.
Second to this - for what it’s worth (and I may be tarred and feathered for saying this here), I prefer commercial software for my backups.
I’ve used many, including:
What was important to me was:
Believe it or not, I landed on Backup Exec. Veeam was the only other one to even get close. I’ve been using BE for years now and it has never skipped a beat.
This most likely isn’t the solution for you, but I’m mentioning it just so you can get a feel for the sort of considerations I made when deciding how my setup would work.
As others have mentioned, it’s important to highlight the difference between a sync (basically a replica of the source) and a true backup, which keeps historical data.
As far as tools go, if the device is running OMV you might want to start by looking at the options within OMV itself to achieve this. A quick Google hinted at a backup plugin that some people seem to be using.
If you’re going to be replicating to a remote NAS over the internet, try to use a site-to-site VPN for this, and do not expose file sharing services to the internet (for example by port forwarding). It’s not safe to do so these days.
The questions you need to ask first are:
Once you know that you will be able to determine:
I hope I haven’t overwhelmed, discouraged or confused you more, and feel free to ask as many questions as you need. Protecting your data isn’t fun, but it is important, and it’s a good choice you’re making to look into it.
Back in the day when the self-hosted $10 license existed I was using JIRA Service Desk to do this. As far as ticketing systems go it was very easy to work with and didn’t slow me down too much.
I know you don’t want a ticket system but I’m just curious what other people will suggest because I’m in the same boat as you.
Currently I haphazardly use Joplin to take very loose notes and sync them to Nextcloud.
If you want a very simple option with minimal setup and overhead you could use Joplin to create separate notes for each “part” of your lab and just add a new line with a date, time and summary of the change.
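For example, a note for a single host might just accumulate entries like this (the dates and events here are made up, format to taste):

```
2024-03-02 19:40 - Upgraded host BIOS, no issues
2024-03-15 21:05 - Replaced failed case fan, cleared the hardware alert
2024-04-01 10:12 - Moved VM backups to the new datastore
```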
I also use Snipe-IT to track all my hardware and parts; it lets you add notes and service history against each hardware asset.
Other than that, I’m keen to see what everyone else says
Servers are a different story but for Desktop, OpenSUSE.
Because:
Part of my transition from Windows to Linux was that basic tasks like installing software, or even the OS itself, shouldn’t be a high-effort endeavour. I should be able to point to a package file or run a package manager and be able to go about my day without running “make” and working my way through dependency hell.
I say this as a Linux user of all different flavours for well over 15 years who has a deep love for what it brings to the table. If we want it to be commonplace with non-IT folks, it needs to work and it needs to be simple to use.
Power
Network
Storage
Compute
A second prod host will join the R520 soon to add some redundancy and mirror the Virtual SAN.
All VMs are backed up and kept in an encrypted on-site data store for at least 4 weeks. They’re duplicated to tape (encrypted) once a month and taken off site. Those are kept for 1 year minimum. Cloud backup storage will never replace tape in my setup.
Services
As far as “public facing” goes, the list is very short:
Though I do run around 30-40 services all up on this setup (not including actual non-prod lab things that are on other servers or various SBCs around the place).
If I had unlimited free electricity and no functioning ears I’d be using my Cisco UCS chassis and Nexus 5K switch/fabric extenders. But it just isn’t meant to be (for now, haha).
Because prospective customers get shy when the browser says that your site is “insecure”
Because it factually is insecure. It is not encrypted and trivial to inspect.
Because it makes for better Google ranking.
No, in this day and age it is permission to play. Firefox has a built-in feature to only load HTTPS sites, which I have enabled. This has nothing to do with Google. Your issue is with expensive CAs, for which there is a free solution (Let’s Encrypt), not with HTTPS itself.
So there you go. Mob hype and googlian dictatorship.
Incorrect. It is a matter of safety and security and a trivial thing to implement. You are free to not use HTTPS if you want, just as people are free to not consume your service if you don’t.
Calling it a “dictatorship” is hyperbole and demonstrates that you clearly have no idea what you’re talking about and won’t listen to people that do.
Some do. It depends on the type of certificate. Thankfully we now have Let’s Encrypt, so there’s a free alternative to the big CAs.
To answer your initial question - yes it is necessary. Without HTTPS or encryption in general, anybody who can intercept your connection can see everything you’re doing.
A real-world example: say you’re connected to a WiFi network that has no password and you’re browsing a plain HTTP site. Open WiFi networks are unencrypted, as is HTTP.
I can sit across the road in a vehicle, unseen, on a laptop and sniff the traffic to view what you’re doing. If you log into your bank, I now have your credentials and can do what I like, and you don’t even know.
This is why we need encryption. It (almost) guarantees that your traffic is viewable only to you and the other end of whatever you’re connecting to, and not to anyone in the middle.
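If you want to see that machinery for yourself, here’s a small sketch using nothing but Python’s standard library: it performs a TLS handshake with a site (example.com is just a placeholder) and prints the negotiated cipher and who vouches for the certificate:

```python
import socket
import ssl

hostname = "example.com"  # placeholder; any HTTPS site works
ctx = ssl.create_default_context()  # verifies the chain against your system's CAs

with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Negotiated cipher:", tls.cipher())
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
```

Everything after that handshake is encrypted with keys only the two endpoints hold, which is exactly what stops the person in the van across the road.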
Edit: for anyone downvoting OP, remember this is NoStupidQuestions. Take the time to educate if you know better, but don’t downvote “stupid” questions lol.
Jumping on the OpenSUSE bandwagon. I use it daily and have been running the same install of Tumbleweed for years without issue. I’m using KDE Plasma, which you can choose as part of the installation, so that fulfils that requirement for you as well.
If you’re familiar with Red Hat you’ll feel at home on it. Zypper is the package manager instead of yum/dnf and works really well (particularly when coping with dependency issues).
I’ve worked with heaps of distros over the years (Ubuntu, Debian, Fedora, RHEL, old school Red Hat, CentOS, Rocky, Oracle, even a bit of Alpine and some BSD variants) and OpenSUSE is definitely my favourite for a workstation.
Not a distro but maybe Plasma Bigscreen is in the ballpark of what you’re after?
Depends on your use case, but you can use some Group Policy Objects on Linux (at least with sssd). See: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/windows_integration_guide/sssd-gpo
You can also grant sudo to AD group members in the sudoers file, which is how I’ve done it in a corporate setting.
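For reference, the entry looks something like this (the group name and realm are placeholders from a hypothetical domain; always edit sudoers with visudo):

```
# /etc/sudoers.d/ad-admins - grant sudo to members of an AD group (via sssd)
%linux\ admins@example.com ALL=(ALL) ALL
```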
I believe there are third-party ADMX templates you can add to your domain controllers (along with AD schema additions) to get more granular, but I haven’t gone that deep with it, since between sssd and the sudoers file I can achieve what I need to.
Authelia is popular, as is Keycloak. I believe Red Hat develops Keycloak or at least has a hand in it.
I’m on this journey as well, figuring out what I’m going to use. Currently most of my services just use LDAP back to AD, but I’m looking to do something more modern like SAML, OAuth or OpenID Connect so that I can reduce the number of MFA tokens I have.
Just as an anecdote you may find useful - Personally I used to run an Active Directory for Windows and FreeIPA for my Linux machines and have managed to simplify this to just AD. Linux machines can be joined, you can still use sudo and all the other good stuff while only having one source of truth for identity.
This is the method I use in your scenario, OP. You can use Folder2iso to get the files in that you need. If the OS has official VMware Tools, you can also mount the VMware Tools ISO straight from Workstation into the VM; this gives you the clipboard service so you can copy and paste files between the host and VM, if that’s permitted within your isolation needs.
Otherwise, go the ISO route. You just can’t bring stuff out of the VM back to the host is all.