(I know this community is about selfhosting, but I'm forced to use cloud services: selfhosting isn't viable over the DSL speeds at my house, and I need this to be accessible outside my home.)
I recently made a Linode account (and got the free credit), and I'm planning on paying only $5 a month if I can. I noticed that Nextcloud AIO (from the Linode “Marketplace”) ran very well on the lowest shared-CPU plan (1 GB RAM, 25 GB storage, 1 CPU core; the CPU appears to be an AMD Epyc).
Will it be okay for me to host a WordPress website and a Nextcloud instance on the same server? I will be using Docker/Podman, and only I will be using the Nextcloud instance.
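For reference, the WordPress half would look roughly like this next to the AIO containers the Marketplace image already set up; the names, passwords, and published port below are placeholders, not a definitive setup:

```bash
# A private network so WordPress can reach its database by name
docker network create wp

# MariaDB for WordPress (credentials are placeholders)
docker run -d --name wp-db --network wp --restart always \
  -e MARIADB_DATABASE=wordpress \
  -e MARIADB_USER=wp \
  -e MARIADB_PASSWORD=change-me \
  -e MARIADB_ROOT_PASSWORD=change-me-too \
  -v wp_db:/var/lib/mysql \
  mariadb:11

# WordPress itself, published on a port the AIO stack isn't already using
# (AIO's mastercontainer takes 8080 by default, hence 8081 here)
docker run -d --name wordpress --network wp --restart always \
  -e WORDPRESS_DB_HOST=wp-db \
  -e WORDPRESS_DB_NAME=wordpress \
  -e WORDPRESS_DB_USER=wp \
  -e WORDPRESS_DB_PASSWORD=change-me \
  -p 8081:80 \
  -v wp_html:/var/www/html \
  wordpress:latest
```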
Should be fine, but it's definitely on the low end in terms of RAM.
Nextcloud's documented minimum is 128 MB of RAM, but ideally at least 512 MB.
WordPress should fit in the leftover space.
If your traffic/usage is not excessive, you should be fine.
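If you're worried about one app starving the other on 1 GB, you can also cap the containers. A rough sketch, with illustrative values and whatever your containers happen to be named:

```bash
# Cap WordPress and its DB so a traffic spike can't eat the whole 1 GB
# (limits are illustrative; tune them to your actual usage)
docker update --memory 256m --memory-swap 512m wordpress
docker update --memory 384m --memory-swap 512m wp-db
```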
This lines up with my experience. I have Nextcloud and WordPress on two different VPSes and just checked their RAM usage:
- Nextcloud: 468 MB
- WordPress: 120 MB
Caveat to the above: my Nextcloud is installed on bare metal rather than in Docker, and I have both Nextcloud and WordPress set up to use object storage as the media backend.
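For anyone wanting to reproduce the check, it's one command on each box:

```bash
# Per-container memory/CPU snapshot (for the dockerized setups)
docker stats --no-stream

# Overall picture on the host, including a bare-metal Nextcloud
free -h
```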
edit: To add to this, OP, the reason we're only talking about RAM numbers is that CPU usage for these applications (with primarily a single user) is pretty much zero most of the time, so you aren't going to be limited by the single-core machine.
Also, depending on your use case (a large amount of data in Nextcloud or large media files in WordPress), you might run out of disk space pretty quickly. In that case, consider using object storage as your Nextcloud or WordPress media backend, since it's cheaper than block storage (there are plugins/tutorials for configuring it, and Linode offers object storage).
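For the Nextcloud side, one way to do this is mounting a bucket as external storage via occ. Everything below (bucket, endpoint, keys, container name) is a placeholder to adapt, and it assumes the “External storage support” app is enabled:

```bash
# Mount an S3-compatible bucket as a Nextcloud external storage folder.
# Container name follows the AIO naming; adjust if yours differs.
docker exec --user www-data nextcloud-aio-nextcloud php occ \
  files_external:create /media amazons3 amazons3::accesskey \
  -c bucket=my-bucket \
  -c hostname=us-east-1.linodeobjects.com \
  -c port=443 \
  -c use_ssl=true \
  -c use_path_style=true \
  -c key=MY_ACCESS_KEY \
  -c secret=MY_SECRET_KEY
```

Making object storage the *primary* storage backend is a different (config.php) change, and the WordPress equivalent is handled by a plugin rather than a command.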
WordPress tends to use a lot of RAM.
If you really need to use WordPress, get a VPS from OVH, which gives you 4x the RAM at half the price
and doesn’t work a third of the time.
I've never had any problems in almost 10 years, outside of a few major outages (which every provider has from time to time).
OVH has unexpected outages even on the higher plans, let alone the cheap ones. Google it.
You're better off either paying a little more for a more reliable provider, or using an Oracle VPS; at least it's free.
I’ve been on OVH for 10 years now and had almost no downtime apart from the fire.
The fire? 😃 Keep these replies coming guys, they’re amazing! 👍
Have you looked at Oracle's free tier? They have decent specs for free, meaning you can use your $5 to upgrade where you need it once you've tried it out.
Having said that, those specs should be fine for a single user.
I've heard plenty of stories about people's free-tier VMs being deleted without notice on Oracle Cloud. It doesn't seem like a trustworthy option for document storage or hosting a website.
I’m assuming they’d be using the $5 per month mentioned in the opening post to pay for some upgrade, e.g. more storage, more RAM, etc. So they’d be on a paid account, but using services that cost zero dollars for the most part. This is what I do and it’s been great.
I am using a $5/month server with 1 GB of RAM and 25 GB of storage. If I want to upgrade it, I need to move up to $10/month. Linode doesn't have a free plan after 60 days.
I meant that if you went to Oracle instead of Linode, you could use their free services, and then spend the $5 you’re currently spending on Linode on upgrading your Oracle server instead.
If you do a barebones install, without the Docker overhead, it might work.
Docker overhead is practically zero. It’s a bit more memory usage, but that’s it.
Ahahaha
Convincing argument, but unfortunately a cursory Google search will reveal he was right. There is very little CPU overhead. The only real consideration is a bit of extra storage and RAM to store and load the redundant dependencies of the container.
You're also ignoring the work the kernel has to do to shift UUIDs around, the resources the Docker daemon itself uses, and the redundant machinery to make sure those processes are running that would usually be handled by systemd on a clean system. Yes, containerization is much better nowadays, but it's still overhead.
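At least the daemon side is easy to put numbers on:

```bash
# Show what the daemon-side processes are actually holding (RSS in KiB)
ps -C dockerd,containerd -o rss=,comm=
```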
Can't comment much about the Docker side, since it's not something I'm familiar with.
For the kernel part, assuming what you're referring to as UUIDs is the PID namespace mechanism, I'm failing to see how that would add overhead for containers. The namespace lookups/permission checks are performed regardless of whether a process is in a container or not; there is no fast path for non-containerized processes. The worst overhead this could add is probably one extra pointer chase in the namespace linked list.
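You can see this for yourself: every process already lives in a PID namespace, container or not.

```bash
# The host shell is already inside a pid namespace...
readlink /proc/$$/ns/pid
# ...and a container just gets a different one; the lookup machinery is the same.
docker run --rm alpine readlink /proc/1/ns/pid
```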
You can get 2 Ampere ARM cores, 4 GB of RAM, and 40 GB of storage for that money at Hetzner.