I remember Watchtower helpfully stopping Pihole before pulling the new image when I only had the one instance running… All while I was out at work with the fiancée on her day off. So many teaching moments in so little time.
Raspberry Pi 4 Docker:- gluetun (qBittorrent, Prowlarr, FlareSolverr), Tailscale (Jellyfin, Jellyseerr, Mealie), Radarr/Readarr/Sonarr, Pi-hole, Unbound, Portainer, Watchtower.
Raspberry Pi 3 Docker:- Pi-hole, Unbound, Portainer.
Home server: Proxmox (Debian). Redundant DNS: Raspbian (Debian). Parents’ server: Debian (Debian).
Gonna be honest, I mostly live off my phone and a Retroid Pocket.
Linux does what Windont?
Posts on public forums get replies from the public.
As a beginner in self-hosting, I like plugging the random commands I find online into an LLM. I ask it what the command does, tell it what I’m trying to achieve, and ask if it would work…
It acts like a mentor. I don’t trust what it says entirely, so I’m constantly sanity-checking it, but it gets me where I want to go with some back and forth. I’m doing some of the problem solving, so there’s that exercise; it also teaches me what commands do and how the flags alter them. It’s also there to stop me making really stupid mistakes that I would otherwise have learned the hard way.
Last project was adding an HDD to my zpool as a mirror. I found the “attach” command online with a bunch of flags. I drafted what I thought was my solution and asked ChatGPT. It corrected some stuff (I hadn’t included the name of my zpool), then gave me a procedure to do it properly.
In that procedure I noticed an inconsistency between how I was naming drives and how my zpool was naming them. I asked ChatGPT again and was, in effect, told I was a dumbass: if that’s the naming convention the pool uses, I should probably use it instead of mine (I was using /dev/sdc while the zpool was using /dev/disk/by-id/). It also explained why the zpool might have been configured that way, which was a teaching moment: I’m using USB drives, and the pool wants to protect itself if the setup gets plugged in differently. I corrected the names and rewrote the command (really, ChatGPT was constantly updating the command as we went)… Boom, I’ve mirrored my drives, I’ve made all my stupid mistakes in private and away from production, and life is good.
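For anyone following along, the procedure lands on something like this. Pool and device names here are placeholders, not the real ones from my setup:

```shell
# Stable identifiers survive reboots and USB port shuffles,
# unlike /dev/sdX names, which can change order:
ls -l /dev/disk/by-id/

# See which identifiers the pool already uses:
zpool status tank

# Attach a second disk as a mirror of the existing one:
#   zpool attach <pool> <existing-device> <new-device>
zpool attach tank \
  /dev/disk/by-id/usb-Old_Drive_SERIAL \
  /dev/disk/by-id/usb-New_Drive_SERIAL

# Watch the resilver progress until it completes:
zpool status tank
```

Using the by-id paths here is the lesson from the story above: they stay stable no matter which USB port the drive ends up in.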
A good general suggestion. The WAF criteria I follow are ‘reasonable’ expense, reasonable form factor, and a physical investment. I floated the idea of a VPS, and that’s when I learned of the third criterion. It is what it is.
I just started on this 8 TB HDD, so it isn’t very full right now, and I could raise the ratio limits. But I worry about filling the HDD, and part of me worries about hundreds of torrents on an N100 that’s doing other things. So I’m keeping the habit from my Pi 4 + 1 TB days: delete media we’ve finished and keep the torrent count low.
I justify it as self-managing, though: popular ISOs are on and then off my hard drive fairly quickly, but the ones that need me will sit and wait until they hit the ratio of 3, however long that takes. I’d like to do “3 + (get that last downloader to 100%)” but I don’t know how/if it’s possible to automate through Prowlarr.
I should probably keep sharing Linux ISOs longer than I do, but data hoarding has a low WAF. Instead I have Prowlarr set the ratio limit to 3 (one for me, one for a leecher, and one to add to the pool) to keep the data churning.
“The white man will try and satisfy us with symbolic victories rather than economic equity and real justice” - Malcolm X
Hehe, I see what you did there.
MLK on climate change (probably):
[…]that [humanity’s] great stumbling block in his stride toward freedom is not the [oil company] or the [billionaire] but the white moderate, who is more devoted to ‘order’ than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice; who constantly says: ‘I agree with you in the goal you seek, but I cannot agree with your methods of direct action’; who paternalistically believes he can set the timetable for another man’s freedom; who lives by a mythical concept of time and who constantly advises [the climate-aware] to wait for a ‘more convenient season.’
The Fire Stick is what I chose as the Jellyfin client for my TV, a 10-year-old LG. Works as intended, better really. One day I’ll block the stick’s internet connection, and it’ll be an almost perfect device, in that it plays almost anything natively. My server is an RPi 4, so anything I can do to avoid transcoding, I do.
The AOOSTAR N100 2-bay NAS is what I’m currently thinking about. Or the same device rebadged under another brand.
Pros: N100 for Quick Sync. Two bays of HDD for media storage. Low power at idle. Cheap for a box with all the relevant codecs plus SATA storage. High WAF compared to other HTPCs.
Cons: unknown brand for build quality and BIOS updates. General Chinese-hardware security anxieties. Idle power, while low, is higher than other N100 options. The fan isn’t PWM. Personally, I don’t like the aesthetics.
Favourite game is 1; it was the first one I played and the one I’m most familiar with.
Least favourite is 3; it was the first game I encountered with day-one DLC, so I didn’t get any. It was the last ME game I bought, too. Joke’s on me, I guess, because I got the remaster instead.
I enjoyed KOTOR and KOTOR II before it, and I was hoping for more of the same, more HK-47 really. No HK, but the loop is familiar: go to a planet, do some quests, X person wants to talk, and on to the next.
Femshep is the only shep for me.
I guessed it was a “once bitten, twice shy” kind of thing. This is all a hobby to me, so the cost-benefit calculus, I think, is vastly different; nothing in my setup is critical. Keeping all those records, staying up to date on what version everything is on, when updates are available, what those updates do, and so on sounds like a whole lot of effort when my time is currently better spent in other areas.
In my arrogance I just installed Watchtower and accepted that it can all come crashing down. When that happens, I’ll probably realise it’s not so much effort after all.
That said, I’m currently learning, so if something is going to break my stuff, it’s probably going to be me and not an update. That’s not to discredit your comment; it was informative and useful.
When I asked this question:
So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you, others might seem or be unimportant.
- The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
- Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and its tag (never use latest). You will never ask yourself again, “What did I need to do to install this? Run some random install.sh script off a GitHub URL?”
- Networking with Docker is a bit hit and miss, but the big win is that software can run on any port inside the container and be exposed on a different port on the host. E.g. two apps both run on port 8080 natively, so one of them will fail to start because the port is taken. With Docker you can keep them running on their preferred internal ports but expose one on 18080 and the other on 19080.
- You keep your host simple and free of installed software and packages. This is less of a problem with apps that ship as native executables, but some languages require you to install a runtime on the host to start the app: think .NET or Java, and also Python, which needs a compatible interpreter version installed (there are virtual environments for that, but I’m going into too much detail already).
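As a concrete illustration of the port-mapping and tag-pinning points above (the image names, tags, and ports here are made up for the example):

```shell
# Both apps listen on 8080 inside their containers; the host
# publishes them on different ports so they don't collide:
docker run -d --name app-one -p 18080:8080 example/app-one:1.2.3
docker run -d --name app-two -p 19080:8080 example/app-two:2.0.1

# Pinning an explicit tag (1.2.3, not "latest") means a
# redeploy pulls the same image you originally tested.
```

The `-p host:container` flag is what decouples the port an app binds internally from the port the rest of your network sees.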
I am also new to self-hosting (check my bio and post history for a giggle at how new I am), but I have taken advantage of all these points. I do use “latest”, though; looking forward to seeing how that burns me later on.
But to add one more:- my system is robust, in that I can really break my containers (and I do), and recovery is a couple of clicks in Portainer. Then I can try again, no harm done.
Finally got it set up and pointed Prowlarr at it, which synced to Sonarr and Radarr, though not Readarr or Lidarr. I couldn’t manually point Readarr at it either without getting a
Query successful, but no results in the configured categories were returned from your indexer. This may be an issue with the indexer or your indexer category settings
which is a shame. Still a potentially powerful bit of kit regardless.
I use Mullvad and run qBittorrent through gluetun. I don’t mind the lack of port forwarding, as I leave the Pi on 24/7 and I’m not under ratio constraints. Also, my system isn’t secure enough for me to be messing with that stuff; next build I’ll get everything off root, set proper permissions, route everything through a single port, etc., and then think about port forwarding. For now I’ll hide behind my ISP’s and Mullvad’s security while I learn and make mistakes.
Down is quick enough for me and Up is slow but constant.
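For the curious, the gluetun + qBittorrent pairing above looks roughly like this as plain `docker run` commands. The env values are placeholders, and image tags are omitted for brevity; check gluetun’s docs for your provider’s exact variables:

```shell
# VPN container. Note the qBittorrent WebUI port is published
# HERE, because qBittorrent will share this network stack:
docker run -d --name gluetun \
  --cap-add NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=mullvad \
  -e VPN_TYPE=wireguard \
  -e WIREGUARD_PRIVATE_KEY=<your-key> \
  -e WIREGUARD_ADDRESSES=<your-address> \
  -p 8080:8080 \
  qmcgaw/gluetun

# qBittorrent rides on gluetun's network, so all its traffic
# goes through the VPN (or nowhere, if the tunnel drops):
docker run -d --name qbittorrent \
  --network container:gluetun \
  -e WEBUI_PORT=8080 \
  lscr.io/linuxserver/qbittorrent
```

The `--network container:gluetun` part is the whole trick: qBittorrent has no network of its own, so it can’t leak around the VPN.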
My Unbound is on v1.13.1 (Raspbian) after an update/upgrade. I’ve read it lags behind the main release by a lot; should I trust the process that everything is fine?
Ah, I knew it was bypassing the Pi-hole; I thought it was IPv6. I think I made the mistake of changing more than one thing at once: what I did worked, and I moved on to the next functionality I was chasing. I’ll try enabling IPv6 on the Pi-hole; at least if I still get ads with it on, I know it isn’t IPv6.
Someone identifying with Homelander would. That’s the real meme here.