It started when a colleague rather enthusiastically told me about ZFS, the file system to end all file systems. ZFS is the successor to software RAID, originally created by Sun, the people we know and love from OpenOffice (LibreOffice's ancestor) and purple operating systems. ZFS is more flexible, faster, more robust and generally better than RAID. My colleague had been happily using it for over a decade, had since replaced all his disks and had never lost a single bit of data.
It turned out there was also a name, complete with acronym, for what was going on in my personal server: JBOD, Just a Bunch of Disks.
Priorities
To clarify, my list of self-hosting priorities is as follows:
- it has to work
- it must be secure
- it must be robust (back-ups)
- it should cost me exactly as much time as I want it to
When all four priorities are met, I can start enjoying it.
But self-hosting is not just a collection of chores. It is a hobby, an identity even. And JBOD sounded eerily like another rather public acronym that belonged very much on an operating system that nobody should (be forced to) use.
ZFS, it turned out, is not in the kernel, at least not by default; you need a third-party module. Worse, it uses lots and lots of memory, something my rather modest server doesn't have. It is a Gigabyte Mini-ITX board with 8GB of RAM and an AMD Ryzen 5 3600X CPU in a Fractal Design Node 304. That appears to be plenty for fourteen docker containers, including Nextcloud, Luanti, Mediawiki, Jellyfin and this blog. Nothing feels slow, and I wasn't going to change that just to get rid of an acronym.
Reading into ZFS, I discovered a third disadvantage: you have to plan ahead. Once you've implemented a layout, you're stuck with it unless you're willing to move lots of data around. And data does accumulate over the years.
That Mini-ITX board has just four SATA ports, but it also has an M.2 slot. I planned to use the four SATA ports for my un-JBOD project and the M.2 slot for the operating system. I hadn't decided yet what to connect to the SATA ports, hard drives or solid-state drives, but the M.2 slot looked very inviting, so I went ahead and bought a 256 GB M.2 drive, screwed it in and turned the server on again.
My home server
Screenfuls of failed pings are what followed.
Connecting a monitor to the HDMI port of the GeForce GT 710 2GB (did I mention modesty?) told me Debian 11 "Bullseye" had recognised the Ethernet port but failed to use it. No amount of research helped, so I looked at the back of the machine and saw that the two little lights, on both ends of the network cable, were completely and utterly off. Changing cables didn't help.
Was the Ethernet port incompatible with a populated M.2 slot? Or perhaps with this particular type of M.2 disk?
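For anyone in the same boat, the software-side equivalent of staring at those link lights looks roughly like this; enp3s0 stands in for whatever your interface happens to be called:

```sh
# List interfaces and their carrier state; a dead link shows NO-CARRIER
ip link show

# Ask the driver whether it detects a link on a specific interface
# (enp3s0 is an example name)
ethtool enp3s0
```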
Cheaper
Upgrading the operating system, though, solved it. I haven’t the foggiest why.
I had planned to upgrade to Debian 13 "Trixie" this winter, but had wanted to get a lot of improvements done first. Like writing a proper docker-compose.yml and getting rid of fourteen elaborate run commands, to finally be able to refer to containers by name instead of by ever-changing IP addresses.
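A minimal sketch of where that is headed, with made-up service names and images rather than my actual stack; on the default compose network, every service can reach the others by name:

```yaml
# docker-compose.yml: illustrative only; services and images are examples
services:
  wiki:
    image: mediawiki
    ports:
      - "8080:80"
    depends_on:
      - db
    # inside the compose network, the database is reachable as "db:3306",
    # no IP address required

  db:
    image: mariadb
    environment:
      MARIADB_ROOT_PASSWORD: example   # use a proper secret in real life
    volumes:
      - dbdata:/var/lib/mysql

volumes:
  dbdata:
```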
In the kernel by default, more flexible and less memory-hungry than ZFS, is btrfs (ButterFS, BetterFS). It does all the things that make ZFS great, but I can add and change whatever I want whenever I want. I don't even have to buy four new disks now, so in the short run it's also cheaper. And as long as I stay away from RAID5 and RAID6, my data is safe, so that's what I ended up with.
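To make that flexibility concrete: creating such a mirrored filesystem, and growing it years later, is only a handful of commands. A sketch, with /dev/sda, /dev/sdb and /mnt/data as example names:

```sh
# Create a btrfs RAID1 across two whole disks; no partitions needed
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb
mount /dev/sda /mnt/data

# Years later, when the data outgrows the pair: add a third disk
# and rebalance, all while the filesystem stays mounted
btrfs device add /dev/sdc /mnt/data
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data
```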
I had more challenges last week, one causing me to get out of bed at one in the morning in a seemingly hopeless but ultimately successful attempt to recover my Mediawiki database. As mentioned earlier, it holds all my notes from the last twenty years: on my entire games collection from the last forty years and on all the books I've read. It's not something I'm prepared to lose.
Podman
So I'm on Trixie now, it works and I must say, it's spotless. I only installed git, vim, docker and the fish shell; the rest is in containers. Even my Groovy scripts are now in containers, because I didn't want to install a JVM. All the data is on a set of two btrfs-formatted disks in RAID1. I love how they don't even have partitions.
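The Groovy trick, for what it's worth, boils down to something like this; report.groovy is an example script name and the official groovy image from Docker Hub does the rest:

```sh
# Run a Groovy script without installing a JVM on the host
# (report.groovy is an example script name)
docker run --rm \
  -v "$PWD":/home/groovy/scripts \
  -w /home/groovy/scripts \
  groovy groovy report.groovy
```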
I still want to switch to Podman and have other improvement plans. But at any rate, the above is why you, your feed reader and/or your training bots stood a very good chance of only getting a "Service unavailable" or something similar for your trouble. That should be over now.
But I still don’t understand why Bullseye didn’t want to power my Ethernet port any more.