I started off self-hosting on my main PC as a way to get familiar with the concepts. But as I wanted to access my self-hosted services around the clock, I started moving my favorite apps to my NAS. I quickly learned that while I was using Docker containers on both my Windows desktop and my Synology NAS, they work very differently in practice.
As a result, self-hosting on my NAS is a lot more frustrating than it is on my PC. Here's a look at the factors that contribute to that frustration.
Hardware constraints
My NAS has a lot fewer resources
When I started moving services to my NAS, I expected them to be slower due to its limited RAM, its use of an HDD rather than an SSD, and its modest CPU. But I didn't expect quite as much instability as I've encountered with certain services.
The main problem is that while containers start up and run easily, certain tasks demand far more resources than day-to-day operation. A service can work reliably for weeks, then suddenly stop when it has too many tasks to process at once.
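One mitigation worth noting: Docker Compose can cap a container's memory, so a runaway task gets that one container killed by the OOM killer instead of starving the whole NAS. A minimal sketch; the service, image tag, and limit values are examples that would need tuning to your own hardware:

```yaml
# Fragment of a docker-compose.yml — service and values are examples
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    mem_limit: 1g         # hard cap: the container is OOM-killed past this,
                          # rather than dragging down the whole NAS
    memswap_limit: 1g     # no extra swap beyond the cap
    restart: unless-stopped
```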
Limits aside, these constrained resources wouldn't be as frustrating if it were easier to tell when a particular service was causing issues, rather than finding out the hard way. I keep an eye on RAM and CPU usage through my NAS's Resource Monitor, but the interface is limited. I've had containers stop working suddenly, and the NAS interface freeze up, even when I hadn't reached the 80% RAM threshold that triggers a warning.
It's also difficult to see at a glance how many resources a specific container is using. That information is buried a few menus deep in Container Manager, and there's no historical view. The interface has also stopped refreshing with new data when I wanted to test specific tasks and the load they put on my containers.
For example, I wanted to see how much uploading a few images to Immich would increase the container's RAM usage, but the data simply stopped refreshing. The mobile app finished the upload, and I only realized something had gone wrong when I noticed my other containers had shut down.
Resource Monitor doesn't help much either. It doesn't list Container Manager as a service on my NAS, and looking at individual processes doesn't surface any issues with specific containers.
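When I do SSH in, the Docker engine itself gives a much clearer picture than Resource Monitor. These are standard Docker CLI commands, not anything Synology-specific; the container name at the end is just an example:

```bash
# Live per-container CPU, memory, network, and disk I/O
sudo docker stats

# One-shot snapshot, easier to read for a quick check
sudo docker stats --no-stream

# Recent logs from one container (the name is an example)
sudo docker logs --tail 50 immich-server
```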
The UI differences despite using Docker solutions
Container Manager doesn’t feel as intuitive
On Windows, I use Docker Desktop to manage my containers; on my NAS, I use Container Manager. Docker Desktop is developed by Docker Inc., while Container Manager is Synology's own front end to the Docker engine, and the two have very different user interfaces. Docker Desktop isn't perfect, but I much prefer it to Container Manager.
This is down to consistency and ease of use. Docker Desktop shows all your containers under one view, while Container Manager separates Docker Compose containers into a Projects view and single containers into a Containers view.
In Docker Desktop, it's also easy to open a self-hosted service from its port link. Container Manager's main view doesn't even show a container's ports; you have to open the container's settings to check which port you exposed if you don't remember offhand.
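For what it's worth, port mappings are another thing the Docker CLI shows immediately over SSH; this is standard Docker, nothing Container Manager adds:

```bash
# List running containers with their names and port mappings
sudo docker ps --format "table {{.Names}}\t{{.Ports}}"
```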
There is also a completely different flow for creating containers depending on whether you’re using a YAML file or not. Which brings me to my next point.
The lack of a built-in command line tool
A command line has its uses
I wouldn't call myself a command line enthusiast, but there are times a terminal comes in handy even if you're not a developer. When I use Docker on Windows, I use my terminal to run Docker Compose commands, pull updated images, and rebuild containers without losing data. On my NAS, though, I have to enable SSH and connect from another device. Since I'm not very familiar with that workflow, I've mostly used it just to find the user ID and group ID for containers that require them.
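For reference, the workflow I rely on in Windows amounts to a handful of commands, and the same ones work over SSH on the NAS. The project path below is just an example:

```bash
# Find the user ID and group ID that some images ask for (PUID/PGID-style)
id -u    # your user ID
id -g    # your primary group ID

# Update a Compose project in place (run from the project's folder;
# the path is an example). Named volumes and bind mounts keep their data.
cd /volume1/docker/immich
sudo docker compose pull     # fetch newer images
sudo docker compose up -d    # recreate only the containers that changed
```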
That unfamiliarity, combined with the lack of an easily accessible command line, makes it harder for me to reliably update and rebuild containers. Container Manager does have an update button that pulls a new image, but I've found it inconsistent. There's also a terminal view for an individual container, but it doesn't offer the same freedom as a proper command line.
You can rebuild and restart containers using buttons in Container Manager, but the way the UI separates containers and projects makes this less than convenient. Troubleshooting containers with the commands I'm familiar with simply feels less intuitive on my NAS.
Folder permission quirks
I run into frequent issues
When I create a project on my PC, Docker creates all the sub-folders it needs without any special permission configuration. On Synology, though, permissions have a lot of quirks when it comes to Docker. In theory, Container Manager should be able to create folders in the directory allocated to Docker, but I frequently run into issues with this. I've tried the usual fixes, such as creating a separate user for Docker with the appropriate permissions, or assigning my own user ID to a container.
However, none of these fixes have been consistent. The only reliable way to ensure that a project starts up correctly is to manually create all the sub-folders it will need. This adds an extra bit of friction to the process that isn’t there when I use my PC.
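So before the first startup, I now create the bind-mount folders myself over SSH. A minimal sketch with example paths; the UID and GID should come from the id commands earlier (1026 and 100 are common DSM defaults, not universal):

```bash
# Create every folder the compose file bind-mounts (paths are examples)
mkdir -p /volume1/docker/immich/library /volume1/docker/immich/postgres

# Give ownership to the account the containers run as; replace 1026:100
# with your own `id` output
sudo chown -R 1026:100 /volume1/docker/immich
```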
Despite the frustrations, I’m not giving up on my NAS
While I have to deal with certain frustrations when self-hosting on my NAS, it also comes with real benefits. My NAS is an energy-efficient way to keep 24/7 access to my most important services. In fact, it's the reason I was able to replace Google Keep with Jotty, and it's where I run my Home Assistant container.
I'm looking into alternative interfaces for managing containers, such as Portainer or Dockge, so that the differences between my PC and my NAS catch me off guard less often. I'll still need to take my hardware constraints into account when doing so, though.
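As a sketch of where I'm heading, Portainer itself runs as just another container. This is the widely documented Community Edition setup rather than anything I've settled on:

```yaml
# docker-compose.yml for Portainer CE (standard published setup)
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    ports:
      - "9443:9443"    # HTTPS web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock    # lets Portainer manage the host's containers
      - portainer_data:/data    # persists Portainer's own settings

volumes:
  portainer_data:
```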