I usually have a number of different coding projects on the go, which use a variety of stacks like node, php, python, wordpress etc. Like most people in this situation, I use docker containers to manage all the different development environments.
Since the rise of AI agents, this is even more critical. I’ve already had the wonderful experience of copilot deleting the entire git directory and everything else during an entirely innocuous documentation task. I definitely don’t want to run any AI agent on the host PC itself.
One issue with using the official images like node, python and so on is that the underlying development tools are never installed. I also don’t like the random user names they assign (some even run as root), and I like all my code to be present in /app.
So I’ve taken some time to build a standard set of customisations of the official images. This gives me a consistent look and feel within VS Code while still getting access to the correct versions of the official tools.
make and docker
All the custom versions of official images are created with a single Makefile and Dockerfile. This allows me to issue a command like:
make python VERSION=3.14
and I’ll get new -prod and -dev versions of the standard image in my public Docker Hub repo.
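In sketch form, the Makefile pattern looks something like this; myrepo stands in for the real namespace, and the variable names and target list are illustrative, not my literal file:

# Docker Hub namespace: "myrepo" is a placeholder
REPO ?= myrepo

# Usage: make python VERSION=3.14 (or make node VERSION=22, etc.)
python node php:
	docker build --target prod --build-arg BASE=$@:$(VERSION) -t $(REPO)/$@:$(VERSION)-prod .
	docker build --target dev --build-arg BASE=$@:$(VERSION) -t $(REPO)/$@:$(VERSION)-dev .
	docker push $(REPO)/$@:$(VERSION)-prod
	docker push $(REPO)/$@:$(VERSION)-dev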
The prod version contains the standard user structure, so the container always runs as UID 1000 when I deploy it. I could also put some standard monitoring tools there in the future, or anything else that is common across all my production software.
The dev version extends prod and adds in a common suite of tools I need in every project.
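Both versions can come out of one multi-stage Dockerfile. A minimal sketch, assuming Debian-based official images; the username and the dev toolset are placeholders for whatever you prefer:

# One shared Dockerfile for all the official images.
# BASE is injected by the Makefile, e.g. python:3.14 or node:22.
ARG BASE
FROM ${BASE} AS prod
# Fixed UID 1000 and a predictable code location in /app.
# useradd can fail if the base image already has a UID 1000 user, hence the || true.
RUN useradd -m -u 1000 -s /bin/bash dev 2>/dev/null || true; \
    mkdir -p /app && chown 1000:1000 /app
WORKDIR /app
USER 1000

FROM prod AS dev
USER root
# An illustrative toolset; swap in whatever you want in every project.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git curl vim less && \
    rm -rf /var/lib/apt/lists/*
USER 1000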
Otherwise I keep the name and version number exactly the same as the official image so it’s easy to track and upgrade to new versions over time.
The nice thing is that if I update my preferred set of installed tools in the Dockerfile, I just re-issue a make python VERSION=3.14, Docker Hub gets updated, and a simple pull in each project picks up the new tools.
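The round trip, again with myrepo standing in for the real namespace:

make python VERSION=3.14             # rebuild and push -prod and -dev
docker pull myrepo/python:3.14-dev   # any project now has the new tools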
This keeps things very simple – just one Dockerfile to maintain that applies to all official images.
Extending the extension
However, having the images in Docker Hub does make it very easy to extend the standard images even further.
Take the example of wordpress in my Docker Hub repo – you can see my standard custom versions wordpress:php8.2-prod and -dev with the same tools and structure as all the others. But for my wordpress projects I also need some additional development tools like Composer, testing tools and a database client.
So the make target for wordpress first builds and pushes the standard customisation. Then, in another Dockerfile, I pull that standard image and add in the wordpress-specific toolset.
I give this new version my own name, uncountablewp, but keep the same tag structure. So the image uncountablewp:php8.2-dev contains the official wordpress:php8.2 plus my standard development tools plus my wordpress-specific ones.
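That second Dockerfile is, in sketch form, something like this; the exact packages are illustrative stand-ins for my toolset, and myrepo is again a placeholder namespace:

# Extend the standard dev customisation with wordpress-specific tools.
ARG VERSION
FROM myrepo/wordpress:${VERSION}-dev
USER root
# default-mysql-client is an illustrative choice of database client.
RUN apt-get update && apt-get install -y --no-install-recommends \
        default-mysql-client curl ca-certificates && \
    curl -sS https://getcomposer.org/installer | php -- \
        --install-dir=/usr/local/bin --filename=composer && \
    rm -rf /var/lib/apt/lists/*
USER 1000

The same make target then tags and pushes the result as uncountablewp with the matching version tag.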
Creating another image for the next php version is as simple as issuing a make wordpress VERSION=php8.3.
I can keep extending this further. Although uncountablewp will be applicable to all my wordpress projects, there may be a specific one where I need additional project-specific software installed. If and when that arises, I can have a Dockerfile in that application’s source that extends uncountablewp further with the new stuff.
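That project-level Dockerfile would only be a few lines; imagemagick here is purely a stand-in for whatever the project actually needs:

# In the project's own source tree: one more layer on top of uncountablewp.
FROM myrepo/uncountablewp:php8.2-dev
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
        imagemagick && \
    rm -rf /var/lib/apt/lists/*
USER 1000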
The magic of Docker Hub
I intend to make most of my projects open source, and I don’t really want lots of cut-and-pasted Dockerfiles to try and keep consistent across them all. Pulling images from my own repo means that anyone else using that code has the same consistent base as me.
There is no limit on the number of public images you can create on Docker Hub. Nothing I am installing here is a secret, so there’s no downside to publishing the environments for everyone to access.
What I’m not doing (yet) is building multi-architecture images, so all my repos are linux/amd64. However, my production server is ARM, so I will need to add that when it comes to deployment so I can pull the images for live.
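When that happens, docker buildx can build both architectures and push the multi-arch manifest in one go; roughly, with myrepo again a placeholder:

docker buildx build --platform linux/amd64,linux/arm64 \
    --target dev --build-arg BASE=python:3.14 \
    -t myrepo/python:3.14-dev --push .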
I’ve been battling with Dockerfiles and make for years trying to standardise my environment, but I always seem to end up cutting and pasting. The breakthrough here was using Docker Hub as the intermediate layer and storing all the permutations of tags, production and dev.