October 15, 2025, 1:53am 1
I’ve been working for a while on a couple of projects that are meant to come together to Nixify Kubernetes, and I’d like to share them. They’re not battle-tested, docs are sparse, and examples are rough.
easykubenix
easykubenix is very heavily inspired by kubenix. Both use the NixOS module system to render manifests. The main difference is that instead of using codegen to replicate the full Kubernetes API surface in Nix, I use (pkgs.formats.json {}).type to cover all possible resources, and for API validation I generate a script that spins up an ephemeral etcd + kube-apiserver and applies the manifests against it.
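The core idea boils down to roughly this (a simplified sketch, not the exact option declaration in the repo):

{ lib, pkgs, ... }:
{
  options.kubernetes.resources = lib.mkOption {
    # namespace -> kind -> name -> arbitrary JSON-serializable resource body,
    # so every Kubernetes resource (CRDs included) fits without generated options.
    type = lib.types.attrsOf (lib.types.attrsOf (lib.types.attrsOf (pkgs.formats.json { }).type));
    default = { };
  };
}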
It bundles a deployment script using kluctl, which is an amazing deployment tool if you like being able to work from the CLI and not just GitOps (it also handles secrets, so we don’t write them to the Nix store).
It supports rendering Helm charts and importing YAML files. It also supports setting _namedlist on an attrset to convert it into a “named list” at the rendering stage, so you can reference containers as an attrset rather than a list.
easykubenix module example:
{
  config.kubernetes.resources.namespace.Secret.name.stringData.secret = "nix is awesome";
}
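And a rough illustration of the _namedlist idea (the exact rendering semantics may differ slightly; the attribute name becomes each element's name field):

{
  config.kubernetes.resources.namespace.Deployment.myapp.spec.template.spec.containers = {
    _namedlist = true;
    # renders to [ { name = "myapp"; image = "nginx"; } ]
    myapp.image = "nginx";
  };
}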
Status: Usable
nix-csi
A “reimplementation” of nix-snapshotter, but at the CSI “layer” instead of the CRI layer. This means you can run it on any Kubernetes cluster by just deploying the CSI driver; it doesn’t require the node OS to be NixOS. Underneath, it uses hardlinks to create a “view” of a shared Nix store: if the volume is RO, the hardlink folder is bind-mounted into the pod; if the volume is RW, it’s overlayfs-mounted into the pod (so you can run nix commands in the pod). Just like nix-snapshotter, this means that in RO mode you share inodes, and therefore page cache, all the way, making this way of deploying more RAM-efficient than container images.
It links the root derivation to /nix/var/result so you can easily run binaries from it, and it also initializes the Nix database.
I’ve also implemented an optional cluster-local binary cache, using dinix to run openssh and nix-serve-ng (more work needed).
Status: Works. Deployment docs are not there yet; the image is on Quay. It’s opinionated towards Lix currently, but that’s just because I use Lix myself; it’ll be configurable soon.
dinix
dinix uses the NixOS module system to render dinit configurations + start scripts. dinit is a great process supervisor that’s able to run as PID1 in containers, as a normal userspace supervisor from your terminal, under systemd, or any other way you run processes. It’s similar to NixNG in a sense, but laser-focused on dinit only.
dinix module example:
config = {
  services.boot = {
    type = "internal";
    depends-on = [ "nginx" ];
  };
  services.nginx = {
    type = "process";
    command =
      pkgs.writeExeclineBin "nginx-launcher" # execline
        ''
          ${lib.getExe pkgs.nginx} -c ${nginxConfig} -e /dev/stderr
        '';
    restart = true;
    options = [
      "shares-console"
      "pass-cs-fd"
    ];
  };
};
Status: Works; renders all dinit options properly in my tests.
A match made in heaven
I know this works even though I haven’t written these workflows yet: you can nix build an easykubenix manifest that references a dinix startup script in a nix-csi volume, push the manifest derivation to your build cache, then apply the manifest with the easykubenix/kluctl deployment script to have your full container lifecycle managed through Nix. This means you can run a COMPLETELY EMPTY container image (you still need one, because Kubernetes) and have the power of Nix while doing it.
That being said, you can use only easykubenix, only nix-csi or only dinix, depending on what your use case is.
Final words
While I realize it might be a bit premature to announce software without stable releases, I’ve been working on these on and off for a while and would LOVE to hear from the community.
Other things I want to do: terranix, but for Terragrunt. Long live the module system!
If you’re curious, clone the repositories and run nix repl --file . in each of them to explore; they don’t bite!
So with all that said, please ask questions, give feedback… Like and subscribe, ring the bell, join my membership, Patreon (/s). I’ll try to keep this post updated as things progress and questions are answered 
34 Likes
ksvivek October 15, 2025, 8:08am 2
Please do some overview tutorials on these; it’ll help a lot with adoption. Could be a video or a detailed blog post. Don’t aim for perfection.
I also recommend and request a tutorial on how to start learning Kubernetes the Nix way, especially with modern, easy binary distros like k0s, or skipping the installation part and just covering how to use it. This will help k8s newbies like me grasp things better. Tons of new Kubernetes tools come out every week or two, but I trust the ones based on the Nix ecosystem.
Thank You
1 Like
My intention is definitely to improve documentation, though I’m still at the “functionality” part of these projects (easykubenix didn’t exist a week ago; it’s the youngest of the three and was created because I wasn’t entirely happy with the kubenix API or its codegen).
I mentioned in the post that I’ve written Kubeadm modules for NixOS. I think it’s time to projectify them into something others can use too!
K3s is usable on NixOS, though I always seem to have high idle CPU that I can’t troubleshoot on K3s, while Kubeadm runs so idle I don’t even shut it down while gaming!
(+containerd 74MB res)
Along with modules for running Kubeadm, I intend to write dinix modules called KNs that manage the lifecycle of the kubelet + CRI on any Linux distro that has Nix installed.
It’s time for the Kubernetes community to embrace Kubeadm; upstream Kubernetes is where it’s at.
(I’ve never liked the NixOS modules for Kubernetes; I think they replicate too much of what Kubeadm already does.)
Thanks for the feedback! You’ll have updates soon 
4 Likes
I’m actually really happy to see this. I’ve been wanting to do more with nix-snapshotter, and your implementation removes one of the key limitations that approach would have had. I’ll be watching this closely!
1 Like
Thanks for the kind words! Yes, being on the CSI layer opens up usage on any Kubernetes distro, including CRI-O-based ones like OpenShift. GKE, EKS, AKS… you name it!
Anything where you can run privileged containers will work 
I’ve put some effort in this week to get images onto Quay, so soon there’s going to be a deployment guide so you can start testing things out yourself. I’m also going to write some undeployment code to clean up state if things end up broken; it’s still not at ingress-nginx levels of robustness.
I’ve still got to implement garbage collection (I do set up gcroots already) and there are several small improvements to make here and there.
1 Like
I think this is some amazing work pushing what’s possible with Nix and k8s, and I barely understand how it actually works, which is frustrating and exciting at the same time. This is mostly about the nix-csi part. What I’m having trouble with is how this would work end-to-end, e.g. how I would take my daily Nix/NixOS usage and use nix-csi for it. Two examples: (a) a flake-based NixOS setup using home-manager, where maybe I want a trimmed-down version of that to run with nix-csi so I can have a prebuilt dev NixOS container which I can hand out to users and only need to specify the home-manager part; (b) a Rust project which has a flake for building and CI, where I would like to run either a dev version or a CI job of that project using nix-csi?
kind regards
Lillecarl October 17, 2025, 10:01am 7
a
You can run full NixOS within Kubernetes, though it requires the container to be privileged because of systemd and friends. dinix or NixNG are better suited for these workloads. home-manager would work too, as a NixOS module (so the HM activation script runs on startup?). With some effort the HM activation script could be run from dinix to set up a full user environment.
b
nix-csi takes either a storePath or a Nix expression and makes the root derivation available under /nix/var/result within the container (so you can run /nix/var/result/bin/hello from the pod spec). If you use a storePath, it’ll be fetched from a binary cache; if you use an expression, it’ll be built on the node scheduled to run the pod (this is going to change eventually so that builds run as Kubernetes Jobs, letting you schedule builds on separate nodes).
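On the pod-spec side that’s a CSI inline volume, roughly like this (the driver name and exact attribute layout are placeholders here; the deployment guide will have the real values):

volumes = [
  {
    name = "nix";
    csi = {
      driver = "nix-csi"; # placeholder name
      volumeAttributes = {
        # either a prebuilt store path, fetched from a binary cache...
        "x86_64-linux" = "/nix/store/...-hello";
        # ...or a Nix expression, built on the node the pod lands on
      };
    };
  }
];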
Garbage collection
Node
There are still some kinks to be worked out regarding garbage collection. There’s an issue where the Nix database from the initial image for the DaemonSet is only copied once (as it should be). But then, if the image changes, we overwrite /nix/var/result and bring new packages in without registering them properly in the database; this would cause the gcroot to be invalid and the CSI driver would garbage-collect itself. As you can see, the problem is quite well understood already and will be worked out sooner rather than later.
in-cluster cache
For the (optional) in-cluster cache I’m using nix-serve-ng. Currently I’m just shipping stuff there with no GC process at all. The intention is to rewrite the registration time of derivations when they’re copied to the cache so it reflects “last build” rather than “first build”. Then we’ll run an alternative GC that sorts by regtime and tries to remove the oldest entries. I’ve got functional POCs for this, but there’s still work to be done.
How it works:
It’s honestly a glorified “script runner” in Python that takes gRPC and turns it into a couple of commands:
gRPC(NodePublishVolume) → nix build → nix path-info --recursive → rsync --one-file-system --archive --hard-links → nix-store --dump-db $pathinforesult | NIX_STATE_DIR=$nixvarforcontainer nix-store --load-db → ln → mount
There’s some mkdirs sprinkled in there and such but it’s surprisingly simple once you dig in 
Thanks for the question(s), I’m more than happy to expand further if you find something else you’re curious about!
2 Likes
Update on GC (which was the last real blocker-style bug I’ve experienced): I reused the script that imports and exports the Nix database for guests, and it seems to work. I had to run, so I haven’t put it through rigorous testing yet.
I still want to do time-based GC on non-gcrooted paths; age is a pretty good indicator of whether a path could still be of use or not.
Thanks for the great work! Seems really interesting
Just in case you didn’t see this, I thought it was presented really well. Thanks to luxzeitlos if he’s reading this.
Any thoughts on: k8nix.nixosModules.kubernetesMultiYamlAddons ?
NixCon 2025 - Kubernetes on Nix
Lillecarl October 18, 2025, 10:29am 10
Hey, thanks for sharing! It’s a good talk on the Kubernetes components; if you don’t know how Kubernetes “really works”, it’s definitely a good and sound introduction.
I think his approach of using the Kubernetes modules is “wrong”. Kubeadm is the beaten path, and “it should be used”.
Here’s a NixOS module that’ll help you get started with Kubeadm on NixOS rather than trying to reimplement Kubeadm with NixOS.
# Minimal configuration for ClusterAPI
{
  config,
  pkgs,
  lib,
  inputs,
  ...
}:
{
  config = {
    environment.systemPackages = with pkgs; [
      kubernetes
      cri-tools
    ];

    # Configure containerd CRI
    virtualisation.containerd = {
      enable = true;
      settings = {
        # Use systemd cgroups, this will tell Kubernetes to do the same
        plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options.SystemdCgroup = true;
        # Force /opt/cni/bin as CNI folder (all CNI's expect this and put their binaries here)
        plugins."io.containerd.grpc.v1.cri".cni.bin_dir = lib.mkForce "/opt/cni/bin";
      };
    };

    # Install default CNI plugins
    system.activationScripts.cni-install = {
      text = # bash
        ''
          ${lib.getExe pkgs.rsync} --recursive --mkpath ${pkgs.cni-plugins}/bin/ /opt/cni/bin/
        '';
    };

    # Kubelet systemd unit
    # See https://github.com/kubernetes/release/blob/50887114b4fe77d28cc62776eaf03187d7f35120/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf
    systemd.services.kubelet = {
      description = "kubelet: The Kubernetes Node Agent";
      wantedBy = [ "multi-user.target" ];
      after = [ "network-online.target" ];
      wants = [ "network-online.target" ];
      requires = [ "containerd.service" ];
      unitConfig = {
        # This is our own custom thing, better than imperatively enabling the service
        ConditionPathExists = "/var/lib/kubelet/config.yaml";
      };
      # Kubelet needs "mount" binary.
      path = with pkgs; [
        util-linuxMinimal
      ];
      serviceConfig = {
        EnvironmentFile = [
          "-/var/lib/kubelet/kubeadm-flags.env"
          "-/etc/sysconfig/kubelet"
        ];
        ExecStart = "${lib.getExe' pkgs.kubernetes "kubelet"} $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS";
        Restart = "always";
        RestartSec = 1;
        RestartMaxDelaySec = 60;
        RestartSteps = 10;
      };
      environment = {
        KUBELET_KUBECONFIG_ARGS = "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf";
        KUBELET_CONFIG_ARGS = "--config=/var/lib/kubelet/config.yaml";
      };
    };

    ### Code from below is taken from clusterctl default templating stuff
    boot.kernelModules = [
      "overlay"
      "br_netfilter"
      "nf_conntrack"
    ];
    boot.kernel.sysctl = {
      # Recommended for Kubernetes
      "net.bridge.bridge-nf-call-iptables" = 1;
      "net.bridge.bridge-nf-call-ip6tables" = 1;
      "net.ipv4.ip_forward" = 1;
    };
  };
}
I also think that for Kubernetes-specific YAML it’s better to use the Kubernetes-style APIs and just generate JSON instead of going through YAML; for multi-document output you do this:
pkgs.writeText "multidoc.yaml" (builtins.toJSON {
  apiVersion = "v1";
  kind = "List";
  items = [ { apiVersion = ""; kind = ""; /* ... */ } ];
})
This is how Kubernetes returns multiple resources if you’re using --output=json.
All in all, a good talk to learn from, but I think he’s down the wrong path. The module above is all that’s required to make NixOS kubeadm- and ClusterAPI-compatible, and these tools are meant to be used.
Thanks for the question; I realize I might come off as slightly dismissive, which isn’t my intention. I just want to “do the right thing”.
1 Like
That example is cool, but most Nix/NixOS-related k8s write-ups omit how you actually join nodes. AFAIK your example “simply” starts a node? I need to check the linked YAMLs in the comments as well.
Regarding the talk: if I remember correctly, it was very focused on the certificate part, which I never found to be the crux of k8s deployments, and the talk didn’t really cover Kubernetes with Nix, unless I’m confusing it with another talk.
Lillecarl October 18, 2025, 10:45am 12
It doesn’t even start a node; it only sets NixOS up to function with kubeadm.
I use this module when deploying nodes with ClusterAPI (which uses kubeadm) today!
For joining nodes I think it’s best to refer to kubeadm token and kubeadm join rather than wording it myself. For initializing the control-plane you need kubeadm init
I started working on NixOS modules to configure kubeadm too, but they’re not ready to share; it’s a mess of convoluted logic to render a YAML file plus a systemd dependency.
I work plenty with k8s, that’s not the issue. I know what to use outside of Nix, but I still haven’t reached a conclusion on how to use Nix for that, because much of k8s is eventually consistent. And then you have additional “upfront reasoning costs”, e.g. do I provision the whole VM with k8s ready to go? Well, now I need the machine’s IP so I can give it to kubeadm init for the apiserver address. Then that IP is local to that machine, so it does not magically manifest in the Nix files that I use for regular node deployment, etc. Maybe the answer is that Nix is not the way to fix this, and it’s fine to use some sort of registry/DNS/service-discovery mechanism where those IPs get published and nodes consume them in their activation script rather than being declared upfront. I am going to explore all of that, but I am still busy migrating my whole “regular” desktop setup to Nix/NixOS; I am drive-by checking out Nix+k8s from time to time.
Lillecarl October 18, 2025, 11:25am 14
You could write joinConfigurations with Nix and use a deployment tool like colmena, deploy-rs or nixops to deploy them to the nodes. You’d have to create a systemd unit for kubeadm that runs before the kubelet, roughly like the sketch below. I’ve done some work for that here but I wouldn’t want anyone using it in its current form; it’s a hack at best (I use it to set up a single-node cluster on Hetzner for CAPI only).
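Something along these lines (an untested sketch; the config path is a placeholder for a JoinConfiguration you’d render with Nix):

systemd.services.kubeadm-join = {
  description = "kubeadm join, runs once before the kubelet";
  wantedBy = [ "multi-user.target" ];
  before = [ "kubelet.service" ];
  after = [ "network-online.target" "containerd.service" ];
  wants = [ "network-online.target" ];
  unitConfig = {
    # Skip if the node has already joined
    ConditionPathExists = "!/etc/kubernetes/kubelet.conf";
  };
  serviceConfig = {
    Type = "oneshot";
    RemainAfterExit = true;
    ExecStart = "${lib.getExe' pkgs.kubernetes "kubeadm"} join --config /var/lib/kubeadm/join-config.yaml";
  };
};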
But in all honesty, I don’t think Nix is the right tool for this; ClusterAPI already does the heavy lifting for us.
{ lib, pkgs, ... }:
{
  config = {
    services.cloud-init = {
      enable = true;
      btrfs.enable = false;
      settings = {
        cloud_init_modules = lib.mkForce [
          "write-files"
          "update_hostname"
        ]; # Some issue with some module crashing with default config
      };
      extraPackages = with pkgs; [
        nixos-rebuild-ng
        kubernetes
      ];
    };
  };
}
This is the cloud-init module I’m using together with ClusterAPI and the previously posted “kubelet module”. In the ClusterAPI “preKubeadmCommands” I do a NixOS rebuild to set the hostname to what cloud-init set it to. I haven’t had to deal with IP addresses and such yet since Hetzner provides DHCP; for IPv6 I’d have to investigate further.
I’ll work on releasing more of my work regarding ClusterAPI and node management; currently it’s in a private repo for a client I’m working for. Luckily I own the code, so it just has to be cleaned up to “OSS standards”.
Lillecarl October 18, 2025, 12:22pm 15
Update: Garbage collection works in nix-csi now. It’s very basic and only collects garbage on CSI node startup; a time-based GC will be implemented eventually.
Edit: If someone here is good at Python I would LOVE to have the code reviewed; I’m average at best.
0.1.3 “released”.
arianvp October 19, 2025, 6:17am 16
Why are you rsyncing CNI plugins to /opt?
You should be able to just point containerd to find them in the nix store.
Lillecarl October 19, 2025, 9:16am 17
It’s the “well known” path for CNIs to install themselves to. Cilium, Calico, Flannel and others will hostPath-mount that directory into their DaemonSet and copy the CNI binary there for the CRI. This is for when you don’t want to or can’t build the CNI from Nix and install it that way (which I wouldn’t expect from the regular Kubernetes operator).
Cilium templated, Calico untemplated, flannel templated 
Note: In reality you wouldn’t have to do the copy, since the “real CNIs” do the installation themselves; this is just a generic “good enough” solution. I use the bridge CNI in single-node configurations and Cilium in multi-node ones. This will just work either way; the copied CNIs are unused in multi-node setups.
Edit: Another thing worth adding is creating /var/lib/etcd in activation and trying chattr +C on it; CoW on databases is bad, and etcd leaves that entirely to the operator.
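Something along these lines (untested sketch):

system.activationScripts.etcd-nocow = {
  text = ''
    mkdir -p /var/lib/etcd
    # chattr +C disables copy-on-write on btrfs; harmless if it fails on other filesystems
    ${pkgs.e2fsprogs}/bin/chattr +C /var/lib/etcd || true
  '';
};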
Lillecarl October 20, 2025, 8:26am 18
I added a feature that converts lists in imported YAML (Helm or importyaml) into attrsets. Lists where all elements have a name attribute are indexed by name, and other lists are indexed by a stringified integer.
# Rendered by Helm in a fixed output derivation (thanks kubenix)
# Notice spec.containers.hcloud-cloud-controller-manager :)
nix-repl> eval.config.kubernetes.resources.kube-system.Deployment.hcloud-cloud-controller-manager.spec.template.spec.containers.hcloud-cloud-controller-manager.args
{
  "0" = "--allow-untagged-cloud";
  "1" = "--cloud-provider=hcloud";
  "2" = "--feature-gates=CloudControllerManagerWatchBasedRoutesReconciliation=false";
  "3" = "--route-reconciliation-period=30s";
  "4" = "--webhook-secure-port=0";
  "5" = "--leader-elect=false";
  _numberedlist = true;
}
This means you can easily override any single value of any imported manifest!
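For example, changing a single flag of the imported Deployment above from your own module would look roughly like this (illustrative; you may need lib.mkForce depending on how the values merge):

{
  config.kubernetes.resources.kube-system.Deployment.hcloud-cloud-controller-manager.spec.template.spec.containers.hcloud-cloud-controller-manager.args."3" =
    "--route-reconciliation-period=60s";
}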
There’s a special case for initContainers, where the list is converted into a number-indexed attrset rather than a named one, because the list order controls the initContainer execution order.
@ElvishJerricco I guess you were right about preferring attrsets in the modprobe config… 
Lillecarl October 23, 2025, 1:49pm 19
Implemented support in nix-csi for setting the volumeAttribute $system (usually x86_64-linux) to a store path; then nix-csi won’t do any building and will just fetch that path from the configured caches.
Implemented support in easykubenix to push the generated manifest to binary cache. If you set
volumeAttributes.${pkgs.system} = pkgs.hello;
the manifest depends on pkgs.hello, which will be pushed to the cache as well, meaning it’ll be available for nix-csi to fetch.
@claes Now I have a good example for how you’d develop your local application in a remote Kubernetes cluster swiftly and easily!
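Roughly, running pkgs.hello in a pod could look like this (a sketch; the CSI driver name and mount path are placeholders until the deployment docs land):

{ pkgs, ... }:
{
  config.kubernetes.resources.default.Pod.hello.spec = {
    containers = [
      {
        name = "hello";
        image = "busybox"; # any tiny image works, the payload comes from the Nix store
        command = [ "/nix/var/result/bin/hello" ];
        volumeMounts = [ { name = "nix"; mountPath = "/nix"; } ];
      }
    ];
    volumes = [
      {
        name = "nix";
        csi = {
          driver = "nix-csi"; # placeholder
          volumeAttributes.${pkgs.system} = pkgs.hello;
        };
      }
    ];
  };
}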
claes October 23, 2025, 2:16pm 20
Starting from an end-user perspective: I want to run pkgs.hello in a pod. How would I do that with your tooling?