About 6 months ago, I imported our photo collection into Ente Photos.
Before that, our memories lived in Nextcloud. Although it’s great for file syncing, Nextcloud didn’t really work all that well for us as a photo manager (in fairness, that’s probably largely because I’d pulled them in via a Shared Storage rather than putting any real effort in).
Like Nextcloud, Ente is open source. Originally, I’d intended to use Ente’s SaaS offering for a little while and then migrate to self-hosting once I was happy that it was the right solution.
That migration never happened (I’ve been happy enough, and the cost is low enough, that I’ve not really felt the urge to move).
It does mean, though, that the availability of our (ever growing) photo collection is reliant on Ente’s disaster recovery posture.
Ente have not given me any reason to doubt them (in fact, their approach to reliability is well documented), but our photos are utterly irreplaceable and using any hosted offering comes with some risk of disappearance (or of being acquired by a company which goes on to ruin it), often without any prior warning.
To their credit, this is something that Ente explicitly recognised when first introducing the CLI.
This blog post talks about using the ente CLI to automate a periodic incremental backup of the photos that we store in Ente. It’s primarily focused on deploying into Kubernetes but also details how to do so using Docker or a native install.
Containerising
There wasn’t any particular need for the backup to run within Kubernetes, other than that I already had a cluster to run it in.
The CLI is actually pretty simple, so setting things up without using Docker or Kubernetes isn’t too much different (details of that are below).
To keep things lightweight, I based my container on Wolfi:
FROM cgr.dev/chainguard/wolfi-base AS builder
Ente is written in Go, so I installed go and git before cloning Ente’s source down and compiling the CLI:
# The release tag to check out is passed in as a build argument
ARG ENTE_RELEASE
RUN apk add go git \
&& mkdir /build \
&& cd /build \
&& git clone --depth=1 --branch=$ENTE_RELEASE https://github.com/ente-io/ente.git \
&& cd ente/cli \
&& go build -o "bin/ente" main.go
This produced a standalone binary, so I copied it into a fresh image, created the directories that it needed and configured the container to run as a non-privileged user:
FROM cgr.dev/chainguard/wolfi-base
# Copy the built binary over
# Make sure we also ship the license file
COPY --from=builder /build/ente/cli/bin/ente /usr/bin
COPY --from=builder /build/ente/LICENSE /LICENSE
RUN mkdir /cli-data/ /cli-export/ \
&& chown -R nonroot:nonroot /cli-data/ \
&& chown -R nonroot:nonroot /cli-export/
USER nonroot
ENTRYPOINT ["/usr/bin/ente"]
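Because the clone step selects the release via $ENTE_RELEASE, that value needs supplying at build time. A build invocation might look something like this (the tag value is illustrative - pick a current cli-* release):

```shell
# Build the image, pinning the Ente release tag to check out.
# "cli-v0.2.3" is just an example value.
docker build --build-arg ENTE_RELEASE=cli-v0.2.3 -t ente-cli:local .
```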
The full Dockerfile can be found on Codeberg and my build of the image can be pulled from codeberg.org/bentasker/ente-cli-docker.
Storage
The CLI requires a couple of storage volumes:
/cli-data - this is where the CLI will maintain a database of image metadata (along with the creds it uses to talk to Ente).
/cli-export - this is where photos will be exported to.
The CLI data path can be overridden via the ENTE_CLI_CONFIG_DIR environment variable. The export path can be any arbitrary path, but has to be provided when adding an account to the CLI’s config.
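As a quick local illustration (the paths here are my own examples, not anything the CLI mandates), overriding the data path is just a matter of exporting the variable before invoking the CLI:

```shell
# Use non-default locations for the CLI's state and the export target.
# Both directories need to exist before the CLI is pointed at them.
export ENTE_CLI_CONFIG_DIR=/tmp/ente-cli-data
mkdir -p "$ENTE_CLI_CONFIG_DIR" /tmp/ente-cli-export
echo "CLI state will live under: $ENTE_CLI_CONFIG_DIR"
# ente account add   # would now prompt for an export dir, e.g. /tmp/ente-cli-export
```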
Running In Kubernetes
First Time Setup
The CLI isn’t of much use until it’s linked to an account.
Unfortunately, there isn’t a programmatic way to pre-configure it, so I needed to spin up a pod so that I could log in to the CLI.
As it seemed possible that I might need to manually interact with the CLI again in future, rather than manually creating a pod, I defined a deployment but set it to be scaled to 0 pods:
apiVersion: v1
kind: Namespace
metadata:
name: ente-backup
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ente-backup-cli
namespace: ente-backup
spec:
selector:
matchLabels:
app: ente-backup
replicas: 0
template:
metadata:
labels:
app: ente-backup
spec:
containers:
- name: ente-backup
image: codeberg.org/bentasker/ente-cli-docker:v0.1
env:
- name: ENTE_CLI_CONFIG_DIR
value: "/cli-data/"
- name: ENTE_CLI_SECRETS_PATH
value: "/cli-data/.secrets"
command: [
"/bin/sh",
"-c",
"while true; do sleep 3600; done"
]
resources:
requests:
cpu: 150m
memory: 64Mi
volumeMounts:
- mountPath: /cli-data
name: kubestorage
subPath: ente-backup/config
- mountPath: /cli-export
name: kubestorage
subPath: ente-backup/export
restartPolicy: Always
volumes:
- name: kubestorage
nfs:
server: 192.168.3.233
path: "/volume1/kubernetes_misc_mounts"
readOnly: false
The important thing here is that the pod needs to use the same storage volumes as our cronjob will.
Scaling to 0 means that the necessary configuration will be present in the cluster when I need it, but won’t waste resources by running pods unnecessarily.
I scaled the deployment up to 1 so that a pod would come online:
kubectl -n ente-backup scale --replicas=1 deployment/ente-backup-cli
I exec’d into the new pod and triggered the account addition flow:
kubectl -n ente-backup exec -it ente-backup-cli-669dff58f4-vzbsv -- /usr/bin/ente account add
When prompted, I set the export directory to /cli-export/ (you can enter whatever you want, but be aware that the path needs to exist - the setup flow won’t create it for you if it doesn’t).
Once the account had been added, I scaled the deployment back down to 0:
kubectl -n ente-backup scale --replicas=0 deployment/ente-backup-cli
Scheduling
ente was now configured to work with my account.
The next step was to configure an automated run, using a CronJob.
The podspec is, more or less, identical to the spec used for the deployment above. The only real change is the command (which invokes ente export):
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: ente-backup
namespace: ente-backup
spec:
schedule: "0 4 * * *"
failedJobsHistoryLimit: 5
successfulJobsHistoryLimit: 5
concurrencyPolicy: "Forbid"
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: ente-backup
image: codeberg.org/bentasker/ente-cli-docker:v0.1
imagePullPolicy: IfNotPresent
env:
- name: ENTE_CLI_CONFIG_DIR
value: "/cli-data/"
- name: ENTE_CLI_SECRETS_PATH
value: "/cli-data/.secrets"
command: ["/usr/bin/ente", "export"]
volumeMounts:
- mountPath: /cli-data
name: kubestorage
subPath: ente-backup/config
- mountPath: /cli-export
name: kubestorage
subPath: ente-backup/export
volumes:
- name: kubestorage
nfs:
server: 192.168.3.233
path: "/volume1/kubernetes_misc_mounts"
readOnly: false
This schedules the job to trigger at 4am each day.
TL;DR
You can grab a copy of the above config from Codeberg.
You’ll need to update the storage volumes so that they are applicable to your cluster, but once that’s done, you just need to apply:
kubectl apply -f ente-backup.yml
Scale up the deployment so that you can login to the CLI:
kubectl -n ente-backup scale --replicas=1 deployment/ente-backup-cli
kubectl -n ente-backup get pods
kubectl -n ente-backup exec -it ente-backup-cli-669dff58f4-vzbsv -- /usr/bin/ente account add
Once the flow’s completed, scale back down:
kubectl -n ente-backup scale --replicas=0 deployment/ente-backup-cli
Wait for the cron to trigger (or move on to the next section to trigger it manually).
Manual Run
I didn’t want to have to wait for the next day to find out whether the backup had run, so I manually created a job from the CronJob:
kubectl -n ente-backup create job ente-backup-manual --from=cronjob/ente-backup
I then tailed the logs:
kubectl -n ente-backup logs job/ente-backup-manual
It took some time to work through all our photos, but eventually it logged completion.
Starting a new job resulted in a quick exit, as there was nothing new to do.
Running Without Kubernetes
With Docker
For those without a cluster to hand, the container can also be run using Docker.
Just as with Kubernetes, the important thing here is that volumes persist between manual invocations and cron’d runs:
ENTE_BACKUP_DIR=/path/to/backups
# Set up a storage location
mkdir -p ${ENTE_BACKUP_DIR}/ente/data ${ENTE_BACKUP_DIR}/ente/config
# Do the first time setup
docker run --rm \
-it \
-v $ENTE_BACKUP_DIR/ente/data:/cli-export \
-v $ENTE_BACKUP_DIR/ente/config:/cli-data \
codeberg.org/bentasker/ente-cli-docker account add
A backup wrapper would then look something like this:
#!/bin/bash
#
# Trigger the export container
ENTE_BACKUP_DIR=/path/to/backups
cd "$ENTE_BACKUP_DIR"
docker run --rm \
-it \
-v $PWD/ente/data:/cli-export \
-v $PWD/ente/config:/cli-data \
codeberg.org/bentasker/ente-cli-docker
The backup wrapper then just needs adding to a crontab:
0 4 * * * /path/to/wrapper.sh
Without Containers
ente is a standalone binary, so it can also be run without using containers at all.
If you want to build it from source, you’ll need go installed - see the Dockerfile steps above for an indication of how to build it.
If you’re happy fetching a pre-built binary, though, you can grab one from Github:
curl -L https://github.com/ente-io/ente/releases/download/cli-v0.2.3/ente-cli-v0.2.3-linux-amd64.tar.gz | tar xvz
Setup is:
ente account add
and the command that you need to add to cron is:
ente export
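For example, a complete crontab entry matching the 4am schedule used above (the binary location and log path are assumptions on my part) might be:

```
0 4 * * * /usr/local/bin/ente export >> /var/log/ente-backup.log 2>&1
```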
Caveats
There are a couple of caveats here:
Exports are per user account: although Ente allows sharing of albums between users, it’s very much set up as an individual-user thing [1]. If you’ve got multiple Ente users all uploading their own photos (particularly automatically), you’ll need to export from each of them (you can run account add multiple times to add them).
There’s limited overlap protection: the CronJob is configured to try to prevent overlapping runs, but there’s nothing to prevent manually triggering a job while another is running. I don’t know exactly what the outcome of an overlapping run would be, but it’s unlikely to be anything good.
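If overlap is a concern, one mitigation (my own suggestion, not something the CLI provides) is to wrap the export in flock(1) so that a second run bails out instead of proceeding. The sketch below uses a backgrounded sleep where /usr/bin/ente export would go, so that the locking behaviour can be seen in isolation:

```shell
#!/bin/sh
# Demonstrate flock(1)-based overlap protection. In a real wrapper,
# the backgrounded sleep would instead be "/usr/bin/ente export".
LOCK=/tmp/ente-backup.lock

# First run: takes the lock and holds it while it works.
flock "$LOCK" sleep 2 &

sleep 1
# Second run: -n makes flock fail fast if the lock is already held.
if flock -n "$LOCK" true; then
    echo "lock was free, export would run"
else
    echo "another export is already running, skipping"
fi
wait
```

With -n, flock exits with a non-zero status as soon as it finds the lock held, so a manually triggered wrapper simply skips rather than racing the cron'd run.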
Conclusion
There are additional docs on the CLI available here, but the setup above provides for a scheduled incremental backup of an Ente account.
Using this with their hosted service provides the low maintenance associated with using SaaS offerings [2], but maintains some of the access to data that self-hosting would provide.
The export separates images into albums (mirroring the organisation that’s performed in-app), so if something were to happen to Ente, the backup of our photos is already sorted for convenient import into something else.
1. This is one of the things that I like least about Ente - shared albums currently give something of a sub-par experience because they appear in a totally different place within the interface.
2. Well... good ones anyway.