Alright. In hindsight I might’ve been a bit too optimistic committing to publishing a post every few weeks. How long has it been at this point? Five months. Time really does fly by, but better late than never I suppose. I really want to write quality posts for you, but that takes time, effort, and the right motivation. I finally found all three and here I am again, writing about something as interesting as backups. No, please don’t leave. I swear it’ll be good!
Whom would you entrust with your deepest thoughts
For as long as I can remember, I’ve been concerned about privacy. Not for any particular reason, I just believe that it’s a basic right for everyone and the European Convention on Human Rights backs me up (pun intended). Data has become a highly valued commodity used for everything from advertising and training large language models to oppression by nation states. It seems like anyone who can get their hands on my data will try to monetise it or otherwise abuse it.
All these seemingly free services I’ve enjoyed for years have notoriously exploited my data to make money. While I acknowledge that I’ve gained some value from these services, it’s likely that I got the short end of the stick. Given the current political situation, with wannabe dictators rising to power and corporations finding new ways to monetise my data, I think it’s high time I did something about it, and I’ll start by bringing my data home and hosting it myself.
That doesn’t sound too difficult. I’ll just download all my data, delete my accounts, and slam bam, job done. Well, not quite. One thing these cloud services have provided me is peace of mind: if I were to lose my laptop or phone, the data I care about, such as pictures or source code, would not be lost. So I need a way to back up my data without giving large tech companies any control over it. That’s the goal of today’s post.
Narrowing down the options
Before this, I’d never really thought of backing up my data explicitly. I’d used one service for synchronising important files on my desktop, another one for keeping my photo library safe and a third one for hosting source code. I had no control over these services. I didn’t know how they were implemented, and I couldn’t verify that they would actually keep my data safe and private. In order to remedy this, I’ll have to find a backup solution I can trust or verify - preferably both.
Since I’m already using Proton for e-mail, I first considered just using Proton Drive, which is their cloud storage product. However, a few things deterred me: mainly that their SDK is not ready for production, and that I couldn’t get the Rclone Proton Drive backend to work properly, which would be important for automating the process. The steep price of €10 for 500 GB also didn’t help. Then I started considering what I actually needed:
- a vendor agnostic solution
- that I can easily automate
- which encrypts my data
- with no proprietary client
- that supports incremental backups
With those requirements in mind, I had another look at Rclone. It supports lots of providers and protocols, and it even has virtual providers for encryption and compression, which seemed really promising. Unfortunately, it’s not designed for backups. While I could probably make it work with a bit of hacking and scripting, I would rather not have a fragile and potentially complicated backup solution to maintain. That meant going back to the drawing board and researching alternatives. I found a few tools that looked promising, like rsnapshot, but the one that really caught my eye was the purpose-built and modern backup program Restic. It ticked off all the boxes and is highly configurable while still looking quite simple to use, so I decided to evaluate whether it could work for me.
Restic supports a bunch of different storage options including using a local directory. This meant I could play around with it without needing any cloud storage. I’ll go through the process in more detail in the next section. For now, I installed the binary, ran a few commands, and ended up with an encrypted backup stored locally. Being able to create a local backup repository means that even if Restic doesn’t support the storage medium I want, I can just copy the repository anywhere.
With that in mind, I proceeded to look at hosting options. I just wanted something cheap, hosted in the EU. It didn’t take long before I stumbled upon Hetzner’s Storage Box, a simple storage server that supports a few different modes of transfer, including SFTP, and provides 1 TB for €4 per month - a much better deal than what I would get from Proton. I’m not affiliated with Hetzner; I chose it because it seemed like a decent, cheap option that satisfied my requirements.
Restic in practice
Once the restic binary is installed on the system I want to back up, I can either initialise a new repository or execute commands on an existing one. I’ll go through the entire flow of setting up a new repository for the sake of completeness.
Since I’ve decided on using SFTP for my backup I’ll add the public part of my SSH key for authenticating to the server, then initialise my repository. When initialising the repository I’m prompted to enter a password. This password is used to encrypt my data and will be required for any subsequent commands. I need to keep the password safe. If I were to lose it, I wouldn’t be able to access my backup again. Ever.
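As a sketch of that key setup (the key path, comment, and upload step are my own choices here, and the upload mechanism will differ per provider), generating a dedicated key pair might look like this:

```shell
# Generate a dedicated key pair for backups (path and comment are
# hypothetical choices, not from my actual setup).
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N '' -C 'restic-backup' -f "$HOME/.ssh/restic_backup"

# The public part then goes to the storage server. With a plain SSH/SFTP
# server that could be, for example:
#   ssh-copy-id -i ~/.ssh/restic_backup.pub user@host
cat "$HOME/.ssh/restic_backup.pub"
```

Using a dedicated key rather than your everyday one means you can revoke it later without touching anything else.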
# For an SFTP server - more connectors available here: https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html
➜ restic init -r sftp:[email protected]:/backup
# Alternative: for a simple directory repository
➜ restic init -r ~/backup
The largest annoyance I found when using Restic, and especially when using a non-local repository, is the fact that I have to provide the repository and password for every single command. Luckily, this can be remedied by setting a few environment variables, and as long as you don’t have a lot of different repositories to manage, this makes everything extremely simple. Since I’m using Nix, I’ll use sops-nix to create two files, one with the repository path and one with the repository password, and set the following environment variables:
➜ export RESTIC_REPOSITORY_FILE='/run/secrets/restic-repo'
➜ export RESTIC_PASSWORD_FILE='/run/secrets/restic-pass'
Now I can just issue the commands without thinking about the repository or the password.
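If you’re not on Nix, a minimal equivalent of that setup is just two files that only you can read. This is a sketch with placeholder contents (the paths under ~/.config/restic and the password value are my assumptions, not real credentials):

```shell
# Create the two secret files with restrictive permissions (0600).
# The repository path mirrors the one used earlier; the password is a
# placeholder - use your real one.
umask 077
mkdir -p "$HOME/.config/restic"
printf '%s\n' 'sftp:[email protected]:/backup' > "$HOME/.config/restic/repo"
printf '%s\n' 'a-long-random-password'       > "$HOME/.config/restic/pass"

# Point Restic at the files, as in the sops-nix setup above.
export RESTIC_REPOSITORY_FILE="$HOME/.config/restic/repo"
export RESTIC_PASSWORD_FILE="$HOME/.config/restic/pass"
```

The `umask 077` before creating the files is what keeps other local users from reading your repository password.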
There are other ways to resolve the password, including providing Restic with a command it will run instead of asking for the password. It will then use the stdout from that command to authenticate. That means you could get the password straight from your password manager if it has a CLI. I’m using Bitwarden, so something like this would resolve the password directly from there:
➜ export RESTIC_PASSWORD_COMMAND='bw get password 00000000-0000-0000-0000-000000000000'
That’s pretty cool.
There are more options for passing the parameters to Restic, so you’re not limited to what I’ve covered. Run restic --help to explore the alternatives.
I think that’s enough plumbing for now, let’s start backing something up. One of my most precious directories is ~/photos, so let’s back it up.
➜ restic backup ~/photos
Wait, what? Is it that simple? Yes it is! I now have a snapshot I can view using restic snapshots.
➜ restic snapshots
ID        Time                 Host   Tags  Paths                     Size
------------------------------------------------------------------------------------------
36b31249  2026-03-30 10:17:16  argon        /home/andreasvoss/photos  4.716 GiB
------------------------------------------------------------------------------------------
1 snapshots
In case I were to accidentally delete some files in the ~/photos directory, I would be able to recover them from the snapshot using the restic restore command providing it with a snapshot ID along with a target. So let’s delete some files then restore them from the snapshot!
➜ ls ~/photos
birthday-party wildflower-ultra
➜ rm -rf ~/photos/wildflower-ultra
➜ ls ~/photos
birthday-party
I’ve deleted one of my valued photo albums, so I really hope restoring it from the snapshot is going to work. In this case I just want to place the files where they were originally, so I’ll give the / directory as my target. Depending on the situation, you might not want to restore in place though.
➜ restic restore 36b31249 --target /
➜ ls ~/photos
birthday-party wildflower-ultra
That’s it. Simple. Elegant. I have only just scratched the surface of the possibilities you have when using Restic, so I encourage you to dive into the documentation and give it a whirl. Here are a few commands I’ve found really useful:
- restic diff - get the difference between two snapshots, useful if you notice a significant change in snapshot size
- restic forget - delete snapshots, for example those older than a specified age
- restic check - check the backup repository for errors to ensure you’ll be able to restore from your snapshots
- restic ls - list the files in a given snapshot
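The restic forget command pairs naturally with a retention policy. The schedule below is purely an assumption of mine (the post doesn’t prescribe one), but the flags are standard Restic ones; writing it as a small script makes it easy to schedule alongside the backup later:

```shell
# A hypothetical retention script: keep 7 daily, 4 weekly and 6 monthly
# snapshots, then prune data that no snapshot references anymore.
# The secret file paths match the ones used for the backup script.
mkdir -p "$HOME/backup-service"
cat > "$HOME/backup-service/forget.sh" <<'EOF'
#!/bin/sh
set -eu
restic forget \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --prune \
  --repository-file /run/secrets/restic-repo \
  --password-file /run/secrets/restic-pass
EOF
chmod +x "$HOME/backup-service/forget.sh"
```

Without some forget policy, daily snapshots accumulate forever; --prune actually reclaims the space the forgotten snapshots used.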
Now that I’ve set up my backup repository I just need to automate it.
Reducing the cognitive load through automation
Backing up files manually isn’t ideal, and since I’m only human, I might forget. There are a couple of options to choose from when looking to automate the process, mainly using either crontab or systemd-timers. When using Nix, it’s recommended to use systemd-timers, so that’s what I’ll do. I’m pretty sure that most of you won’t be using Nix, so here’s an example you can use without it. It’s not too bad. It’s just a couple of simple unit files and an executable script, let’s walk through it.
The backup.service unit file is just a oneshot service that points to the script I’ve made to execute the restic backup command.
[Unit]
Description=Backup
[Service]
ExecStart=%h/backup-service/backup.sh
Type=oneshot
The backup.timer will start the backup.service on the schedule defined by OnCalendar. I’m interested in a daily backup, so I’ve set this value to daily. You can make pretty advanced schedules with systemd timers; in case you need that, the ArchWiki is a great resource for learning about the options.
[Unit]
Description=Backup
[Timer]
OnCalendar=daily
Persistent=true
Unit=backup.service
[Install]
WantedBy=timers.target
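To illustrate what a more advanced schedule could look like (this is an example of mine, not what I’m actually running), a timer that fires every weekday at 03:00 would use:

```ini
[Timer]
# Run Monday through Friday at 03:00. Persistent=true makes systemd
# catch up on a missed run the next time the machine is up.
OnCalendar=Mon..Fri 03:00
Persistent=true
```

You can sanity-check any OnCalendar expression with systemd-analyze calendar 'Mon..Fri 03:00', which prints the next times it would trigger.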
Finally, backup.sh will do the actual work using Restic. There is not much to explain that I haven’t already covered, with the exception of the --tag flag I am passing. I want to be able to differentiate between manual backups and automatic ones, so I’ve added a couple of tags to my automatic backups. You can use these tags to filter snapshots when listing them, which is pretty neat.
#!/bin/sh
restic backup /home/andreasvoss/photos \
--tag daily,automatic \
--repository-file /run/secrets/restic-repo \
--password-file /run/secrets/restic-pass
The files in my example are stored in ~/backup-service, so I’ll create symbolic links in ~/.config/systemd/user, make the script executable, then enable and start the timer.
➜ ln -sf /home/andreasvoss/backup-service/backup.service /home/andreasvoss/.config/systemd/user/backup.service
➜ ln -sf /home/andreasvoss/backup-service/backup.timer /home/andreasvoss/.config/systemd/user/backup.timer
➜ chmod +x /home/andreasvoss/backup-service/backup.sh
➜ systemctl --user daemon-reload
➜ systemctl --user enable backup.timer
➜ systemctl --user start backup.timer
There we go, my backup is now automated and my important data is safely stored.
So what’s next?
Now that I know my data is backed up securely at a secondary location, I can think about how to access it as conveniently as I could with the cloud services I’ve now replaced. I’ve looked at a couple of really cool projects I want to self-host, like Immich, Gitea and Syncthing. I’m going to explore these further, so stay tuned for more blog posts once I’ve had time to experiment a bit.