Borg - Deduplicating Archiver 2.0.0b21.dev10 documentation


This is borg2!

Please note that this is the README for borg2 / master branch.

For the stable version’s docs, please see here:

https://borgbackup.readthedocs.io/en/stable/

Borg2 is currently in beta testing and might get major and/or breaking changes between beta releases (and there is no beta to next-beta upgrade code, so you will have to delete and re-create repos).

Thus, DO NOT USE BORG2 FOR YOUR PRODUCTION BACKUPS! Please help test it, but set it up in addition to your production backups.

TODO: the screencasts need a remake using borg2, see here:

https://github.com/borgbackup/borg/issues/6303

What is BorgBackup?

BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption.

The main goal of Borg is to provide an efficient and secure way to back up data. The data deduplication technique used makes Borg suitable for daily backups since only changes are stored. The authenticated encryption technique makes it suitable for backups to targets not fully trusted.

See the installation manual or, if you have already downloaded Borg, docs/installation.rst to get started with Borg. Offline documentation is also available in multiple formats.

Main features

Space efficient storage

Deduplication based on content-defined chunking is used to reduce the number of bytes stored: each file is split into a number of variable-length chunks and only chunks that have never been seen before are added to the repository.

A chunk is considered duplicate if its id_hash value is identical. A cryptographically strong hash or MAC function is used as id_hash, e.g. (hmac-)sha256.

To deduplicate, all the chunks in the same repository are considered, no matter whether they come from different machines, from previous backups, from the same backup or even from the same single file.
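The repo-wide deduplication described above can be sketched as a content-addressed store keyed by id_hash. This is an illustration only, not Borg's repository format: the class name, reference counting, and in-memory dict are all hypothetical, but the id_hash itself is the hmac-sha256 keyed MAC the text mentions:

```python
import hashlib
import hmac


class ChunkStore:
    """Toy content-addressed store: one copy per unique id_hash,
    no matter which file, backup, or machine a chunk came from."""

    def __init__(self, id_key: bytes):
        self.id_key = id_key  # secret key for the keyed MAC
        self.chunks = {}      # id_hash -> chunk bytes
        self.refcount = {}    # id_hash -> number of references

    def id_hash(self, chunk: bytes) -> bytes:
        # hmac-sha256: a MAC rather than a plain hash, so chunk ids
        # cannot be precomputed without the secret key
        return hmac.new(self.id_key, chunk, hashlib.sha256).digest()

    def put(self, chunk: bytes) -> bytes:
        cid = self.id_hash(chunk)
        if cid not in self.chunks:   # store only never-seen chunks
            self.chunks[cid] = chunk
        self.refcount[cid] = self.refcount.get(cid, 0) + 1
        return cid
```

Storing the same chunk twice (say, from two machines backing up into the same repository) increments the reference count but adds no new data.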

Compared to other deduplication approaches, this method does NOT depend on:

file/directory names staying the same: So you can move your stuff around without killing the deduplication, even between machines sharing a repo.

complete files or time stamps staying the same: If a big file changes a little, only a few new chunks need to be stored - this is great for VMs or raw disks.

The absolute position of a data chunk inside a file: Stuff may get shifted and will still be found by the deduplication algorithm.

Speed

performance-critical code (chunking, compression, encryption) is implemented in C/Cython

local caching

quick detection of unmodified files

Data encryption
