unsubbed.co

Docker Volume Backup

Docker Volume Backup is a self-hosted backup and recovery tool that backs up Docker volumes locally or to any of a number of supported storage backends.

Open-source Docker volume backup, honestly reviewed. No marketing fluff, just what you get when you actually need your data back.

TL;DR

  • What it is: A lightweight (~25MB) Go-based Docker companion container that backs up named volumes to local storage, S3-compatible endpoints, Azure Blob Storage, WebDAV, Dropbox, Google Drive, or SSH — or any combination of those at once [README].
  • Who it’s for: Self-hosters and homelab operators who run Docker Compose stacks and have experienced (or fear) losing everything when a drive dies. If you’ve ever rebuilt a container and realized the data was inside a named volume with no backup, this is the gap it fills [1][4].
  • Cost: $0 for the software (MPL-2.0). You pay for the storage destination: S3-compatible storage (Backblaze B2 starts at $0.006/GB/month), or $0 if you’re writing to a second local drive. There is no SaaS tier, no paid plan, no vendor relationship to manage.
  • Key strength: Configuration is entirely environment-variable-driven. You add the container to an existing Compose file, point it at volumes, set a cron schedule, and walk away. No agent to maintain, no UI to log into, no external service dependency [README][2].
  • Key weakness: No web UI, no dashboard, no monitoring surface. If a backup silently fails and you haven’t set up notifications, you won’t know until you need to restore. The tool is only as reliable as your alerting setup [README].

What is Docker Volume Backup

Docker Volume Backup is a companion container published by offen.software under the Mozilla Public License 2.0. You add it to any Docker Compose stack alongside your existing services. It wakes up on a schedule, tarballs the volumes you’ve mounted into it, optionally encrypts the archive with GPG, ships it to one or more storage backends, and prunes old backups according to retention rules you set [README].

The project started as a fork of jareware/docker-volume-backup, a shell-script-based approach that had grown heavy (Ubuntu base image, external tools). The offen team rewrote it in Go, cut the image size to roughly 25MB compressed (about 1/20th of the original), removed InfluxDB-specific functionality, added backup rotation for non-AWS backends, and added ARM64 and ARM/v7 support [README]. As of this review it sits at 3,443 GitHub stars.

The tool is genuinely unopinionated about where your backups go. A single backup.env can specify multiple destinations simultaneously — push to a local directory and an S3 bucket and a remote SSH host in a single backup run. That combination strategy is the most useful pattern in practice: local copy for fast restores, remote copy for disaster recovery [README].

One distinguishing behavior worth understanding: when you label a container with docker-volume-backup.stop-during-backup=true, the backup daemon will stop that container before snapshotting the volume and restart it afterward. This is how you get consistent backups of databases or anything that holds state in memory. If you skip the label, the backup runs against a live volume — fine for low-write workloads, risky for Postgres [README].
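
The README's label pattern can be sketched in Compose form. This is a minimal illustration, not a full configuration: service names, volume names, and the image tag are examples.

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - db_data:/var/lib/postgresql/data
    labels:
      # the backup daemon stops this container before archiving
      # and restarts it once the tarball is written
      - docker-volume-backup.stop-during-backup=true

  backup:
    image: offen/docker-volume-backup:v2
    volumes:
      - db_data:/backup/db_data:ro
      # socket access is what enables the stop/restart behavior
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  db_data:
```

Without the socket mount, the label is silently ignored and the volume is archived live.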


Why people choose it

The use case is driven almost entirely by pain. Self-hosters discover the backup problem in one of two ways: they read a cautionary post, or they live the cautionary post.

The XDA Developers piece [1] opens with exactly this: “Everyone talks about self-hosted and getting started with Docker. But there’s one key aspect of self-hosting that a lot of users miss out on — properly backing up their containers. Let’s just say, I discovered that firsthand when the SSD volume on my NAS crashed and brought down my entire smart home with it.” That’s the universal onboarding moment for Docker backup tooling.

The appeal of docker-volume-backup specifically over more complex solutions comes down to operational simplicity. It integrates into an existing Compose file rather than replacing your orchestration. David Peach’s posts [2][3] document using it to push Docker volume backups encrypted to DigitalOcean Spaces — the pattern is straightforward: mount the volume read-only into the backup container, configure S3 credentials and GPG key in the env file, set the cron schedule. Once it’s running, you don’t touch it.
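
That env-file pattern looks roughly like the sketch below. Variable names follow the README's configuration reference, but the endpoint, bucket, and secret values are placeholders; verify each against the docs for the version you run.

```
# backup.env — illustrative values only
BACKUP_CRON_EXPRESSION=0 3 * * *
BACKUP_FILENAME=backup-%Y-%m-%dT%H-%M-%S.tar.gz

# S3-compatible destination (DigitalOcean Spaces in [2]'s setup)
AWS_ENDPOINT=fra1.digitaloceanspaces.com
AWS_S3_BUCKET_NAME=my-volume-backups
AWS_ACCESS_KEY_ID=REPLACE_ME
AWS_SECRET_ACCESS_KEY=REPLACE_ME

# symmetric GPG encryption of the archive
GPG_PASSPHRASE=REPLACE_ME

# prune archives older than seven days
BACKUP_RETENTION_DAYS=7
```

Because everything lives in this one file, rotating credentials or changing destinations never touches the Compose YAML.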

The Virtualization Howto writeup [4] on Home Assistant Docker Compose design decisions puts the storage problem plainly: Home Assistant stores YAML config, a SQLite database, and long-term history in persistent storage. The author’s recommendation for bind mounts comes with the caveat that backups are “as easy as copying a folder” — but copying a folder requires something to do the copying. That’s exactly the niche this tool fills: automated, scheduled, off-host copies of the folders or named volumes your services depend on.

The alternative that gets more buzz in some communities is Repliqate, a label-based backup tool with a Docker-socket approach [1]. The difference is operational philosophy: Repliqate uses container labels to declare what gets backed up and manages all of it from a single daemon; docker-volume-backup is more composable but requires you to mount volumes explicitly into the backup service. Neither is objectively better — Repliqate is slightly more convenient for large stacks, docker-volume-backup gives you more control over per-volume backup destinations.


Features

Based on the README and documentation:

Backup destinations (can combine multiple):

  • Local directory (/archive mount)
  • Any S3-compatible storage (AWS S3, Backblaze B2, MinIO, Cloudflare R2, Wasabi, etc.)
  • Azure Blob Storage
  • WebDAV (Nextcloud, ownCloud, any WebDAV endpoint)
  • Dropbox
  • Google Drive
  • SSH/SFTP remote hosts [README]

Scheduling and execution:

  • Cron-based recurring backups via environment variable
  • One-off backup via docker run --entrypoint backup
  • Manual trigger support documented in how-tos [README]
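
The one-off invocation reuses the same configuration as the scheduled service. A sketch of the pattern the README describes, with illustrative volume and file names:

```shell
# Ad-hoc backup run, no schedule involved
docker run --rm \
  --env-file ./backup.env \
  -v app_data:/backup/app_data:ro \
  -v ./backups:/archive \
  --entrypoint backup \
  offen/docker-volume-backup:v2
```

This is useful for a pre-upgrade snapshot before you pull a new image for a service.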

Data integrity:

  • Container stop-during-backup via Docker labels (docker-volume-backup.stop-during-backup=true)
  • Docker socket mount for container lifecycle control
  • Read-only volume mounts on the backup side
  • Custom pre/post backup commands via container labels [README]
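
For databases, the pre-command hook can replace a raw file snapshot with a proper dump. The label key shown here (docker-volume-backup.archive-pre) and the command are a sketch of the README's pre/post-command feature; check the exact key and semantics against the docs for your version.

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - db_data:/var/lib/postgresql/data
    labels:
      # run pg_dump into the backed-up volume just before archiving,
      # so the tarball contains a consistent SQL dump
      - docker-volume-backup.archive-pre=/bin/sh -c 'pg_dump -U postgres mydb > /var/lib/postgresql/data/dump.sql'
```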

Retention and rotation:

  • Automatic pruning of old backups by age
  • Works on all supported backends, not just S3 (unlike the original which relied on S3 lifecycle policies)
  • Configurable retention window [README]

Security:

  • GPG encryption of backup archives
  • IAM instance profile authentication for S3 (no static credentials required in AWS environments)
  • Env-file based configuration (credentials stay out of Compose YAML) [README]

Notifications:

  • Notifications on completed (success and failure) backup runs
  • Supports multiple notification backends [README]

Infrastructure:

  • Docker Swarm mode support
  • ARM64 and ARM/v7 architecture support (runs on Raspberry Pi, Synology ARM NAS, etc.)
  • Published to both Docker Hub (offen/docker-volume-backup) and GitHub Container Registry (ghcr.io/offen/docker-volume-backup)
  • Image size ~25MB compressed [README]

What it does NOT do:

  • No web UI or dashboard
  • No database-native backup (no pg_dump, no mysqldump — it snapshots the volume files directly, so database consistency depends on stopping the container)
  • No incremental backups — each run produces a full tarball
  • No built-in restore workflow beyond docs guidance

Pricing: software cost vs. storage cost math

There is no SaaS version, no pricing tier, no commercial license. The software is MPL-2.0 and runs on infrastructure you already control. What you pay for is the storage backend.

If you back up to a second local disk or a network mount: cost is whatever drives cost. Effectively $0 in incremental spend.

If you use S3-compatible object storage, here are real numbers for small homelab-scale backups (assume 20GB total backup data with 7-day retention = ~140GB stored):

Provider            | Storage cost/month | Notes
--------------------|--------------------|---------------------------------------------
Backblaze B2        | ~$0.84/mo          | $0.006/GB, first 10GB free
Cloudflare R2       | $0                 | Free tier covers 10GB stored, 1M Class A ops
AWS S3 Standard     | ~$3.22/mo          | $0.023/GB
DigitalOcean Spaces | $5/mo flat         | 250GB included [2]
MinIO self-hosted   | $0                 | You run it on your own hardware
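
The per-GB rows reduce to simple arithmetic. A quick sketch, using the prices quoted above and ignoring free-tier allowances:

```python
# Back-of-envelope object storage cost for full-tarball backups:
# 20 GB of volume data, 7 daily archives retained => ~140 GB stored.
def monthly_cost(gb_stored: float, price_per_gb: float) -> float:
    """Monthly USD cost under flat per-GB object storage pricing."""
    return gb_stored * price_per_gb

stored_gb = 20 * 7  # full archives x retention days, no deduplication

print(f"Backblaze B2: ${monthly_cost(stored_gb, 0.006):.2f}/mo")
print(f"AWS S3:       ${monthly_cost(stored_gb, 0.023):.2f}/mo")
```

Note the linear scaling: because every run is a full tarball, doubling retention doubles stored gigabytes and cost.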

For a typical homelab operator backing up 20–50GB of Docker volume data, the actual cash cost of storage is under $5/month. For anyone currently paying for a managed backup SaaS or using a commercial NAS backup solution, the delta is meaningful.

The consulting angle: the project README explicitly offers paid one-hour consulting sessions for teams that need help integrating docker-volume-backup into existing setups. This is a signal about who uses it professionally — not hobbyists, but teams running Docker in production who want backup without a full backup platform [README].


Deployment reality check

The Compose-based setup from the README is genuinely simple. You add a backup service to an existing docker-compose.yml, mount your volumes into it as read-only, mount the Docker socket if you want container lifecycle control, and provide a backup.env with your credentials and schedule. That’s it for the happy path.
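
The happy-path shape is roughly the following. Service, volume, and path names are examples, not the README's verbatim snippet:

```yaml
services:
  app:
    image: nextcloud:latest
    volumes:
      - app_data:/var/www/html

  backup:
    image: offen/docker-volume-backup:v2
    env_file: ./backup.env             # schedule, destination, credentials
    volumes:
      - app_data:/backup/app_data:ro   # read-only source
      - ./backups:/archive             # local destination
      # optional: socket mount, needed only for stop-during-backup labels
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  app_data:
```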

What you actually need:

  • An existing Docker Compose stack (this is not a standalone tool)
  • A storage destination and credentials for it
  • A cron expression for your backup schedule
  • Optionally: a GPG key if you want encrypted archives
  • Optionally: Docker socket access if you want consistent database backups via container stopping

What can go sideways:

The single most dangerous failure mode is silent. If your backup destination is unavailable (expired credentials, full bucket, network partition) and you haven’t configured notifications, backups silently fail and you learn about it when you need a restore. Setting up notifications — even a simple webhook to a Discord channel — is not optional in practice [README].
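
Wiring that up is a couple of lines in the env file. The README documents shoutrrr-style notification URLs; the variable names below should be checked against your version, and the Discord webhook token and id are placeholders.

```
# added to backup.env
NOTIFICATION_URLS=discord://example-token@example-webhook-id
# "error" notifies only on failed runs; "info" also reports successes
NOTIFICATION_LEVEL=error
```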

Database consistency is a genuine concern. The tool stops and restarts containers via labels, which works for single-container setups. If you have a Postgres container with replicas, or a multi-container application with transactional state, stopping one container may not be enough. For serious database backups you want pg_dump or equivalent running inside the container, not a volume snapshot [4][README].

GPG encryption requires key management. Encrypting the archive is one line of config, but if you lose the private key, the encrypted backups are unrecoverable. Document your key storage separately from your backup storage [2].

The Virtualization Howto piece [4] notes that named Docker volumes (as opposed to bind mounts) require a container to access raw data: docker run --rm -v volume:/data busybox. This is exactly the problem docker-volume-backup solves, but it also means restoration is a manual Docker operation, not a GUI click. Factor that into your recovery plan.
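
A restore under that model looks roughly like this. Container, volume, and archive names are illustrative, and the --strip-components value depends on the path layout inside your archive, so inspect the tarball first:

```shell
# Stop the consumer, unpack the archive back into the named volume, restart.
docker compose stop app
docker run --rm \
  -v app_data:/restore \
  -v "$PWD/backups:/archive:ro" \
  busybox \
  tar -xzvf /archive/backup-2025-01-01.tar.gz -C /restore --strip-components=2
docker compose start app
```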

Time estimate: For a developer already comfortable with Docker Compose: 15–30 minutes to a working backup configuration. For a homelab operator learning Docker: 1–2 hours including reading the documentation and debugging credentials. The documentation at https://offen.github.io/docker-volume-backup is thorough and covers common destinations with worked examples.


Pros and cons

Pros

  • Dead simple integration. It’s another service in your Compose file, not a separate backup platform to operate [README].
  • Genuinely lightweight. 25MB image with no runtime dependencies. Doesn’t affect your stack’s resource usage [README].
  • Storage backend flexibility. S3, Azure, WebDAV, Dropbox, Google Drive, SSH, local — and you can write to multiple destinations per backup run [README].
  • MPL-2.0 license. Less restrictive than GPL, compatible with commercial use. You can ship this in a product without calling a lawyer [README].
  • ARM support. Works on Raspberry Pi and ARM-based NAS devices — the most common homelab hardware [README].
  • Docker Swarm compatible. Not just for single-host Compose setups [README].
  • GPG encryption built-in. One env variable away from encrypted archives. For anyone storing backups off-site, this matters [README][2].
  • Backup rotation for non-AWS backends. The original jareware/docker-volume-backup relied on S3 lifecycle policies for rotation. This version handles it in the tool itself, so rotation works for MinIO, WebDAV, everything [README].
  • IAM instance profile auth. No static AWS credentials required in AWS environments [README].
  • Backed by a real company (offen.software) with a consulting offering — not an abandoned hobby project [README].

Cons

  • No UI. Zero visual interface. If you’re not comfortable with env files and logs, this is not the tool for you [README].
  • Full tarballs only. No incremental or deduplication-based backups. A 50GB volume produces a 50GB (or compressed equivalent) archive on every run. Storage costs and transfer time scale linearly [README].
  • Restore is manual. The documentation covers restoration, but it’s a Docker CLI operation (docker run to extract the tarball back into a volume). No automated restore, no point-in-time browsing [README].
  • Database consistency requires container downtime. Stopping a production database for a volume snapshot is acceptable for homelabs and small teams; it’s not a production pattern for high-availability databases [4][README].
  • Silent failures without notifications configured. The tool will attempt the backup and log the failure, but nothing wakes you up unless you’ve wired up notifications yourself [README].
  • No retention per destination. Retention is configured globally; you can’t say “keep 30 days on S3 but 7 days local” without separate backup service instances.
  • Third-party review coverage is sparse. This is a narrow infrastructure tool — it doesn’t generate the kind of comparison articles that tools with dashboards attract. Finding community reports on edge cases requires digging through GitHub issues.

Who should use this / who shouldn’t

Use Docker Volume Backup if:

  • You run services in Docker Compose and currently have no automated backup for your named volumes.
  • You want a set-it-and-forget-it tool that integrates into your existing stack without adding operational complexity.
  • You’re comfortable with environment variables and Docker Compose YAML.
  • You’re backing up homelab services (Nextcloud, Home Assistant, Gitea, Immich, Vaultwarden, etc.) where brief downtime during backup is acceptable.
  • You want to push to Backblaze B2 or Cloudflare R2 for cents per month rather than paying for a managed backup service.

Skip it if:

  • You’re running production databases where stopping the container during backup is not acceptable. Use database-native backup tools (pg_dump, mysqldump) and ship those archives to S3.
  • You need a visual interface to browse backup history, verify restores, or hand off to a non-technical team member.
  • Your volumes are measured in hundreds of gigabytes and you need incremental/deduplicated backups. Look at Restic or BorgBackup instead.
  • You’re on Kubernetes. Look at Velero for Kubernetes-native volume backup.
  • You need compliance-grade backup with audit logs, immutability, and certified retention. This is not that tool.

Alternatives worth considering

  • Restic — content-addressed, deduplicated, incremental backups. More complex to configure than docker-volume-backup but dramatically more storage-efficient for large or frequently-changed volumes. No native Docker integration; you run it from a script or wrapper [1].
  • Duplicati — GUI-based, runs as a service, supports most cloud backends. Easier for non-technical operators, heavier resource footprint. One Duplicati user calls it “the free, open-source solution that’s secure, flexible, and just works” [1].
  • Repliqate — similar niche to docker-volume-backup, but uses Docker label discovery to find containers automatically rather than requiring explicit volume mounts in the Compose file. Simpler for large stacks, less flexible per-volume configuration [1].
  • BorgBackup / Borgmatic — chunked, deduplicated, encrypted. The serious option for large backup workloads. Steep learning curve compared to anything Docker-native.
  • Velero — the Kubernetes answer to this problem. Not relevant for Docker Compose setups.
  • Kopia — newer deduplication-based backup with a web UI. More operational overhead than docker-volume-backup but significantly better storage efficiency.
  • Commercial NAS backup software (Synology Hyper Backup, etc.) — if you’re on a Synology or QNAP, the built-in tools can back up Docker volumes via the bind-mount path. Less flexible, more integrated with the hardware you already have.

For a homelab operator running Compose stacks on a Linux VPS or bare-metal box, the realistic shortlist is docker-volume-backup vs. Restic. Use docker-volume-backup if setup time and simplicity matter. Use Restic if storage efficiency and incremental snapshots matter.


Bottom line

docker-volume-backup solves a specific, important, frequently-neglected problem: you set up a Docker Compose stack, it runs great for two years, and then a disk dies and you realize nothing was backed up. This tool closes that gap with minimum friction — add a service to Compose, point it at your volumes and a storage backend, set a cron schedule, configure failure notifications, and move on. It won’t replace a proper database backup strategy, it won’t give you a UI to browse snapshots, and it won’t deduplicate large volumes efficiently. But for the 80% case — homelab services, small team self-hosted stacks, volumes measured in gigabytes not terabytes — it does exactly what it says, stays out of your way, and costs nothing beyond cheap object storage. The backup problem for Docker Compose users is real and underappreciated. This is the least-friction solution to it.

If the setup and configuration is the blocker, that’s exactly what upready.dev handles for clients — one-time deployment, production-grade configuration, you own the infrastructure.


Sources

  1. Dhruv Bhutani, XDA Developers, “This is how I keep my Docker backups safe with self-hosted backups” (Sep 10, 2025). https://www.xda-developers.com/self-hosted-docker-backup/
  2. David Peach, “Backing up Docker volume data to Digital Ocean spaces with encryption” (Dec 5, 2023). https://davidpeach.me/tag/homelab/
  3. David Peach — Docker tag archive, homelab and Docker posts (2020–2023). https://davidpeach.me/tag/docker/
  4. Brandon Lee, Virtualization Howto, “Home Assistant Docker Compose Design Decisions: Networking, Storage, Backups” (Jul 22, 2025). https://www.virtualizationhowto.com/2025/07/home-assistant-docker-compose-design-decisions-networking-storage-backups/
  5. Recursive Erudition — Posts on Docker architecture and volume design (2015–present). https://radianttiger.com/category/posts/page/2/
