source server -> run Borg backup -> export volumes from Borg -> create a tar stream or tar file on disk -> copy it to the target with scp -> target server -> receive the archive -> extract it -> restore database and config -> start the app -> verify before cutover
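The archive-on-disk flow above can be sketched as a script. Everything here is a placeholder (temp directories stand in for the real app layout), and the scp hop is left as a comment since it needs a reachable target host:

```shell
# Sketch of the traditional flow: export to a tar file on disk, then copy it.
# All paths are placeholders; replace with your real app layout.
set -eu

SRC=$(mktemp -d)    # stands in for /srv/apps/myapp/volumes on the source
DST=$(mktemp -d)    # stands in for the restore root on the target
echo "app data" > "$SRC/uploads.txt"

# 1. Create the tar archive on disk (this is the extra disk cost).
tar -cf /tmp/myapp-volumes.tar -C "$SRC" .

# 2. Copy it to the target (needs a real host, so shown as a comment):
# scp /tmp/myapp-volumes.tar user@target:/tmp/

# 3. On the target: extract and verify.
tar -xf /tmp/myapp-volumes.tar -C "$DST"
cat "$DST/uploads.txt"
```

Note that step 1 materializes the full archive next to the live data, which is exactly the disk pressure the next section enumerates.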
existing application data
+ Borg repository or backup cache
+ exported tar archive
+ optional compressed archive
+ partial transfer file
+ logs and temporary files
rsync -aHAX --numeric-ids --partial --partial-dir=.rsync-partial \
  /srv/apps/myapp/volumes/ \
  user@target:/srv/apps/myapp/volumes/
borg export-tar -> stdout -> FastFileLink CLI -> stdout on target -> tar extract
borg export-tar "$BORG_REPO::$ARCHIVE" - \
  | "$FFL" - \
      --name "$APP_NAME-volumes.tar" \
      --e2ee \
      --stdin-cache off \
      --max-downloads 1 \
      --pickup-code "$PICKUP_CODE"
"$FFL" download "$LINK" \
  --pickup-code "$PICKUP_CODE" \
  --e2ee \
  --stdout \
  | tar xvf - -C "$RESTORE_ROOT"
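The sender/receiver pair above can be exercised end to end without any servers by replacing the FastFileLink hop with a plain local pipe (`cat` stands in for the sender and receiver). Checksums confirm the stream arrives intact with no archive file ever written:

```shell
# Local stand-in for the streaming pipeline: tar stream -> pipe -> tar extract.
# 'cat' takes the place of the FastFileLink sender/receiver pair.
SRC=$(mktemp -d); DST=$(mktemp -d)
head -c 65536 /dev/urandom > "$SRC/blob.bin"

tar -cf - -C "$SRC" . | cat | tar -xf - -C "$DST"

a=$(sha256sum "$SRC/blob.bin" | cut -d' ' -f1)
b=$(sha256sum "$DST/blob.bin" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "checksums match"
```

The same shape holds when the middle of the pipe is the real transfer tool: neither end ever holds the full archive on disk.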
source server: no exported archive on disk
target server: no downloaded archive on disk
transfer path: data streams directly into the target layout
source app backup snapshot -> Borg export stream -> FastFileLink CLI sender -> FastFileLink CLI receiver -> tar extracts directly into the target app folder -> volumes land in place
Deploy.py migrate -> app bin/migrate -> ExportVolumes from Borg -> source streams into FastFileLink CLI -> target receives from FastFileLink CLI -> volumes extract directly into the target app folder -> database and privileges are restored -> backup history can also be transferred when desired -> target app is installed and started -> verification happens before cutover
source process is alive
target process is alive
target output is growing
logs show an accepted transfer path
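The first three signals can be checked mechanically. A toy sketch, where a background writer stands in for the receiving side of the pipe: `kill -0` probes liveness without signaling, and two size samples confirm the output is actually growing:

```shell
# Toy progress check: a background writer stands in for the receiving 'tar'.
out=$(mktemp)
( i=0; while [ "$i" -lt 4 ]; do echo "chunk $i" >> "$out"; i=$((i+1)); sleep 1; done ) &
pid=$!

size1=$(wc -c < "$out")       # sample the output size...
sleep 2
size2=$(wc -c < "$out")       # ...and sample it again later

kill -0 "$pid" 2>/dev/null && echo "receiver alive"
[ "$size2" -gt "$size1" ] && echo "output growing"
wait "$pid"
```

If the process is alive but the size samples are equal, the pipeline is stuck, not slow, and the logs are the next place to look.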
1. Restore app files, volumes, database, and config on the target.
2. Start the target containers or services.
3. Run health checks.
4. Verify static files and uploaded files.
5. Verify database-backed pages and the login flow.
6. Check logs for path, permission, or environment-variable issues.
7. Only then schedule the DNS or load-balancer cutover.
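Steps 3 through 6 are easiest to keep honest as a small checklist runner that records failures instead of aborting on the first one. This is a sketch: the two checks here are placeholders for real probes such as curl against the app's health endpoint or a grep over its logs:

```shell
# Minimal checklist runner. Each check is named; failures are counted, not fatal.
fail=0
run_check() {
  name=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS $name"
  else
    echo "FAIL $name"; fail=1
  fi
}

RESTORE_ROOT=$(mktemp -d)            # placeholder for the restored app root
touch "$RESTORE_ROOT/config.yml"

run_check "restore root exists" test -d "$RESTORE_ROOT"
run_check "config restored"     test -f "$RESTORE_ROOT/config.yml"
echo "checks failed: $fail"
```

Only a run with zero failures should unlock step 7, the DNS or load-balancer cutover.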
borg export-tar repo::archive -
pg_dump mydb
mysqldump mydb
tar -cf - /large/folder
zfs send pool/dataset@snapshot
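Each producer above writes a byte stream to stdout, so any consumer that reads stdin works unchanged. A quick local check of the pattern, with plain `tar -cf -` as the producer and `tar -tf -` as a consumer that only lists the stream instead of extracting it:

```shell
# Producer | consumer over a pipe: list the stream's contents, write nothing.
SRC=$(mktemp -d)
echo "hello" > "$SRC/notes.txt"
tar -cf - -C "$SRC" . | tar -tf -
```

Swapping the producer for `borg export-tar`, `pg_dump`, or `zfs send` changes only the left side of the pipe.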
producer | ffl - --stdin-cache off --max-downloads 1
ffl download "$LINK" --stdout | consumer
read the backup stream
transfer it immediately
restore it directly on the target
- prepare the source and target app layout
- invoke the app's own migrate and backup scripts
- export volumes
- dump and restore the database
- restore configuration
- start the target app and verify it

- Did the sender actually begin reading stdin?
- Did the receiver really connect?
- Are we using a direct path or a fallback path?
- If both processes are still alive, is progress real or are we just stuck?
- Did the producer fail first, or did the transfer tool fail first?

- stream volumes from Borg through FastFileLink CLI
- restore database dumps on the target
- restore database privileges
- optionally bring over the Borg backup repository
- pull the app image
- start the target container

- migrating old or nearly full servers
- restoring production backups into development VMs
- disaster recovery transfers
- moving large user-upload archives
- transferring database dumps without leaving dump files behind
- temporary environments where prearranged SSH trust is inconvenient