Backing up a living system
Serve It up Hot
The tools and strategies you use to back up files that are not being accessed won't work when you copy data that is currently in use by a busy application. This article explains the danger of employing common Linux utilities to back up living data and examines some alternatives.
Tools to make backups of your files are plentiful, and the Internet is full of tutorials explaining how to use them. Unfortunately, most entry-level blogs and articles assume the user just wants to back up a small set of data that remains static throughout the backup process.
Such an assumption is an acceptable approximation in most cases. After all, a user who is copying a folder full of horse pictures to backup storage is unlikely to open an image editor and start modifying the files at random while they are being transferred. On the other hand, real-life scenarios often require backups to occur while the data is being modified. For instance, consider the case of a web application that is under heavy load and must continue operating during the backup.
Pitfalls of Common Tools
Traditional Unix utilities such as dd, cpio, tar, or dump are poor candidates for taking snapshots of a folder full of living data. Among these, some are worse than others.
If a program that operates at the filesystem level tries to copy a file that is being written to, for example, it could deposit a corrupt version of the file in the backup storage (Figure 1). If the corruption affects a file whose format follows a simple structure (such as a text file), this might not be a big problem, but files with complex formats might become unusable.
Utilities that operate at the block device level are especially prone to issues when used on live filesystems. A program such as dump bypasses the filesystem interfaces and directly reads the contents physically stored on the hard drive. Although this approach has a number of advantages [1], it carries a big burden. When a filesystem is in use, write operations performed on it go into a cache managed by the kernel and are not committed to disk right away. From the user's point of view, writing a file might look instantaneous, but the file itself will exist only in RAM until the kernel flushes it to storage.
As a result, the actual contents stored on the hard drive can be chaotic, potentially consisting of half-written files waiting for the kernel to make them whole sometime in the future. Trying to read the drive's contents with dump or dd might then return incomplete data and therefore produce a faulty backup.
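You can observe this caching firsthand on most Linux systems. The following sketch assumes /var/tmp sits on a disk-backed filesystem; it writes a file and shows the data lingering in RAM until sync forces it out:

# Write 100MB of data; it initially lands in the kernel's page cache
dd if=/dev/zero of=/var/tmp/testfile bs=1M count=100

# The Dirty counter reports data not yet committed to storage
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Ask the kernel to flush pending writes, then check again
sync
grep -E '^(Dirty|Writeback):' /proc/meminfo

The Dirty value typically spikes right after dd returns and drops back to near zero once sync completes.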
A Solution of Compromise
Venerable Unix solutions are mature and well tested, and sysadmins have good reasons not to let them go. The fact that they are not adequate for backing up folders under heavy load should not be a showstopper, right?
If a backup tool cannot work reliably with a folder under load, the obvious option is to remove the load before backing the folder up. This is certainly doable on a desktop computer: You can just refrain from modifying the contents of your pictures folder while they are being archived by tar.
For servers, this approach is more complex. A hobbyist's personal server can certainly afford to take a service offline for backup, as long as it is done at a time when few or no users are connected. For example, if your personal blog that gets 15 visits a day resides in /var/www, and its associated database resides in /var/mariadb, it might be viable to have a cron job turn off the web server and the database, call sync, back up both folders, and then restart the services. A small website might take a couple of minutes to archive, and nobody will notice if you do it while your target audience is sleeping (Listing 1).
Listing 1
Backup Script for Personal Server
#!/bin/bash

# Proof-of-concept script tested under Devuan. Fault tolerance code
# excluded for the sake of brevity. Not to be used in production.

# Stop the services that use the folders we want to back up.

/etc/init.d/apache2 stop
/etc/init.d/mysql stop

# Instruct the operating system to commit pending write operations to
# the hard drive.

/bin/sync

# Back up using tar and send the data over to a remote host via SSH.
# Public key SSH authentication must be configured beforehand if this
# script is to be run unattended.

/bin/tar --numeric-owner -cf - /var/www /var/mariadb 2>/dev/null | \
  ssh someuser@example.org "cat - > backup_$(date -I).tar"

# Restart services.

/etc/init.d/mysql start
/etc/init.d/apache2 start
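To run such a script unattended during off-peak hours, an entry in root's crontab (edited with crontab -e) can trigger it. The script path below is a hypothetical location:

# Run the backup script every day at 4:00 a.m.
0 4 * * * /usr/local/sbin/backup_server.sh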
On the other hand, for anything resembling a production server, stopping services for backup is just not an option.
Enter the COW
A popular solution for backing up filesystems while they are under load is to use storage that supports copy-on-write (COW).
The theory behind copy-on-write is simple. When a file is opened and modified on a classic filesystem, the filesystem typically overwrites the old file with the new version. COW-capable storage takes a different approach: The new version of the file is written to a free location on the filesystem, and the location of the old version remains registered. The implication is that, while a file is being modified, the filesystem still stores a version of the file that is known to be good.
This ability is groundbreaking because it simplifies taking snapshots of loaded filesystems. The storage driver can be instructed to create a snapshot at the current point in time. If a file is being modified as the snapshot is taken, the old version of the file is used instead, because the old version is known to be in a consistent state, whereas the new version being written might not be.
ZFS is a popular filesystem with COW capabilities. Coming from a BSD background, I tend to consider ZFS a bit cumbersome for a small Linux server. Whereas ZFS feels truly native on FreeBSD, it comes across as an outsider in the Linux world, despite the fact that it is actually gaining traction.
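If you do run ZFS, a snapshot is a one-line affair. The dataset name tank/pictures below is just an assumption for illustration:

# Create an atomic, read-only snapshot of the dataset
zfs snapshot tank/pictures@$(date -I)

# Snapshots are exposed under the dataset's hidden .zfs directory
# and can be read with any regular tool
ls /tank/pictures/.zfs/snapshot/$(date -I)

# Destroy the snapshot once the backup is complete
zfs destroy tank/pictures@$(date -I)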
On the other hand, Linux has had a native snapshot tool for quite a few years: LVM (Logical Volume Manager). As its name suggests, LVM is designed to manage logical volumes. Its claim to fame is its flexibility, because it allows administrators to add more hard drives to a computer and then use them to expand existing filesystems. An often overlooked capability of LVM is its snapshotting function.
The main drawback of using LVM is that its deployment must be planned well in advance. Suppose you plan to deploy a database that stores application data in /var/pictures. To be able to take LVM snapshots of it in the future, the filesystem you intend to mount at /var/pictures must be created within LVM in the first place. For this purpose, a partition on a hard drive must be designated as a physical volume, within which an LVM container will exist, using pvcreate. Then you must create a volume group within it using vgcreate (Figure 2). Finally, you create a logical volume inside the volume group using lvcreate and format it (Figure 3).
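Assuming the spare partition is /dev/sdb1 (the device name and sizes here are placeholders), the preparation sequence looks roughly like this:

# Designate the partition as an LVM physical volume
pvcreate /dev/sdb1

# Create a volume group on top of it
vgcreate database_group /dev/sdb1

# Carve out a logical volume, deliberately smaller than the group so
# that free extents remain available for snapshots later
lvcreate -L 40G -n database_volume database_group

# Format the volume and mount it where the application expects it
mkfs.ext4 /dev/database_group/database_volume
mkdir -p /var/pictures
mount /dev/database_group/database_volume /var/pictures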
Care must be taken to leave some free space in the volume group to host snapshots in the future. The snapshot area need not be as large as the filesystem you intend to back up, but if you can spare the storage, it is advisable.
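You can check how much unallocated space remains in the volume group at any time with the standard LVM reporting tool:

# Report the volume group's total and free capacity
vgs -o vg_name,vg_size,vg_free database_group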
If one day you need to make a backup of /var/pictures, the only thing you need to do is create a snapshot volume using a command such as:

lvcreate -L 9G -s -n database_snapshot /dev/database_group/database_volume
The snapshot volume may then be mounted with mount under a different directory, like a regular filesystem, when you are ready:

mkdir /var/pictures_snapshot
mount -o ro /dev/database_group/database_snapshot /var/pictures_snapshot
You may then copy the contents of the snapshot using any regular tool, such as rsync, and transfer them over to definitive backup storage. The files under /var/pictures_snapshot are immutable and can be copied over even if the contents of /var/pictures are being modified during the process.
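A complete cycle might look like the following sketch; the remote host and destination path are placeholders:

# Copy the frozen snapshot over to the backup host
rsync -a /var/pictures_snapshot/ someuser@example.org:backups/pictures/

# Release the snapshot once the copy is done; a lingering snapshot
# keeps consuming volume group space as the origin volume changes
umount /var/pictures_snapshot
lvremove -y /dev/database_group/database_snapshot

Removing the snapshot promptly matters: If its copy-on-write area fills up before you delete it, LVM invalidates the snapshot.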