Ubuntu Server 18.04 – Implementing a backup plan

Creating a solid backup plan is one of the most important things you’ll ever do as a server administrator. Even if you’re only using Ubuntu Server at home as a personal file server, backups are critical. During my career, I’ve seen disks fail many times. I’ll often hear arguments about which hard disk manufacturer beats others in terms of longevity, but I’ve seen disk failures so often, I don’t trust any of them. All disks will fail eventually; it’s just a matter of when. And when they do fail, they’ll usually fail hard, with no easy way to recover data from them. A sound approach to managing data is to assume that any disk or server can fail, and that it won’t matter, since you’ll be able to regenerate your data from other sources, such as a backup or secondary server.

There’s no one best backup solution, since it all depends on what kind of data you need to secure, and what software and hardware resources are available to you. For example, if you manage a database that’s critical to your company, you should back it up regularly. If you have another server available, set up a replication slave so that your primary database isn’t a single point of failure. Not everyone has an extra server lying around, so sometimes you have to work with what you have available. This may mean that you’ll need to make some compromises, such as creating regular snapshots of your database server’s storage volume, or regularly dumping a backup of your important databases to an external storage device.
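As one concrete sketch of the dump-to-external-storage compromise, a scheduled job could write a dated, compressed dump each night. The database name (appdb) and the mount point (/mnt/backup) here are assumptions, and note that percent signs must be escaped in a crontab entry:

```
# Hypothetical crontab entry: dump the "appdb" database at 1:00 AM daily
# to an external drive mounted at /mnt/backup (both names are assumptions).
# In crontab syntax, % is special, so it's escaped as \%.
0 1 * * * mysqldump appdb | gzip > /mnt/backup/appdb-$(date +\%m-\%d-\%Y).sql.gz
```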

The rsync utility is one of the most valuable tools available to server administrators. It allows us to do some very wonderful things, and in some cases, it can save us quite a bit of money. For example, online backup solutions are wonderful in the sense that we can use them to store off-site copies of our important files. However, depending on the volume of data, they can be quite expensive. With rsync, we can back up our data in much the same way, with not only our current files copied to a backup target, but differentials as well. If we have another server to send the backup to, even better.

At one company I managed servers for, they didn’t want to subscribe to an online backup solution. To work around that, a spare server was set up as an rsync backup point. We set up rsync to back up to this secondary server, which ended up holding quite a few files. Once the initial backup was complete, the secondary server was shipped to one of our other offices in another state. From that point forward, we only needed to run rsync weekly to back up everything that had changed since the last backup. Sending files via rsync to the other site over the internet was rather slow, but since the initial backup was complete before the server left, all we needed to transfer each week was the differentials. Not only is this an example of how awesome rsync is and how it can be configured to do pretty much what paid solutions do, but the experience was also a good example of utilizing what you have available to you.

Since we’ve already gone over rsync in Chapter 8, Sharing and Transferring Files, I won’t repeat too much of that information here. But since we’re on the subject of backing up, the --backup-dir option is worth mentioning again. This option allows you to copy files that would normally be replaced to another location. As an example, here’s the rsync command I mentioned in Chapter 8, Sharing and Transferring Files:

    CURDATE=$(date +%m-%d-%Y)
    export CURDATE
    sudo rsync -avb --delete --backup-dir=/backup/incremental/$CURDATE /src /target

These commands were part of the topic of creating an rsync backup script. The first command captures today’s date and stores it in a variable named CURDATE, which the actual rsync command then references. The -b option (part of the -avb option string) tells rsync to keep a copy of any file that would normally be replaced: before overwriting a file on the target with a new version, rsync moves the original out of the way. On its own, -b renames the original in place; the --backup-dir option instead tells rsync to move those files to a path we specify. In this case, the backup directory includes the $CURDATE variable, so it will be different every day. For example, a backup run on 8/16/2018 would have the following backup directory, if we used the command I gave as an example:

    /backup/incremental/08-16-2018

This essentially allows you to keep differentials. Files on /src will still be copied to /target, but the directory you identify as a --backup-dir will contain the original files before they were replaced that day.

On my servers, I use the --backup-dir option with rsync quite often. I’ll typically set up an external backup drive, with the following three folders:

  • current
  • archive
  • logs

The current directory always contains a current snapshot of the files on my server. The archive directory on my backup disks is where I point the --backup-dir option to; within it are folders named with the dates the backups were taken. The logs directory contains log files from the backups. Basically, I redirect the output of my rsync command to a log file within that directory, each log file named with the same $CURDATE variable, so I also have a backup log for each day the backup runs. I can easily check any of the logs to see which files were modified during that backup, and then traverse the archive folder to find an original copy of a file. I’ve found this approach to work very well. Of course, this backup is performed with multiple backup disks that are rotated every week, with one always off-site. It’s always crucial to keep a backup off-site in case of a situation that could compromise your entire local site.

The rsync utility is just one of many you can utilize to create your own backup scheme. The plan you come up with will largely depend on what kind of data you want to protect and how much downtime you’re willing to endure. Ideally, we would have an entire warm site with servers that are carbon copies of our production servers, ready to be put into production should any issues arise, but that’s also very expensive, and whether you can implement such a setup will depend on your budget. However, Ubuntu has many great utilities you can use to come up with your own system that works. If nothing else, utilize the power of rsync to back up to external disks and/or external sites.
