4.1. Summary of Important Features

Let's start with a brief overview of Amanda's architecture. This will help you understand the most important concepts in Amanda functionality.

4.1.1. Client/Server Architecture Using Nonproprietary Tools

Amanda is designed to handle large numbers of clients and large amounts of data, yet is reasonably simple to install and maintain. As a matter of fact, it takes more time to order a pizza than to configure an Amanda server with two Linux clients and one Windows client and to start a test backup. A white paper available at http://amanda.zmanda.com/quick-backup-setup.html provides detailed information about configuring Amanda backup in less than 15 minutes.

Amanda scales well up and down, so small configurations (even a single client) are possible. Many users back up just a single client that is also the Amanda server. On the other hand, many Amanda users back up hundreds and even thousands of filesystems (there could be multiple filesystems per protected system) to a large tape library with multiple drives.

The Amanda code is written in C (with some Perl and shell scripts) and is portable to any flavor of Linux and Unix, including Mac OS X. Windows clients can be backed up today via Samba or via a Cygwin client, which is a Linux-like environment for Windows. The Amanda community is actively working on providing a native client for Windows. The new Windows client will take advantage of Microsoft technologies such as Volume Shadow Copy Service (VSS) that provide snapshots of a system's volumes, including snapshots of open files.

The biggest advantage of Amanda over any other backup software is that Amanda does not use any proprietary data formats. It uses standard operating system utilities such as dump and tar, or open-source utilities available in many operating systems such as GNU tar, smbtar, and Schily tar, and uses the same archive format on the media. Depending on which one is the best match for your filesystems, directories, and files, you can mix and match these utilities as you wish. Since you use standard utilities, you can be confident that they will always be available to you. Another advantage of using standard utilities is that in case of disaster recovery or any other emergency, you can recover your data even without Amanda. (We explain how to recover data without Amanda when we discuss Amanda restores.)
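To make that concrete, here is a hedged sketch of pulling a single GNU tar image off an Amanda tape with nothing but standard tools. Each image on the tape starts with a 32 KB plain-text Amanda header that, if you read it with dd, shows the exact restore command to use; the device name, file position, and compression settings below are assumptions about your setup.

# Position the non-rewinding tape device at the first backup image
# (file 0 on the tape is the Amanda label).
mt -f /dev/nst0 rewind
mt -f /dev/nst0 fsf 1

# Skip the 32 KB Amanda header and unpack the image. Drop the gzip step
# if the image was written without compression, and use restore(8)
# instead of tar if the image was made with dump.
dd if=/dev/nst0 bs=32k skip=1 | gzip -dc | tar -xpvf -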

Because Amanda uses standard utilities, it provides the following:

  • Backup of sparse files

  • Backup of hard links

  • No changes to file timestamps during backup

  • Exclusion of files and directories

From the system-administrator perspective, it is very important that Amanda does not use any proprietary device drivers. Any device supported by an operating system works well with Amanda. In practical terms, this means that Amanda supports a wide range of tape storage devices, and new devices are usually not difficult to add. Many tape changers, stackers, jukeboxes, and tape libraries are supported by using special tape changer scripts to provide truly hands-off and lights-out backup. Basically, if you can read and write to your tape drive and move tapes in your tape library with standard operating system commands such as mt, Amanda will work with your tape library. Because Amanda doesn't use proprietary device drivers, another benefit is that you don't have to worry about breaking support for a device when upgrading to the latest version of Amanda.
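If you are not sure whether a particular drive or library will cooperate, a quick sanity check along the following lines is usually enough. This is only a sketch: the device names /dev/nst0 (non-rewinding drive) and /dev/sg1 (library robot) are assumptions about your system.

mt -f /dev/nst0 status
mt -f /dev/nst0 rewind
tar -cf /dev/nst0 /etc/hosts        # write a small test archive
mt -f /dev/nst0 rewind
tar -tf /dev/nst0                   # read it back
mtx -f /dev/sg1 status              # if you have a changer, list its slots
mtx -f /dev/sg1 load 2 0            # move the tape from slot 2 into drive 0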

To understand Amanda architecture and inner workings, let's take a look at a simplified Amanda configuration and review an example of a backup cycle, illustrated in Figure 4-2.

Figure 4-2. Amanda server with two backup clients


To simplify our discussion, let's assume that we have only two Amanda clients that run on two workstations: workstation Copper running Solaris and workstation Iron running Linux. Each workstation has two filesystems with users' data that we want to protect. Amanda server Quartz is installed on a different Linux host and, for simplicity's sake, we don't back up the Amanda server itself. (In your production and evaluation environments, you should always back up the Amanda server.) Let's also assume that we want to run a full backup once every four days and incremental backups between full backups.

Amanda has a traditional client/server architecture. The Amanda server, also historically known as the tape host, is connected either directly or over a Storage Area Network to a tape drive or tape changer. Each client backup program is instructed to write to standard output, which Amanda collects and transmits to the tape server. The client/server architecture provides these benefits:

  • It ensures scalability of Amanda from environments with a single client and CD-ROM to environments with hundreds of clients and large tape libraries with multiple tape drives and hundreds of tapes.

  • It allows all configurations to be done on the Amanda server. Once the initial configuration of Amanda is done, you can easily add clients without worrying about breaking your tested backup procedures.

  • It allows some CPU-intensive operations such as compression or encryption to be done on a client before sending backup images to the Amanda server. However, in some situations, for example when Amanda clients are running within virtual machines, these CPU-intensive operations can also be done on the Amanda server.

Considering the ever-increasing importance of securing backup data from a privacy and compliance perspective, here is a brief overview of Amanda security.

4.1.2. Amanda Security

Amanda clients communicate with the Amanda server via its own network protocols on top of TCP and UDP. Amanda's client/server communications do not suffer from the security holes inherent in the traditional rmt approach used by dump, such as using an .rhosts file in root's home directory.

As in every other client/server setup, you should ensure that only your own, trusted Amanda server is able to communicate with Amanda clients. Amanda achieves that with the .amandahosts file. You can see in Figure 4-2 that there are three .amandahosts files: one on the Amanda server Quartz and one on each Amanda client. On the client side, you add the name of the Amanda server (or servers, if you prefer the same host to be protected by multiple Amanda servers) and the Amanda user that is allowed to back up the client. For example, the .amandahosts file for the Linux client Iron in Figure 4-2 should have the following entry:

quartz.zmanda.com     amandabackup

That tells the Amanda client Iron to let Amanda server Quartz communicate with user amandabackup.

During restores, you need access to an Amanda server. For the configuration presented in Figure 4-2, the .amandahosts file on the tape server Quartz should have the following entries:

iron.zmanda.com       root
copper.zmanda.com     root

These entries tell the Amanda server to allow the root user on each client to run restores. For security reasons, Amanda was designed to allow only the root user to restore data.

For stronger data transport security, Amanda can also use OpenSSH. This allows Amanda to protect the transfer of data between clients and backup servers with strong authentication and authorization mechanisms. Amanda also features an abstracted secure communication API that enables developers to easily add different communication plug-ins between backup server and client. Even with a single backup server, Amanda can use different communication mechanisms for different clients.

To protect data on the backup media itself, Amanda provides the ability to encrypt backup data with symmetric or asymmetric encryption algorithms (using either aespipe or gpg). Encryption is very expensive in terms of CPU utilization, which is why Amanda encryption can be done either on the server or on the client. (Do it wherever you have more CPU cycles available.) In addition to relieving the Amanda server's CPU, client-side encryption also ensures the security of data on the wire, which can be important when backing up remote clients. Because of CPU constraints, you might choose to encrypt only certain data. Amanda is flexible enough to configure data encryption for a single directory or even for a single file. If aespipe and gpg don't match your encryption requirements, Amanda will work with your custom encryption utilities.
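As a concrete illustration, the dumptype below enables client-side encryption for whatever filesystems it is applied to. This is a minimal sketch following the Amanda 2.5-era configuration syntax: the amcrypt wrapper (an aespipe-based script shipped with Amanda), its path, and the parent global dumptype are assumptions about your installation.

define dumptype encrypted-user-tar {
    global
    program "GNUTAR"
    encrypt client                        # run the encryption program on the client
    client_encrypt "/usr/sbin/amcrypt"    # aespipe-based wrapper distributed with Amanda
    client_decrypt_option "-d"            # option passed to the same program during restore
}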

Amanda does not manage encryption keys. A system administrator should take care to safeguard the keys and make them available during restore.


Amanda works with Security-Enhanced Linux (SELinux), and it also works reasonably well with common types of firewalls between Amanda servers and clients as long as you select UDP and TCP port ranges during the initial setup. Please check installation and configuration details for firewall setup at http://wiki.zmanda.com.

To conclude this brief overview of Amanda security, we want to emphasize that the flexibility of the security configurations allows Amanda to fit well into the security policies and processes of most IT environments, including organizations with strict security requirements.

4.1.3. Holding Disk

You might recall that Amanda is actually an acronym, and the D in Amanda stands for disk. To explain how Amanda moves data from the client to its final destination on tape or disk, we first need to introduce the holding disk.

Figure 4-2 shows that the Amanda server Quartz has a holding disk attached. A holding disk is one or several directories on any filesystem that is accessible from the Amanda server. It could be as small as a single 10 GB directory on the Amanda server drive or as large as 5 to 10 TB on a fibre-attached RAID array. As the name suggests, the holding disk is used as a cache to store backup data from all Amanda clients. Each set of backup data from a client filesystem or a client directory is just a bunch of files on the holding disk. Later, an independent Amanda process flushes individual backup images from the holding disk to tape or virtual tape at the maximum throughput possible to keep the tape drive streaming. Using a holding disk as a staging area for backups has several benefits.
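In amanda.conf, a holding disk is simply a named block that points at a directory. A minimal sketch, with the path and sizes as illustrative assumptions:

holdingdisk hd1 {
    comment "staging area on the server's local RAID"
    directory "/amanda/holding"    # any filesystem path writable by the Amanda user
    use 200 Gb                     # how much of that filesystem Amanda may fill
    chunksize 1 Gb                 # split images into chunks to avoid filesystem file-size limits
}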

Modern tape drives are very fast. Even gigabit networks cannot feed backup data from a single client through the Amanda server to modern tape drives fast enough to avoid shoe-shining, which reduces throughput and shortens the life of both the media and the drive (see Chapter 9 for details on shoe-shining). The holding disk collects data from all clients and as soon as the first backup is complete, it starts feeding data to tape as fast as the Amanda server can push it. However, many users prefer to complete backups of all clients before they start flushing data to tape.

A holding disk can accept data streams from multiple clients in parallel to overcome the sequential nature of a tape. Instead of writing one backup to tape after another, you can configure multiple backups running in parallel and make full use of your available network bandwidth, thus reducing total backup time. If the network becomes your bottleneck for performance, you can reduce total backup time by adding another NIC to the backup server or dedicating a separate network for backups. The use of multiple holding disks can also improve overall backup performance.
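The degree of parallelism is controlled in amanda.conf; a sketch with illustrative values (the numbers are assumptions you should tune to your network and server):

inparallel 10          # up to 10 client dumps may run at once into the holding disk
netusage 8000 Kbps     # cap on the total network bandwidth Amanda will consume

A per-client limit can also be set with the maxdumps dumptype parameter if a single client has several fast disks that can be dumped simultaneously.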

Using a holding disk provides additional safety in case you have a bad or wrong tape, or no available tape at all. Your backup is complete even if you forget to insert a new tape before taking the day off. It also provides a backup when a media error occurs during a backup run or the backup media runs out of space.

Amanda supports different algorithms to move the data from the holding disk to the media. Of course, your chosen algorithm will impact the effective use of the tape.

Amanda supports multiple holding disks so that backup images from different clients can be sent to different holding disks. This increases the scalability of Amanda and provides better load balancing for I/O because holding disks can be on different controllers.

New Amanda users often ask how large the holding disk should be. In a typical "full and incrementals" backup cycle, most backups are small incrementals, so even a modest amount of holding disk space can provide better flow of backup images to a tape. A good rule of thumb is that there should be enough holding disk space for the two largest backup images at the same time, so that one image can be coming into the holding disk while the other is being written to tape. For example, if in Figure 4-2 the full backup of both filesystems for Copper is 50 GB and the full backup of both filesystems for Iron is 30 GB, the optimal capacity of the holding disk on Quartz is at least 80 GB. If that is not practical, any amount that holds at least a few of the smaller incremental backups is better than no holding disk at all. With today's low disk prices, a good-sized holding disk is well worth the investment.

On the other hand, some Amanda users have significantly larger capacities for their holding disks. For example, a very large Japanese manufacturing company has four Amanda servers running on Solaris and BSD protecting more than a hundred Amanda clients on BSD, Windows, Linux, HP-UX, and Solaris running Oracle. One of its holding disks is on a RAID array with total capacity of 4 TB. Fast arrays and Amanda servers with high I/O allow streaming throughput from holding disk to tapes at approximately 120 Mb per second.

The flexibility of Amanda allows configurations without a holding disk, but then backups can be written to tape only sequentially instead of in parallel to the holding disk. Obviously, the lack of a holding disk significantly reduces backup performance.

If the holding disk is for temporary storage of backup files, how does Amanda decide what to send to the holding disk in the first place? Let's take a look at Amanda's unique way of scheduling backups.

4.1.4. Backup Scheduling

Most backup products provide basically the same backup scheduling. The system administrator configures software to perform a full backup on Sunday, every other Sunday, or the last day of the month, with different levels of incrementals between full backups. The biggest problem with this approach is that it does not provide any load balancing. You have to make sure that enough resources are available to manage peak demand for backup server CPU, network, and I/O during full backups. Since you perform full backups only once in a while, your resources are underutilized most of the time. More often than anybody wants to admit, the system administrator finds out on Monday morning that Sunday's full backup did not complete because there were not enough tapes available in the library. Other Mondays you might find that your full backups are still running, and users are calling you to kill all backups. Of course, you can work out load balancing yourself by instructing your backup software to distribute full backups among your clients throughout the week or month, but then you have to make sure that nothing changes in your environment; any new client breaks your balancing scheme.

Amanda's unique approach to scheduling optimizes load balancing of backups and simplifies your life. Instead of giving Amanda the exact instruction "Do a full backup every Sunday for clients A, B, and C, full backups on Wednesday for clients D, E, and F, and incrementals all other times," you just set up a few ground rules that control Amanda scheduling. For example, you might give Amanda the rule "Do at least one full backup within a 7-day period, and do incrementals all other days with a maximum time between full backups of 7 days." The maximum time between full backups is called the dump cycle.
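In amanda.conf, those ground rules boil down to a few parameters. A sketch for the 7-day example above (the tape count is an illustrative assumption):

dumpcycle 7 days       # every area gets at least one full backup within 7 days
runspercycle 7         # number of amdump runs within that dump cycle (one per night)
tapecycle 15 tapes     # tapes in rotation before Amanda may overwrite the oldest one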

For any dump cycle you specify, Amanda finds an optimal combination of full and incremental backups from all clients to make the total amount of backup data per backup run as small as possible and consistent from one backup run to another. To find such a balance, Amanda uses the following considerations:

  • The total amount of data to be backed up, as reported by each client based on the amount of data changed since the last backup

  • The maximum time between full backups (dump cycle) you specified

  • The size of backup media (tape or disk) available for each backup run

To calculate the optimal backup level, Amanda starts every backup run with an estimate phase. Every Amanda client runs a special process to determine which files have changed and the total size of all changed files. The estimate phase can take some time, especially with many clients and filesystems. If some filesystems are not very dynamic and files don't change much, you can tell Amanda that, thus saving time during the estimate phase. After collecting data from all clients, Amanda goes into the planning phase and calculates the optimal combination of full and incremental backups for all clients.
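That per-area hint is given in the dumptype. This is a sketch only: whether the estimate parameter and its server value are available depends on your Amanda version, and the dumptype name and the parent global dumptype are assumptions.

define dumptype static-area {
    global
    program "GNUTAR"
    estimate server    # reuse sizes from previous runs instead of walking the filesystem
}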

Figure 4-3 shows how Amanda schedules backups for the clients from Figure 4-2, assuming that each home directory is 100 GB, the data change rate is 15 percent, and the dump cycle is 4 days. For simplicity, let's assume that Amanda writes each backup run to a new tape labeled DailySet1 through DailySet4 and that all incrementals are level 1 (level 0 is usually defined as a full backup), meaning everything that changed since the last full backup.

Figure 4-3. An example of Amanda scheduling


For each run, Amanda schedules a full backup for the total amount of data divided by the number of days in the dump cycle. Since the dump cycle is 4 days, for DailySet1, Amanda does the full backup for 1/4 of the data, in this case /home1. For DailySet2, Amanda does a full backup for another 1/4 of the data, in this case /home2, and an incremental backup for /home1, which is 15 GB (15 percent of 100 GB). For DailySet3, Amanda does a full backup of /home3 and incrementals for /home1 and /home2. After the initial startup period of four days, Amanda runs a full backup for one of the /home directories and incremental backups for all the others. Let's calculate the total amount of data on each DailySet tape (see Table 4-1).

Table 4-1. Example backup sizes (GB)
Directory   DailySet1   DailySet2   DailySet3   DailySet4   DailySet1   DailySet2   DailySet3   DailySet4
/home1          100.0        15.0       27.75       40.84       100.0        15.0       27.75       40.84
/home2                      100.0        15.0       27.75       40.84       100.0        15.0       27.75
/home3                                  100.0        15.0       27.75       40.84       100.0        15.0
/home4                                              100.0        15.0       27.75       40.84       100.0
Total           100.0       115.0      142.75      183.59      183.59      183.59      183.59      183.59


It is trivial to calculate the total amount of data for DailySet1 and DailySet2. For the third backup run of /home1, however, we have to take into account that 15 percent of the 15 GB backed up on DailySet2 has changed yet again (see Figure 4-4).

Figure 4-4. Overlap in data in subsequent backup runs


To avoid double counting, we have to subtract the small overlap area from 30 GB. So for DailySet3, the size of the incremental for /home1 is 30 GB - (15 GB x 15 percent) = 27.75 GB. Following the same logic, for DailySet4 the incremental for /home1 is not 45 GB; it is 45 GB - (27.75 GB x 15 percent) = 40.84 GB. This example is admittedly a mathematical oversimplification. In reality, Amanda uses all nine levels of incremental backups to optimize the total amount of data on tape.
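Written as a formula, the simplified arithmetic behind Table 4-1 looks like this (a sketch only, using the chapter's assumptions of 100 GB per area and a 15 percent daily change rate, where $s_k$ is the size in GB of the /home1 incremental on run $k$ and the full backup happens on run 1):

\[
  s_2 = 15, \qquad s_k = 15\,(k-1) - 0.15\, s_{k-1} \quad (k = 3, 4)
\]
\[
  s_3 = 30 - 0.15 \times 15 = 27.75, \qquad
  s_4 = 45 - 0.15 \times 27.75 \approx 40.84
\]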

In addition to a traditional schema with full backups and incrementals in between, Amanda also supports:

  • Periodic archival backup, such as taking full backups off-site

  • Incremental-only backups, with full backups done outside of Amanda (for example, for very active areas that must be taken offline), or no full backups at all for areas that can easily be recovered from vendor media (such as the installation CD for an operating system)

  • Always doing full backups of databases that change completely between runs, or of any other critical areas that are easier to deal with during an emergency if they can be restored in a single operation

It's easy to support multiple configurations on the same Amanda server, such as doing traditional full backups and incrementals on a weekly basis and also doing additional monthly full backups for off-site storage. Multiple configurations can run simultaneously on the same tape server if there are multiple tape drives.

When choosing the length of your dump cycle, remember that shorter dump cycles such as three to four days make restores easier because there are fewer incrementals, but they use more tape and require more time to back up. Longer dump cycles allow Amanda to spread the load better over multiple tapes but may require more steps during a restore. More information about how to choose a reasonably balanced dump cycle depending on amount of data, tape drive capacity, and so on is available at http://wiki.zmanda.com.

Now, let's take a look at Amanda tape management.

4.1.5. Tape Management

Each tape should be labeled before use with the command amlabel. There is a default template for labels, and you can define your own label templates. Labeling prevents overwriting of tapes with valid backup images and allows the Amanda server to keep track of all tapes that were labeled. Amanda starts a new tape for each backup run (for example, each nightly backup) and does not provide a mechanism to append a new run to the same tape as a previous run.
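For example, labeling two tapes for a configuration named DailySet1 might look like the following. The configuration name, label names, and slot numbers are assumptions; the labels must match the labelstr pattern defined in that configuration.

amlabel DailySet1 DailySet1-001 slot 1
amlabel DailySet1 DailySet1-002 slot 2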

Based on your backup retention policy, Amanda keeps track of the expiration date for each labeled tape, and Amanda reuses that tape for new backups after it has expired. However, you can configure Amanda not to reuse specific tapes. You might choose to never expire some backup images and use Amanda for creating archives. (Amanda's support for optical media is very useful for archiving.)

For backups of large amounts of data, Amanda supports using multiple tapes in a single backup run. For example, backups from clients A, B, and C can be written on one tape, and backups from clients E, F, and G can be written to another tape.

In the past, Amanda could not span multiple tapes for a single backup image and system administrators had to break large filesystems into smaller chunks, such as several directories. As of version 2.5, Amanda can span multiple tapes. The size of the backed-up images is no longer restricted to a single tape, and there is no need for the system administrator to artificially segment data into parts that can fit into a single tape.
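Spanning is enabled per dumptype by telling Amanda to write images in fixed-size parts. A sketch with illustrative sizes, following the Amanda 2.5 option names; the scratch path and the parent global dumptype are assumptions:

define dumptype spanning-tar {
    global
    program "GNUTAR"
    tape_splitsize 5 Gb                 # write the image in 5 GB parts so it can continue on the next tape
    split_diskbuffer "/amanda/split"    # scratch directory used to buffer each part
    fallback_splitsize 512 Mb           # smaller, memory-buffered parts if the scratch directory is unavailable
}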

4.1.6. Device Management

Amanda does not use any proprietary drivers for tape or optical devices. You have to make sure your tape devices are configured as nonrewinding devices (for example, /dev/nst0 and /dev/nst1). You also have to select the tapetype definition specific to your tape-drive technology. Many default tape definitions are provided with Amanda. Here is an example of tapetype definition for LTO-3:

define tapetype LTO3-400-HWC {
    comment "LTO Ultrium 3 400/800, compression on"
    length 401408 mbytes
    filemark 0 kbytes
    speed 74343 kps
}

Note that Amanda does not use the tape length value; it simply writes to the tape until it gets an error.

You have to select a tape changer script for your tape changer. Examples of tape definitions for most commonly used tape drives and details about configuring tape drives and tape changer scripts are available at http://wiki.zmanda.com.
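Tying a drive and a changer together in amanda.conf typically looks like the sketch below. The chg-zd-mtx script drives the library through the mtx utility; the device paths and the state-file location are assumptions about your system.

tapedev     "/dev/nst0"                        # non-rewinding tape device
tpchanger   "chg-zd-mtx"                       # changer script that wraps the mtx utility
changerdev  "/dev/sg1"                         # SCSI generic device of the library robot
changerfile "/etc/amanda/DailySet1/changer"    # state file used by the changer script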

For a long time, Amanda has provided the ability to use disk as the target media for backup. Dedicated directories are used as virtual tapes called vtapes. You work with vtapes exactly the same way you work with real tapes. For example, you have to label vtapes before they can be used by Amanda. You might use vtapes and backup to disk as a way to test and evaluate Amanda before you decide to invest in an expensive tape library. Furthermore, backup to disk is now a viable option for production from a cost perspective. You get all the benefits of having a backup of your data without the challenges of managing finicky tape drives.
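A sketch of a vtape setup (paths, slot count, and the configuration name are assumptions; check the vtape how-to at http://wiki.zmanda.com for the exact parameters your version expects): create one directory per virtual tape, point amanda.conf at them through the chg-disk changer, and label them with amlabel just like physical media.

mkdir -p /amanda/vtapes/DailySet1/slots/slot1
mkdir -p /amanda/vtapes/DailySet1/slots/slot2
mkdir -p /amanda/vtapes/DailySet1/slots/slot3

# amanda.conf
tapedev     "file:/amanda/vtapes/DailySet1/slots"
tpchanger   "chg-disk"
changerfile "/etc/amanda/DailySet1/changer"

# label the virtual tapes as usual
amlabel DailySet1 DailySet1-001 slot 1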

One of the most interesting scenarios is using tape and disk at the same time. Amanda provides functionality called RAIT (Redundant Array of Inexpensive Tapes). RAIT was initially designed to increase redundancy; it is the same technology as RAID, where data is striped over several disks. Amanda supports RAIT with two-, three-, and five-tape sets.

A three-drive RAIT set writes two data streams and one parity stream and gives you twice the capacity, twice the throughput, and the square of the failure rate (for example, a 1/100 failure rate becomes 1/10,000 because you lose data only if two tapes are faulty or not available). Similarly, a five-drive RAIT set gives you four times the capacity and four times the throughput. A two-drive RAIT set duplicates the output stream, and each output stream can have either the same or different media targets. If you have the same media targets (for example, two tape drives), you get exact copies of your backup data, called clones. You can keep one clone on-site for occasional restores and take another clone off-site for disaster recovery.
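A RAIT set is configured by listing the member devices in the tapedev string. Below is a sketch for a three-drive set and for a two-way "clone" that mirrors a vtape and a physical tape; the device paths are assumptions, and the exact syntax accepted for mixed members depends on your Amanda version.

tapedev "rait:{/dev/nst0,/dev/nst1,/dev/nst2}"                       # two data streams plus parity
tapedev "rait:{file:/amanda/vtapes/DailySet1/slots,tape:/dev/nst0}"  # identical copies on disk and on tape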

If you have different media targets, you can keep your backup data on disk for two to three weeks for occasional restores. For long-term retention, you have a copy on tape. Most restores happen within 10 days after a file has been lost, and the ability to restore data quickly from disk becomes very important.



