Backup and Restore Tools


There are a variety of methods for performing backups with SLES. They range from the general-purpose command-line tools found on every Linux distribution, such as tar, dd, dump, and cpio, to text-based utilities included with newer distributions such as SLES 9, among them AMANDA (Advanced Maryland Automatic Network Disk Archiver) and taper, which add a more user-friendly interface to the backup and restore procedures. GUI-based utilities are available as well, such as the System Backup and Restore modules in YaST. Finally, many commercial backup utilities are also available, such as BRU, CTAR, ARCserve, Legato NetWorker, and System Backup Administrator. Any one of these backup solutions can provide protection for your valuable data.

CAUTION

When you are selecting a backup utility, ensure it supports the filesystem types that you are using. For instance, Legato NetWorker 7.2 for Linux supports ext2/ext3, ReiserFS, and JFS (Journaled File System), but not XFS.


When deciding on a backup solution, you need to consider the following factors:

  • Portability Is backup portability (that is, the ability to back up data from your SLES server and restore it to another server running a different Linux distribution or implementation of Unix) important to you? For example, can you port the backup from SLES 9 to HP/UX? If so, you'll probably want to choose one of the standard command-line tools such as tar, dd, or cpio, because you can be reasonably sure that such tools will be available on any Linux/Unix system.

  • Unattended backup Is the ability to automate backups so that they can be performed at regular intervals without human intervention important to you? If so, you will need to choose both a tool and a backup medium that support such a backup scheme.

  • Ease of use Is a user-friendly interface important to you? If so, you will likely want to choose a tool that provides either a text- or GUI-based interface. Commercial products may provide the easiest interfaces as well as added technical support.

  • Remote backups Do you require the ability to start backups and restores from a remote machine? If so, you'll probably want to choose one of the command-line tools or text-based utilities instead of the GUI-based utilities (unless you have a reasonably fast network connection and the ability to run remote X sessions).

  • Network backups Is performing backups and restores to and from networked hosts important to you? If so, you'll probably want to use one of the command-line utilities (such as tar) that support network access to backup devices, or a specialized utility such as AMANDA or one of the commercial products.

  • Media type support Backups can be stored on a variety of media, such as tape, an extra hard drive, ZIP drives, or rewritable DVDs. Consider cost versus reliability, storage capacity, and transfer speed and select a backup application that supports your chosen device type.

TIP

Often, even if your selected tool doesn't have a built-in scheduler to automate and run backups unattended, you may be able to automate such backups by using the cron facilities.
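For instance, a minimal wrapper script plus a crontab entry is enough to automate a nightly tar backup. The sketch below uses throwaway scratch paths; in production, the source would be something like /home and the destination a tape device such as /dev/st0, and the script name is purely hypothetical:

```shell
# A minimal cron-driven backup sketch (all paths here are placeholders).
bakdir=$(mktemp -d)
SRC="$bakdir/data"                      # in production: /home, /etc, ...
DEST="$bakdir/nightly-backup.tar"       # in production: /dev/st0
mkdir -p "$SRC" && echo "payload" > "$SRC/file1"
tar -cpf "$DEST" -C "$bakdir" data      # -p preserves permission info
tar -tf "$DEST"                         # lists data/ and data/file1
# Unattended scheduling: save the commands above as a script and add a
# crontab entry (crontab -e) to run it nightly at 02:30:
#   30 2 * * * /usr/local/bin/nightly-backup.sh
```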


In the following sections, we discuss methods for performing backups and restores using the following tools:

  • tar

  • dump and restore

  • cpio

  • dd

  • rsync

  • AMANDA

  • YaST's System Backup and Restore modules

Making Tarballs

The tar (tape archive) utility is probably the most commonly used application for data backup on Linux/Unix systems. Why? Because as with vi or ls, you can be guaranteed that any Linux/Unix system will have tar. Furthermore, this tool has been ported to a wide range of hardware platforms and operating systems. Therefore, if you need your backups to be portable across different versions of SUSE, other Linux distributions, to Unix platforms (such as HP/UX or AIX), other operating systems (such as Windows), or even to mainframes, tar would be an excellent choice.

tar was designed to create a tape archive (a large file that contains, or "archives," other files). In addition to file contents, an archive includes header information to each file inside it. This header data can be used when extracting files from the archive to restore information about the files, such as file permissions, ownerships, and modification dates. An archive file can be saved to disk (and later copied to tape or transferred to another storage medium), written directly to tape, or transmitted across the network while it is being created.
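The header metadata can be seen in action with a quick round trip. The example below uses scratch paths created on the fly; -p asks tar to store and later restore the permission bits:

```shell
# Archive a file with a distinctive mode, then verify the mode survives
# extraction into a different directory.
tardir=$(mktemp -d)
mkdir "$tardir/src" && echo "hello" > "$tardir/src/data.txt"
chmod 640 "$tardir/src/data.txt"               # distinctive permission mode
tar -cpf "$tardir/archive.tar" -C "$tardir" src
mkdir "$tardir/restore"
tar -xpf "$tardir/archive.tar" -C "$tardir/restore"
stat -c '%a' "$tardir/restore/src/data.txt"    # -> 640: mode round-tripped
```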

NOTE

The tar archive file is officially referred to as a tarfile. However, it is often (affectionately) called a tarball instead. Frequently, the source code for Linux/Unix-related applications is distributed as tarballs on the Internet.

By convention, tarfiles use .tar as their filename extension. You will also encounter .tar.gz or .tgz extensions, which identify tar archives that have been compressed using gzip.


Although many command-line options are available with tar, Table 10.5 shows a list of the most commonly used option switches.

Table 10.5. Commonly Used Options for tar

OPTION

DESCRIPTION

-c

Creates a new archive. This option implies the -r option.

--exclude pattern

Excludes files matching the given pattern from being backed up or restored.

-f devicename

Specifies the output device for the tarfile. If the name is -, tar writes to stdout or reads from stdin, whichever is appropriate. Thus, tar can be used as the head or tail of a command pipeline. If -f is omitted, tar uses the device named in the TAPE environment variable; if TAPE is not set, it falls back to the compiled-in default, such as /dev/rmt0.

-j

Filters the archive through the bzip2 program, which compresses text considerably better than gzip but is also considerably slower.

-p

Preserves all permission information when extracting files from an archive.

-r

Appends files to a tarball.

-t

Lists the contents of an archive. You can add the -v option to get additional information for the files. The listing is similar to the format produced by the ls -l command.

-u

Updates an archive. The named files are added to the tarfile if they are not already there or have been modified since last written to that tarfile. This option implies the -r option.

-v

Turns on verbose mode. Normally, tar works silently. This option causes tar to show the name (including path) of each file processed.

-V label

Adds a (logical) volume label to the archive for identification.

-W

Verifies the archive after writing it.

-x file

Extracts, or restores, from an archive. The named files are extracted from the tarball and written to the current directory. If a named file matches a directory whose contents have been written into the tarball, this directory is (recursively) extracted. The owner, modification time stamp, and mode are restored (if possible). If the filename is not given, the entire content of the archive is extracted.

Specify the file or directory using the same relative pathname under which it was stored in the archive; otherwise, tar will not find a match.

-z

Filters the archive through the gzip program.

-Z

Filters the archive through the compress program.


CAUTION

The -r and -u options cannot be used with many tape drives due to limitations in those drives, such as the absence of the backspace or append capability.


NOTE

Some of the tar switches have long-form mnemonic equivalents that are more intuitive. For instance, instead of -x, you can use --extract or --get. Refer to the tar man pages for more details.


The general syntax for the tar command is as follows:

 tar [options] filename 

Following are some examples of the use of tar in backing up and restoring files:

  • Copies all files in /home and below to the archive file called home-directory-backup.tar in the current directory; the verbose mode is on:

     tar -cvf ./home-directory-backup.tar /home 

  • Copies all files in /usr/lib and below to a tarball on a tape drive; verbose mode is on:

     tar -cvf /dev/st0 /usr/lib 

  • Reads the table of contents from tape drive /dev/st0:

     tar -tvf /dev/st0 

  • Extracts all files from the tarball located on the tape drive:

     tar -xvf /dev/st0 

  • Extracts all files from the tarball located on the tape drive and places them in /home/temp:

     tar -xvf /dev/st0 -C /home/temp 

  • Extracts only the file called chapter.10 (located in the SLES directory) from the archive located on the tape drive (note that a relative path is used):

     tar -xvf /dev/st0 SLES/chapter.10 

  • Duplicates the contents from the directory /home/peter to the current working directory; file permissions and ownerships are preserved:

     (cd /home/peter; tar -cpf - *) | tar -xf - 

    The parentheses in the command instruct the shell to execute the commands inside them first before piping the output to the second tar command.

TIP

You can use the following handy tar command in a script (and have it executed by a cron job) that backs up your entire system onto the tape drive (/dev/st0). The /tmp directory, the /proc pseudo-filesystem, any filesystems mounted in /mnt, the Squid proxy server's cache files, and the log file for the tar job itself are excluded from the backup; insert additional --exclude parameters for other directories to be excluded from backup:

 tar -cvpf /dev/st0 \
     -V "full system backup on `date`" \
     --directory / --exclude=mnt --exclude=proc \
     --exclude=tmp --exclude=var/spool/squid \
     --exclude=home/root/tar.logfile . > /home/root/tar.logfile

A logical volume label with the date and time at which the tar command was executed is included in the tarball to aid with identification. (If you have many files to exclude, you can place them in a text file, one name per line, and use the -X file switch.)


tar has a built-in incremental backup option. It uses an ASCII file to keep track of files and directories that were backed up. To use this feature, do the following:

1.

Create a full backup of the desired directory or directories using the -g option. For example,

 tar -czpf /backup/home_full_backup.tgz \
     -g /backup/home_tracking_file /home

2.

Create daily incremental backups using the following commands:

 tar -czpf /backup/home_monday_backup.tgz \
     -g /backup/home_tracking_file /home

 tar -czpf /backup/home_tuesday_backup.tgz \
     -g /backup/home_tracking_file /home

 (and so on for other days)

Because you are using the same "tracking" file, tar is able to tell what files were previously backed up and when. Subsequent tarballs will contain only files that have been modified or created since the last backup (as recorded in the tracking file).
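Restoring such a set is a matter of extracting the full backup first and then each incremental in chronological order. The round trip below (a GNU tar feature, using scratch paths) shows both halves; passing -g /dev/null during extraction tells tar to honor the incremental metadata without updating any tracking file:

```shell
incdir=$(mktemp -d); cd "$incdir"
mkdir home && echo v1 > home/report.txt
tar -czpf full.tgz -g tracker home       # level 0: tracker file is created
sleep 1                                  # ensure a later timestamp
echo v2 > home/report.txt                # simulate the next day's changes
echo new > home/notes.txt
tar -czpf monday.tgz -g tracker home     # incremental: changed files only
# Restore: full backup first, then each incremental in order.
mkdir restore
tar -xzpf full.tgz   -g /dev/null -C restore
tar -xzpf monday.tgz -g /dev/null -C restore
cat restore/home/report.txt              # -> v2
```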

WARNING

The tar man page describes a -P (or --absolute-names) switch, which tells tar not to strip the leading / from pathnames. Avoid this option: the default behavior of using relative paths protects you from accidentally overwriting critical files during a restore operation. If you use -P, extracted files are written back to their original absolute locations instead of into your current working directory!


Archiving Data with cpio

The cpio (copy in-out) program is similar to tar in that it is a general-purpose utility for copying file archives. However, it can use archive files in many different formats, including tarballs. A cpio archive can span multiple tapes or disks, and this capability is a big plus for dealing with large files and filesystems.

cpio operates in one of three modes:

  • Copy-out mode cpio -o reads from stdin to obtain a list of filenames and copies those files to stdout together with pathname and status information. Output is padded to a 512-byte boundary by default or to the user-specified block size or to some device-dependent block size, where necessary (as with certain types of tapes).

    TIP

    A typical way to generate the list of filenames for the copy-out mode is to use either the ls or find command.


  • Copy-in mode cpio -i extracts files from an archive, or from standard input (stdin), which is assumed to be the product of a previous cpio -o operation. Only files with names that match the given patterns are selected. Extracted files are conditionally created and copied into the current directory tree based on the specified command-line switches. The permissions of the extracted files are those recorded by the previous cpio -o operation. Owner and group are the same as the current user unless the current user is root, in which case owner and group are those recorded by the previous cpio -o operation. Note that if cpio -i tries to create a file that already exists and the existing file is the same age or newer, cpio displays a warning message and does not overwrite the file. (The -u option can be used to force an overwrite of the existing file.)

  • Copy-pass mode cpio -p reads a list of filenames from stdin and copies those files from one directory tree to another, essentially combining the copy-out and copy-in steps without actually using an archive.

Similar to tar, cpio uses many command-line switches. Table 10.6 shows a list of the most commonly used options.

Table 10.6. Commonly Used Options for cpio

OPTION

DESCRIPTION

-A

Appends to the existing archive. The archive must be specified using either -F or -O. Valid only in copy-out mode.

-B

Sets the I/O block size to 5,120 bytes instead of the default 512 bytes. -B is meaningful only with data directed to or from a character device such as /dev/st0; thus, it is meaningless in the copy-pass mode. It cannot be used in conjunction with -C.

-C bufsize

Sets I/O block size to bufsize instead of the default 512 bytes. Like -B, -C is meaningful only when using devices such as /dev/st0. It cannot be used in conjunction with -B.

-f

Copies only files that do not match any of the given patterns.

-F file

Uses file for the archive instead of stdin or stdout.

-i

(Copy-in mode) Extracts files from stdin.

-I file

Reads the archive from the specified file instead of stdin. This option is valid only in the copy-in mode.

-o

(Copy-out mode) Reads filenames from stdin and copies those files to stdout.

-O file

Directs output to file instead of stdout. This option is valid only in the copy-out mode.

-p

(Copy-pass mode) Reads filenames from stdin and copies those files into the destination directory tree named on the command line. This option is used mainly to copy directory trees.

-t

Prints a table of contents of the input. No files are created (mutually exclusive with -V).

-u

Unconditionally replaces all files, without asking whether to replace existing newer files with older ones.

-v

Turns on verbose mode. Lists the files processed. When this option is used with the -t option, the table of contents looks like the output from the ls -l command.

-V

Turns on special verbose mode. Prints a dot for each file processed. This option is useful to assure the user that cpio is working without printing out all the filenames.


The general syntax for the cpio command is as follows:

 cpio [options] [filename] 

Following are some examples of the use of cpio in backing up and restoring files:

  • Copies the files in the current directory to a cpio archive file called newfile:

     ls | cpio -VoO newfile 

  • Prints out the table of contents from the archive file:

     cpio -tvF newfile 

    or

     cpio -itvI newfile 

  • Extracts all the files from the archive file into the current directory, overwriting any existing files:

     cpio -ivuI newfile 

  • Using the copy-pass mode (-p switch), copies or links (the -l option) all the files and directories from /home/carol to the newdir directory located in the current path:

     (find /home/carol -depth -print | cpio -pdlmv newdir) 2>cpio.log 

    The -d switch tells cpio to create directories as necessary, -m says to retain the modification times, and -v turns on the verbose mode. All log messages from cpio are redirected to the cpio.log file in the current directory. Because stdout can itself be a valid output path for cpio, the program writes its log messages to stderr, which is why the 2> redirection is used. The newdir directory must already exist; otherwise, the cpio command will fail.

The choice between using cpio or tar to perform backups is largely a matter of preference. However, because of the simpler command syntax and wide availability on other operating systems, tar seems to be the more popular choice.

Converting and Copying Data Using dd

The dd program is another oldie but goldie that does data conversion and transfers. It was originally designed for importing mainframe data to Unix systems. On the mainframe, the data is transferred to tape using the EBCDIC character encoding scheme. To use such data on most Unix machines, dd was used to read the tapes and change the coding to ASCII. However, with the availability of TCP/IP on mainframes, dd is no longer needed because FTP and other IP-based protocols can do the same job over the network (and eliminate the need for tapes).

dd can strip file headers, extract parts of binary files, and write into the middle of floppy disks; it is even used by the Linux kernel makefiles to create boot images. It can be used to copy and convert magnetic tape formats, convert between ASCII and EBCDIC, swap bytes, and force upper- and lowercase conversions.

WARNING

Because dd works with volume headers, boot records, and similar system data areas, its misuse can potentially trash your hard disks and filesystems. As a result, some people refer to dd as "Destroy Disk" or "Delete Data" because if it is misused, accidentally or otherwise, a disk partition or output file can be trashed very quickly.


One common use of dd today is to create disk images of your filesystems or to rip CD or DVD contents to an ISO image that you can later access (without having to use the CD or DVD again) by mounting the images.

Unlike most of the Linux/Unix commands that use command-line switches, dd uses a keyword=value format for its parameters. This was allegedly modeled after IBM System/360 JCL (Job Control Language), which had an elaborate DD (Dataset Definition) specification for I/O devices.

Most of the time, you need to use only two keywords: if=infile and of=outfile. Input defaults to stdin and output defaults to stdout if these two keywords are not specified. For instance, to copy one file to another, use the following:

 Athena:/home/admin # dd if=/etc/passwd of=passwd.backup
 4+1 records in
 4+1 records out

By default, dd copies files in 512-byte records. The preceding output (4+1 records) indicates that four full 512-byte records plus one partial 512-byte record were read and then written. (In this case, /etc/passwd is 2,185 bytes in size.) You can modify the buffer size used, as in this example:

 Athena:/home/admin # dd if=/etc/passwd of=passwd.backup bs=3000
 0+1 records in
 0+1 records out
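The record arithmetic is easy to reproduce: 2,185 bytes is 4 full 512-byte records (2,048 bytes) plus one partial record of 137 bytes. The sketch below builds a file of exactly that size in a scratch directory and copies it with dd's defaults:

```shell
dddir=$(mktemp -d)
# Build a 2,185-byte file, then copy it using dd's default 512-byte records:
dd if=/dev/zero of="$dddir/sample" bs=1 count=2185 2>/dev/null
dd if="$dddir/sample" of="$dddir/copy" 2> "$dddir/dd.log"
grep records "$dddir/dd.log"     # -> 4+1 records in / 4+1 records out
stat -c '%s' "$dddir/copy"       # -> 2185: the partial record is copied whole
```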

The following are some additional sample uses of the dd command:

  • To create a disk image of a 1.44MB floppy disk (the bs= specifies the standard geometry of a 1.44MB-formatted floppy disk: 18 sectors of 512 bytes, 2 heads, and 80 cylinders, for a total of 1,474,560 bytes; this results in a single 1,474,560-byte read request to /dev/fd0 and a single 1,474,560-byte write request to /tmp/floppy.image):

     dd bs=2x80x18b if=/dev/fd0 of=/tmp/floppy.image
     1+0 records in
     1+0 records out

  • To write the same disk image back onto a floppy disk:

     dd bs=2x80x18b if=/tmp/floppy.image of=/dev/fd0
     1+0 records in
     1+0 records out

  • To make a complete copy of a partition:

     dd if=/dev/sda1 of=/backup/boot.partition.image 

  • To make a backup copy of your Master Boot Record (MBR), which is the first block on the disk:

     dd if=/dev/sda of=/backup/mbr_backup count=1 

  • To make an ISO image of a DVD disk (assuming the DVD drive is /dev/hdc):

     dd if=/dev/hdc of=/backup/dvd.image 

    To mount an ISO image created using dd for access, use mount -o loop /path/image.name /mountpoint, for instance,

     mount -o loop /backup/dvd.image /mnt 

    See man mount for additional information.

TIP

Depending on the file sizes involved, it may be advantageous to use a larger buffer size because doing so reduces the number of system calls made and performance improvement may be significant. If the input and output devices are different (say, from a file to a tape), you can use ibs= and obs= to set different buffer sizes for the reads and writes, respectively; bs= sets the same size for both reads and writes.
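The ibs=/obs= split can be tried safely on ordinary files (scratch paths below); the copy is byte-for-byte identical, only the read and write record sizes differ:

```shell
bufdir=$(mktemp -d)
dd if=/dev/zero of="$bufdir/input" bs=1024 count=64 2>/dev/null  # 64KB file
# Read in 512-byte records but write in 8KB records; only the I/O pattern
# (and the number of system calls) changes, never the data:
dd if="$bufdir/input" of="$bufdir/output" ibs=512 obs=8192 2>/dev/null
cmp -s "$bufdir/input" "$bufdir/output" && echo identical    # -> identical
```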


You can obtain a complete listing of all keywords available by using the dd --help command.

Using dump and restore

The dump program performs an incremental filesystem save operation. It can back up all specified files (normally either a whole filesystem or files within a filesystem that have changed after a certain date or since the last dump operation). The output (known as a dump image or archive) can be a magnetic tape, floppy disks, or a disk file. The output device can be either local or remote. The restore program examines the dumps to recover individual files or an entire filesystem.

NOTE

On some versions of Unix, dump is referred to as ufsdump, whereas restore may be called restore or ufsrestore. On HP/UX, fbackup performs similar functions to dump.


CAUTION

dump works only with ext2/ext3 type filesystems.


dump: THE GOOD, THE BAD, AND THE UGLY

A simplistic and primitive tool, dump was designed to work at the inode level, but it comes with a brilliant feature for incremental archiving: it identifies files created or modified since the previous backup and stores them to the dump image quickly and efficiently.

NOTE

Every Linux/Unix file has an inode. Inodes are data structures that hold descriptive information about a file, such as file type, permissions, owners, time stamps, size, and pointers to data blocks on disk. (They act like the Directory Entry Table, or DET, entries found in NetWare or the File Allocation Table, or FAT, entries found in Windows.)


For example, suppose a file titled foobar was backed up during the last archiving and removed afterward. On the next incremental archiving, dump puts the record in the archive as "Hey, there used to be a file foobar at inode xxx, but it was removed." During a full filesystem restore process, deleted files are not resurrected. If you use, for example, tar for your regular incremental backup tasks and attempt a full restoration one day, you may run out of disk space by trying to restore a large number of files that had been previously removed. With dump, you will never face such a problem because of the way it handles incremental backups.

Incremental backups by dump are controlled by assigning a dump level to a particular backup. There are 10 dump levels, ranging from 0 through 9. When a dump of a certain level N is performed, all files that have changed since the last dump of level N-1 or lower are backed up. For instance, if a level 2 dump was done on Monday, followed by a level 4 dump on Tuesday, a subsequent level 3 dump on Wednesday would contain all files modified or added since the level 2 (Monday) backup.

NOTE

A level 0 dump would back up the entire filesystem.
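The level rule can be sketched as a tiny shell function. This is purely illustrative (it is not part of dump itself): given the levels of previous dumps in chronological order, it reports which earlier dump a new level-N dump is incremental against, namely the most recent dump of a lower level.

```shell
# Illustrative only: which earlier dump does a new level-N dump cover since?
baseline_for() {
  new_level=$1; shift
  base="none"; day=0
  for lvl in "$@"; do
    day=$((day + 1))
    if [ "$lvl" -lt "$new_level" ]; then
      base="day $day (level $lvl)"    # most recent lower-level dump so far
    fi
  done
  echo "$base"
}
# Monday = level 2, Tuesday = level 4; a Wednesday level-3 dump backs up
# everything changed since Monday's level-2 dump:
baseline_for 3 2 4    # -> day 1 (level 2)
```

A result of "none" means no lower-level dump exists, which is the full-backup case.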


The main advantage of dump is that it can simplify your backup strategy because it looks directly at the filesystem rather than from user space (like tar or cpio). For example, you don't have to mess around with scripts that try to figure out what has changed since your last backup; therefore, implementing an incremental backup solution is much easier. Another benefit is that you don't have to worry about file permissions or ownerships being lost in the backup and restore process, not to mention the creation time or last-modified time of a file, because this information is included when dump scans the filesystem. The restore program is also simple to use whether you're trying to fully restore a filesystem or just pulling out an important OpenOffice document file that one of your coworkers deleted.

There are a few things that might make dump a poor choice for a backup utility in your environment, however. You should keep these factors in mind before deciding to deploy dump for your setup:

  • Ext2/ext3 filesystem types only Because dump is closely tied to the filesystem, it is designed for ext2fs and ext3fs filesystems only. Unfortunately, the default filesystem type used by SUSE is ReiserFS. So, unless you changed the filesystem type when creating your filesystem, dump will be unable to help you. Also, dump works with local filesystems only.

  • Version dependency dump is very much a version-dependent program. Sometimes dump is not backward compatible with itself, which means that if you want to restore a dump image that was made with dump-0.4b35, you need to have a copy of the restore binary from that same version available. In an environment where you may be frequently upgrading an operating system (such as to apply security patches) or the versions of software available for it, dump probably is not an ideal solution. One way around this problem is to keep a backup copy of your backup utilities handy on a CD-ROM or a vendor/OS-neutral archive on tape (like tar or cpio).

  • Filesystems should be inactive during backup The filesystems should be inactive when dump is performing their backup; otherwise, the dump output might be inconsistent (files may change during the backup), and restore can get confused when doing incremental restores from dump tapes that were made on active filesystems. It is strongly recommended that a level 0 (and perhaps even a level 1) dump be made when the filesystems are inactive, while the other levels may be made with the system in multiuser mode. In deciding whether to drop to single-user mode for a dump, you are weighing convenience against data integrity.

    NOTE

    A filesystem is considered inactive when it is unmounted or when the system is in single-user mode or at run level 1. (Refer to Chapter 3 for details about run levels.) Of course, even when a filesystem is unmounted, dump can still back it up because it reads the underlying device directly. Furthermore, you cannot unmount the root filesystem; otherwise, you wouldn't have a running system! If you need to back up the root filesystem using dump while it is inactive, boot your system using a rescue disk and run dump from there.


  • Difficult to exclude files and directories from being dumped Because dump works directly with filesystems, it is labor intensive to exclude certain files and directories on a filesystem from being backed up by dump.

    TIP

    You can exclude a file or directory from being dumped in three ways. First, you can use the -e switch to specify the list of inode numbers of the files or directories (determined using stat filename) to be excluded. Second, you can place the inode numbers in a file and pass it to dump using the -E switch. Third, you can manually flag the files and directories to be excluded with the d attribute using chattr.


  • Know thy mt commands dump doesn't have any built-in capability to manipulate tape storage media the way commercial backup software utilities do. This means that if you're backing up to tape, you'll need to become familiar with the mt (magnetic tape) command, and with mtx if you have a tape changer.

NOTE

The mt utility enables you to manipulate tape drives. With mt, you can rewind, forward, and position the tape, as well as check the drive status. It is a must-have tool if you want to use dump and restore with tape drives. If possible, it is a good idea to prepare a tape for training purposes and practice using it. Some commands of mt are drive dependent, so you should check the tape drive's manual carefully to find out which commands are available for your drive.
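The inode-based exclusion mentioned in the earlier TIP can be scripted with stat. Everything below uses scratch files, and the dump invocation at the end is shown only as a comment with placeholder device names:

```shell
inodir=$(mktemp -d)
touch "$inodir/skip-me.iso" "$inodir/skip-me-too.log"
# Collect the inode numbers of the files to exclude, one per line,
# in the form expected by dump's -E switch:
for f in "$inodir"/skip-me*; do
  stat -c '%i' "$f"
done > "$inodir/exclude.inodes"
wc -l < "$inodir/exclude.inodes"    # -> 2
# A hypothetical invocation (device and path names are placeholders):
#   dump -0u -E /backup/exclude.inodes -f /dev/st0 /dev/hda1
```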


Now that you know the good and the bad about dump, we'll briefly discuss how to use dump and restore.

BACKING UP USING dump

The man page for dump includes several command-line switches. Table 10.7 shows a brief summary of the important ones you should be familiar with when using dump.

Table 10.7. Noteworthy dump Command-Line Options

OPTION

DESCRIPTION

-[0-9]

Tells dump what level of backup to perform. Full filesystem dumps are level 0, and you can specify nine more levels (1 through 9) to determine which files are to be backed up. Whenever a dump is made, it backs up only the files that have changed since the most recent dump at a lower level. The default is -9.

-a

Tells dump to try to autosize the dump image. It bypasses all tape length calculations and writes until the device returns an "end-of-media" signal. Normally, you would use this option when you have a tape changer and want to have your dump span multiple tapes, you want to append to an existing dump on a tape, or you're dumping to stdout (so you can pipe it to another command-line utility). This option is also useful when the tape drive has hardware compression.

-A file

Archives the table of contents to file so it can be used by restore to determine whether a file that is to be restored is in the dump image.

-f file

Tells dump where the dump image goes. It defaults to /dev/tape; if no /dev/tape exists, dump checks the TAPE environment variable. This option is useful if you would like to send the dump to a remote device (/dev/rmt*), to a file on the local filesystem (or an NFS mount), or to pipe the results into another utility. To output to stdout, specify a filename of -.

-L label

Specifies a label for the dump image. Unfortunately, you have only 16 characters to work with here, so you need to be careful what you use. One suggestion is to use the device name (such as a label of /dev/sda6).

-q

Makes dump abort immediately if it encounters a situation that would normally require user input. This option is useful for scripting automated backups because dump reads its input straight from the tty instead of from stdin, which prevents you from using something like yes no | dump -0au /dev/hda1 in your script. This option did not appear in dump until version 0.4b24.

-u

This option is very important if you are planning to make incremental backups of a filesystem. This option tells dump to update the /etc/dumpdates file, which essentially records the date and time you ran the dump and the level the dump was set to run at. If you forget this, you will continuously be doing full level (level 0) backups.

-z[compression_level]

Compresses every block to be written on the tape using the zlib library. This option works only when dumping to a file or pipe or when dumping to a tape drive if the tape drive is capable of writing variable length blocks.

The (optional) parameter specifies the compression level zlib will use. The default compression level is 2. If the optional parameter is specified, there must be no whitespace between the option letter and the parameter; for example, -z3.

You need at least the 0.4b22 version of restore to extract compressed tapes. Tapes written using compression are not compatible with the BSD tape format.


NOTE

dump sends all of its output through stderr, so a command such as the following is valid:

 dump -0af /tmp/backup /home 2>/tmp/backup.log 

This example sends the dump image to /tmp/backup and causes the stderr output from dump to go to /tmp/backup.log.


Running dump may seem complicated, but it is actually fairly straightforward. A typical command looks like this:

 Athena:~ # dump -0au -L "/dev/hda1: /boot" -f /dev/st0 /boot 

When dump is invoked normally, backup proceeds with some messages printed on the console, as shown in Listing 10.1. It is a good idea to leave this session in the foreground rather than to send it to the background until you are convinced everything works fine. If dump reaches the end of tape or if some error occurs, you will be requested to make some choices in the interactive session.

Listing 10.1. Sample dump Log Messages
 DUMP: Date of this level 0 dump: Sat Jan 29 17:03:55 2005
 DUMP: Dumping /dev/hda1 (/boot) to /dev/st0
 DUMP: Added inode 21 to exclude list (journal inode)
 DUMP: Label: /dev/hda1: /boot
 DUMP: mapping (Pass I) [regular files]
 DUMP: mapping (Pass II) [directories]
 DUMP: estimated 11000 tape blocks.
 DUMP: Volume 1 started with block 1 at: Sat Jan 29 17:03:55 2005
 DUMP: dumping (Pass III) [directories]
 DUMP: dumping (Pass IV) [regular files]
 DUMP: Closing /dev/st0
 DUMP: Volume 1 completed at: Sat Jan 29 17:03:56 2005
 DUMP: Volume 1 10940 tape blocks (10.68MB)
 DUMP: Volume 1 took 0:00:01
 DUMP: Volume 1 transfer rate: 10940 kB/s
 DUMP: 10940 tape blocks (10.68MB) on 1 volume(s)
 DUMP: finished in 1 seconds, throughput 10940 kBytes/sec
 DUMP: Date of this level 0 dump: Sat Jan 29 17:03:55 2005
 DUMP: Date this dump completed: Sat Jan 29 17:03:56 2005
 DUMP: Average transfer rate: 10940 kB/s
 DUMP: DUMP IS DONE

The log messages in Listing 10.1 also indicate whether any files (inodes) are skipped. In the example, the log shows that inode 21 was excluded from the backup. The reason is that inode 21 is the journal for the ext3 filesystem; therefore, it doesn't need to be backed up and can be excluded from future backups as well.

When the dump session finishes properly, you can dump another filesystem if enough tape capacity remains.

TIP

It is always a good idea to first check the filesystem's integrity prior to doing a dump, especially when it is a full dump. To do this, after you have entered single-user mode, unmount the filesystems one by one and check them using e2fsck:

 Athena:~ # umount /usr/home; e2fsck -afv /dev/sdd1 

Check all the filesystems that are to be backed up, preferably after unmounting them. Because you cannot unmount the root filesystem, you may want to remount it as read-only to prevent its data (thus the inodes) from being modified and check it with e2fsck:

 Athena:~ # mount -r -n -o remount / ; e2fsck -afv /dev/hda1 

After all the checks are done, remount it again with read-write so that dump can log the backup information:

 Athena:~ # mount -w -n -o remount / 


A simple incremental backup strategy is shown in Table 10.8.

Table 10.8. A Simple Incremental Backup Strategy Using dump

DAY       DUMP LEVEL

Day 1     Level 0
Day 2     Level 1
Day 3     Level 2
Day 4     Level 3
Day 5     Level 4
Day 6     Level 5
Day 7     Level 6
Day 8     Level 7
Day 9     Level 8
Day 10    Level 9
Day 11    Level 0 (and the sequence repeats)


From Day 2 onward, dump backs up only the files that have changed since the previous backup. On Day 11, you make a complete backup again, and the sequence repeats.
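If you drive this schedule from cron, the day-to-level mapping can be computed instead of hard-coded. The following sketch (the helper name is our own invention, not part of dump) maps day N of the cycle to level (N - 1) mod 10:

```shell
# dump_level_for_day: map day N of the 10-day cycle in Table 10.8 to its
# dump level -- Day 1 -> 0, Day 2 -> 1, ..., Day 10 -> 9, Day 11 -> 0 again.
dump_level_for_day() {
    echo $(( ($1 - 1) % 10 ))
}

# A nightly script could then build the dump command from it, for example:
#   LEVEL=$(dump_level_for_day "$DAY")
#   dump -${LEVEL}au -f /dev/st0 /boot
```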

Some people use a more elaborate method, using the Tower of Hanoi sequence, for the dump-level scheduling. This method may employ the following sample sequences:

 Rotation 1: 0, 3, 2, 5, 4, 7, 6, 9, 8
 Rotation 2: 1, 3, 2, 5, 4, 7, 6, 9, 8
 Rotation 3: 1, 3, 2, 5, 4, 7, 6, 9, 8

In this case, you start with a level 0 dump and then perform daily incremental backups based on the sequence. The pattern repeats itself after 27 days.

Note that under this scenario, each level 0 dump should be made on a new tape and stored at a safe place. For level 1 backups, use two different tapes (or as many as the number of rotations). For other dump levels, one tape per level is sufficient, and they can be reused from one rotation to another.

CAUTION

If you are considering the Tower of Hanoi sequence for your dump-level schedule, pay attention to the section "A Few Words of Caution About Dump Levels" later in this chapter.


RECOVERING DATA USING restore

There are two main methods of restoring files from a dump archive: either interactively (when the -i switch is specified) or through a full restore. You point restore to a dump image, and the selected files are extracted into the current directory. Table 10.9 shows some of the important restore command-line switches.

Table 10.9. Noteworthy restore Command-Line Options

OPTION      DESCRIPTION

-C          Compares the contents of the archive with the current
            filesystem; no restore takes place. You can use this option
            to check what has changed since your last backup.

-f file     Specifies the archive file. Like dump, restore defaults to
            /dev/tape and checks the TAPE environment variable if that
            fails.

-h          Extracts only the named directory itself, rather than its
            contents.

-i          Performs interactive restoration of specified files. This
            option also allows you to browse the dump image and flag
            files or directories to be extracted later.

-r          Rebuilds the filesystem from the dump image. Use this option
            if you need to do bare-metal recovery from a tape. It expects
            a clean filesystem (that is, it expects you to have just run
            mke2fs). This operation is not interactive and should not be
            used unless you intend to restore the filesystem itself
            rather than just extract a couple of files.

-R          Resumes an interrupted full restoration.

-s fileno   Specifies the position of the archive on the tape. This
            capability is useful when there are multiple dump images on
            the same media.

-t          Lists filenames in the backup archive but does not restore
            them.

-T dir      Specifies the temporary directory.

-v          Produces verbose output.

-x file,    Extracts only the named files from the archive. The uppercase
-X fileset  -X instead reads the list of files to restore from a flat
            ASCII text file.

-y          Does not query the operator on error.


TIP

A piped combination of dump and restore can duplicate the contents of one filesystem onto another filesystem.


The restore utility is not too difficult to use but can be rather tricky when you're dealing with multiple dump images on the same media. Therefore, you should be familiar with it before you really have to use it. The following are two sample restore commands:

  • Retrieves all the files from the archive in tape media and writes them under the current directory:

     restore -rf /dev/st0 

  • Performs an interactive restoration from the third dump image on the tape media:

     restore -is 3 -f /dev/st0 

TIP

When you are performing a partial restore interactively, it is recommended that you do not restore the files directly to the target directory. Instead, you should first restore the files to an empty temporary directory and then move them to their final locations. That way, you do not run the risk of accidentally overwriting any existing files.

For a full filesystem restore, you should mount a formatted disk first, move to that mount point, and then invoke the restore command.


CAUTION

You should be careful about the sequence of the archives to restore. When restoring the full filesystem from the archives with the -r switch, you must start with the level 0 archive.


A FEW WORDS OF CAUTION ABOUT DUMP LEVELS

You can save some trouble during a full filesystem restore if your dump backups were made with staggered incremental dump levels. The online man page for dump suggests the following method of staggering incremental dumps to minimize the number of tapes.

You start with a level 0 backup and then perform daily dumps of active filesystems using a modified Tower of Hanoi algorithm. The suggested sequence of daily dump levels is as follows:

 3 2 5 4 7 6 9 8 9 9 ... 

One thing you need to realize with the dump level logic is that an archive with some level becomes ineffective if a smaller level dump is taken after that. For instance, in the preceding sequence, the level 3 archive becomes "ineffective" when the next level 2 dump is taken because a level 2 dump includes all files backed up under level 3. Similarly, an existing level 5 archive is ineffective after the next level 4 dump. In the extreme case, a new level 0 dump makes all existing archives with level 1 through 9 ineffective.

Therefore, on a full restore, you should skip these ineffective archives. When restoring from dumps made using the sequence presented in the preceding example, you should choose the restoration sequence as follows to obtain the latest status of the filesystem:

 0 2 4 6 8 

If you ignore this rule and try to restore the archives following the Tower of Hanoi sequence (that is, 0, 3, 2, 5, 4, and so on), you will encounter the Incremental tape too high error on the restoration of your level 2 archive and then Incremental tape too low errors after that. After you encounter one of these errors, you cannot complete the full restore by any means, and you must restart that restoration from the first step.
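The rule is mechanical, so the effective restore sequence can be computed rather than worked out by hand. The helper below is a sketch (the function name is our own, not part of restore): it scans the dump levels in the order the dumps were taken and discards every archive that is later superseded by a dump of an equal or lower level; whatever remains is the restore order.

```shell
# restore_order: given the dump levels in the order the dumps were taken,
# print the levels of the archives that are still effective -- that is,
# the sequence to feed to restore, oldest first.
restore_order() {
    stack=""
    for level in "$@"; do
        # a later dump at an equal or lower level supersedes earlier archives
        while [ -n "$stack" ] && [ "${stack##* }" -ge "$level" ]; do
            stack="${stack% *}"    # drop the superseded archive
        done
        stack="$stack $level"
    done
    echo "${stack# }"
}

restore_order 0 3 2 5 4 7 6 9 8    # prints: 0 2 4 6 8
```

Applied to the Tower of Hanoi example above, this reproduces the 0 2 4 6 8 restoration sequence.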

NOTE

The generation of ineffective archives by no means diminishes the usefulness of the Tower of Hanoi sequence. It is still an excellent way to preserve many snapshots of the filesystem for a long period with less backup media.


Data Mirroring Using rsync

The rsync (remote synchronization) utility is a replacement for rcp (remote copy) that offers many more features. rsync is intended to create copies of complete directory trees across a network to a different system, but it also works locally within the same machine. It uses a special algorithm (aptly called the "rsync algorithm") that provides a very fast method for bringing remote files into synchronization: it sends just the differences in the files across the network link, without requiring that both sets of files be present at one end of the link beforehand.

Features of rsync include the following:

  • Can update whole directory trees and filesystems

  • Optionally preserves symbolic links, hard links, file ownership, permissions, devices, and creation/modification times

  • Does not require special privileges to install

  • Uses internal pipelining to reduce latency for multiple files

  • Can use rsh, ssh, or direct sockets as the transport

  • Supports "anonymous rsync," which is ideal for public data mirroring (such as distributing file changes to FTP mirror sites serving open source programs)

rsync supports a large number of command-line switches, but the ones listed in Table 10.10 are used most frequently.

Table 10.10. Most Frequently Used rsync Switches

OPTION               DESCRIPTION

-a                   Puts rsync in archive mode. This is equivalent to
                     specifying all of these options: -r (recursive
                     copying), -l (copy symbolic links), -p (preserve
                     permissions), -t (preserve times), -g (preserve
                     group), -o (preserve owner), and -D (preserve
                     devices).

-c                   Performs a checksum on the data.

--delete             Deletes files on the receiving side that don't exist
                     on the sender side.

--delete-excluded    Deletes excluded files on the receiving side.

-e command           Specifies a remote shell program (such as ssh) to use
                     for communication between the local and remote copies
                     of rsync.

--exclude=pattern    Excludes files matching pattern.

--exclude-from=file  Excludes the patterns listed in file.

-n                   Performs a "dry run": shows what would have been
                     transferred, but no actual transfer takes place.

--progress           Shows progress during data transfer.

--stats              Gives some file transfer statistics.

-v                   Turns on verbose mode.

-z                   Compresses data during transfer.


The following are some sample rsync command usages:

  • Mirrors all home directories to a backup filesystem:

     rsync -acv --stats /home /backup 

    Notice that in the preceding command, there is no trailing slash after /home. If you specify the source path with a trailing slash (such as /home/) as in

     rsync -acv --stats /home/ /backup 

    all data in the /home directory will be mirrored to /backup but not the directory itself.

  • Mirrors root's home directory (/home/admin) to a remote system, Pollux, using ssh as the transport:

     rsync -azcve ssh /home/admin/ root@pollux:/backup/home/admin 

    root@pollux specifies the username (root) and the host (Pollux) to log in to, and /backup/home/admin is the remote directory into which rsync will mirror the files.

  • Copies root's home directory data back from the remote system:

     rsync -azcve ssh root@pollux:/backup/home/admin/ /home/admin 

The sample script in Listing 10.2, found at http://rsync.samba.org/examples.html, backs up a number of important filesystems to a spare disk; this extra disk has the capacity to hold all the contents of the main disk. The first part does the backup on the spare disk, and the second part backs up the critical parts to daily directories.

Listing 10.2. A Sample Backup Shell Script Using rsync
 #!/bin/sh

 export PATH=/usr/local/bin:/usr/bin:/bin

 LIST="rootfs usr data data2"

 for d in $LIST; do
     mount /backup/$d
     rsync -ax --exclude fstab --delete /$d/ /backup/$d/
     umount /backup/$d
 done

 DAY=`date "+%A"`

 rsync -a --delete /usr/local/apache /data2/backups/$DAY
 rsync -a --delete /data/solid /data2/backups/$DAY

For more details, consult the rsync man page, as well as the detailed documentation found at http://rsync.samba.org, home of rsync.

TIP

You can find an excellent article titled "Easy Automated Snapshot-Style Backups with Linux and Rsync" at http://www.mikerubel.org/computers/rsync_snapshots.


TIP

rsync is also available for different operating systems, including NetWare ( http://forge.novell.com/modules/xfmod/project/?rsync) and Windows ( http://www.cygwin.com).


YaST's System Backup and Restore Modules

The YaST Backup module allows you to create a backup of your data files. The backup created by the module does not cover the entire system; it only saves information about changed packages and copies of critical storage areas (such as the MBR), configuration files (such as those found under /etc), and user files (such as those under /home). Furthermore, it does not provide any incremental backup features. The Backup module is basically a GUI front end to tar.

To access the Backup module, from the YaST Control Center, select System, System Backup; or from a terminal session, use yast2 backup or yast backup. At the YaST System Backup screen, you can select Profile Management, Add to create different backup profiles that store different backup settings. For example, you can create a profile called MBR Backup that is used exclusively to back up the Master Boot Record and another profile called User Home Directories that will back up all user files.

Figure 10.1 shows the Archive Settings screen where you can specify the name of your tarfile and the archive type (such as a tarball compressed using gzip). Clicking Next takes you to the File Selection dialog box. Here, select the desired option(s) and click Next. (If you want to back up your MBR, click the Expert button for additional settings.) The last screen specifies the Search Constraints. Here, you select the directories and filesystems that you want to be excluded from backup. Click Finish to save the profile settings.

Figure 10.1. The Archive Settings dialog box.


NOTE

If your screen resolution is 800x600, the File Name edit box may be partially hidden if you have the task panel displayed.


If you need to do a one-off backup, you can select the Backup Manually button and walk through the same setup screens described in the preceding paragraphs, but not save the settings in a profile.

To perform a backup, simply select one of the existing profiles and then click Start Backup. YaST takes a few minutes to search through the system for files matching your selection and then creates the resulting archive (see Figure 10.2). The tarball created by YaST has the following "structure":

 Athena:/home/admin # tar -tvf backup.tar
 -rw------- root/root   143 2005-01-14 06:07:42 info/files
 -rw------- root/root   136 2005-01-14 06:07:42 info/packages_info.gz
 -rw------- root/root     6 2005-01-14 06:07:35 info/hostname
 -rw------- root/root    17 2005-01-14 06:07:35 info/date
 -rw------- root/root     0 2005-01-14 06:07:35 info/comment
 -rw------- root/root   127 2005-01-14 06:07:35 info/complete_backup
 -rw------- root/root 18754 2005-01-14 06:07:42 info/installed_packages
 -rw------- root/root  1679 2005-01-14 06:07:42 NOPACKAGE-20050114-0.tar.gz

Figure 10.2. The Backup summary screen.


Rather than holding the backed-up files directly, the tarfile contains another tarball with those files, plus additional identification information. As a result, the YaST System Restore module does not accept a "standard" tarball created by tar unless you have packaged the files into the same structure.

The System Restore module enables restoration of your system from a backup archive. To access the Restore module, from the YaST Control Center, select System, System Restore; the Archive Selection dialog box is then shown (see Figure 10.3). (From a terminal session, you can use yast2 restore or yast restore.) First, specify where the archives are located (removable media, local hard disks, or network file systems). A description and the contents of the individual archives are then displayed, letting you decide what to restore from the archives. There are two dialog boxes for uninstalling packages that were added since the last backup and for the reinstallation of packages that were deleted since the last backup. These two steps let you restore the exact system state at the time of the last backup.

Figure 10.3. The Archive Selection dialog box.


Because it does not support incremental or differential backups, the YaST System Backup module is of limited use. However, it is adequate for quick-and-dirty backups or for small test servers. Also, it has a built-in scheduler (which you access by selecting Profile Management, Automatic Backup), so you can run regular backups at predetermined times in the background.

CAUTION

If your screen resolution is 800x600, the Start Backup Automatically check box is hidden if the task panel is displayed. This makes all the selections on the Automatic Backup Options dialog box inaccessible (grayed out). You need to hide the task panel to access the check box.


Getting to Know AMANDA

AMANDA, the Advanced Maryland Automatic Network Disk Archiver, was originally written by James da Silva while at the University of Maryland at College Park in 1997. This backup system allows a network administrator to set up a single master backup server and back up multiple hosts to a single, large-capacity tape drive. AMANDA uses native dump and/or GNU tar facilities and can back up a large number of workstations running multiple versions of Linux/Unix. Recent versions can also use Samba to back up Microsoft Windows hosts; no support is available for Macintosh systems at the time of this writing.

NOTE

The current main website for AMANDA is http://sourceforge.net/projects/amanda, where you can find the latest version and its source code.


AMANDA provides its own network protocols on top of TCP and UDP instead of using the standard rsh, rdump, or rmt protocols. Each client agent writes to stdout, which AMANDA collects and transmits to the tape server host. This allows AMANDA to insert compression and encryption and also gather a catalog of the image for recovery. Multiple clients are typically backed up in parallel to files in one or more holding disk areas as cache buffers. A separate tape-writing process keeps the tape device streaming at maximum possible throughput. AMANDA can also run direct to tape without holding disks, but with reduced performance.

Either the client or the tape server may perform software compression, or hardware compression may be used. When enabled on the client side, software compression reduces network traffic; when performed on the server side, it reduces client CPU load. Software compression may be selected on an image-by-image basis. If Kerberos is available, clients may use it for authentication, and dump images may be encrypted. Instead of Kerberos, .amandahosts authentication files (similar to .rhosts) can be used, or AMANDA may be configured to use .rhosts (even though the r*-utilities, such as rlogin, are not themselves used). AMANDA is friendly with security tools like TCP Wrappers (ftp://info.cert.org/pub/network_tools) and firewalls.

AMANDA uses standard software for generating dump images and for software compression. Consequently, if you have an AMANDA-created tape but AMANDA itself is not readily available, you can use normal Linux/Unix tools such as mt, dd, and gzip/uncompress to recover a dump image from the tape. When the AMANDA software is available, it uses the catalog to locate which tapes are needed and to find the images on those tapes. An FTP-like restore utility is provided to make searching online dump catalogs easier for administrators recovering individual files.

There is no graphical interface available for AMANDA; it is entirely command-line based. As a matter of fact, AMANDA consists of a set of command-line utilities. Consequently, you can easily set up cron jobs to run backups automatically.

AMANDA has configuration options for controlling almost all aspects of the backup operation and provides several scheduling methods. A typical configuration does periodic full dumps with partial dumps in between. There is also support for the following:

  • Periodic archival backup, such as taking full dumps so you can send them to offsite storage.

  • Incremental-only backups where full dumps are done outside AMANDA, such as very active areas that must be taken offline, or no full dumps at all for areas that can easily be recovered from vendor media.

  • Always doing full dumps, such as for databases that can change completely between each run or critical files that are easier to deal with during an emergency if they are a single-restore operation.

Scheduling of full dumps is typically left up to AMANDA. They are scattered throughout the dump cycle to balance the amount of data backed up during each run. Because there is no human intervention to determine when a full dump may take place, AMANDA keeps logs of where backup images are located for each filesystem. For instance, the Friday tape will not always have a full dump of /home for Athena. Therefore, system-generated logs are essential. The scheduling of partial backup levels is also left to AMANDA. History information about previous levels is kept, and the backup level automatically increases when sufficient dump size savings are realized.

A simple but efficient tape management system is employed by AMANDA. It protects itself from overwriting tapes that still have valid dump images and from writing on tapes that are not allocated to the configuration. Images may be overwritten when a client is down for an extended period or if not enough tapes are allocated, but only after AMANDA has issued several warnings. AMANDA can also be told not to reuse specific tapes.

As you can see, many of the features in AMANDA rival those of many of the commercial backup products, and it's free! If you need to back up multiple systems, take a look at AMANDA. It is included as part of the SLES 9 software.

TIP

You can find an excellent reference on configuring and using AMANDA in Chapter 4 of W. Curtis Preston's book Unix Backup & Recovery (ISBN 1-56592-642-0).


Scheduling Backups

As already discussed, backups should be made on a regular basis to be of any use in case of an accident or other emergency. Most, if not all, commercial backup products have a built-in scheduler. The Linux-supplied tools, however, do not come with schedulers; instead, you can use cron to execute your backup commands at preset times.

NOTE

While each performs the same task, there are many variants of cron. The version of cron included with SLES 9 is Vixie cron, which is a rather full-featured cron implementation based on AT&T System V's cron. Each user has his or her own crontab file (placed in /var/spool/cron/tabs) and is allowed to specify environment variables (such as PATH, SHELL, HOME, and so on) within that crontab. The actual access to the cron service is controlled by the /var/spool/cron/allow and /var/spool/cron/deny files. Unlike the other cron variants, this version also offers support for the SELinux security module ( http://www.nsa.gov/selinux/papers/module-abs.cfm) and PAM. It supports fewer architectures than Dcron (Dillon's cron), but more than Fcron.


System jobs are controlled via the contents of /etc/crontab and the files located in /etc/cron.d. For backup jobs, it is probably best to place the necessary commands in /etc/crontab. The following is a sample crontab entry that performs a backup of /home/project every day at 3:00 a.m. using tar:

 #mm hh dd mm ww command
  00 03  *  *  * (tar -czpvf /backup/project.tgz /home/project > /home/peter/tar.log)

Refer to man 5 crontab for details about the format of cron commands.
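Before entrusting a command like this to cron, it is worth running the tar step once by hand against a scratch directory so that any problems show up interactively. The paths below are examples only:

```shell
# build a scratch "project" tree and archive it the same way the crontab
# entry would (all paths here are examples)
mkdir -p /tmp/backup-demo/project
echo "some data" > /tmp/backup-demo/project/notes.txt

tar -czpf /tmp/backup-demo/project.tgz -C /tmp/backup-demo project

# list the archive to confirm it holds what you expect
tar -tzf /tmp/backup-demo/project.tgz
```

Once the archive lists correctly, the same command (with the real paths) can go into /etc/crontab.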

Sometimes you may need to run a (backup) command just once, but at a later time. For this purpose, you can use the at command instead of cron.

The at command requires one argument (the time) and accepts a number of options. The command syntax is as follows:

 at [-q queue] [-bdlmrv] [-f file] time [date | +increment] 

The output from the commands in file (as specified via -f file) is emailed to the user who submitted the job; at executes the command file using /bin/sh. The following is a sample at command that will make the backup1 script run 10 minutes from the time it was submitted:

 Athena:/home/admin # at -f backup1 now+10 minutes
 warning: commands will be executed using /bin/sh
 job 5 at 2005-1-19 14:24

The following example will execute the backup1 script at 3 p.m., three days from today:

 Athena:/home/admin # at -f backup1 3pm + 3days
 warning: commands will be executed using /bin/sh
 job 6 at 2005-1-22 15:00

Similar to cron, at uses allow (/etc/at.allow) and deny (/etc/at.deny) files to control which of the nonroot users may submit at commands.

NOTE

The at service (atd) is not started by default. You can check to see whether it is currently running by using the command ps aux | grep atd and looking for the atd process. If it is not running, you can manually start it with /usr/sbin/atd. Similarly, you can use ps aux | grep cron to ensure the cron service is running. If it is not, you can start it with /usr/sbin/cron.


For more information about the at commands, especially its time syntax, see its man page.

Commercial Backup Products

Other than the tools and free utilities included with SLES, a number of commercial backup products are also available. Some notable mentions follow:

  • CTAR The Compressing Tape Archiver is an unattended backup/preconfigured command scheduler. It is based on the nonproprietary industry standard tar format. CTAR has been around since 1989 and supports a wide range of Linux and Unix systems. Visit UniTrends Software ( http://www.unitrends.com) for more information.

  • BRU The Backup and Restore Utility family of products is strongly based on tar but adds many more features, including both command-line and GUI interfaces. It runs a daemon that manages the backup schedule. BRU supports full, incremental, and differential backups, as well as catalogs, and can save the archive to a file as well as a wide range of storage devices. Like CTAR, BRU has been around for a long time (since 1985), so it has a proven track record. Visit TOLIS Group at http://www.tolisgroup.com for more information.

  • BrightStor ARCserve Backup for Linux ARCserve is an easy-to-use, high-performance, comprehensive data management tool for enterprise networks. Utilizing a browser-based Java GUI, ARCserve makes managing the backup of large servers and heterogeneous networks simple. Full automation of the data management process, including tape rotation, is made easy by the built-in Auto Pilot feature. A number of add-on agents and modules are available to support hot backup of databases (such as MySQL) and applications (such as Apache Web Server). Visit Computer Associates at http://www.ca.com for more information.

  • Legato NetWorker for Linux NetWorker utilizes a centralized console for configuring and managing all backups and restores, thus reducing the amount of training required across your system. Wizard-driven configuration tools help simplify such tasks as filesystem and device configuration. You can readily implement flexible, automated, calendar-based scheduling, and design separate browse and retention policies at client or defined backup levels to enable more flexibility in managing your data. The user console allows your users to perform ad hoc backups and even browse and recover their own files, eliminating the need for administrator intervention. NetWorker also provides automatic event notification via pager, email, or optional SNMP module. For heavily scripted environments or users wishing to integrate with other applications, NetWorker provides extensive command-line capabilities. A number of modules are also available, such as NetWorker Module for Oracle, which enables you to perform online database backups. Visit EMC Legato at http://www.legato.com for more information.

  • System Backup Administrator System Backup Administrator (SBA) provides a graphical interface for administration of various types of backups. The rich feature set in SBA allows you to use a single backup product for daily incremental backups, raw partition backups, and full-system backups, along with the necessary boot media. SBA also has some unique system recovery features that are not found in other similar products. For instance, when you are reinstalling a system using new hardware, the configuration is tailored to work with your new hardware configuration. In addition, you can completely recustomize your system during the system installation process by changing filesystem types, adding software RAID devices, converting to LVM partitions, and much more. Visit Storix, Inc., at http://www.storix.com for more information.

  • Arkeia Server and Network Backup Arkeia offers two Linux backup solutions: Arkeia Server Backup (ASB) and Arkeia Network Backup (ANB). ASB is specifically designed for businesses and enterprise departments that rely on local backups to a single tape drive, and ANB was designed to protect heterogeneous networks with options for backing up to disks and/or tape libraries. Arkeia users can take advantage of a fully integrated wizard for easy setup of basic operations, and the GUI interface provides ready access to various functions and features, including scheduling and email notification. Plug-ins are available to perform online, hot backups of widely used database applications, such as Lotus Notes and MySQL. Visit Arkeia Corporation at http://www.arkeia.org for more information.



    SUSE LINUX Enterprise Server 9 Administrator's Handbook
    ISBN: 067232735X
    Year: 2003
    Pages: 134