Backup Tips and Tricks


One of the most common maintenance tasks is performing system backups of the servers on your network. Backups are also the single most important maintenance task in the enterprise. They can be frustrating, but there are ways to make the process smoother and faster.

Improving Performance With a Dedicated Backup VLAN

In a perfect world there would be no users accessing data during the backups. Bandwidth would be unlimited and you wouldn't need things like routers or Access Control Lists to prevent traffic from spilling all over the network. Often, due to constraints in the network, it is necessary to put servers into separate networks. This often results in traffic having to cross switch backplanes, switch trunks, or even routers. One way to avoid this situation is to implement a dedicated network for the backups. By adding an additional network interface card (NIC) to each server and addressing it out of an address space unknown to the production network, you can segment all backup traffic. This also enables users to access data without affecting the bandwidth available to the backup system. If you're using a chassis switch, all the NICs for the backup network should attach to the same card. This maximizes available bandwidth because network traffic does not have to cross the backplane.
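As a minimal sketch of the idea, the following Python fragment pins a transfer to the backup segment by binding the socket to the backup NIC's address before connecting. The addresses and port are hypothetical examples on a backup-only subnet unknown to production:

    import socket

    # Hypothetical addresses on the dedicated backup subnet.
    BACKUP_NIC_ADDR = "192.168.250.10"     # this server's backup NIC
    TARGET_BACKUP_ADDR = "192.168.250.20"  # target server's backup NIC

    def open_backup_channel(port: int = 10000) -> socket.socket:
        """Open a TCP connection pinned to the backup network."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((BACKUP_NIC_ADDR, 0))           # source address = backup NIC
        s.connect((TARGET_BACKUP_ADDR, port))  # destination = backup VLAN only
        return s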

Also consider running the backup server with a NIC that is faster than the NICs in the target servers. Remote backups rarely take full advantage of the available bandwidth on a system due to the spooling of large numbers of small files. The backup server, on the other hand, can handle multiple conversations at once and spool data to multiple backup devices.

Additional NIC

If the network is still supporting WINS, ensure that the additional NIC is not registering itself in WINS. Otherwise, a client trying to reach the server could resolve its name to the backup address, which is unreachable from the production network. Similarly, the NIC for the backup network should not be dynamically registering itself in DNS.
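As a rough illustration, the following Python sketch sets the documented per-interface registry values that turn off dynamic DNS registration and NetBIOS over TCP/IP (and with it WINS registration) for a single adapter. The interface GUID is a placeholder you would look up under the Tcpip\Parameters\Interfaces key, and the script must run with administrative rights on the server:

    import winreg

    # Placeholder: substitute the backup NIC's interface GUID, found under
    # HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
    IFACE_GUID = "{00000000-0000-0000-0000-000000000000}"

    def quiet_backup_nic(guid: str) -> None:
        """Stop the backup NIC from registering itself in DNS and WINS."""
        tcpip = (r"SYSTEM\CurrentControlSet\Services\Tcpip"
                 r"\Parameters\Interfaces" + "\\" + guid)
        netbt = (r"SYSTEM\CurrentControlSet\Services\NetBT"
                 r"\Parameters\Interfaces\Tcpip_" + guid)
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, tcpip, 0,
                            winreg.KEY_SET_VALUE) as key:
            # 1 = do not register this interface's addresses in DNS
            winreg.SetValueEx(key, "DisableDynamicUpdate", 0,
                              winreg.REG_DWORD, 1)
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, netbt, 0,
                            winreg.KEY_SET_VALUE) as key:
            # 2 = disable NetBIOS over TCP/IP (no WINS registration)
            winreg.SetValueEx(key, "NetbiosOptions", 0,
                              winreg.REG_DWORD, 2)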


BEST PRACTICE: Profiling Your Backup Network

In switched 100Mbps networks, experiment with duplex settings on the switch and the servers. Very often, older hardware will perform its backups as much as twice as fast when forced to half duplex. Although this seems counterintuitive, because full duplex should in theory be twice as fast as half duplex, many older network adapters and adapter drivers are not optimized for full duplex; the resulting retransmissions make improperly configured systems run slower than expected.


Spool to Disk and Later to Tape

In some backup situations the window of opportunity to perform a backup can be very small. Databases are often stopped during their backup and need to be back up and running as soon as possible. One of the simplest ways to get the database backed up quickly is to first back up to disk and later spool the backup off to tape. The fastest tape technologies on the market still can't keep up with a good disk. Many enterprise-level backups first run a job that uses a large SAN or NAS as the media and then run a secondary job that writes the data from the SAN or NAS to tape. Because the database is only stopped for the fast disk copy, the tape system has the rest of the day to commit the data to tape.
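A rough Python sketch of the two-stage approach follows; the stop_database, start_database, and write_to_tape helpers, and the paths, are hypothetical placeholders for your own tooling:

    import shutil
    import time
    from pathlib import Path

    DB_FILES = Path(r"D:\database")        # hypothetical database location
    DISK_SPOOL = Path(r"E:\backup_spool")  # hypothetical fast-disk spool area

    def stop_database():
        pass  # placeholder: stop the database service here

    def start_database():
        pass  # placeholder: restart the database service here

    def write_to_tape(snapshot: Path):
        pass  # placeholder: hand the snapshot directory to the tape job

    def stage_one_disk_copy() -> Path:
        """Stop the database only long enough for a fast disk-to-disk copy."""
        stop_database()
        try:
            dest = DISK_SPOOL / time.strftime("%Y%m%d")
            shutil.copytree(DB_FILES, dest)
        finally:
            start_database()   # back online as soon as the copy finishes
        return dest

    def stage_two_tape_spool():
        """Run later in the day; the database is unaffected while tape catches up."""
        for snapshot in sorted(DISK_SPOOL.iterdir()):
            write_to_tape(snapshot)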

A less expensive variation on that theme is to have the server perform a local backup to another locally attached disk. The tape-based backup simply backs up this local backup file. It requires a two-step restore, but the reduced impact on the database often makes this worthwhile.

Just a Bunch of Disks

Although SAN and NAS are excellent technologies for spooling data to disk before spooling to tape, less expensive technologies like JBOD (Just a Bunch of Disks) are also very viable choices.


The additional benefit to spooling to disk first is that, statistically, most restores are of the previous night's data. By performing the restore from disk instead of tape, the system can be brought back into a usable state much more quickly. In addition to the backup and restore processes being faster, the step of verifying the contents of the backup is often 10 times faster.

Grandfather, Father, Son Strategies and Changers

Almost every administrator is familiar with the daily task of swapping tapes. Keeping a large array of labeled tapes, carefully placing them in order, determining which tape needs to be moved into the fireproof safe, deciding which tape needs to go offsite, and putting the tapes that came back from storage back into rotation are all familiar and tedious tasks. Many companies have grown past this method and moved into the world of tape changers. Tape changers range from a single-drive unit with a six-tape cartridge to backup devices that are literally the size of a conference room. These can have dozens and dozens of drives with rack after rack of tapes. Multiple robots perform a veritable ballet as they scan tapes with their barcode readers, pluck them from their homes, and place them into drives. They carefully record which slot holds which tape for which job. They know when to place a tape into a mail slot to be picked up for storage. They take over a tremendous amount of the work of backups.

These changers all have one thing in common. They use software that allows them to make dynamic decisions about tape management. It's a common occurrence for a system administrator to come in Monday morning and find their backup server asking for a second tape. Imagine the horror of finding out that your mail server crashed at 2 a.m. on Monday morning and the backup server has been asking for a second tape since Friday night. Through the use of tape changer hardware, you can create a group of tapes known as a scratch pool that the system can use if it runs out of space. This greatly reduces the possibility of a job not finishing on time for lack of tapes.
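A minimal sketch of the scratch-pool logic, with hypothetical tape labels: the job draws from its own media pool first and overflows onto shared scratch media instead of stalling until an operator arrives.

    # Hypothetical tape labels; a real changer tracks these by barcode.
    job_pool = ["MAIL-001", "MAIL-002"]          # tapes assigned to this job
    scratch_pool = ["SCRATCH-01", "SCRATCH-02"]  # shared overflow tapes

    def next_tape() -> str:
        """Prefer the job's own media; overflow onto scratch instead of stalling."""
        if job_pool:
            return job_pool.pop(0)
        if scratch_pool:
            return scratch_pool.pop(0)
        raise RuntimeError("out of media; the job must wait for an operator")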

The concept of GFS (Grandfather, Father, Son) is commonly used with changers to simplify tape rotation management. The GFS strategy is a method of maintaining backups on a daily, weekly, and monthly basis. GFS backup schemes are based on a seven-day weekly schedule, beginning any day of the week. A full backup is performed at least once a week. Most organizations do the full backup on Friday night. All other days, full, partial, or no backups are performed. The daily backups are the Son. The last full backup in the week (the weekly backup) is the Father. The last full backup of the month (the monthly backup) is the Grandfather.

By default, you can re-use daily media after six days. Weekly media can be overwritten after five weeks have passed since it was last written to. Monthly media are saved throughout the year. These can and should be taken off-site for storage. You can change any of these media rotation defaults to suit your particular environment.
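As a rough sketch of how a changer's software might classify media under this scheme, the following Python fragment assumes the Friday-night full backup described above and the default retention periods just listed (6 days, 5 weeks, 1 year):

    import datetime

    def gfs_generation(day: datetime.date) -> str:
        """Classify a backup date under GFS, assuming Friday full backups."""
        if day.weekday() == 4:                      # Friday = full backup
            if (day + datetime.timedelta(days=7)).month != day.month:
                return "grandfather"                # last Friday of the month
            return "father"
        return "son"

    # Default retention from the rotation rules above, in days.
    RETENTION_DAYS = {"son": 6, "father": 35, "grandfather": 365}

    def reusable_after(day: datetime.date) -> datetime.date:
        """Earliest date on which media written that day may be overwritten."""
        return day + datetime.timedelta(days=RETENTION_DAYS[gfs_generation(day)])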

Service Level Agreements

If offsite storage services are being used, don't forget to include their Service Level Agreements (SLAs) in your own. If the offsite storage vendor only offers four-hour service response to pull and return a stored tape, you must factor that into your own SLA.


The primary purpose of the GFS scheme is to suggest a minimum standard and consistent interval at which to rotate and retire the media.

Use the Appropriate Agents

Backup software often comes with specialized agents. These agents might deal with connectivity to proprietary systems, file compression, performance enhancements, or file locking. Agents that deal with proprietary systems like SQL or Exchange are absolutely critical to the backup and restore of these systems. Failure to use these agents will often leave you with backups that cannot restore the system to a usable state. Agents that deal with compression are mostly used in situations where there is limited bandwidth between the backup server and the remote server. By compressing the data, the total amount of data to be moved is reduced, which results in a faster transfer. The tradeoff is an increased CPU load on the server performing the compression.

To ensure a complete backup of data files, the use of an Open File agent is very important. Without such an agent, files might get skipped during the backup. This could be an unfortunate situation if one of these files needs to be restored. Finally, agents that are built for accelerating backups can be very helpful in situations where large numbers of small files are being backed up. Due to the constant seeking of individual files, the disk subsystem is often underused in this situation. These acceleration agents prepackage the files into larger chunks to make better use of the disk subsystem.
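As an illustration of the prepackaging idea (not of any particular vendor's agent), this Python sketch bundles a tree of small files into one large archive so the disk subsystem sees a few big sequential operations instead of thousands of seeks:

    import tarfile
    from pathlib import Path

    def prepackage(source: str, chunk: str) -> None:
        """Bundle a directory tree of small files into one large archive."""
        with tarfile.open(chunk, "w") as bundle:   # one big sequential file
            for f in Path(source).rglob("*"):
                if f.is_file():
                    bundle.add(f)

    # Hypothetical example:
    # prepackage(r"D:\userfiles", r"E:\backup_spool\userfiles.tar")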

Hardware Compression Is More Efficient

Hardware compression at the tape device is almost always more efficient than software compression. Greater throughput can be had by allowing the server to spool data without pausing to compress it locally. If network connectivity is slow, however, it can be a worthwhile tradeoff to compress locally in order to reduce the traffic on the network.


What to Include and Exclude in a Backup

Always be aware of the files on your system and make intelligent decisions about what files will be backed up. As stated earlier, the System State of a Windows Server 2003 system is critical for the ability to restore data. Other files can safely be excluded. The swap file, for example, will not get backed up properly and is not required for a system restore. It is also easy to overlook data that is getting backed up twice. If you are backing up an application like Exchange with a dedicated agent, there is probably no need to try to back up the .edb files at a file level. Similarly, if a network is using DFS, it is not necessary to back up each replica of the data. If DFS data needs to be restored, you can simply re-create the replica and allow the File Replication Service to re-create the information. In the meantime, users will be accessing another replica anyway.
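A simple sketch of such an exclusion policy in Python, with hypothetical patterns mirroring the examples above:

    import fnmatch

    # Hypothetical exclusion patterns.
    EXCLUDE_PATTERNS = [
        r"*\pagefile.sys",    # swap file: not restorable and not needed
        r"*\*.edb",           # Exchange databases: the Exchange agent owns these
        r"*\DFS-Replica\*",   # DFS replicas: FRS can rebuild them
    ]

    def should_back_up(path: str) -> bool:
        """Return False for files the file-level backup should skip."""
        return not any(fnmatch.fnmatch(path, p) for p in EXCLUDE_PATTERNS)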

By examining the logs from the backups you can determine which files are always in use and can't be backed up. Based on this information you can decide whether they need an agent or whether this data can safely be skipped.
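For example, a short Python sketch can tally the files reported as skipped. The log format shown here is invented, so adapt the pattern to whatever your backup software actually writes:

    import re
    from collections import Counter

    # Invented log format; adjust the pattern to your software's logs.
    SKIP_LINE = re.compile(r"skipped \(in use\): (?P<path>.+)", re.IGNORECASE)

    def locked_files(log_paths):
        """Count how often each file is reported as skipped across log files."""
        hits = Counter()
        for log in log_paths:
            with open(log, errors="replace") as fh:
                for line in fh:
                    match = SKIP_LINE.search(line)
                    if match:
                        hits[match.group("path")] += 1
        return hits.most_common()   # chronic offenders need an agent or an exclusion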

NTFS-level Flagging

Be aware of any NTFS-level flagging performed by your backup software. Some backup packages modify an NTFS attribute on the file, such as the archive bit, to indicate that the file has been backed up. Other technologies such as DFS can see this change and believe that the file was modified and needs to be replicated. This can result in a large and unintended load on the network.

Newer versions of tape backup software do not flag the file in this way and can minimize the unnecessary replication of files.
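For reference, a small Python sketch (Windows-only, using the Win32 GetFileAttributes call) that checks whether a file's archive bit is set; this is the attribute that older packages clear to mark a file as backed up:

    import ctypes

    FILE_ATTRIBUTE_ARCHIVE = 0x20   # Win32 archive-bit flag

    def archive_bit_set(path: str) -> bool:
        """Report whether a file's archive bit is currently set (Windows only)."""
        attrs = ctypes.windll.kernel32.GetFileAttributesW(path)
        if attrs == -1:             # INVALID_FILE_ATTRIBUTES
            raise OSError("cannot read attributes for " + path)
        return bool(attrs & FILE_ATTRIBUTE_ARCHIVE)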


Bootstrap Portion of ASR

Microsoft wrote the bootstrap portion of ASR (Automated System Recovery) to be extensible. This means that third-party backup solutions can leverage this functionality to provide their own bare-metal restore.

If your third-party backup solution does not support ASR, it might be a good idea to create an ASR backup via the built-in NTBackup utility that comes with Windows Server 2003 after you have the server configured with its applications. This will give you a head start if you need to do a ground-up restore later.



