DDT: Not the Pesticide, the Other DDT


Toward the middle of the spectrum are two "flavors" of a data protection topology nicknamed "DDT," for Disk-to-Disk-to-Tape. As shown in Figure 9-12, DDT builds upon the multi-tier architecture described at the outset of this chapter.

Figure 9-12. Disk-to-disk-to-tape.

graphics/09fig12.gif

Depending on the vendor, this secondary tier of disk may serve any of several roles (see Figure 9-13):

  • A "virtual tape" cache for a back-end library that enables tape media emulation, so that backup data can be aggregated and parsed to fill each tape media cartridge fully;

  • A cache for low-latency data restoration on a local basis, with tape used as a medium for remote data recovery;

  • A location for "scrubbing" backup datasets using correlation engines that prevent duplicate data from being recorded to backup media;

  • A location for "cleaning data" using anti-virus software and exclusion engines that remove unwanted file types from the backup data set; and/or

  • A location for applying a data naming scheme such as the one discussed in the previous chapter. (While not specifically relevant to data protection, this is a potential solution for the problem of applying a naming scheme to the large quantities of data that may have been generated prior to the implementation of such a scheme.)
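The following sketch illustrates, in simplified form, how a combined "scrubbing" and "cleaning" pass over a staging area might work. It is a minimal illustration only: the whole-file hashing, the exclusion list, and the staging path are assumptions made for the example, not any vendor's actual correlation or exclusion engine.

    import hashlib
    from pathlib import Path

    # Hypothetical list of "unwanted" file types to exclude from the backup set.
    EXCLUDED_SUFFIXES = {".mp3", ".tmp", ".iso"}

    def scrub_and_clean(staging_dir):
        """Yield only the files worth writing to backup media: drop unwanted
        file types ("cleaning") and byte-for-byte duplicates ("scrubbing")."""
        seen_digests = set()
        for path in Path(staging_dir).rglob("*"):
            if not path.is_file():
                continue
            if path.suffix.lower() in EXCLUDED_SUFFIXES:
                continue                          # cleaning: unwanted file type
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in seen_digests:
                continue                          # scrubbing: duplicate content
            seen_digests.add(digest)
            yield path

    # Illustrative staging path; in practice this would be the second disk tier.
    for keeper in scrub_and_clean("/staging/nightly"):
        print("write to media:", keeper)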

Figure 9-13. Possible uses for second disk tier in DDT.

graphics/09fig13.gif

Vendors often augment the value proposition for this secondary disk storage tier by claiming that, in addition to reducing the risk of data loss in a disaster, the extra disk also saves the customer money. The basic argument is that secondary-tier storage can use inexpensive IDE/ATA or SATA drives, since access to this layer is expected to be less frequent and less demanding than access to primary storage arrays.

Some vendors hold that the secondary-tier array can be composed of older storage platforms that the organization already owns, thereby prolonging the life of existing storage investments. A few vendors suggest that "D2" storage arrays can also serve as initiators of third-party data mover processes, such as those based on the Network Data Management Protocol (NDMP), that enable back-end transfers of data to tape without involving production servers (so-called "server-less" backup).
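The "server-less" idea is easier to see in outline. The sketch below is purely conceptual; it does not use the real NDMP protocol or any vendor's API, and the class and method names are hypothetical. The point is the separation of paths: the backup application issues only control commands, while the secondary array moves the data to tape itself, keeping production servers and the LAN out of the data path.

    class SecondTierArray:
        """Hypothetical data mover running on the secondary ("D2") disk array."""

        def copy_to_tape(self, source_volume, tape_drive):
            # A real array would read blocks from its own disks and stream
            # them over the back-end SAN to the tape drive.
            print(f"array moving {source_volume} -> {tape_drive}; no production server involved")
            return 0  # placeholder for the number of bytes actually moved

    class BackupApplication:
        """Issues control commands only; no backup data crosses the LAN."""

        def run_job(self, mover, volume, drive):
            moved = mover.copy_to_tape(volume, drive)   # control path only
            print(f"job complete; {moved} bytes moved by the array itself")

    BackupApplication().run_job(SecondTierArray(), "/vol/backup_staging", "tape0")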

DDT actually comes in two distinct "flavors": tape emulating and "disk as disk." In the first variation, tier-two disk is used as virtual tape, as described earlier in the chapter. The strategy, enabled by tape virtualization software, seeks to capitalize on the throughput of disk to shorten backup times. Traditional tape vendors have preferred this approach because it requires virtually no change in the backup software already installed in customer shops. The data streamed to the tape-emulating disk may be offloaded to actual tape as a secondary process in some configurations.
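A rough sketch of the aggregation idea behind tape emulation follows. The cartridge capacity, class names, and flush callback are assumptions made for illustration; the point is simply that backup streams accumulate on disk and are offloaded to physical tape only when a full cartridge's worth of data is ready.

    # Illustrative cartridge capacity; real media sizes vary by format.
    CARTRIDGE_CAPACITY = 200 * 1024**3

    class VirtualTapeCache:
        def __init__(self, flush_to_tape):
            self.flush_to_tape = flush_to_tape   # callback that offloads a full cartridge image
            self.current = []                    # (job_name, nbytes) chunks in the open cartridge
            self.used = 0

        def write(self, job_name, nbytes):
            """Append a backup stream to the open virtual cartridge, offloading
            each cartridge to physical tape only when it is completely full."""
            while nbytes > 0:
                take = min(CARTRIDGE_CAPACITY - self.used, nbytes)
                self.current.append((job_name, take))
                self.used += take
                nbytes -= take
                if self.used == CARTRIDGE_CAPACITY:
                    self.flush_to_tape(self.current)
                    self.current, self.used = [], 0

    vtl = VirtualTapeCache(flush_to_tape=lambda image: print("offloading full cartridge:", image))
    vtl.write("nightly_sql_dump", 150 * 1024**3)
    vtl.write("file_server_full", 320 * 1024**3)   # spills across a second and third cartridge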

The other flavor of DDT treats disk as disk. Data is not streamed to the second disk tier, but is copied or mirrored. Advocates of this approach claim that disk-based replication enables fail-over in an emergency and highly granular file-by-file restores: advantages over tape for mission-critical applications. One of the best implementations of this approach is Avamar Technologies' Axion product.
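The sketch below shows, in greatly simplified form, why "disk as disk" lends itself to granular restores: the second tier holds an ordinary copy of the file system, so restoring one file is a straightforward copy back. The paths and the modification-time comparison are illustrative assumptions, not any vendor's implementation.

    import shutil
    from pathlib import Path

    def mirror(primary, second_tier):
        """Copy new or changed files from primary disk to the second disk tier."""
        for src in Path(primary).rglob("*"):
            if not src.is_file():
                continue
            dst = Path(second_tier) / src.relative_to(primary)
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)

    def restore_file(second_tier, primary, relative_name):
        """Granular, file-by-file restore straight from the disk replica."""
        shutil.copy2(Path(second_tier) / relative_name, Path(primary) / relative_name)

    # Illustrative paths only.
    mirror("/data/orders", "/tier2/orders")
    restore_file("/tier2/orders", "/data/orders", "2003/invoice-1042.xml")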

Axion features "secret sauce" technology called commonality factoring that finds and eliminates redundant sequences of data as it creates copies of enterprise systems for regular data protection and archive. Intelligent client agents installed on enterprise systems identify replicated data sequences in files and across systems before sending data over networks, reducing strain on congested local or wide area networks. In actual customer environments, Axion sends and stores 100 times less data than conventional backup and restore solutions. [6]
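The general principle behind commonality factoring can be sketched as content-addressed chunking: break each file into chunks, identify each chunk by a hash, and transmit only chunks the backup store has never seen. The fixed chunk size, file paths, and data structures below are simplifications for illustration; Avamar's actual algorithm is proprietary.

    import hashlib

    CHUNK_SIZE = 64 * 1024    # illustrative fixed chunk size

    def backup_file(path, known_chunks):
        """Return the file's "recipe" (ordered chunk hashes) and the number of
        bytes actually transmitted; chunks already known are never resent."""
        recipe, bytes_sent = [], 0
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha1(chunk).hexdigest()
                recipe.append(digest)
                if digest not in known_chunks:    # only previously unseen data crosses the network
                    known_chunks.add(digest)
                    bytes_sent += len(chunk)
        return recipe, bytes_sent

    # Illustrative usage: the second file shares most chunks with the first,
    # so very little data is actually sent for it.
    known = set()
    _, sent_monday = backup_file("/var/backups/payroll_monday.db", known)
    _, sent_tuesday = backup_file("/var/backups/payroll_tuesday.db", known)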

Like EMC's Centera offering, Axion provides fault tolerance using a sophisticated Redundant Array of Independent Nodes (RAIN) [7] architecture. As shown in Figure 9-14, Axion RAIN distributes data and critical fault-tolerance information across storage nodes (rather than across the disks of a RAID array), allowing Axion to operate through and recover gracefully from node failures.

Figure 9-14. Avamar Technologies' Axion Solution in a Rack-Mount RAIN Configuration. (Source: Avamar Technologies, 1A Technology Drive, Irvine, CA 92618, www.avamar.com.)

graphics/09fig14.jpg

Axion supports RAIN-5, [8] a method for providing fault tolerance across nodes within a location at a fraction of the storage penalty required by solutions that support mirroring only. In addition to fault tolerance within a single location, Axion RAIN technology can also be configured to provide fault tolerance across geographical locations with Remote RAIN-1 (mirroring across two sites) or Remote RAIN-5 (efficient fault tolerance across three or more sites), protecting critical business assets from site disaster.
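The arithmetic behind RAIN-5's storage efficiency is the familiar RAID-5 parity calculation applied across nodes rather than disks. The sketch below is a simplified illustration, not Avamar's implementation: data is striped across all but one node, an XOR parity block goes to the remaining node, and any single node's contents can be rebuilt from the survivors at roughly 1/N additional capacity instead of the 2x penalty of mirroring.

    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length blocks together, byte by byte."""
        return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

    def stripe(data, nodes):
        """Split data across (nodes - 1) data blocks plus one XOR parity block."""
        size = -(-len(data) // (nodes - 1))                  # ceiling division
        blocks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
                  for i in range(nodes - 1)]
        return blocks + [xor_blocks(blocks)]                 # last block is parity

    def rebuild(blocks, failed):
        """Recover the block lost on a failed node from the surviving nodes."""
        return xor_blocks([b for i, b in enumerate(blocks) if i != failed])

    placed = stripe(b"critical business records", nodes=4)   # ~33% overhead, not 100%
    assert rebuild(placed, failed=1) == placed[1]            # any single node can be lost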

All RAIN implementations are active-active, which allows data to be stored to and retrieved from all sites simultaneously. Axion RAIN eliminates all single points of failure in a properly configured and deployed system, ensuring high availability and system reliability. Finally, Axion can be configured to stream restores to external devices for environments that require offsite data archival on removable media.

Ultimately, technologies like Avamar's Axion and EMC's Centera both aim to reinvent disk storage by providing enhanced intelligence for data migration, fault-tolerant provisioning, and, in the case of Axion, a sophisticated kind of data compression. Both solutions also, as of this writing, require that all storage components used in the solution be purchased solely from the vendor, though Avamar spokespersons claim that they will shortly announce compatibility of their technology with virtually any hardware array.

In addition to multi-tier or RAIN configurations, DDT could also be implemented as a network-attached storage "appliance" that can be deployed wherever it makes sense within the organization's IT infrastructure. Figure 9-15 illustrates the concept: a network-attached storage (NAS) "head" (a thin server optimized for storage I/O and network attachment), with front-end or network-facing support for IP-based file system protocols such as Network File System or Common Internet File System (NFS/CIFS) and the burgeoning IP-based block storage protocol, Small Computer Systems Interface over Internet Protocol (iSCSI). The NAS head can be attached to a back-end switched Fibre Channel fabric or IP SAN comprising two (or more) tiers of disk, and also to a tape solution. Essentially, this is DDT in a box.

Figure 9-15. DDT in a box.

graphics/09fig15.gif


