Part II: Computer Forensics Evidence and Capture


Chapter 4: Data Recovery
Chapter 5: Evidence Collection and Data Seizure
Chapter 6: Duplication and Preservation of Digital Evidence
Chapter 7: Computer Image Verification and Authentication


Chapter 4: Data Recovery


Computer systems may crash. Files may be accidentally deleted. Disks may accidentally be reformatted. Computer viruses may corrupt files. Files may be accidentally overwritten. Disgruntled employees may try to destroy your files. All of these can lead to the loss of your critical data. You may think it is lost forever, but with the latest tools and techniques, it can often be recovered.

In many instances, the data cannot be found using the limited software tools available to most users. The advanced tools that you utilize should allow you to find your files and restore them for your use. In those instances where the files have been irreparably damaged, your computer forensics expertise should allow you to recover even the smallest remaining fragments.

With this in mind, data recovery is, of course, of potential interest to anyone who has lost data to the ravages of time, malice, or carelessness. But in forensic computing or analysis, it takes on a new meaning—suddenly what other people have thrown away can become an important component in understanding what has happened in the past, as burglary tools, data files, correspondence, and other clues can be left behind by interlopers.

This chapter covers the ins and outs of data recovery as it relates to computer forensics. But, before delving into the ins and outs, what is data recovery?



Data recovery is the process in which highly trained engineers evaluate and extract data from damaged media and return it in an intact format. Many people, even computer experts, fail to recognize data recovery as an option during a data crisis. Yet it is possible to retrieve files that have been deleted, passwords that have been forgotten, or to recover entire hard drives that have been physically damaged.

As computers are used in more important transactions and storage functions, and as more important data is stored on them, the importance of qualified data recovery experts becomes clear. Perhaps your information has been subjected to a virus attack, suffered damage from smoke or fire, or your drive has been immersed in water; data recovery experts can help you. Or, perhaps your mainframe software has malfunctioned, or your file allocation tables are damaged; data recovery experts can help you there, too.

So, what would happen to the productivity of your organization in the event of a system-wide data center failure? For most companies, the loss would be catastrophic. Hundreds, perhaps thousands, of employees would be rendered unproductive. Sales transactions would be impossible to complete. And customer service would suffer. The cost of replacing this data would be extraordinary—if it could be replaced at all.



We live in a world that is driven by the exchange of information. Ownership of information is one of the most highly valued assets of any business striving to compete in today’s global economy. Companies that can provide reliable and rapid access to their information are now the fastest growing organizations in the world. To remain competitive and succeed, they must protect their most valuable asset—data.

Fortunately, there are specialized hardware and software companies that manufacture products for the centralized back-up and recovery of business-critical data. Hardware manufacturers offer automated tape libraries that can manage millions of megabytes of backed-up information and eliminate the need for operators charged with mounting tape cartridges. Software companies have created solutions that can back-up and recover dozens of disparate systems from a single console.

So why, then, do industry experts estimate that over 45% of the data in client/server networks is still not backed-up on a regular basis? It is often due to organizations’ ever-shrinking back-up windows, inadequate network infrastructure, and a lack of system administration. Compounding the problem is an overall lack of experience in defining the proper features necessary for a successful back-up application. And finally, there is often a shortage of in-house expertise needed to implement sophisticated, enterprise-level back-up applications.

However, there are obstacles to deploying back-up applications. Let’s look at a few.

Back-up Obstacles

The following are common obstacles to deploying back-up applications:

  • Back-up window

  • Network bandwidth

  • System throughput

  • Lack of resources


Back-up Window

The back-up window is the period of time when back-ups can be run. The back-up window is generally timed to occur during nonproduction periods when network bandwidth and CPU utilization are low. However, many organizations now conduct operations seven days a week, 24 hours a day—effectively eliminating traditional back-up windows altogether.
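As a rough illustration of why shrinking windows matter, the arithmetic below checks whether a data set fits in a given window. The 2TB data set, 30 MB-per-second sustained rate, and 6-hour window are assumed figures for the sketch, not values from this chapter.

```python
# Check whether a back-up of a given size fits in the available window.
# All figures here are illustrative assumptions.

def backup_hours(data_gb: float, rate_mb_per_s: float) -> float:
    """Hours needed to move data_gb at a sustained rate_mb_per_s."""
    return (data_gb * 1024) / rate_mb_per_s / 3600

window_hours = 6.0                    # assumed nonproduction window
hours = backup_hours(2048, 30.0)      # assumed: 2TB at 30 MB/s sustained
print(f"{hours:.1f} h needed, window is {window_hours:.1f} h")
print("fits" if hours <= window_hours else "does not fit")
```

At these assumed figures the back-up needs roughly 19 hours, far more than the window allows, which is exactly the pressure the techniques later in this chapter are designed to relieve.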


Network Bandwidth

Many companies now have more data to protect than can be transported across existing LAN and WAN networks. If a network cannot handle the impact of transporting hundreds of gigabytes of data over a short period of time, the organization’s centralized back-up strategy is not viable.


System Throughput

There are three I/O bottlenecks commonly found in traditional back-up schemes. These are:

  1. The ability of the system being backed-up to push data to the back-up server

  2. The ability of the back-up server to accept data from multiple systems simultaneously

  3. The available throughput of the tape device(s) onto which the data is moved[i]


Any or all of the preceding bottlenecks can render a centralized back-up solution unworkable.


Lack of Resources

Many companies fail to make appropriate investments in data protection until it is too late. Often, IT managers choose not to allocate funding for centralized data protection because of competing demands resulting from emerging issues such as e-commerce,[ii] Internet/intranet applications, and other new technologies.

These are just a few of the impediments that make implementation of an enterprise back-up and recovery solution a low priority for some organizations. Fortunately, there have been major advances in hardware and software technologies that overcome many or all of the traditional obstacles faced by IT professionals as they attempt to develop a comprehensive data-protection plan. In addition, companies such as StorNet[iii] provide specialized expertise in the deployment of complex, integrated storage solutions.

The Future of Data Back-up

Successful data back-up and recovery is composed of four key elements: the back-up server, the network, the back-up window, and the back-up storage device (or devices). These components are highly dependent on one another, and the overall system can only operate as well as its weakest link. To help define how data back-up is changing to accommodate the issues described earlier, let’s take a look at each element of a back-up and recovery design and review the improvements being made.


The Back-up Server

The back-up server is responsible for managing the policies, schedules, media catalogs, and indexes associated with the systems it is configured to back-up. The systems being backed-up are called clients. Traditionally, all managed data in an enterprise that was being backed-up had to be processed through the back-up server. Conversely, all data that needed to be restored had to be accessed through the back-up server as well. This meant that the overall performance of a back-up or recovery was directly related to the ability of the back-up server to handle the I/O load created by the back-up process.

In the past, the only way to overcome a back-up server bottleneck was to invest in larger, more powerful back-up servers, or to divide the back-up network into smaller, independent groups. Fortunately, back-up-software developers have created methods to work around these bottlenecks. The most common workaround is to create tape servers that allow administrators to divide the back-up tasks across multiple systems, while maintaining scheduling and administrative processes on a primary or back-up server. This approach often involves attaching multiple tape servers to a shared tape library, which reduces the overall cost of the system. Figure 4.1 is an example of such a back-up configuration.[iv]

Figure 4.1: A back-up using a shared tape library. (©Copyright 2002, StorNet. All rights reserved).

The newest back-up architecture implements a serverless back-up solution that allows data to be moved directly from disk to tape, bypassing the back-up server all together. This method of data back-up removes the bottleneck of the back-up server completely. However, the performance of serverless back-up is then affected by another potential bottleneck—bandwidth. Figure 4.2 is an example of a serverless back-up.[v]

Figure 4.2: A serverless back-up system. (©Copyright 2002, StorNet. All rights reserved).


The Network

Centralization of a data-management process such as back-up and recovery requires a robust and available network data path. The movement and management of hundreds or thousands of megabytes of data can put a strain on even the best-designed networks. Unfortunately, many companies are already struggling with simply managing the existing data traffic created by applications such as e-commerce, the Internet, e-mail, and multimedia document management. Although technology such as gigabit Ethernet and ATM can provide relief, it is rarely enough to accommodate management of large amounts of data movement.

So, if there is not enough bandwidth to move all the data, what are the options? Again, it was the back-up-software vendors that developed a remedy. An enterprise-class back-up solution can distribute back-up services directly to the data source, while at the same time centralizing the administration of these resources. For example, if there is a 600GB database server that needs to be backed-up nightly, a tape back-up device can be attached directly to that server. This effectively eliminates the need to move the 600GB database across the network to a centralized back-up server. This approach is called a LAN-less back-up, and it relies on a remote tape server capability. Figure 4.3 demonstrates how this approach is configured.[vi]
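To see why the 600GB example above motivates a LAN-less design, consider the raw transfer time over the network path it avoids. The 100 Mb-per-second LAN speed below is an assumption for the sketch, not a figure from the text.

```python
# Hours to push a data set across a shared LAN, ignoring protocol
# overhead. The 100 Mb/s link speed is an illustrative assumption.

def lan_transfer_hours(data_gb: float, lan_mbits_per_s: float) -> float:
    """Hours to move data_gb across a link of lan_mbits_per_s."""
    mb_per_s = lan_mbits_per_s / 8.0       # megabits -> megabytes
    return data_gb * 1024 / mb_per_s / 3600

print(f"{lan_transfer_hours(600, 100):.1f} hours")   # ~13.7 h on 100 Mb/s
```

Nearly 14 hours of saturated LAN traffic per night is untenable on a shared production network, which is why attaching the tape device directly to the database server is attractive.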

Figure 4.3: A LAN-less back-up using remote tape server. (©Copyright 2002, StorNet. All rights reserved).

Another option is the installation of a network path dedicated to the management and movement of data. This data path can be SCSI, Ethernet, ATM, FDDI, or Fibre Channel. Creating a dedicated data path is the beginning of a Storage Area Network (SAN).[vii] SANs are quickly dominating the back-up landscape, and applications such as serverless and LAN-less back-up will continue to push this emerging technology forward. Figure 4.4 shows an example of a dedicated SAN topology.[viii]

Figure 4.4: A storage area network using serverless back-up. (©Copyright 2002, StorNet. All rights reserved).


The Back-up Window

Of all the parameters that drive the design of a back-up application, one remains an absolute constant, and that is time. A back-up window defines how much time is available to back-up the network. Time plays an important role in choosing how much server, network, and resource support needs to be deployed. Today, most companies are managing too much data to complete back-ups during these ever-shrinking back-up windows.

In the past, companies pressured by inadequate back-up windows were forced to add additional back-up servers to the mix, and divide the back-up groups into smaller and smaller clusters of systems. However, the back-up–software community has once again developed a way to overcome the element of time by using incremental back-ups, block-level back-ups, image back-ups, and data archiving.

Incremental Back-up

Incremental back-ups only transfer data that has changed since the last back-up. On average, no more than 5% of data in a file server changes daily. That means an incremental back-up may only require 5% of the time it takes to back-up the entire filesystem. Even so, a full back-up must still be made regularly, or restoration of the data will take too long. Fortunately, there are now back-up applications that combine incremental back-ups, thereby creating a virtual complete back-up every night without actually necessitating a full back-up during the limited back-up window.
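The idea of combining incrementals into a virtual complete back-up can be sketched simply: each nightly incremental records which files changed, and replaying the incrementals, oldest first, over the last real full back-up reconstructs the current state. The catalogs and version labels below are illustrative, not the format of any particular product.

```python
# Sketch of a "synthetic full": overlay incremental catalogs (oldest
# first) on the last full back-up's catalog. Data is illustrative.

def synthetic_full(full: dict, incrementals: list) -> dict:
    """Merge incrementals into a full catalog; newer versions win."""
    merged = dict(full)
    for night in incrementals:
        merged.update(night)        # newer file versions replace older
    return merged

full = {"a.txt": "v1", "b.txt": "v1", "c.txt": "v1"}
mon  = {"a.txt": "v2"}              # Monday: a.txt changed
tue  = {"b.txt": "v2", "d.txt": "v1"}   # Tuesday: b.txt changed, d.txt new
print(synthetic_full(full, [mon, tue]))
# a.txt at v2, b.txt at v2, c.txt at v1, d.txt at v1
```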

Block-Level Incremental Back-up

Block-level incremental back-ups provide similar benefits as incremental back-ups, but with even more efficiency. Rather than backing-up entire files that have been modified since the last back-up, only the blocks that have changed since the last back-up are marked for back-up. This approach can reduce the amount of incremental data requiring back-up nightly by orders of magnitude.

However, this benefit comes at a price. Often, the filesystem of the client must be from the same vendor as the back-up software. Also, there are databases, such as Oracle, that allow block-level back-ups to be done, but the CPU requirements to do so may render the approach ineffective. Nevertheless, block-level back-ups may be the only viable option for meeting your back-up window.
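One generic way to detect changed blocks, sketched below, is to hash fixed-size blocks and compare against the hashes recorded at the previous back-up. This is an illustration of the general technique, not the mechanism of any particular vendor; the 4KB block size is an assumption.

```python
# Generic block-level change detection: hash fixed-size blocks and
# select only the blocks whose hash differs from the previous run.

import hashlib

BLOCK = 4096  # assumed block size

def block_hashes(data: bytes) -> list:
    """SHA-256 digest of each fixed-size block."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old_hashes: list, new_data: bytes) -> list:
    """Indexes of blocks that differ from the previous back-up."""
    new_hashes = block_hashes(new_data)
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]

old = b"A" * BLOCK + b"B" * BLOCK
new = b"A" * BLOCK + b"C" * BLOCK       # only the second block changed
print(changed_blocks(block_hashes(old), new))   # [1]
```

The CPU cost mentioned above is visible even in this sketch: every block must be read and hashed on each run, which is why block-level schemes usually rely on filesystem or database support to track changes instead.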

Image Back-ups

Image back-ups are quickly gaining favor among storage administrators. This type of back-up creates copies or snapshots of a filesystem at a particular point in time. Image back-ups are much faster than incremental back-ups and provide the ability to easily perform a bare-bones recovery of a server without reloading the operating system, applications, and the like. Image back-ups also provide specific point-in-time back-ups that can be done every hour rather than once a day.

Data Archiving

Removing infrequently accessed data from a disk drive can reduce the size of a scheduled back-up by up to 80%. By moving static, infrequently accessed data to tape, back-up applications are able to focus on backing-up and recovering only the most current and critical data. Static data that has been archived is easily recalled when needed, but does not add to the daily data back-up requirements of the enterprise. This method also provides the additional benefit of freeing up existing disk space without adding required additional capacity.
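An archiving pass boils down to selecting files whose last access is older than some cutoff. The 90-day cutoff and the file records below are assumptions for the sketch, not policy from the text.

```python
# Select archive candidates: files not accessed within a cutoff period.
# The 90-day default and sample records are illustrative assumptions.

from datetime import datetime, timedelta

def archive_candidates(files: dict, now: datetime, days: int = 90) -> list:
    """Names of files whose last access is older than `days` before `now`."""
    cutoff = now - timedelta(days=days)
    return [name for name, last_access in files.items()
            if last_access < cutoff]

now = datetime(2002, 6, 1)
files = {"report.doc":  datetime(2002, 5, 30),   # recently used: keep on disk
         "old_logs.txt": datetime(2001, 11, 1)}  # static: move to tape
print(archive_candidates(files, now))            # ['old_logs.txt']
```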


The Back-up Storage Device

In many cases, the single most expensive item in a back-up project is the back-up storage device itself. Therefore, it is important that the technical specifications of the storage device provide adequate capacity and performance to accommodate existing and planned data. Determining the tape format, number of tape drives, and how many slots are required is predicated on many variables. Back-up windows, growth rates, retention policies, duplicate tape copies, and network and server throughputs all affect which back-up storage device is best for your needs. Table 4.1 compares several different tape technologies.

Table 4.1: Comparison of Various Tape Technologies

Tape Technology          Drive Capacity*    Data Transfer Rate*
4mm DDS-3 DAT            12 GB              1.0 MB per second
8mm Exabyte Mammoth      20 GB              3.0 MB per second
Sony AIT                 25 GB              3.0 MB per second
Quantum DLT 7000         35 GB              5.0 MB per second
Quantum DLT 8000         40 GB              6.0 MB per second
IBM 3590 Magstar         10 GB              9.0 MB per second
StorageTek 9840          20 GB              10.0 MB per second

*native capacity

Tape libraries are sized using two variables: the number of tape drives and the number of slots; manufacturers of tape libraries continue to improve each of these elements. Tape libraries today are available with 5 to 50,000 slots and can support anywhere from 1 to 256 tape drives. Additionally, there are tape libraries available that support multiple tape formats.

When designing a centralized data back-up, take particular care selecting the right back-up storage device. Make sure that it can easily scale as your data rates increase. Verify that the shelf life of the media meets your long-term storage needs. Calculate the required throughput to meet your back-up window and make sure you can support enough tape drives to meet this window.
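The throughput calculation recommended above is simple division plus rounding up. The sketch below sizes the drive count for an assumed 1TB data set and 8-hour window, using the DLT 7000 native rate from Table 4.1; the data set and window are illustrative assumptions.

```python
# Size the tape-drive count for a back-up window: required aggregate
# rate divided by per-drive rate, rounded up. Figures are examples.

import math

def drives_needed(data_gb: float, window_hours: float,
                  drive_mb_per_s: float) -> int:
    """Minimum number of drives to back up data_gb within window_hours."""
    required_mb_per_s = data_gb * 1024 / (window_hours * 3600)
    return math.ceil(required_mb_per_s / drive_mb_per_s)

# Assumed: 1 TB in an 8-hour window on DLT 7000 drives (5 MB/s, Table 4.1)
print(drives_needed(1024, 8, 5.0))   # 8
```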


Today’s global economy means that applications such as e-mail, relational databases, e-commerce, and ERP systems must be accessible and on-line 24 hours a day. Therefore, these applications cannot be shut down to perform administrative tasks such as back-up. A back-up vendor should provide agents for the most common database and e-mail applications that allow these databases to be backed-up without shutting down applications.

Data Interleaving

To back-up multiple systems concurrently, the back-up application itself must be able to write data from multiple clients to tape in an interleaved manner. Otherwise, the clients must be backed-up sequentially, which takes much longer.
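The interleaving just described can be pictured as round-robin multiplexing: chunks from several client streams are written to a single tape stream in turn, so no client waits for another to finish. This is a purely illustrative sketch, not any vendor's on-tape format; the host names and chunk labels are made up.

```python
# Round-robin interleaving of back-up chunks from multiple clients
# onto one tape stream. Hosts and chunk labels are illustrative.

from itertools import zip_longest

def interleave(streams: dict) -> list:
    """(client, chunk) records in the order they would land on tape."""
    tape = []
    for chunks in zip_longest(*streams.values()):
        for client, chunk in zip(streams.keys(), chunks):
            if chunk is not None:       # this client's stream has ended
                tape.append((client, chunk))
    return tape

streams = {"hostA": ["a1", "a2"], "hostB": ["b1", "b2", "b3"]}
print(interleave(streams))
# [('hostA','a1'), ('hostB','b1'), ('hostA','a2'), ('hostB','b2'), ('hostB','b3')]
```

A restore of a single client then has to skip over the other clients' chunks, which is the usual trade-off of interleaved formats: faster back-ups, somewhat slower single-client restores.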

Remote Back-up

Many remote systems are exposed to unrecoverable data loss. Off-site locations are often not backed-up at all due to the cost of deploying hardware and software remotely, and the lack of administrative support in these remote locations. Laptop computers are especially vulnerable to data loss. A back-up application should have a method to back-up systems across a WAN or over dial-up connections.

Global Monitoring

Companies are deploying applications that can be managed and monitored from any location in the enterprise. Back-up applications also need to be able to be accessed and administered from multiple locations. A robust back-up application should be able to support reporting and administration of any back-up system, regardless of location. It should also interface easily with frameworks such as Tivoli and Unicenter.


An enterprise back-up application should be able to benchmark back-up data rates exceeding one terabyte per hour. Such benchmarks show that back-up performance is limited by the hardware and network, not by the application itself.
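To put that terabyte-per-hour figure in context against the drive rates in Table 4.1, it converts to a sustained rate of roughly 291 MB per second:

```python
# Convert the cited 1 TB/hour benchmark into a sustained MB/s rate.

def tb_per_hour_to_mb_per_s(tb_per_hour: float) -> float:
    return tb_per_hour * 1024 * 1024 / 3600

print(f"{tb_per_hour_to_mb_per_s(1.0):.0f} MB per second")   # ~291 MB/s
```

At the native rates in Table 4.1, sustaining that would take dozens of drives working in parallel, which is why such benchmarks exercise the hardware and network rather than the software.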

Now, let’s explore some of the issues in the role of back-up in data recovery and some of the technologies that are available today. Then, let’s take a look at what is still missing in the race to address fast recovery of these exponentially growing data repositories.

[i]Derek Gamradt, “Data Backup + Recovery,” StorNet, Corporate Headquarters, 7074 South Revere Parkway, Englewood, CO 80112, 2001. (©Copyright 2002, StorNet. All rights reserved.)

[ii]John R. Vacca, Electronic Commerce: Online Ordering and Digital Money, 3/E, Charles River Media, 2001.

[iii]StorNet, Corporate Headquarters, 7074 South Revere Parkway, Englewood, CO 80112, 2001. (©Copyright 2002, StorNet. All rights reserved.)




[vii]John R. Vacca, The Essential Guide to Storage Area Networks, Prentice Hall, 2002.

[viii]StorNet, Corporate Headquarters, 7074 South Revere Parkway, Englewood, CO 80112, 2001. (©Copyright 2002, StorNet. All rights reserved.)
