With ever-larger information sets being kept on-line, how quickly can data be restored and brought back into production? It is in addressing this question that the demand has arisen for new and more sophisticated data-management and data-recovery techniques.
Back-up has never really been the problem. A variety of back-up tools (both hardware and software) have been available for a number of years. Although data back-ups would seem to offer an effective shield against data loss, they do not always provide comprehensive protection. That is because the back-up plans developed by many companies are not fully realized or, worse yet, not followed. What is more, individuals often fail to test the restore capabilities of their back-up media. If the back-ups are faulty, a simple data loss can quickly become a data disaster. Finally, even successful back-ups contain only the data collected during the most recent back-up session. As a result, a data loss can still rob you of your most current work, despite your back-up efforts.
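One way to make restore testing routine rather than an afterthought is to compare every restored file against its original by checksum. The sketch below is a minimal illustration in Python; it assumes the back-up has already been restored into a separate directory, and the function and directory names are hypothetical, not taken from any particular product.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Compare every file under source_dir against its restored copy.

    Returns the relative paths that are missing or differ; an empty
    list means the restore reproduced the originals exactly."""
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file() or sha256(src) != sha256(restored):
            problems.append(str(rel))
    return problems
```

Run against a trial restore, a non-empty result list is exactly the early warning that turns a would-be data disaster back into a simple data loss.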
Data back-up and recovery has become the “killer application” of storage area networking.
The ability to move data directly from disk to tape at 100 MB/second offers performance levels unprecedented by today’s standards.
SAN-based back-up offers benefits such as higher availability, increased flexibility, improved reliability, lower cost, manageability, improved performance, and increased scalability.
IT professionals face a number of challenges in today’s marketplace. Whether your business is retail, health care, banking, manufacturing, public utility, government agency, or almost any other endeavor, one thing is common: your users expect more and more from your systems.
Computer systems have grown to manage much of the business world. How does this growth affect your daily management of the data? What challenges do you face? If a problem occurs, how do you get your applications back to normal in the most efficient manner possible?
Some files (especially on Linux systems) can and should be recovered with very little effort or time. Even when recovery does take a great deal of time, wizardly skill is not really required.
Ultimately, however, your odds of getting anything useful from the grave (dead storage) are often a question of personal diligence—how much is it worth to you? If it’s important enough, it’s quite possibly there.
The persistence of data, however, is remarkable. Contrary to the popular belief that it’s hard to recover information, it’s actually starting to appear that it’s very hard to remove something even if you want to.
Indeed, in one test of forensic software on a disk that had been used for some time on a Windows 95, 98, 2000, or XP machine, then reinstalled as a firewall running Solaris, and finally converted to a Linux system, files and data from the two prior installations were clearly visible. Now that’s data persistence!
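This persistence is easy to demonstrate: a run of printable bytes survives on disk until the sector holding it is actually overwritten, regardless of what filesystem wrote it. The following Python sketch (the function name and sample sector are invented for illustration) scans a raw image for such leftover fragments, much as the Unix `strings` utility does.

```python
import re

def carve_strings(image: bytes, min_len: int = 8) -> list[str]:
    """Find runs of printable ASCII in a raw disk image.

    Fragments from earlier installations show up this way even after
    the filesystem that wrote them is long gone, because deletion and
    reformatting rarely overwrite the underlying sectors."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(image)]

# Simulate a sector that was "deleted" but never overwritten:
sector = b"\x00" * 64 + b"password=hunter2 from the old install" + b"\x00" * 64
print(carve_strings(sector))  # → ['password=hunter2 from the old install']
```

Real forensic tools add filesystem-aware carving on top of this, but even the naive scan above recovers a surprising amount from a multiply reinstalled disk.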
Forensic data is everywhere on computers. You need to continue the examination, for you have simply scratched the surface.
The following is a provisional list of actions for data recovery. The order is not significant; however, these are the activities for which you would want a detailed description of procedures, along with review and assessment for ease of use and admissibility. A number of these data-recovery topics have already been mentioned in passing:
Disasters really do happen! Floods, tornadoes, earthquakes, and even terrorism can and do strike. You must be ready.
To be ready, you must have a plan in place. Taking periodic image copies and sending them off-site is the first step.
Performing change accumulations reduces the number of logs required as input to the recovery, which saves time at the recovery site. However, performing this step consumes resources at the home site.
You should evaluate your environment to decide how to handle the change accumulation question.
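Conceptually, change accumulation collapses many change logs into a single input by keeping only the final state of each record, so recovery replays one accumulation instead of every log. A toy Python sketch (the record IDs and values are invented) illustrates the trade: CPU spent merging at the home site buys a shorter critical path at the recovery site.

```python
def accumulate_changes(logs):
    """Collapse a sequence of change logs into one accumulation.

    Each log is a list of (record_id, new_value) updates, oldest log
    first.  Because later updates overwrite earlier ones, replaying
    the accumulation applies only the final state of each record."""
    accumulated = {}
    for log in logs:
        for record_id, value in log:
            accumulated[record_id] = value   # later updates win
    return accumulated

logs = [
    [("acct1", 100), ("acct2", 250)],   # Monday's log
    [("acct1", 175)],                   # Tuesday's log
    [("acct2", 300), ("acct3", 50)],    # Wednesday's log
]
print(accumulate_changes(logs))
# → {'acct1': 175, 'acct2': 300, 'acct3': 50}
```

Three logs and five updates reduce to three records; at production scale the reduction, and the recovery time saved, is far larger.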
Even with a plan, you need to check to make sure you can implement it.
Checking your assets to make sure they’re ready should be part of your plan.
Building recovery JCL (job control language) is tricky, and you need to get it exactly right. Data integrity—and your business—rely on this task.
Cleaning your RECON data sets can take hours if done manually, and it’s an error-prone process. When your system is down, can you afford to make mistakes with this key resource?
Test your plan. Even with this simplistic example, there’s a lot to think about. In the real world, there’s much more. Make sure your plan works before you are required to use it!
You must deal with demands for increased availability, shrinking expertise, growing complexity, failures of many types, and the costs of data management and downtime.