It is useful to present some fundamental requirements of a forensic data collection system before considering how these can be securely protected. These requirements were chosen to reflect the experience of computer forensic investigators. Other forensic experts may argue against some or all of them:

  1. Forensic data collection should be complete and non-software specific—thus avoiding software traps and hidden partitioning.

  2. In operation, it should be as quick and as simple as possible to avoid error or delay.

  3. It should be possible for anyone to use a forensic data collection system with the minimum amount of training.

  4. Necessary costs and resources should be kept to a minimum.[vii]

To meet the conditions specified in items 2, 3, and 4 in the preceding list, the DIGITAL INTEGRITY VERIFICATION AND AUTHENTICATION protocol must be tailored to suit. For the collection phase to remain quick and simple, the DIGITAL INTEGRITY VERIFICATION AND AUTHENTICATION protocol must not add significantly to the time required for copying, nor should there be additional (possibly complex) procedures.

The time and effort required to introduce links with key management agencies, trusted third parties, key distribution centers, and the similar paraphernalia of digital signatures and document authentication is not necessary. It would add to the cost and complexity with little increase in security. It might mean, for example, that only investigators issued with a valid digital signature would be able to complete copies. Who would issue these? Where would they be stored? How would each individual remember his or her own key? How could misuse of keys be detected?

The DIGITAL INTEGRITY VERIFICATION AND AUTHENTICATION protocol described in the next section is virtually a self-contained system. Quite obviously, a truly self-contained encryption system cannot be cryptographically secure. However, within the DIGITAL INTEGRITY VERIFICATION AND AUTHENTICATION protocol, alternative channels of security are used to provide a truly secure system, but at much lower cost in time and consumables.

[vii]“DIVA Computer Evidence – Digital Image Verification And Authentication,” Computer Forensics UK Ltd, Third Floor, 9 North Street, Rugby, Warwickshire, CV21 2AB, UK, 2002.

The emphasis here is on a practical application of proven technology, so that minimal reliance is placed on the technical ability of the operator/investigator. During the copying process, procedures are implemented to trap and handle hardware errors, mapping exceptions where necessary, and to verify that information is copied correctly.

Within the current DIBS® system, in addition to the raw data content of the suspect disk drive, a copy is also taken of the high section of conventional memory (including any on-board ROM areas) and of the CMOS contents via port access. This information is stored on each cartridge within a copy series.

Also stored on each cartridge is a reference area containing copy-specific information—such as CPU type and speed; hardware equipment indicators; copying drive serial number; cartridge sequence number; exhibit details and reference comments; operator name together with a unique password; and the real date and time as entered by the operator. The remainder (in fact the bulk) of each cartridge contains the information copied from the suspect drive on a sector by sector basis.
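The copy-specific reference information described above can be sketched as a simple record. The field names and types below are illustrative assumptions; the actual DIBS® cartridge layout is proprietary and not documented in the text:

```python
from dataclasses import dataclass

# Sketch of the reference area stored on each cartridge.
# All field names are assumptions drawn from the prose description,
# not the real DIBS(R) on-cartridge format.
@dataclass
class ReferenceArea:
    cpu_type: str                 # CPU type of the copying machine
    cpu_speed_mhz: int            # CPU speed
    hardware_indicators: bytes    # hardware equipment indicators
    copying_drive_serial: str     # copying drive serial number
    cartridge_sequence: int       # cartridge sequence number in the series
    exhibit_details: str          # exhibit details
    reference_comments: str       # free-form reference comments
    operator_name: str            # operator name
    operator_password_hash: bytes # unique operator password (stored hashed here)
    copy_date_time: str           # real date and time as entered by the operator
```

The remainder of the cartridge, holding the sector-by-sector disk copy, sits outside this record.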

For the purposes of the DIGITAL INTEGRITY VERIFICATION AND AUTHENTICATION protocol, the cartridge is divided into blocks of an arbitrarily chosen size. Blocks may contain reference, ROM, CMOS, or disk data depending on their location on the cartridge.

A prespecified area of each cartridge is set aside to store integrity verification information for the blocks on that cartridge. Using the analogy of a bank vault and safety deposit boxes—the storage area containing the integrity verification information pertaining to each block is referred to as a safe box. Also, the whole of the prespecified area where the safe boxes are stored is referred to as the vault.

Safe Boxes and the Vault

As each block is copied and verified, a hash value is generated such that a single bit change anywhere within the block would produce a different hash. The result is stored in the relevant safe box and copying proceeds to the next block.
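This per-block scheme can be sketched as the loop below, which hashes each block as it is copied and collects the digests (the safe boxes) in order. SHA-256 stands in for the unspecified DIVA hash function, and the block size is arbitrary, as the text notes:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # arbitrarily chosen block size, per the text

def hash_block(block: bytes) -> bytes:
    """Return a digest such that a single-bit change anywhere in the
    block produces a different value. SHA-256 is a stand-in here; the
    actual DIVA hash function is not specified in the text."""
    return hashlib.sha256(block).digest()

def fill_safe_boxes(cartridge_data: bytes) -> list[bytes]:
    """Hash each block as it is copied and collect the results
    (the 'safe boxes') in cartridge order."""
    safe_boxes = []
    for offset in range(0, len(cartridge_data), BLOCK_SIZE):
        block = cartridge_data[offset:offset + BLOCK_SIZE]
        safe_boxes.append(hash_block(block))
    return safe_boxes
```

Flipping a single bit in any block changes that block's safe-box value while leaving the others untouched, which is what later makes tampering localisable to a block.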

Once all the blocks relevant to a particular cartridge have been copied and treated in this way, the whole group of safe boxes, collectively referred to as the vault, is treated as an individual block and a vault hash value is generated and stored in the final safe box. The vault is then copied to another area of the cartridge, and this second copy is encrypted.
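The vault-sealing step might look like the following sketch. SHA-256 is again a stand-in for the unspecified hash, and the repeating-key XOR "cipher" is purely a placeholder, since the protocol's actual encryption of the second vault copy is not described in the text:

```python
import hashlib

def seal_vault(safe_boxes: list[bytes]) -> bytes:
    """Treat the whole group of safe boxes (the vault) as a single
    block and hash it; the result is stored in the final safe box."""
    return hashlib.sha256(b"".join(safe_boxes)).digest()

def encrypt_vault_copy(vault: bytes, key: bytes) -> bytes:
    """Placeholder stream cipher (repeating-key XOR) for the encrypted
    second copy of the vault written elsewhere on the cartridge.
    The real cipher used by the protocol is not documented here."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(vault))
```

Because XOR is its own inverse, applying `encrypt_vault_copy` twice with the same key recovers the plaintext vault, which is convenient for illustrating the round trip.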

The vault hash value for each cartridge is stored in a separate area in memory and the operator is prompted to insert a new cartridge until the copy is completed. The final cartridge will contain similar information to the others in the series and in addition will have the accumulated vault hash values from all other cartridges in the series.

Once the final cartridge has been copied, the operator is prompted to insert a preformatted floppy disk into the drive used to start the DIBS® process. All of the accumulated vault hash values are then written to a floppy disk together with the reference details of the whole copy procedure. At least two (identical) floppy disks are created in this manner, although the operator may elect to generate more if he or she wishes.
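The verification disk's contents can be sketched as a serialised record of the accumulated vault hash values plus the copy procedure's reference details. JSON and the function names here are illustrative assumptions, not the actual DIBS® floppy format:

```python
import json

def build_verification_record(vault_hashes: list[bytes], reference: dict) -> bytes:
    """Serialise the accumulated vault hash values together with the
    reference details of the whole copy procedure, as written
    identically to each verification floppy disk. JSON is a stand-in
    for the undocumented on-disk format."""
    record = {
        "reference": reference,
        "vault_hashes": [h.hex() for h in vault_hashes],
    }
    return json.dumps(record, sort_keys=True).encode()

def verify_cartridge(recomputed_vault_hash: bytes,
                     disk_record: bytes,
                     cartridge_index: int) -> bool:
    """Check a cartridge's recomputed vault hash against the value
    sealed on the verification disk."""
    record = json.loads(disk_record)
    return record["vault_hashes"][cartridge_index] == recomputed_vault_hash.hex()
```

At verification time, each cartridge's vault hash is recomputed from its blocks and compared with the sealed copy; a mismatch pinpoints the tampered cartridge.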

The floppy disks are then sealed in numbered, tamperproof envelopes, and both numbers are written on both envelopes. These are then shown to the owner of the computer or his or her legal representative, who chooses one of them. The computer owner is given the chosen floppy, and the other is placed in secure storage. The tamperproof envelopes are printed with instructions on their use and storage, so that the computer owner is aware of the protection he or she is being given. If the computer owner or a legal representative is not available, both disks are placed in secure storage.

Finally, for computer forensics investigators and senior technology strategists, IT security with regards to image verification and authentication is a study in contrasts. On the one hand, it’s a topic that chief information officers (CIOs) repeatedly cite as one of their most important issues, if not the most important. Yet, despite the 9-11-2001 terrorist attacks, CIOs and senior IT executives still suggest that their non-IT colleagues simply do not share their sense of urgency. Perhaps that’s because relatively few security breaches have hit their organizations—and most of those are of the nuisance variety, which doesn’t cost a lot of hard dollars. Unfortunately, security is like insurance: You never know when you’ll need it. With that in mind, let’s take a look at why there isn’t a sense of urgency in implementing image verification and authentication security considerations.

Security Considerations

Day after day, in every company, university, and government agency, a never-ending parry and thrust continues with those who threaten the security of their networks. Because everything changes, the struggle for security is a constant battle: defenses must be updated as new and different threats emerge.

Organizations must be constantly vigilant. New technologies in the areas of image verification and authentication bring new vulnerabilities, and computer forensics investigators are constantly discovering vulnerabilities in old image verification and authentication products. The number of vulnerabilities reported in 2002 is expected to be triple the previous year's figure.

As a result, CIOs are devoting more money and time to image verification and authentication security. The costs will continue to grow as the world becomes more interconnected and as the cleverness of those who would cause harm increases. In 1989, CERT/CC counted fewer than 200 security incidents (for example, the Melissa virus and everything that resulted from it counts as one incident). In 2001 alone, CERT/CC recorded more than 28,000 incidents.


The CERT® Coordination Center (CERT/CC) is a center of Internet security expertise, at the Software Engineering Institute, a federally funded research and development center operated by Carnegie Mellon University. They study Internet security vulnerabilities, handle computer security incidents, publish security alerts, research long-term changes in networked systems, and develop information and training to help you improve security at your site.

No one is immune. When 30 computer security experts involved in a spare-time endeavor called The Honeynet Project hooked a typical computer network to the Internet to see what hackers would do, the network was probed and exploited in 15 minutes. Such hackers are intelligent adversaries who are going to find your weak points. That’s what makes the Honeynet Project different from other kinds of risk management.

Network managers must balance security against the business advantages new technology (e.g., image verification and authentication security) brings. The biggest issue most companies have is how to allow users to do everything they need to do to be efficient from a business standpoint without opening the door to an attack.

Most CIOs do understand the key role employees play in security. Some take precautions when employees are being dismissed—quickly removing their network access, for example. Everyone knows that education is key. Employees can definitely cause problems if they don’t do the right thing.

Employees can also be allies in the battle. Most CIOs view their staff as a strength of their overall information security program. Staff members are the ones who make sure viruses don’t come in and holes aren’t created in the firewall. They have to understand that most business is built on trust, and their role in maintaining that trust is critical.

It’s also necessary to win the support of non-staff members. Usually it’s not an issue because, in the case of a virus outbreak, everybody is affected; the problem is highly visible, and whatever is done seems appropriate to the non-staff employees. Some esoteric measures, such as VPN hardware or encrypting outside communications, are a little harder to sell. Non-staff employees want to know what a measure will cost and what the risk is. They can understand it on a gut level, but, after all, companies are not the Defense Department: they don’t make nuclear arms; they roll and distribute stainless steel or other products.

For example, the CIO should create a team, consisting of him- or herself, the company’s security officer, and the internal auditor, that meets regularly to review risks and then makes recommendations on spending. CERT/CC encourages technical and business-side people to work together because they will bring different perspectives to the matter. This collaboration should be part of an organization-wide, comprehensive plan to prevent companies from focusing on one aspect of security and overlooking others. Security is a mindset and a management practice as much as it is a technology.

Nevertheless, there is always some difficulty in enlisting the support of senior executives at some point. It’s more political than technical. Full communication and credibility are important. You should not try to deal with problems unless there really are problems. You should also avoid affecting the user any more than is absolutely necessary.

Indeed, financial institutions have had to conform to the Gramm-Leach-Bliley Act of 1999, which governs these institutions and the privacy of their customer information.[viii] Privacy is critical by law, and it’s security that enables privacy. Financial institutions have to prove that they are securing their members’ information, and they are subject to at least an annual review. In the past it was “Show me your vaults, show me your cameras, show me your paper-shredders.” Now it’s “Show me your password policy, show me your firewall.”

In the end it’s difficult, perhaps impossible, to measure the return on investment in security. But perhaps that’s the wrong way to think about it. It’s difficult to determine whether CIOs are overspending or underspending. You can’t overspend, really. You have to protect your data. It only takes one time—one hacker getting in and stealing all your financial data. It would be irresponsible on a CIO’s part to not have the toughest image verification and authentication security possible.

[viii]John R. Vacca, Net Privacy: A Guide to Developing & Implementing an Ironclad ebusiness Privacy Plan, McGraw-Hill Professional Publishing, 2001.
