7.7 Scientific Evidence

In addition to challenging the admissibility of digital evidence directly, the tools and techniques used to process digital evidence have themselves been challenged as scientific evidence. Because of the power of science to persuade, courts are careful to assess the validity of a scientific process before accepting its results. If a scientific process is found to be questionable, this may affect the admissibility or weight of the evidence, depending on the situation.

In the United States, scientific evidence is evaluated using four criteria developed in Daubert v. Merrell Dow Pharmaceuticals, Inc., 1993. These criteria are:

  1. whether the theory or technique can be (and has been) tested;

  2. the known or potential rate of error, and the existence and maintenance of standards controlling the technique's operation;

  3. whether the theory or technique has been subjected to peer review and publication;

  4. whether the theory or technique enjoys "general acceptance" within the relevant scientific community.

Thus far, digital evidence processing tools and techniques have withstood scrutiny when evaluated as scientific evidence. However, testing tools or techniques and determining error rates is challenging, and not only in the digital realm. Although many types of forensic examinations have been evaluated using the criteria set out in Daubert, the testing methods have been weak. "The issue is not whether a particular approach has been tested, but whether the sort of testing that has taken place could pass muster in a court of science" (Thornton 1997). Also, error rates have not been established for most types of forensic examinations, largely because no good mechanisms are in place for determining them. Fingerprinting, for example, has been the subject of recent controversy (Specter 2002): although the underlying concepts are quite reliable, in practice there is much room for error. Errors therefore arise not only from flaws in the underlying theory but also from its application. This problem extends to the digital realm and can be addressed with improved standards and training.

One approach to validating tools is to examine their source code. However, as noted earlier, many commercial developers are unwilling to disclose this information. When the source code is not available, another form of validation is performed - verifying results by examining the same evidence with a second tool to ensure that the same results are obtained. Formal testing is being performed by the National Institute of Standards and Technology (NIST), and some organizations and individuals perform informal tests. However, given the rate at which computer technology changes, it is difficult for testers to keep pace and establish error rates for the various tools and systems. Additionally, tool testing does not account for errors introduced by digital investigators through misapplication or misinterpretation. Therefore, the most effective approach to validating results and establishing error rates is peer review - that is, having another digital investigator double-check findings using multiple tools to ensure that the results are reliable and repeatable.
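The dual-tool verification described above can be partly automated. The sketch below is a minimal illustration, not a real forensic tool: it assumes each tool can export an inventory of recovered files as CSV rows of path, size, and MD5 hash (the column names and sample data here are hypothetical), and it flags any file on which the two tools disagree so a digital investigator can examine the discrepancy.

```python
import csv
import io

def load_inventory(csv_text):
    """Parse a tool's CSV export of (path, size, md5) rows into a dict keyed by path."""
    reader = csv.DictReader(io.StringIO(csv_text))
    # Normalize hash case so cosmetic differences between tools are not flagged.
    return {row["path"]: (int(row["size"]), row["md5"].lower()) for row in reader}

def cross_validate(inv_a, inv_b):
    """Return entries where the tools disagree, or where only one tool reports a file."""
    discrepancies = []
    for path in sorted(set(inv_a) | set(inv_b)):
        a, b = inv_a.get(path), inv_b.get(path)
        if a != b:
            discrepancies.append((path, a, b))
    return discrepancies

# Hypothetical exports from two different tools examining the same evidence image.
tool_a = """path,size,md5
/docs/letter.txt,1024,9e107d9d372bb6826bd81d3542a419d6
/docs/ledger.xls,2048,e4d909c290d0fb1ca068ffaddf22cbd0
"""
tool_b = """path,size,md5
/docs/letter.txt,1024,9E107D9D372BB6826BD81D3542A419D6
/docs/ledger.xls,2048,00000000000000000000000000000000
"""

diffs = cross_validate(load_inventory(tool_a), load_inventory(tool_b))
for path, a, b in diffs:
    print(f"Mismatch at {path}: tool A reports {a}, tool B reports {b}")
```

In this example the two tools agree on one file and disagree on the hash of the other; the mismatch does not say which tool is wrong, only that the results are not repeatable and require the kind of human peer review discussed above.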




Digital Evidence and Computer Crime, Second Edition
ISBN: 0121631044
Year: 2003
Pages: 279
