There are without doubt some very good experts in the field of Computer Forensics Investigations; however, there is a rise in the number of people purporting to be experts or specialists who produce flawed opinions or take actions which are just plain wrong.
The reasons for these errors are manifold but range from peer or management pressure, restricted timescales, and problems with software, to sheer lack of knowledge. Most investigations are basically the same in that they are either proving or disproving whether certain actions have taken place. The emphasis depends on whether the work is for the accuser or the accused.
In many companies, the forensic computer examiner is king because they have more knowledge of the subject than their peers. However, they are still subject to management pressures to produce results, and at times this can color their judgment. Time restrictions can cause them to take shortcuts that invalidate the very evidence they are trying to gather; and, when they do not find the evidence that people are demanding (even if it isn't there), they are subject to criticism and undue pressure.
Many of these specialists are well meaning, but they tend to work in isolation or as part of a hierarchical structure where they are the computer expert. The specialists' management does not understand what they are doing (and probably doesn't want to admit it!), and often they are faced with the question: Can't you just say this.....? It takes a very strong-minded person to resist this sort of pressure, and it has been obvious that this has had an adverse effect in a number of cases.
This sort of pressure comes not only from within the organization, but also from external sources. When you reply, "I'm sorry, it's just not there," or "No, the facts do not demonstrate that," you frequently end up in lengthy high-pressure discussions with the client, which appear to be designed to make you doubt your own valid conclusions.
Working in isolation is a major problem; apart from talking to yourself (first sign of madness), many people have no one else to review their ideas and opinions. This is where having recourse to a team of investigators, software engineers, hardware engineers, and managers who understand (not always a good thing, depending on your point of view!) proves its worth: any doubts or unusual facts can be fully discussed and investigated to ensure that the correct answer is found.
Putting in technical safeguards to spot network intruders or detect denial-of-service attacks at e-commerce servers is a prudent idea. But if your staff doesn’t have the time or skills to install and monitor intrusion-detection software, you might consider outsourcing the job.
Intrusion detection is the latest security service to be offered on an outsourced basis, usually by the types of ISPs or specialized security firms that have been eager to manage your firewall and authentication. Although outsourcing security means divulging sensitive information about your network and corporate business practices, some companies say they have little choice but to get outside help, given the difficulty of hiring security experts.
For example, Memorial Care of Los Angeles operates a private T-1 network for its five hospitals and gives doctors network access from their homes or offices using a VPN connection. Memorial Care hired Pilot Network Services to provide Internet access, VPN, router and firewall support, antivirus content filtering, plus intrusion detection.
Pilot apprises Memorial Care of all attacks occurring on its network address and any type of attempted intrusion through daily reports or notification. Pilot sees 100-plus incidents daily, mostly script-kiddie attacks using readily available tools. Although the attacks are not sophisticated, someone is still rattling the doorknob.
Memorial Care outsourced this security guard function to Pilot primarily because it’s hard to find skilled technicians with specialized knowledge about intrusion detection who will work round-the-clock. Outsourcing security costs Memorial Care less than six figures each year. Pilot doesn’t monitor Memorial Care’s internal network, but the hospital system has deployed its own homegrown intrusion-detection software on critical servers to issue alarms about unauthorized access attempts.
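A homegrown host-based monitor of the kind Memorial Care describes can be as simple as scanning an authentication log for repeated failed logins from the same source. The sketch below is a minimal illustration of that idea, not Memorial Care's actual software; the log line format, the `scan_auth_log` helper, and the alarm threshold are all assumptions.

```python
import re
from collections import Counter

# Hypothetical syslog-style auth log lines; real formats vary by platform.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def scan_auth_log(lines, threshold=3):
    """Return source addresses with at least `threshold` failed logins."""
    failures = Counter()
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            user, source = match.groups()
            failures[source] += 1
    return {src: n for src, n in failures.items() if n >= threshold}

sample = [
    "sshd[311]: Failed password for root from 10.0.0.5",
    "sshd[312]: Failed password for root from 10.0.0.5",
    "sshd[313]: Failed password for invalid user admin from 10.0.0.5",
    "sshd[314]: Failed password for alice from 10.0.0.9",
]
print(scan_auth_log(sample))  # {'10.0.0.5': 3}
```

A production tool would tail the live log and page an operator rather than print, but the core pattern of counting and thresholding events is the same.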
Allowing managed-security services deep into the network remains controversial. For example, Metromedia Fiber Network (MFN) opted against managed security because it would be forced to give away the keys to the castle in some respects. Sensitive information might include which employees or trading partners are allowed to use the intranet and where critical corporate data is stored. Instead of outsourcing, MFN is looking at deploying ISS intrusion-detection software, called RealSecure, on its intranet and staffing a round-the-clock data monitoring center on its own.
Although still in the intrusion-detection software business, ISS last year branched out into managed-security services by opening data centers in Atlanta and Detroit to provide managed firewall, VPN, and intrusion-detection services. ISS has centers in Sweden, Italy, and Brazil, and plans to open a center in Japan.
For the intrusion-detection managed service, corporations have to deploy the ISS RealSecure Network Sensor software in their internal network to remotely monitor traffic across LANs or behind the firewall. Each sensor’s output, once encrypted, is transmitted across the Internet to consoles within the ISS data center, where employees watch for reports of suspicious activity or denial-of-service attacks.
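The sensor-to-console reporting path described above follows a general pattern: serialize the event, protect it, and transmit it for central review. The sketch below illustrates that pattern only; it is not ISS's actual wire format, and the event fields and shared key are assumptions. A real deployment would encrypt the channel (e.g., with TLS); for brevity, only integrity protection with an HMAC is shown here.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-sensor-key"  # assumption; real systems use managed keys + TLS

def package_event(event: dict) -> bytes:
    """Sensor side: serialize an event and append an HMAC tag."""
    payload = json.dumps(event, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def verify_event(message: bytes) -> dict:
    """Console side: check the HMAC before trusting the report."""
    payload, tag = message.rsplit(b"|", 1)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("sensor report failed integrity check")
    return json.loads(payload)

report = package_event({"sensor": "lan-1", "alert": "port-scan", "src": "10.0.0.7"})
print(verify_event(report)["alert"])  # port-scan
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` on the tags could leak timing information to an attacker forging reports.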
ISS also plans to add host-based monitoring of servers, which would require users to buy the RealSecure Server Sensor software. ISS often partners with telephone companies and Web-hosting providers, such as Exodus, and network integrators or consultants, such as PricewaterhouseCoopers, to market its managed security services. Prices typically range from $1,800 to $3,000 per sensor per month. ISS claims to have 2,600 customers using its managed security services, which accounted for almost 11% of ISS' $192 million in revenue in 2000.
MyCIO.com, the Network Associates, Inc. (NAI) application service provider (ASP) division for antivirus and firewall software, has quietly begun managing its customers' intrusion-detection systems because the demand was there. The ASP's technical staff goes onto the customer site to manage not only NAI's CyberCop intrusion-detection software, but also competing products from companies such as ISS and Cisco.[v] It's a growth area for them, and these outsourcing services typically cost a few thousand dollars per month.
The Yankee Group projects that managed-security services (of which intrusion detection is the latest phenomenon) more than doubled from $200 million in 1999 to $450 million in 2000. By 2006, the market is expected to reach $3.7 billion, fueled by the trend toward outsourcing internal LAN security to professional security firms as virtual employees.
Counterpane Internet Security, the managed-security services firm founded in 2000 by cryptography expert Bruce Schneier, has a distinctly different approach to intrusion monitoring than ISS. Counterpane built a black box device, called Sentry™, that it installs in the customer's network to aggregate data output from routers, servers, firewalls, and intrusion-detection software—including that from ISS and Tripwire. Cisco routers and Unix servers are all very chatty, producing megabytes of information each day. Sentry collects all that information and transmits the datastream in encrypted form to the Counterpane data center, which is staffed around the clock. Counterpane now has two centers, in Mountain View, California, and Chantilly, Virginia. Counterpane does need information about the customer's network for this: if the customer goes through a network expansion, Counterpane needs to know about the change; otherwise, it cannot tell what is happening in the network.
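The aggregation job Sentry performs, condensing a chatty stream of device output into something a human analyst can act on, can be sketched in a few lines. This is an illustration of the general pattern, not Counterpane's implementation; the one-line-per-event feed, the `aggregate` helper, and the keyword list are assumptions.

```python
from collections import defaultdict

# Hypothetical keywords that mark an event worth an analyst's attention.
SUSPICIOUS = ("denied", "failed", "refused")

def aggregate(lines):
    """Condense a chatty log stream into per-device counts plus flagged events."""
    counts = defaultdict(int)
    flagged = []
    for line in lines:
        device, _, message = line.partition(": ")
        counts[device] += 1
        if any(word in message.lower() for word in SUSPICIOUS):
            flagged.append(line)
    return dict(counts), flagged

feed = [
    "router1: interface up",
    "router1: access denied from 203.0.113.9",
    "server2: cron job completed",
    "firewall: connection refused on port 23",
]
counts, flagged = aggregate(feed)
print(counts)        # {'router1': 2, 'server2': 1, 'firewall': 1}
print(len(flagged))  # 2
```

Real devices emit megabytes a day, as the text notes, so the value of the aggregator is exactly this reduction: routine events become counts, and only the flagged minority is forwarded for review.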
Counterpane then contacts the corporation to report suspicious findings or an attack in progress, but doesn’t take further actions, such as shutting down access. Those steps are up to the corporation because Counterpane is there as a 24-7 remote monitoring alarm service. Cost typically runs $11,000 per month, and Counterpane services are offered through ISPs and Web host providers, including Exodus and Loudcloud.
Although the big fish, such as AT&T and IBM, are known to offer managed-security services that include intrusion detection, the water is becoming more populated with minnows. Sunnyvale, California, start-up eNet Secure provides intrusion detection for PBX equipment, charging $6,000 per month, with the Air Force and NASA as its first two customers.
Now, let’s briefly look at digital evidence collection. For example, insurance companies interested in reducing fraudulent claims by discovering digital evidence can benefit from this computer forensic service.
Perhaps one of the most crucial points of your case lies hidden in a computer. The digital evidence collection process allows you not only to locate that key evidence, but also to maintain its integrity and reliability. Timing during the collection process is of the essence. Any delay or continued use of the suspect computer may overwrite data prior to the forensic analysis and result in the destruction of critical evidence (see sidebar, “Evidence Capture”). The following are some helpful tips you can follow to help preserve the data for future computer forensic examination:
Do not turn on or attempt to examine the suspect computer. This could result in destruction of evidence.
Identify all devices that may contain evidence:
Off-site computers (laptops, notebooks, home computers, senders and recipients of e-mail, PDAs, etc.)
Removable storage devices (Zip, Jaz, and Orb disks, floppy diskettes, CDs, Sony Memory Sticks, SmartMedia, CompactFlash, LS-120, optical disks, SyQuest, Bernoulli, microdrives, pocket drives, USB disks, FireWire disks, PCMCIA cards)
Network storage devices (RAID arrays, servers, SANs, NAS devices, spanned volumes, remote network hard drives, backup tapes, etc.)
Quarantine all in-house computers:
Do not permit anyone to use the computers.
Secure all removable media.
Turn off the computers.
Disconnect the computers from the network.
Consider the need for court orders to preserve and secure the digital evidence on third party computers and storage media.
Forensically image all suspect media.[vi]
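The imaging step above depends on being able to demonstrate later that the copy is bit-for-bit identical to the original, and cryptographic hashes are the standard way to do that. The sketch below is an illustration of hash verification only, using temporary files in place of real drives; the `image_and_verify` helper is an assumption, not a full forensic imaging tool (which would also handle write-blocking, bad sectors, and chain-of-custody records).

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so even large media fit in constant memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def image_and_verify(source, image):
    """Copy the media, then confirm the image hash matches the original's."""
    original_hash = sha256_of(source)
    shutil.copyfile(source, image)
    if sha256_of(image) != original_hash:
        raise RuntimeError("image does not match original media")
    return original_hash

# Demonstration with a throwaway file standing in for suspect media.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "suspect.bin")
    img = os.path.join(tmp, "suspect.img")
    with open(src, "wb") as f:
        f.write(b"\x00" * 4096 + b"deleted-invoice.doc")
    print(image_and_verify(src, img))
```

Recording the hash at acquisition time is what makes the evidence reproducible: anyone who re-hashes the image later and gets the same value can show the data has not changed since capture.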
One of the fundamental principles of computer investigation is the need to follow established and tested procedures meticulously and methodically throughout the investigation. At no point in the investigation is this more critical than at the stage of initial evidence capture. Reproducibility of evidence is the key. Without the firm base of solid procedures that have been strictly applied, any subsequent nonrepudiation arguments in court will be suspect, and the case as a whole is likely to be weakened.
There have been several high-profile cases recently where apparently solid cases have been weakened or thrown out on the basis of inappropriate consideration given to the integrity and reproducibility of the computer evidence. There are several reasons why this may happen. Lack of training is a prime culprit. If the individuals involved have not been trained to the required standards, or have received no training at all, then tainted or damaged computer evidence is the sad but inevitable result.
Another frequent cause is lack of experience—not only lack of on-site experience, but also a lack of experience with the types of systems that might be encountered. One of the most difficult skills on-site is knowing when to call for help. It is essential that a sympathetic working environment is created so that peer pressure or fear of loss of status and respect does not override the need to call for help. Easier said than done perhaps, but no less essential for that reason.
Finally, sloppiness, time pressure, pressure applied on-site, tiredness, and carelessness have all been contributory factors in transforming solid computer evidence into a dubious collection of files. These totally avoidable issues come down to individual mental discipline, management control and policy, and selecting appropriate staff to carry out the work. These are issues for which there is no sympathy: this is bad work, plain and simple.
Ultimately, any time the collection of computer evidence is called into question, it is potentially damaging to every computer forensic practitioner; it is therefore in everyone’s best interest to ensure that the highest standards are maintained. To use a rather worn phrase from the 1980s American police series Hill Street Blues: “Let’s be careful out there!”
Next, let’s briefly look at drafting a comprehensive and effective computer forensics policy. This type of computer forensics service is used by countless organizations (banks, insurance companies, law firms, local governments, retailers, technology firms, educational institutions, charitable organizations, manufacturers, distributors, etc.).
Often overlooked, detailed policies on the use of computers within an organization are an ever-increasing necessity. Corporations and government agencies are racing to provide Internet access to their employees. With this access, a Pandora’s box of problems is opened. Paramount is loss of productivity; workers can easily spend countless hours on-line daily entertaining and amusing themselves at their employer’s expense. A hostile workplace environment can be created through pornography, potentially exposing the organization to civil liability.
Although protecting your organization from outside threats is clearly important, protecting the organization from internal threats is at least as important, if not more so. According to the 2000 Computer Crime and Security Survey conducted by the Computer Security Institute and the FBI, 56% of the respondents reported unauthorized access to information by persons inside the organization, compared to just 31% who reported intrusions by outsiders. A quarter reported theft of proprietary information, and 70% reported theft of laptop computers. Ninety-one percent reported virus contamination, and a staggering 98% reported systems abuse by insiders (pornography, pirated software, inappropriate e-mail usage, etc.). According to Sextracker, an organization that tracks the on-line pornography trade, 71% of on-line pornography viewing occurs during the 9–5 workday.
Your computer forensics policy manual should, therefore, address all manner of computer-related policy needs. The content should be based on your corporation’s years of experience in employment-related investigations, computer crime investigations, civil litigation, and criminal prosecutions.
Approximately half of the manual should consist of detailed discussions on each of the policy topic areas; the other half should be sample policies that can be readily customized for your organization. The discussions should include topics such as why policies are needed, potential liability, employee productivity considerations, and civil litigation. Safeguarding critical and confidential information should be discussed in detail. The policies should directly address the problems that you would typically find in organizations of all sizes.
Now let’s look at another computer forensics service: litigation support and insurance claims. As the risk increases, so will the interest in policies and the cost of premiums and litigation.
Since its inception, cyberinsurance has been billed as a way for companies to underwrite potential hacking losses that technology cannot protect against. The concept of insuring digital assets has been slow to catch on because the risks and damages were hard to quantify and put a price tag on.
The 9-11-2001 terrorist attacks quickly elevated corporate America’s interest in cyberinsurance, as industry magnates looked for ways to mitigate their exposure to cyberterrorism and security breaches. At the same time, it has become harder to find underwriters willing to insure multimillion-dollar cyberspace policies. For carriers willing to sell such paper, the premiums have skyrocketed.
Prior to 9-11-01, the information security focus had been on critical infrastructure. Post-9-11-01, the focus has shifted to homeland defense and to understanding whether financial institutions and other critical infrastructure, such as telecommunications, are vulnerable to cyberterrorism.
Insurance stalwarts such as Lloyd’s of London, AIG, and Zurich now offer policies for everything from hacker intrusions to network downtime. The breadth of cyberinsurance policies is growing, from simple hacker intrusion, disaster recovery, and virus infection to protection against hacker extortion, identity theft, and misappropriation of proprietary data.
While the market was already moving to provide policies to cover these risks, many executives viewed cyberinsurance as a luxury that yielded few tangible benefits. Many risk managers buried their heads in the sand, believing they would never need anything like cyberinsurance.
There was a naivete on the part of senior management, and IT managers were not willing to admit they had to fix something of that magnitude because they were afraid to ask for the money.
The aftermath of the 9-11-01 attacks illustrates the interconnectedness of all systems: financial services, information and communications, transportation, electrical power, fire, and police all relate in profound ways we are only now beginning to understand. Businesses are starting to think about what position they would be in to recover if something similar to the World Trade Center attack happened to them.
While the cyberinsurance market may reap growth in the wake of the 9-11-01 tragedy, carriers are tightening the terms and conditions of policies. Premiums are going up significantly, and underwriters are hesitating to sign big policies.
In the past, companies seeking a $25 million policy could find someone to cover them. Now, it’s much more difficult. Underwriters who didn’t blink at $5 million or $10 million policies would now rather insure $1 million policies. The marketplace is in transition, and there’s undoubtedly a hardening of trading conditions for both traditional property and casualty insurance and the emerging new e-commerce products.
Premiums on cyberinsurance are an easy mark for price hikes because there’s little historical data on which to set premiums. It’s difficult to pinpoint the losses if data is corrupted, a network is hacked, or system uptime is disrupted. The fear of bad publicity keeps many companies mum on hacking incidents, which makes it more difficult to collect data for projecting future losses.
To develop robust cyberinsurance, two major developments need to take place. First, sufficient actuarial data needs to be collected. Second, insurance carriers need to gain a better understanding of the IT systems in use and how they interact with other information and automated systems.
Industry analysts predict underwriters will push any changes in cyberinsurance offerings and the systems used by policyholders. The first indication of this trend came earlier in 2001, when J.S. Wurzler Underwriting Managers tacked a 5 to 15% surcharge onto cyberinsurance premiums for users of Windows NT on IIS servers, citing their poor security track record, which makes them more expensive to insure. The underwriters are going to force the issue by saying: “Look, if you lose your whole business, if things like that happen, you can expect to pay a higher premium.”
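The effect of such a surcharge is easy to work through. Using a hypothetical base premium (the figure below is an illustration, not a quoted price), a 5 to 15 percent loading works out as:

```python
def surcharged_premium(base, surcharge_pct):
    """Apply a percentage surcharge to a base premium, rounded to cents."""
    return round(base * (1 + surcharge_pct / 100), 2)

base = 100_000  # hypothetical annual premium in dollars
low = surcharged_premium(base, 5)
high = surcharged_premium(base, 15)
print(low, high)  # 105000.0 115000.0
```

In other words, a policyholder on the penalized platform could pay $5,000 to $15,000 more per year on that base premium purely because of the surcharge.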
Now, let’s look at the computer forensic process improvement techniques. These techniques identify the threat to your systems by researching the apparent source of an attack.
[v]John R. Vacca, Planning, Designing, and Implementing High-Speed LAN/WAN with Cisco Technology, CRC Press, 2002.
[vi]“Computers,” Rehman Technology Services, Inc., 18950 U.S. Highway 441, #201, Mount Dora, Florida 32757, 2001.