Incident-Response Nightmare

Dave Armstrong was a system administrator supporting the intranet for First Fidelity Bank of Anacanst County. Late one Monday night, Dave watched as a hacker gained full control of all 200+ systems and began wandering through them at will, collecting passwords and perusing data.

Unfortunately, Dave did nothing but watch as he tried to figure out who was on his system in the middle of the night. Although First Fidelity had written policies and procedures for most other situations, there weren't any formal incident-response guidelines. Because Dave had no specific instructions, he spent a full three days trying to identify the hacker without success before escalating the call to the bank's security team.

Just imagine, for a moment, a hacker roaming unchecked through your own bank's network for three days, collecting names and account numbers, possibly even modifying data, transferring funds, or destroying records. Thinking about changing banks? I would be!

How does a situation like this arise? In this case, Dave configured a software server so that it was trusted by the other systems. Trust in this sense meant that anyone with root access on the software server could obtain remote root access to the trusting systems without first supplying a password (a web-of-trust among systems). Several hundred systems trusted the software server.

Although this arrangement makes it easy to distribute new software, it can be risky, especially when the risk and vulnerabilities associated with supporting trust are not clearly understood in the first place. If a system must be configured as a trusted server (no other practical options can be applied), the trusted server absolutely must be secured. Otherwise, any hacker who breaks into the trusted server has immediate root access (no password required) to every system that trusts that server.
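
On classic Unix networks, this kind of host-based trust is typically configured through /etc/hosts.equiv and per-user .rhosts files, which let the r-commands (rsh, rlogin, rcp) skip password checks for listed hosts. The book doesn't spell out First Fidelity's exact mechanism, so treat the following Python sketch as an illustrative audit for spotting overly broad trust entries on a single Unix host; the paths and the simple wildcard heuristic are assumptions, not a description of the bank's setup.

```python
#!/usr/bin/env python3
"""Minimal, illustrative audit: flag overly broad r-command trust entries.

Assumes classic Unix host-based trust via /etc/hosts.equiv and per-user
.rhosts files (rsh/rlogin/rcp). Paths and the wildcard heuristic are
assumptions, not a description of First Fidelity's actual setup.
"""
from pathlib import Path

# hosts.equiv governs ordinary users; root's trust lives in /root/.rhosts.
TRUST_FILES = [Path("/etc/hosts.equiv"), Path("/root/.rhosts")]
TRUST_FILES += [home / ".rhosts" for home in Path("/home").glob("*")]


def risky_entries(path):
    """Yield (line_number, line) for entries that trust too broadly."""
    try:
        lines = path.read_text().splitlines()
    except (FileNotFoundError, PermissionError):
        return
    for number, raw in enumerate(lines, start=1):
        line = raw.strip()
        if not line:
            continue
        # A "+" field means "trust any host" (or "any user" in the second field).
        if "+" in line.split():
            yield number, line


if __name__ == "__main__":
    for trust_file in TRUST_FILES:
        for number, line in risky_entries(trust_file):
            print(f"{trust_file}:{number}: overly broad trust entry: {line!r}")
```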

That's what happened on First Fidelity's intranet. Hundreds of systems in the intranet trusted the software server. As a result, the server provided a tempting target for any hacker seeking entry into the bank's computer network. Dave had no idea that the system was at risk and unable to withstand attack. It never occurred to him (or his manager) that a single unsecured system would open the door to the rest of the network.

For First Fidelity, the web-of-trust was spun far into the depths of their intranet (200+ systems). With hundreds of systems trusting the software server, the server should have been protected with proper security controls. The server, however, was lacking security altogether. It was wide open, just waiting for a hacker to walk right in.

And that's exactly what happened. Once the hacker gained full access to the trusted server, he had remote root access to every system on the network along with it. This hacker didn't have to work very hard to gain control of the entire network.

Let's take a closer look at the details of this break-in and what happened during the days that followed.

Day 1: Unauthorized Access

Dave discovered the hacker's presence at 11:45 Monday night, while doing a routine check of the network. He noticed that some unusual processes were running and that CPU utilization was much higher than normal for such a late hour. This unusual activity sparked Dave's curiosity, so he investigated further. By checking logins, he discovered that Mike Nelson, a member of the bank's security team, was logged onto the system. Mike was a legitimate user, but he shouldn't have logged on without first alerting someone in Dave's group. Was this a hacker masquerading as Mike? Or, was it Mike working on a security problem? If it was Mike, had he forgotten about the prior-notification protocol, or had he deliberately neglected to notify anyone? Dave had no idea. Even worse, he had no idea who to call or what to do.
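
For readers who want something concrete, a routine check like Dave's boils down to two questions: who is logged in, and what is burning CPU at an odd hour? The sketch below is a minimal, modern illustration (it assumes a Linux host with the standard who and procps ps commands, which would not match Dave's actual environment); it simply surfaces the data so a human can judge what looks unusual.

```python
#!/usr/bin/env python3
"""Minimal sketch of a late-night routine check: who is logged in, and
what is using the most CPU. Assumes a Linux host with the standard `who`
and procps `ps` commands (illustrative only; not Dave's actual toolset).
"""
import subprocess


def current_logins():
    """Return one line per interactive login, as reported by `who`."""
    result = subprocess.run(["who"], capture_output=True, text=True, check=True)
    return result.stdout.strip().splitlines()


def busiest_processes(limit=10):
    """Return the header plus the top `limit` processes by CPU usage."""
    result = subprocess.run(
        ["ps", "-eo", "pid,user,pcpu,comm", "--sort=-pcpu"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip().splitlines()[: limit + 1]


if __name__ == "__main__":
    print("Current logins:")
    for line in current_logins():
        print("  " + line)
    print("\nTop CPU consumers:")
    for line in busiest_processes():
        print("  " + line)
```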

What happened next? The same thing that happens to most people the first time they suspect a hacker has broken into their system. Dave experienced a rush of adrenaline, a sense of excitement mixed with fear, and confusion about what kind of action to take. He was alone. It was the middle of the night. If he hadn't been working late, it's possible no one would ever have known of this attack. He decided that since he was responsible for the system, he should do something to regain control. He kicked the user off the system, then rendered the account useless by disabling the user's password. Dave had control of the system again. Thinking his mission was accomplished, Dave went home.

Unfortunately, Dave didn't realize that his action was only a short-term response to the situation. Kicking an unauthorized user off the system often means merely that he's off for the day. It doesn't mean he won't be back. Once a hacker gets into a system, he often leaves back doors that allow for easy access next time. Dave's action left him with a false sense of security. He assumed that he had solved the problem simply by throwing the hacker off the system. But the security problem that let the hacker on in the first place had not been addressed. Dave may have scared the burglar out of the house, but the door was still unlocked.
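
What does "leaving back doors" look like in practice? Common examples are an extra account with UID 0 or a stray setuid-root binary dropped somewhere writable. The following is a minimal sketch of that kind of check on a Unix host; the directories searched and the indicators chosen are illustrative assumptions, and a real response would lean on file-integrity baselines or a rebuild rather than a quick scan.

```python
#!/usr/bin/env python3
"""Minimal sketch of a post-incident back-door check: extra UID-0 accounts
and setuid-root binaries in a few illustrative directories. A real response
would compare against a file-integrity baseline or rebuild the host; the
paths and indicators here are assumptions chosen for brevity.
"""
import os
import stat
from pathlib import Path


def extra_root_accounts(passwd_path="/etc/passwd"):
    """Any account other than 'root' with UID 0 deserves a hard look."""
    suspects = []
    for line in Path(passwd_path).read_text().splitlines():
        fields = line.split(":")
        if len(fields) >= 3 and fields[2] == "0" and fields[0] != "root":
            suspects.append(fields[0])
    return suspects


def setuid_root_files(roots=("/usr/bin", "/usr/sbin", "/tmp", "/var/tmp")):
    """Yield setuid files owned by root; compare the list to a known-good one."""
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    info = os.lstat(path)
                except OSError:
                    continue
                if (stat.S_ISREG(info.st_mode)
                        and info.st_mode & stat.S_ISUID
                        and info.st_uid == 0):
                    yield path


if __name__ == "__main__":
    print("UID-0 accounts other than root:", extra_root_accounts() or "none")
    print("setuid-root files (check against a baseline):")
    for path in setuid_root_files():
        print("  " + path)
```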

Day 2: Problem Fixed

Tuesday morning, Dave described his midnight adventure to his manager and two other system administrators. They discussed the incident for a while, but still had no idea whether the system had been invaded by an unknown hacker or by Mike from security. At any rate, they considered the problem fixed: the suspect account had been disabled, and there were no new unauthorized users on the system. So, they dropped the issue and went back to work. As on most support days, time flew by.

At the end of his shift, Dave logged into the software server just because he was curious. He noticed only one other login, from Ed Begins, the system administrator who ran backups on the servers at night. That seemed normal, even expected. The system was running fine. So, with another 12-hour day under his belt, Dave logged out and went home.

Day 3: Security Is Breached Again

Dave slept in. After all, it was only Wednesday morning and he had already worked 24 hours that week. When he returned to the office that afternoon, he noticed that Ed hadn't logged out of the server the night before. That was odd. Ed worked the graveyard shift and wasn't usually around during the day. Given the unexplained login from Monday, Dave paged Ed to verify his activity on the system. Ed responded to the page instantly. He informed Dave that he had not run any backups the night before, and he wasn't using the system currently. It began to look as though a hacker was masquerading as Ed.

Upon further investigation, Dave discovered that the phony "Ed" was coming from Mike's system. What's more, this user was not only checking to see who else was logged on, but also running a password sniffer. Dave thought that Mike was playing around on the system and currently had access to the system by masquerading as Ed. (Dave never seriously considered the possibility that there was an unknown hacker on his system stealing data.) Dave was seriously annoyed by now. He figured that Mike was causing him to run around in circles and waste his time. Dave's tolerance level was low. He kicked "Ed" off the system, disabled his password, and reported the new development to his manager.
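
A password sniffer such as the one the intruder ran generally has to put a network interface into promiscuous mode so it can capture other hosts' traffic. As a rough illustration (assuming a modern Linux host with iproute2, not the equipment Dave actually had), one might check for that flag as shown below; it is only a hint, since some capture setups do not show up this way.

```python
#!/usr/bin/env python3
"""Minimal sketch: report network interfaces in promiscuous mode, one rough
sign that a sniffer may be capturing traffic. Assumes a Linux host with
iproute2 (`ip`); some capture setups don't raise this flag, so treat any
result as a hint rather than proof.
"""
import re
import subprocess


def promiscuous_interfaces():
    """Return interface names whose link flags include PROMISC."""
    result = subprocess.run(["ip", "-o", "link", "show"],
                            capture_output=True, text=True, check=True)
    names = []
    for line in result.stdout.splitlines():
        # Lines look like: "2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,...> ..."
        match = re.match(r"\d+:\s+(\S+?):\s+<([^>]*)>", line)
        if match and "PROMISC" in match.group(2).split(","):
            names.append(match.group(1))
    return names


if __name__ == "__main__":
    names = promiscuous_interfaces()
    if names:
        print("Interfaces in promiscuous mode:", ", ".join(names))
    else:
        print("No interfaces report promiscuous mode.")
```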

The manager called Mike to ask if he was logged onto the system and using a password sniffer, and to question him about Monday night's activities. Mike emphatically insisted that the mysterious user was not him. Mike also claimed that no hacker could have logged onto his system because he was certain it hadn't been compromised. Mike's opinion was that the hacker must be spoofing; that is, pretending to come from Mike's system but actually originating from somewhere else.

At this point, the situation degenerated to finger-pointing. The system administrators continued to believe that Mike was playing around on the network. Mike continued to insist that the break-in was a spoof and that he was being falsely accused. Everyone lost sleep and wasted more time trying to pin down what had actually happened.

Days 4 to 7: Escalating the Incident

On Thursday, Dave's manager escalated the problem to the bank's security manager and the internal audit department. Several days went by while all parties (the security team, the audit department, and the system administrators) waited for the hacker to reappear.

But the hacker never came back. The internal audit manager was left wondering if there had really been a hacker in the first place. Did kicking him off the system a couple of times discourage any further attacks? Had Mike been hacking around for the fun of it and stopped when he realized that everyone was on to him?

Day 8: Too Late to Gain Evidence

A full week after the break-in, the internal audit department contacted Dave and asked for the technical data he had captured that demonstrated the hacker's activity on the server. Since the bank didn't have a security expert on staff, the audit manager hired me. My job was to review the technical data and determine who broke into the server.

Day 9: Who Was the Bad Guy?

When I arrived, I discussed the case with the audit manager and reviewed the data. Several days had passed since the second break-in, and the hacker had never returned. Unfortunately, I couldn't provide the answer the auditor was looking for, because it was impossible to trace the hacker using the data they had gathered. The information did tell me that the intruder had used a free hacking tool (esniff) that is easily available on the Internet, masqueraded as several legitimate system users, gathered a bunch of passwords, and appeared to be coming from Mike's system. But there wasn't enough data to tell whether the hacker was an outsider, Mike, or someone else in the company.

Because Dave had simply kicked the intruder off the system, there was no way to trace the connection back to its source. Any answer I gave would have been pure guesswork. Interviewing the staff wasn't helpful. Plenty of fingers pointed to Mike, but no one had any evidence. Lacking that, the best I could do was advise the audit manager to have the company develop and implement incident-response procedures right away.

If it was a hacker, it was possible that back doors into the system were left behind. In the corporate world, a week might not seem very long. But in investigating the scene of a computer crime (yes, breaking into systems is a crime!), it's an eternity. When so much time passes between a break-in and an audit, valuable information is modified, lost, and sometimes impossible to track.

I pointed out that the break-in was made possible by the lack of security on the trusted software server, and that the vulnerabilities needed to be corrected. Furthermore, it was impossible to know how the hacker broke into the server, because there were several vulnerabilities the hacker could have exploited to gain root access: old password accounts were still active, file permissions were excessive, security patches weren't installed, and so on. The hacker had his pick of approaches.
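
Two of those weaknesses, stale or passwordless accounts and excessive file permissions, are straightforward to check for. The sketch below illustrates the idea on a Unix host (it assumes a readable /etc/shadow, which requires root, and limits the permission scan to /etc for brevity); patch auditing is vendor-specific and not shown, and the paths are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch of two of the checks above: accounts with empty password
fields and world-writable files. Reading /etc/shadow requires root, and the
permission scan is limited to /etc for brevity. Patch auditing is vendor
specific and not shown; paths are illustrative assumptions.
"""
import os
import stat
from pathlib import Path


def passwordless_accounts(shadow_path="/etc/shadow"):
    """Accounts whose password field is empty can log in with no password."""
    names = []
    for line in Path(shadow_path).read_text().splitlines():
        fields = line.split(":")
        if len(fields) >= 2 and fields[1] == "":
            names.append(fields[0])
    return names


def world_writable_files(root="/etc"):
    """Yield regular files under `root` that anyone on the system can modify."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                info = os.lstat(path)
            except OSError:
                continue
            if stat.S_ISREG(info.st_mode) and info.st_mode & stat.S_IWOTH:
                yield path


if __name__ == "__main__":
    print("Accounts with empty passwords:", passwordless_accounts() or "none")
    print("World-writable files under /etc:")
    for path in world_writable_files():
        print("  " + path)
```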

I told the audit manager that the facts were staring everyone in the face. One unsecured trusted server opened up the entire network. Since the system could have been breached by a real hacker, Dave needed to reinstall the server, add adequate security controls to protect the server, and consider other technical solutions for updating software on their intranet.

I also discussed with the auditor the importance of having a security team you can trust, focusing on the need to thoroughly screen security personnel before hiring. I explained that proper procedures for the security team to follow should be in place, and that all employees should be expected to follow those procedures. Just because they are members of a top-notch security team doesn't mean that they should be able to roam around all of the systems without proper notification. In this case, since a security team member was a suspected culprit, it would have been helpful to have a procedure in place for routing the investigation around the security team to higher management. This type of contingency should be covered under the conflict-of-interest section in the incident-response policy.

Summary: Attacks from the Inside

These two break-ins caused a number of bank staff members to spend a lot of their work time investigating the hacker problem instead of doing their actual jobs. Dave took the problem into his own hands and made important decisions that could have placed the systems and data on his network at risk. He also decided that he was dealing with Mike from the security group without proper evidence to back up his accusation.

Although we'll never know whether Dave was right or wrong in accusing Mike, he was definitely right to recognize that hackers can come from within your network as well as from the outside. As Figure 1-1 clearly illustrates, insiders are a serious risk. Of course, knowing that insiders are a risk and doing something about it are two different things. To protect your data, you need policies, procedures, and training. To many employers, protecting data from their own employees sounds ludicrous. Just remember to look at the 1's and 0's in that data as real money ($$$). Banks don't think twice about implementing adequate controls on the storage of cash. For example, they don't leave the safe wide open so that anyone who works for the bank or any customer who strolls into the bank can walk in and take some of that cash. When data is considered to have the same value as money, security controls on data become a requirement, not an afterthought.

Figure 1-1. Types of Attack or Misuse Detected in the Last 12 Months (by percent)

This time, First Fidelity was lucky. With unrestricted access to the network for three days, the hacker could have destroyed data, shut down systems, or even changed hardware setups. Part or all of the network could have been rendered useless. System administrators could have faced days or even weeks of work just getting the systems running again, assuming that current backups existed.

Hackers can cover their tracks quickly, making it very difficult (and far too often impossible) to trace them back to their starting points. If you don't act right away, you may never even know if data was stolen, modified, or destroyed. For this reason alone, anyone who owns and maintains a computer network must develop clear, specific incident-response procedures.


