SOURCE ADDRESSES

A source address is the sender's IP address, embedded in the header of every IP packet. When a host receives a packet, it uses the source address as the destination address of its reply. If you spoof your source address, reply packets wind up going to the address you've spoofed, and you never see the results. Worse, spoofing your source address is a lousy technique for anonymity, because most application protocols require a completed TCP connection (something you cannot establish without seeing the replies) before exchanging any information.
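
To make the mechanics concrete, here is a minimal sketch using the scapy packet-crafting library; the addresses are invented for illustration, and the packet is only built and displayed, not sent:

```python
# A minimal sketch of source-address spoofing, assuming the scapy
# packet-crafting library is installed. The packet is built and shown,
# not sent. Replies to it would go to 198.51.100.7, not to us, which
# is exactly why spoofing fails as an anonymity technique for TCP.
from scapy.all import IP, TCP

packet = IP(src="198.51.100.7", dst="203.0.113.10") / TCP(dport=80, flags="S")
packet.show()   # the src field carries whatever address we wrote into it
```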

Something similar to source address spoofing occurs whenever a firewall sits between you and the destination network. Most firewalls translate internal addresses into external addresses, most commonly through Network Address Translation (NAT). Another way to rewrite the source address is to connect to a proxy and ask it to connect to the server you want to visit. This capability is built into Web browsers, which let you specify the IP address and port of the proxy you wish to use. If you've configured your Web browser to use a proxy, the Web server sees the proxy's address as the source. The proxy relays for you transparently.
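
Proxying isn't limited to browsers. As a hedged illustration, the following sketch uses Python's requests library with a hypothetical proxy at 192.0.2.10:8080; the destination server sees the proxy's address, not yours:

```python
# A minimal sketch of relaying a Web request through a proxy, assuming
# a hypothetical proxy listening at 192.0.2.10:8080.
import requests

proxies = {
    "http": "http://192.0.2.10:8080",
    "https": "http://192.0.2.10:8080",
}

# The proxy, not our machine, opens the connection to the destination.
response = requests.get("http://example.com/", proxies=proxies, timeout=10)
print(response.status_code)
```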

Of course, whoever maintains the proxy or firewall has logs of your activity. And the owner of the Web server still has information about you: for example, the type of Web browser you're using, the source IP address, the URL requested, any referring page, the source OS, and sometimes the type of PC.

The Web site's operator can go further still: a Web designer can include JavaScript that collects more information about your browser and OS and attaches it to any form data you return. This information may include your system's real source IP address, which is accessible to JavaScript programs.

Routing to the Rescue

Surprisingly, the U.S. Navy has researched network anonymity. This research formed the basis for the Freedom Network, and may show up in other systems for anonymity as well.

Suppose that you proxy your Web requests through a third party who promises to keep its logs secret. You connect to this server via Secure Sockets Layer (SSL), so anyone sniffing the connection can see only that you're visiting an anonymizer; your final destination site stays hidden inside the encrypted request. It sounds like a reasonable solution, but it hasn't always worked in the past.

In the early 1990s, a site in Finland, anon.penet.fi, provided an anonymous re-mailer. Anonymous re-mailers strip away revealing information from e-mail headers before resending messages to the intended destination. That works well as long as the software manages to remove all the revealing headers and you don't include identifying information in the e-mail you send (for example, an automatic signature file at the end of your e-mail that identifies you).

Penet also supported aliases, so that the person receiving your e-mail could reply to you without learning your identity. To do this, Penet had to keep track of the mapping between your anonymous e-mail address and your real one. Penet worked well until authorities stepped in and demanded that Johan Helsingius, Penet's operator, disclose the mapping for a particular e-mail address because it involved information copyrighted by the Church of Scientology.

If the proxy doesn’t even know your real source address, how can it successfully relay for you? There have been several approaches to this problem, and one of the most recent (as previously discussed) is Onion Routing.

In Onion Routing, instead of a single relaying proxy, there's a network of proxies. Each of these proxies runs the same software, which not only relays your packets but also encrypts them. The first Onion Router chooses a route for your connection, then encrypts your data several times, each time using the public key of one of the routers along the chosen route.

This is where the "onion" comes in: each layer of encryption resembles the skin of an onion. The first Onion Router encrypts your data using the key of the last router in the route; this makes up the innermost layer of the onion, and once that layer is removed, the packet can be sent to its real destination. The first router then adds another layer, which includes the address of the last router and is encrypted with the second-to-last router's key. The next layer carries the address of the second-to-last router but uses the third-to-last router's key, and so on. There should be at least six routers in the route to ensure confidentiality.
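
The layering can be sketched in a few lines of Python. This toy version substitutes symmetric Fernet keys for the routers' public keys (real Onion Routing uses public-key cryptography), and the route names are invented for illustration:

```python
# A minimal sketch of onion-style layered encryption, assuming the
# cryptography package; Fernet keys stand in for the routers' real
# public keys so the example stays self-contained.
from cryptography.fernet import Fernet

route = ["routerA", "routerB", "routerC"]          # entry ... exit (invented)
keys = {name: Fernet.generate_key() for name in route}

def wrap(destination: bytes, payload: bytes) -> bytes:
    # Innermost layer: encrypted with the exit router's key, naming the
    # real destination.
    onion = Fernet(keys[route[-1]]).encrypt(destination + b"|" + payload)
    # Work outward: each layer names the next hop and is encrypted with
    # the key of the router that will peel it.
    for i in range(len(route) - 2, -1, -1):
        onion = Fernet(keys[route[i]]).encrypt(route[i + 1].encode() + b"|" + onion)
    return onion

def relay(onion: bytes) -> bytes:
    # Each router peels exactly one layer and learns only the next hop.
    for name in route:
        peeled = Fernet(keys[name]).decrypt(onion)
        _next_hop, _, onion = peeled.partition(b"|")
    return onion  # the exit router ends up holding the plaintext payload

print(relay(wrap(b"www.example.com", b"GET /")))   # b'GET /'
```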

Onion Routing is even more effective if you run one of the routers. Your Onion Router must also be a full participant in the network, so that other Onion Routers can use it. Otherwise, traffic leaving your Onion Router will consist only of packets from your own network, which can reveal your approximate source even though the content is still encrypted.

Onion Routers present another potential problem: an aggressive attacker could monitor the network traffic of every participating Onion Router and track traffic patterns. For example, you send off a request to http://www.fbi.gov via your Onion Router. The snoop sees traffic leaving your Onion Router, bound for another Onion Router, with a certain packet size. The next router sends off a slightly smaller packet, and so on, until the final router sends the plaintext packet directly to the real destination. Based on the sizes and timing of the packets between routers, the snoop can deduce that the packet came from your network.

Onion Routing defeats this by delaying packets slightly, as well as batching data from several packets. Thus, a snoop cannot make simple deductions about the size and timing of packets. The end user does experience greater latency (delay), but this is the price for greater security.
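
A rough sketch of the padding-and-batching idea follows; the cell size and delay are invented values, not figures from the Onion Routing specification:

```python
# A minimal sketch of the padding-and-batching defense. CELL_SIZE and
# BATCH_WINDOW are illustrative, not taken from any real deployment.
import random
import time

CELL_SIZE = 512          # pad every outgoing message to a multiple of this
BATCH_WINDOW = 0.05      # seconds to wait while collecting a batch

def pad(message: bytes) -> bytes:
    # Pad to a multiple of CELL_SIZE so packet sizes reveal nothing.
    # (A real protocol would encode the true length so padding can be removed.)
    shortfall = -len(message) % CELL_SIZE
    return message + b"\x00" * shortfall

def batch_and_send(queue, send):
    # Collect messages briefly, shuffle, and send them together so a
    # snoop can't match arrival order or timing to departures.
    time.sleep(BATCH_WINDOW)
    batch = [pad(m) for m in queue]
    random.shuffle(batch)
    for cell in batch:
        send(cell)
```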

Onion Routing is only one approach to the problem of network anonymity. AT&T Research (http://www.research.att.com) tried a different approach called "Crowds." The concept behind Crowds is that "Anonymity Loves Company," so the more participants the better. Each Crowd proxy is called a "jondo" (think "John Doe"). Unlike Onion Routing, which relies on layers of public-key encryption, jondos employ secret-key encryption with one key per route, which speeds up processing because symmetric encryption takes far less time than public-key operations. As with Onion Routing, state information is required so that the entry and exit points of a route know where to send packets. This information is discarded at the end of each connection, but could be used to track users.

The Freedom Network used an approach similar to Onion Routing. You could either add a plug-in to Internet Explorer or patch your Linux kernel so that your system became an entry point into the network, with sites other than those run by Zero-Knowledge participating as routers. Zero-Knowledge said it decided in spring 2001 to discontinue the service because it wasn't paying for itself.

As of this writing, the Anonymizer (http://www.anonymizer.com) is still up and running, functioning as a proxy that also strips identifying information from your requests. Although you can use the service for free, free requests are delayed so that you can read ads encouraging you to pay for the service.

You can also acquire software that acts as a local proxy for Web requests. This software removes the USER-AGENT line, and strips away cookies, which can also be used to track your use of a Web site.

Who Needs It?

The Onion Routing project closed down in January 2000, after processing over 30 million requests. Its home page contains an interesting disclaimer, essentially saying that anyone using the Navy’s network should expect their traffic to be monitored—a very chilling statement when one considers the alleged intent of Onion Routing.

Still, government agencies form one of the largest groups of anonymizer users. Anonymizers allow law enforcement to visit Web sites without giving away their identity, or military analysts to collect data without revealing their areas of interest. Such uses of anonymizers are legitimate and actually of value to national security. If only the military and law enforcement used a particular anonymizer, then any visits from that anonymizer would immediately be of interest to someone worried about being investigated.

Anonymizers also have a place for nongovernmental users. While an anonymizer has the potential for misuse (for example, by hiding the identity of visitors to a pornographic site with illegal content), anonymizers have historically had more important and legitimate uses. For example, someone with AIDS could search the Web freely without revealing his or her identity. A person on the verge of committing suicide could ask for help while remaining anonymous, which was one of the actual uses of the original Penet re-mailer. One can only hope that the rush to embrace national security in the United States doesn't claim additional casualties, especially ones that actually enhance national security.

Portscans, Probing, and Denial of Service

Pity the poor Intrusion Detection System (IDS)—it has the reputation of an irritating snitch and the track record to prove it. Perhaps no other security device has done its job so well and then been reviled so roundly for doing it.

Designed to sniff out and warn system administrators when hackers are trying to exploit network vulnerabilities or launch Denial-of-Service (DoS) attacks, the original IDSs did their job all too well. That was both bad and good news.

True to vendor promises, first-generation IDSs generated information (traffic patterns on network segments, aberrations in host log files, and so forth) that could indicate whether systems had been hit with any of the attacks hackers use to break into critical network resources. This required placing IDSs at key locations on the network, such as at firewalls, switches, routers, Web servers, databases, and other back-end devices deeper in the enterprise, which was a straightforward process.

But those IDSs were also overly chatty boxes, renowned for generating mountains of data on traffic passing through networks and on host systems. They cried "Wolf!" too often, reporting false alarms in droves. Consequently, many systems administrators, overwhelmed by tons of information they couldn't digest or didn't understand, simply dumbed the systems down or shut them off entirely.

A Bad Idea

The IDS products on the market are now bigger, better, and faster, and offer much more to those charged with protecting network resources. Vendors have, for instance, developed new intrusion-detection methods that go beyond the pattern-matching, or signature-matching, technology that plagued the earlier products with all those false alarms. They have also increased the performance of their devices, which can now keep up with 100Mbit/sec networks. Vendors are also shipping appliance-like IDSs, which simplify deployment and management. And they've begun delivering products that combine the best of the two principal types of IDSs into a single offering.

Just as importantly, the number of attacks on networking systems is growing. It’s a jungle out there, and network managers need to keep the predators at bay with a variety of security devices, including the IDS.

For example, the nonprofit CERT Coordination Center received reports on 11,071 security incidents in 2001 (the most recent year its incident totals are available). That’s comparable to the 10,960 it received in 2000, and the 9,859 incidents logged for 1999.

In June 2001, CERT reported a rise in the number of attempted intrusions using SubSeven, a Trojan horse that hackers can install on user PCs to gain complete control over system resources. This activity is most likely related to a new worm that seeks out previously compromised systems that already have the SubSeven Trojan horse installed.

In January 2001, the federal government's National Infrastructure Protection Center (NIPC) warned of a related threat, the W32-Leaves.worm, which is also thought to permit a remote attacker to gain complete control of an infected machine, typically using Internet Relay Chat (IRC) channels for communications.

The most virulent threat to emerge from the hacker jungle, though, is clearly DoS and Distributed DoS (DDoS) attacks, the number and variety of which have increased dramatically, according to security organizations. Hackers target DoS attacks at devices and networks with Internet exposure, especially e-commerce sites,[vii] according to the NIPC. The goal of such attacks is to incapacitate a device or network with bandwidth-devouring traffic so that external users can't access those resources, all without hacking password files or stealing sensitive data.

In March 2001, NIPC began investigating a series of organized hacker activities that specifically targeted e-commerce and on-line banking sites. NIPC identified 50 victims in 30 U.S. states attacked by organized groups in Eastern Europe (particularly Russia and Ukraine) that took advantage of vulnerabilities in servers running unpatched versions of Microsoft's Windows NT operating system. Once the Eastern European hackers gained access, they downloaded a variety of proprietary data, mostly customer databases and credit-card information. In this case, the intruders didn't use the information maliciously, per se: they didn't attempt to make purchases with the stolen cards. They did, however, make veiled extortion threats by offering to furnish paid services that would "fix" the unpatched systems.

A Second Look

It’s thus time for network professionals who gave up on the IDS a few years ago to go looking again. And, indeed, market research numbers indicate that more and more of them plan to deploy IDSs in the coming years.

Frost & Sullivan, for example, predicts that the market for intrusion detection software will increase from $73.4 million in 2000 to $375.5 million in 2002 and $547.2 million in 2003. Another research house, IDC (http://www.idc.com), paints a slightly rosier picture, saying that the IDS market stands at $461 million in 2002 and will grow to $554.6 million by 2003.

Several developments have moved the IDS back into prominence. These include the fact that IDSs can now keep up with the high-speed transport technologies found in today's networks, the emergence of IDS "appliances," new intrusion-detection methods, better management tools, and a hybrid approach that combines network- and host-based monitoring, the two basic types of IDSs, under a single console.

The charge is led by many of the usual vendor suspects (Cisco Systems,[viii] Internet Security Systems (ISS), Intrusion.com, NFR Security, and Symantec) as well as numerous newcomers. The latter list includes CyberSafe, Entercept Security Technologies, and Enterasys Networks.

The market has also spawned a growing number of Managed Security Services Providers (MSSPs) with outsourced offerings that include intrusion-detection capabilities. In this area are Activis, Exodus Communications, OneSecure, NetSolve, RedSiren Technologies, Riptech, and Ubizen.

Moving to Anomaly Tracking

As noted, the developments driving the IDS marketplace are improving organizations’ ability to monitor and secure against unwanted attacks, whether intrusions or DoS/DDoS strikes. Arguably, the most critical is the growing use of anomaly-based intrusion detection by vendors of network-based IDSs.

The traditional network-based IDS discovers malicious traffic by detecting the presence of known patterns, a process usually called "signature matching." These systems work much like antivirus software packages: detecting a known "bad" pattern generates an alarm, which makes them effective at discovering known attacks.

On the downside, signature-based network IDSs suffer on two principal accounts. One, they can't see inside encrypted packets: the encryption essentially hides the packet's contents from the IDS, leaving it blind to such assaults. Two, hackers often mutate their attacks, rendering pattern matching useless. Just as an antivirus package can't protect against a new virus until its vendor updates the definitions, an IDS can't catch a new attack until its vendor updates the signature files, and it's not clear how many vendors have figured that out.
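
In miniature, signature matching is just pattern lookup over packet payloads. The signatures below are invented stand-ins; real products ship thousands of far more precise rules that also match on ports, flags, and offsets:

```python
# A minimal sketch of signature matching over a packet payload,
# using an invented toy signature set.
SIGNATURES = {
    b"/etc/passwd": "attempted password-file retrieval",
    b"cmd.exe": "Windows command-shell probe",
    b"\x90" * 30: "long NOP sled (possible buffer overflow)",
}

def match_payload(payload: bytes):
    # Return a description for every known-bad pattern found.
    return [desc for pattern, desc in SIGNATURES.items() if pattern in payload]

print(match_payload(b"GET /../../etc/passwd HTTP/1.0"))
```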

The anomaly-based network IDS uses packet sniffing to characterize and track network activities to differentiate between abnormal and normal network behavior. These devices analyze the data transfer among IP devices, permitting them to discern normal traffic from suspicious activity without pattern/signature matching.

These devices don't care about the content of the data in a session (as signature matching does). They care only about how a session took place: where the connection was made, at what time, and how rapidly (for example, was a suspicious connection to one host followed by a suspicious connection to another host?).

With anomaly-based systems, it's important to get a baseline of what "normal" network traffic looks like. The chief difficulty of this approach is establishing that baseline: knowing what is normal traffic as opposed to a deviation. Signature matching should therefore be coupled with anomaly tracking: an anomaly can be compared against a signature, and if the anomaly doesn't show up on multiple probes, you can ignore it. Cisco Systems, Enterasys Networks, Lancope, Intrusion.com, ISS, and Recourse Technologies are among the vendors that offer anomaly-based network IDS products.
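
A toy baseline check illustrates the idea; the traffic figures and the three-sigma threshold are invented for the example:

```python
# A minimal sketch of anomaly detection against a learned baseline of
# connections per minute; the numbers and threshold are illustrative.
import statistics

baseline = [42, 38, 45, 40, 41, 39, 44, 43]   # connections/min from training

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(connections_per_minute: float) -> bool:
    # Flag traffic that deviates sharply from the learned baseline.
    return connections_per_minute > mean + 3 * stdev

print(is_anomalous(41))    # False: within the normal range
print(is_anomalous(400))   # True: possible scan or DoS
```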

Faster Systems

Most IDSs on the market can now keep up with 100Mbit/sec Ethernet. Beyond that, they begin to drop packets and become less effective; when vendors push their IDS offerings beyond 100Mbits/sec, they're only looking at a subset of packets. You can still find products that will die in 100Mbit/sec networks. Other players that boast IDSs capable of operating in 100Mbit/sec network environments are Cisco and Enterasys.

One vendor, Top Layer Networks, takes an unusual approach to managing intrusion detection in high-speed networking environments. Its product, the AppSafe 3500 IDS, uses “flow mirroring” to copy each packet of a transmission to a specific port on the AS 3500; each port then distributes the traffic stream to separate IDS systems connected to the AS 3500.

Moving to Appliances

Another trend among IDS products is the network-based IDS appliance. Unlike first-generation IDS products, which required installing and configuring the vendor’s intrusion-monitoring software on a PC, these appliances merge hardware and software into a preconfigured unit.

Cisco’s Secure IDS, formerly known as the NetRanger, was among the first such appliances, and IDC believes this makes Cisco the current leader in this area. ISS (working with Nokia), Intrusion.com, and NFR Security (formerly Network Flight Recorder), are also moving their IDS products into the appliance category.

The appliance approach makes sense for several reasons. First, it eliminates many of the performance issues involved in installing IDS software on a general-purpose PC. The IDS software vendor can’t optimize its product for every processor and revision of operating system.

Second, the appliance is a controlled environment, built to vendor specifications, so the IDS software can be configured specifically for the application. Appliance-based IDS boxes also eliminate operating system-related concerns, especially in all-Wintel or all-Unix organizations.

Finally, appliance-based IDSs give plug-and-play capabilities to IT departments in multilocation companies, and to service providers. These are especially valuable for deployment in remote offices, where novice end users can handle the physical connections while leaving setup and configuration to centralized IT staff.

Best of Both Worlds

IDS vendors have recently developed products that merge the capabilities of host- and network-based systems into a single management platform. In these environments, a management console works in conjunction with traffic- and log-analysis tools on the network and host IDS systems to provide a correlated view of network activity.

Correlating data from multiple network sources lowers the incidence of false positives and enables network security personnel to view traffic from a higher level. For instance, a single scan of Port 80 on a Web server via a single router probably would not indicate the presence of an attack, but multiple scans across several routers would.
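
The correlation logic can be sketched simply. The sensor names, addresses, and threshold below are invented; the point is that the same source seen probing via several sensors is far more suspicious than a single probe:

```python
# A minimal sketch of cross-sensor correlation, assuming each sensor
# reports (sensor_id, source_ip, port) scan events; all values invented.
from collections import defaultdict

ALERT_THRESHOLD = 3   # distinct sensors that must see the same source

events = [
    ("router1", "198.51.100.7", 80),
    ("router2", "198.51.100.7", 80),
    ("router3", "198.51.100.7", 80),
    ("router1", "203.0.113.9", 80),    # single probe: likely noise
]

sensors_by_source = defaultdict(set)
for sensor, source, port in events:
    sensors_by_source[source].add(sensor)

for source, sensors in sensors_by_source.items():
    if len(sensors) >= ALERT_THRESHOLD:
        print(f"correlated scan from {source} seen by {sorted(sensors)}")
```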

CyberSafe’s CentraxICE is one of these “hybrid” IDSs that combine the two capabilities. It teams CyberSafe’s own Centrax 3.0 software with Network Ice’s ICEpak network-based IDS to monitor traffic on network segments as well as hosts.

Enterasys Networks' Dragon family is similar to CyberSafe's product. Symantec's IDS products will soon offer hybrid host-network IDS capabilities as well. In addition, Cisco is working with Entercept Security Technologies to add host-based detection to its network-based products.

Although Enterasys sells to enterprises, it’s particularly appealing to MSSPs, who need to monitor IDSs distributed not only throughout a network but also throughout multiple customers’ networks. Enterasys users include MSSPs such as Riptech, OneSecure, and TrustWave, among others.

Outsourcing Intrusion Detection

Advances in IDS technology notwithstanding, organizations worried about unauthorized intrusions and DoS attacks should also consider outsourcing their intrusion-detection needs. Outsourcing intrusion detection to an MSSP, which monitors customers' IDSs via the Internet, can make sense for several reasons, not the least of which is cost. Companies with small staffs and limited security experience can benefit greatly from an MSSP. It would typically require five employees, working three eight-hour shifts (with extra staffing for vacations, sickness, and the like), to handle the 24-by-7 needs of an IDS-monitoring program. Forget about the $20,000 for the IDS: at $60,000 or more per employee per year, five employees cost at least $300,000 annually, before you spend anything on training and retaining security personnel.

Thus, it’s important to sit down and perform a Return on Investment (ROI) study. During this process, IT organizations should ask themselves whether they have the expertise to operate critical systems that can cost a business revenue or customer confidence if they’re compromised due to hacking or DoS or DDoS attacks.

The MSSPs tout the level of security expertise among their employees, claiming that this expertise enables them to better handle the task of deciphering the often arcane IDS logs and alarms that befuddle typical IT employees. There is some truth to this, of course. MSSP Ubizen, for instance, tracks known and emerging security vulnerabilities and exploits and maintains its own database of threats; the information collected gives the analysts in Ubizen's OnlineGuardian service knowledge that few enterprises can afford to develop in-house.

In addition, MSSPs have often deployed tools specifically designed to acquire and correlate information from a wide range of intrusion detection devices and systems. MSSPs Riptech and OneSecure, for example, both indicate that the technology they’ve developed in this area differentiates them from others in the market.

Riptech, for instance, has spent two years developing proprietary data-mining and correlation software for its Caltarian security service. Caltarian permits the company to warn clients while an attack is in progress, with real-time recommendations for protecting their networks.

So, perhaps one shouldn’t pity the IDS after all. No longer an overly chatty box crying “wolf” too often, it now offers network managers an improved set of tools that can finally help them fend off unwanted attacks from insiders and outsiders alike.

Signs of Attempted and Successful Break-ins

Hackers are succeeding more and more in gaining root-privilege control of government computer systems containing sensitive information. Computers at many agencies are riddled with security weaknesses.

When an attacker gets root privileges to a server, he or she essentially has the power to do anything that a systems administrator could do, from copying files to installing software or sniffer programs that can monitor the activities of end users. And, intruders are increasingly doing just that.

The increase in the number of root compromises, denial-of-service attacks, network reconnaissance activities, destructive viruses, and malicious code, coupled with advances in attack sophistication, poses a measurable threat to government systems.

In 2001, 266 systems at 43 federal agencies suffered root compromises in which intruders took full administrative control of the machines, according to the GSA. That’s up from totals of 75 root compromises in 1998 and 221 in 1999. And the government has only a vague idea of what kind of data may have fallen into the wrong hands.

For at least five of the root compromises, officials were able to verify that access had been obtained to sensitive information. But, for the remaining 261 incidents, compromise of any or all information must be assumed. The compromised data involves scientific and environmental studies.

Meanwhile, the U.S. General Accounting Office (GAO), in a report recently released, summarized security audits that have been completed at 35 federal agencies, and indicated it had identified significant security weaknesses at each one. The shortcomings have placed an enormous amount of highly sensitive data at risk of inappropriate disclosure.

The government is going to find itself in "deep, deep trouble" if its IT security procedures aren't improved. If sensitive personal data about U.S. citizens is compromised, Americans are going to wake up angrier than you can possibly imagine.

Many of the thousands of attempts to illegally access federal systems come from abroad. Also, many nations are developing information warfare capabilities as well as adapting cyber crime tools.

Hackers are also exchanging vulnerability information with one another. There is a whole new currency on the Internet: the back door. Attackers are trading information about back doors that provide access to different systems.

One step the government could take to increase the security of its systems is to focus more resources on improving education and training. Computer security experts are scarce and expensive; the average salary is $90,000.

A 1998 directive by then-President Clinton ordered all federal agencies to complete a virtual bulletproofing of their IT systems against attack by May 2003. But officials indicate that most agencies are behind in that work, and only a few are doing penetration testing.

Even more alarming is the fact that many attacks aren't detected at all: no one knows what was done, and no one has a way of finding out.

Forensics

Threats to an enterprise's information infrastructure can come in a number of unexpected forms. Beyond fending off network intrusions and DoS attacks, companies must also stave off threats of industrial espionage.

Layoffs are occurring more frequently these days, and when the disgruntled, newly disenfranchised leave, today’s technology makes it easy for them to sneak off with trade secrets, research materials, client lists, and proprietary software. Increasingly, cyberthieves are raiding corporate servers, electronically stealing intellectual property, and using e-mail to harass fellow employees, putting companies at risk for liability. The impact on the bottom line alone is cause for concern; the American Society of Industrial Security reports the theft of intellectual property in the United States costs businesses almost $3.6 billion annually.

Constant developments in information technology have posed challenges for those policing cyber crime. For many organizations, identifying, tracking, and prosecuting these threats has become a full-time job.

Specialists in computer forensics must use sophisticated software tools and spend enormous amounts of time to isolate anomalies and detect clues for evidence of a cyber crime or security breach. As previously explained, computer forensics is the equivalent of surveying a crime scene or performing an autopsy on a victim. Clues inadvertently left behind after a cyber crime can often be pieced back together to reveal details of wrongdoing and eventually pinpoint the perpetrator.

Although software tools can identify and document evidence, computer forensics is more than just technology and analysis. Safeguards and forensics methodologies ensure that digital evidence is preserved to withstand judicial scrutiny and to support civil or criminal litigation should the matter be brought to trial.

Divining Good Forensics

Obtaining a good digital fingerprint of a perpetrator requires that steps be taken to preserve the electronic crime scene. The systematic search for evidence must adhere to basic guidelines to prevent the inadvertent corruption of original data during the course of investigation. Even booting up or shutting down a system runs the risk of losing or overwriting data in memory and temporary files.

The examination will usually begin with a look at the disk drive. Minimal handling preserves its integrity, so any disk investigation should begin by making a copy of the original, using the least intrusive method available.
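
One common way to demonstrate that a working copy matches the original is a cryptographic digest of each image. The sketch below assumes hypothetical image files named original.img and working.img; in practice, the imaging itself would be done through a hardware write-blocker:

```python
# A minimal sketch of verifying a forensic copy: matching SHA-256
# digests show the working copy is a faithful duplicate. File names
# are hypothetical.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Hash the disk image in chunks so huge files don't exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        while chunk := image.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("original.img")
working = sha256_of("working.img")
print("copy verified" if original == working else "MISMATCH: copy is not faithful")
```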

Today’s forensic software tools, such as DiskSearch Pro and the Law Enforcement Suite from New Technologies, can sniff out storage areas for data that may otherwise go unnoticed. Ambient system data, such as swap files and unallocated disk space, and file “slack” (data padded to the end of files), often hold interesting clues, including e-mail histories, document fragments, Web browsing details, and computer usage timelines.

Be careful to document any inadvertent changes that may occur to the original drive data during extraction. Complying with the rules of evidence preservation and upholding the integrity of the process will help the evidence withstand any future challenges to its admissibility.

Although somewhat trickier than hard drive examination, data communication analysis is another useful forensic tool. Data communication analysis typically includes network intrusion detection, data preservation, and event reconstruction. Isolating suspicious network behavior also requires the use of specialized monitoring software, such as NetProwler from Symantec. Doing so can reveal activities such as unauthorized network access, malicious data-packet monitoring, and any remote system modifications.

Leave It to the Pros

Although today’s sophisticated data-recovery tools have become fairly efficient, the process of recovery remains a tedious, labor-intensive task. And no matter how good the tools, the science of computer forensic discovery draws on multiple disciplines.

Forensics demands a skill set comprising software engineering and a solid familiarity with binary systems and memory usage, disk geometries, boot records, network systems, and data communications. Principles of cryptography are also important for identifying data-encryption and password-protection schemes. And only experience can teach a forensic examiner how to avoid booby traps or an extortionist's logic bomb, items often left to wreak havoc along the path to discovery if not properly dismantled.

For these reasons, it’s often wise to leave the process to the professionals. An expert in forensics will be able to quickly isolate the telltale signs of where to look for clues and will better understand data-discovery technologies as they apply to the legal process.

When selecting a forensic examiner, you should have several goals in mind: your candidate should be familiar with the intricacies of your particular operating systems, know how to protect against data corruption and booby traps, and have a history of court appearances and established controls for evidentiary procedures such as chain of custody.

If you’re looking for more information on computer forensics or getting your staff trained on good procedure and practice, there are a number of good resources at your disposal. As storage capacities and network sizes continue to increase, so do the means by which cyberthieves can circumvent security as well as the effort required to bring them to justice. So, start training to detect the signs of suspicious activity today and learn how forensic computer investigation can protect your corporate assets in these dangerous times.

How a Hacker Works

Obviously, knowing how the hacker’s mind works is only half of the battle. You must also know your network inside and out, identify its vulnerable points, and take the necessary steps to protect it. This part of the chapter will look at some tips and tools administrators can use to prevent those vulnerabilities.

Diagram Your Network

You should begin by diagramming the topology of your network. You can do this with a sophisticated tool such as Visio, or you can use a less complex tool such as Word. Simpler yet, you can draw it by hand. Once you’ve diagrammed your network, identify all the machines that are connected to the Internet, including routers, switches, servers, and workstations. Then, evaluate the security precautions in place on those machines. You want to pay close attention to machines that have a public IP address on the Internet, because they’re the ones that will be scanned by hackers.
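
Once the inventory exists, even a short script can flag the publicly addressable machines. The host names and addresses below are invented; Python's ipaddress module knows the private (RFC 1918) ranges:

```python
# A minimal sketch of flagging publicly addressable hosts from a
# hand-built inventory; names and addresses are invented.
import ipaddress

inventory = {
    "core-router": "203.0.113.1",
    "web-server": "203.0.113.10",
    "file-server": "192.168.1.20",
    "workstation-7": "10.0.4.77",
}

for host, addr in inventory.items():
    ip = ipaddress.ip_address(addr)
    exposure = "private" if ip.is_private else "PUBLIC: audit first"
    print(f"{host:14} {addr:15} {exposure}")
```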

Always-on Means Always-Vulnerable

Currently, the greatest security vulnerability is always-on Internet access using static IP addresses. With always-on access and a static IP, you are like a big bull's-eye sitting on the Internet waiting to get hit. The question is, once hackers get into your network, can they do any damage, or will they be frustrated and move on to the next target? If you have an always-on Internet connection, hopefully you already have a basic security policy and firewall in place on your network. If you have a Web server, mail server, and/or other servers constantly connected to the Internet, your security responsibilities are even greater. Because the Internet is built upon the TCP/IP protocol, many hacker attacks seek to exploit the TCP ports of these servers with public IP addresses. A number of common ports are scanned and attacked:

  • FTP (21)

  • Telnet (23)

  • SMTP (25)

  • DNS (53)

  • HTTP (80)

  • POP3 (110)

  • NNTP (119)

  • IMAP (143)

  • SNMP (161)

    Note 

    You need to identify whether your servers are listening on any of these ports, because they represent well-known attack targets. A quick connect check, such as the sketch that follows this note, can verify which of them are open.
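
As a hedged sketch (run it only against machines you're authorized to test), a simple TCP connect check can verify which of these ports answer. Note that SNMP normally runs over UDP, so a TCP check like this will usually miss it, and nmap (discussed below) does this job far more thoroughly:

```python
# A minimal sketch of checking which well-known ports accept TCP
# connections on a host you are authorized to test.
import socket

COMMON_PORTS = {21: "FTP", 23: "Telnet", 25: "SMTP", 53: "DNS", 80: "HTTP",
                110: "POP3", 119: "NNTP", 143: "IMAP",
                161: "SNMP (normally UDP; a TCP check usually misses it)"}

def scan(host: str):
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            # connect_ex returns 0 when the port accepts the connection
            if sock.connect_ex((host, port)) == 0:
                print(f"{host}:{port} ({service}) is open")

scan("127.0.0.1")
```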

Ways to Protect the Network

There are a number of ways to compensate for these vulnerabilities. First, you can implement firewall filtering. One of the best protections against port attacks is a firewall with dynamic packet filtering, also called a "stateful inspection" firewall. These firewalls open and close ports on an as-needed basis, rather than permanently leaving a port open where it can be identified by a hacker's port scan and then exploited. You can also analyze your system log files to track hacker activity (a simple example follows). A third option is to install an intrusion-detection program that will do much of the log-file examination for you.
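
The log-analysis option can be as simple as counting distinct destination ports per source address. The log format and threshold below are invented for illustration; real firewall and syslog formats vary widely:

```python
# A minimal sketch of spotting a port scan in firewall logs, assuming
# an invented space-delimited format with the source IP first.
from collections import defaultdict

log_lines = [
    "198.51.100.7 DENY tcp dpt:21",
    "198.51.100.7 DENY tcp dpt:23",
    "198.51.100.7 DENY tcp dpt:80",
    "198.51.100.7 DENY tcp dpt:110",
    "203.0.113.9 DENY tcp dpt:80",
]

ports_by_source = defaultdict(set)
for line in log_lines:
    source, _action, _proto, dport = line.split()
    ports_by_source[source].add(dport)

for source, ports in ports_by_source.items():
    if len(ports) >= 4:        # illustrative threshold
        print(f"possible port scan from {source}: {len(ports)} ports probed")
```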

Seeing What the Hacker Sees

In addition to protecting against the well-known vulnerabilities, you need to see what the hacker sees when he looks at your network. The best way to do this is to use nmap, a program that gives you a look at your network from a hacker-like perspective. A company called eEye has released a new version of this program for Windows NT (you can download it at: http://www.eeye.com/html/Research/Tools/nmapNT.html ). The company also offers an industrial-strength network security scanner called Retina, which helps discover and fix known and unknown vulnerabilities. This is an expensive, yet valuable, product.

Note 

You can download the Linux version of nmap at: http://www.insecure.org/nmap/

Software Vulnerabilities

Hackers also often exploit software security problems, taking advantage of behind-the-scenes flaws in the software to gain access to your system. Thus, you should take stock of all the software running on your Internet-exposed systems. Go to the Web sites of the vendors that make each of the software packages and bookmark the page that has updates and patches for that software. Check these sites regularly and always keep your software up-to-date with the latest patches. Some companies even have services that will e-mail you whenever there's a new update or patch.

Security Expert Web Sites

In addition to staying on top of your vendors' security updates and patches, you should also stay current on the security risks and problems identified by security experts in the industry. Vulnerabilities often become known long before a vendor issues a patch, so your systems could be exposed during a period when hackers know about the hole but you don't. Two Web sites that will keep you informed are http://www.L0pht.com and http://www.403-security.org.

Now, let's look at how internal net saboteurs are being brought to justice. These are the computer forensics problems of the present.

[vii]John R. Vacca, Electronic Commerce, Third Edition, Charles River Media, 2001.

[viii]John R. Vacca, High-Speed Cisco Networks: Planning, Design, and Implementation, CRC Press, 2002.


