IDSs and IPSs can be as simple or as complex as you want them to be. At the simplest level, you could use a packet-capturing program to dump packets to files, and then use commands such as egrep and fgrep within scripts to search for strings of interest within the files.
This approach is not practical, though, given the sheer volume of traffic that must be collected, processed, and stored for the simple level of analysis that could be performed. Yet, even at this rudimentary level, more would be happening than one might imagine. Packets would be collected, and then decoded. Some of these packets would be fragmented, requiring that they be reassembled before they could be analyzed. TCP streams would often need to be reassembled, too.
In a more complex IDS or IPS, additional, more sophisticated operations occur, such as filtering out undesirable input, applying firewall rules, converting certain kinds of incoming data into a format that can be processed more easily, running detection routines on the data, and executing routines such as those that shun certain source IP addresses. Each of these operations, in turn, involves still more internal events and processes.
This chapter covers the internals of IDSs and IPSs, focusing on information flow in these systems, detection of exploits, dealing with malicious code, how output routines work, and, finally, how IDSs and IPSs can themselves be defended against attack.
How does information move internally through IDSs and IPSs? This section answers this question.
IDS and IPS internal information flow starts with raw packet capture. This involves not only capturing packets, but also passing the data to the next component of the system.
As explained in Chapter 6, promiscuous mode means a NIC picks up every packet at the point at which it interfaces with network media (except, of course, in the case of wireless networks, which broadcast signals from transmitters). To be in nonpromiscuous mode means a NIC picks up only packets bound for its particular MAC address, ignoring the others. Nonpromiscuous mode is appropriate for host-based intrusion detection and prevention, but not for network-based intrusion detection and prevention. A network-based intrusion detection/prevention system normally has two NICs—one for raw packet capture and a second to allow the host on which the system runs to have network connectivity for remote administration.
Most packets in today’s networks are IP packets, although AppleTalk, IPX, SNA, and other packets still persist in some networks. The IDS or IPS must save the raw packets that are captured, so they can be processed and analyzed at some later point. In most cases, the packets are held in memory just long enough for initial processing activities to occur; soon afterwards, they are either written to a file or a data structure, to make room in memory for subsequent input, or discarded.
Solving Problems
IDSs and IPSs typically experience all kinds of problems, but one of the most common is packet loss. A frequent variation of this problem is that the NIC used to capture packets receives packets much faster than the CPU of the host on which the IDS/IPS runs is capable of despooling them. A good solution is simply to deploy higher-end hardware.
Another problem is this: the IDS/IPS itself cannot keep up with the throughput rates. Throughput rate is a much bigger problem than most IDS/IPS vendors publicly acknowledge; some of the best-selling products have rather dismal input processing rates. One solution is to filter out some of the input that would normally be sent to the IDS or IPS, as discussed shortly. Another, more drastic solution is to change to a different IDS or IPS. High-end IDSs/IPSs can now process input at rates of up to 2 Gbps (see http://enterprisesecurity.symantec.com/products/products.cfm?ProductID=156 for information about Symantec's ManHunt, for example).
Yet another manifestation of the packet-loss problem is misdirected packet fragments on connections that have an intermediate switching point with a smaller maximum transmission unit (MTU) than that of the remote system. In this case, the problem could, once again, simply be a CPU that is too slow—the problem may not be fragmentation per se. Or, the problem could be inefficient packet reassembly, which is discussed in the section “Fragment Reassembly.”
Whatever the solution to any of these problems might be, it is important to realize that packet loss is a normal part of networking—not exactly anything to panic about (unless, of course, the loss rate starts to get unacceptably high).
Chapters 5 and 6 covered the details of how programs such as tcpdump and libpcap work in connection with packet capture. These programs run in an “endless loop,” waiting until they obtain packets from the network card device driver.
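To make this concrete, the following minimal sketch in C shows the shape of such a loop using libpcap. It assumes libpcap is available and that the capture interface is named eth0; a real IDS front end would hand each packet to its decoder rather than simply printing its length.

/* Minimal libpcap capture loop (a sketch; the interface name "eth0" is an assumption). */
#include <pcap.h>
#include <stdio.h>
#include <stdlib.h>

/* Called once per captured packet; a real IDS would pass the bytes to its decoder. */
static void on_packet(u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("captured %u bytes (on the wire: %u)\n", hdr->caplen, hdr->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Open the NIC in promiscuous mode, 65535-byte snaplen, 1000 ms read timeout. */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return EXIT_FAILURE;
    }

    /* The "endless loop": block until packets arrive, invoking on_packet for each one. */
    pcap_loop(handle, -1, on_packet, NULL);

    pcap_close(handle);
    return EXIT_SUCCESS;
}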
An IDS or IPS does not necessarily need to capture every packet. Filtering out certain types of packets could, instead, be desirable. Filtering means limiting the packets that are captured according to a certain logic based on characteristics, such as type of packet, IP source address range, and others. Especially in very high-speed networks, the rate of incoming packets can be overwhelming and can necessitate limiting the types of packets captured. Alternatively, an organization might be interested in only certain types of incoming traffic, perhaps (as often occurs) only TCP traffic because, historically, more security attacks have been TCP-based than anything else.
Filtering raw packet data can be done in several ways. The NIC itself may be able to filter incoming packets. Although early versions of NICs (such as the 3COM 3C501 card) did not have filtering capabilities, modern and more sophisticated NICs do. The driver for the network card may be able to take bpf rules and apply them to the card. The filtering rules are specified in the configuration of the driver itself. This kind of filtering is not likely to be as sophisticated as the bpf rules themselves, however.
Another method of filtering raw packet data is using packet filters to choose and record only certain packets, depending on the way the filters are configured. libpcap, for example, offers packet filtering via the bpf interpreter. You can configure a filter that limits the particular types of packets that will be processed further. The bpf interpreter receives all the packets, but it decides which of them to send on to applications. In most operating systems filtering is done in kernel space but, in others (such as Solaris), it is done in user space (which is less efficient, because packet data must be pushed all the way up the OSI stack to the application layer before it can be filtered). Operating systems with the bpf interpreter in the kernel are, thus, often the best candidates for IDS and IPS host platforms, although Solaris has an equivalent capability in the form of its streams mechanism (see http://docs.sun.com/db/doc/801-6679/6i11pd5ui?a=view).
Filtering rules can be inclusive or exclusive, depending on the particular filtering program or mechanism. For example, the following tcpdump filter rule

(port http) or (udp port 111) or (len >= 1 and len <= 512)

will result in any packets bound for an http port, or for UDP port 111 (the port used by the portmapper in Unix and Linux systems), or that are between 1 and 512 bytes in length being captured; this is an inclusive filter rule.
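The same kind of rule can be handed to the bpf interpreter programmatically. The following sketch, which assumes a pcap_t handle obtained as in the earlier capture example, compiles the inclusive filter shown above and attaches it to the capture handle:

/* Compile and install the inclusive tcpdump-style filter shown above (a sketch). */
#include <pcap.h>
#include <stdio.h>

int install_filter(pcap_t *handle)
{
    struct bpf_program prog;
    const char *rule = "(port http) or (udp port 111) or (len >= 1 and len <= 512)";

    /* Translate the human-readable rule into bpf bytecode; 1 = optimize the program. */
    if (pcap_compile(handle, &prog, rule, 1, PCAP_NETMASK_UNKNOWN) == -1) {
        fprintf(stderr, "pcap_compile: %s\n", pcap_geterr(handle));
        return -1;
    }

    /* Attach the compiled program; where the OS supports in-kernel bpf,
       non-matching packets are discarded before they ever reach user space. */
    if (pcap_setfilter(handle, &prog) == -1) {
        fprintf(stderr, "pcap_setfilter: %s\n", pcap_geterr(handle));
        pcap_freecode(&prog);
        return -1;
    }

    pcap_freecode(&prog);
    return 0;
}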
Packets are subsequently sent to a series of decoder routines that define the packet structure for the layer two (datalink) data (Ethernet, Token Ring, or IEEE 802.11) collected through promiscuous monitoring. The packets are then further decoded to determine whether each packet is IPv4 (in which case the first nibble of the IP header, the version field, is 4) or IPv6 (in which case that nibble is 6), and whether the IPv4 header carries options (the second nibble gives the header length in 32-bit words, so a value of 5 means a minimal 20-byte header with no options), as well as the source and destination IP addresses, the TCP and UDP source and destination ports, and so forth.
It is quite important to realize just how broken a good percentage of the traffic on the Internet is. Everyone agrees on the steps of the so-called TCP three-way handshake, but numerous instances exist where the RFCs have not done a perfect job of defining behavior for certain protocols. Also, in many other cases, RFCs have been at least partially ignored in the implementation of network applications that use these protocols, so some kind of “sanity check” on these protocols is needed. Packet decoding accordingly examines each packet to determine whether it is consistent with applicable RFCs. The TCP header size plus the TCP data size should, for instance, equal the payload length claimed by the IP header (the IP total length minus the IP header length). Packets that cannot be properly decoded are normally dropped because the IDS/IPS will not be able to process them properly.
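The following sketch illustrates, in C, the sort of decoding and sanity checking described here. It assumes Ethernet framing and checks only IPv4 carrying TCP; everything else is treated as undecodable and dropped:

/* Sketch of decoding an Ethernet/IPv4/TCP packet with basic sanity checks.
   Packets that fail any check would simply be dropped by the IDS/IPS.        */
#include <stdint.h>
#include <stddef.h>

#define ETH_HDR_LEN 14

int decode_ipv4_tcp(const uint8_t *pkt, size_t caplen)
{
    if (caplen < ETH_HDR_LEN + 20)
        return 0;                                /* too short to hold an IP header      */

    const uint8_t *ip = pkt + ETH_HDR_LEN;
    unsigned version  = ip[0] >> 4;              /* first nibble: 4 = IPv4, 6 = IPv6    */
    unsigned ihl      = ip[0] & 0x0F;            /* second nibble: header length / 4    */

    if (version != 4 || ihl < 5)                 /* ihl of 5 = 20-byte header, no options */
        return 0;

    unsigned ip_total = ((unsigned)ip[2] << 8) | ip[3];   /* IP total length field      */
    if (ip[9] != 6)                              /* protocol 6 = TCP                     */
        return 0;
    if (ip_total > caplen - ETH_HDR_LEN || ip_total < ihl * 4 + 20)
        return 0;                                /* claimed length does not fit          */

    const uint8_t *tcp = ip + ihl * 4;
    unsigned tcp_hdr_len = (tcp[12] >> 4) * 4;   /* TCP data offset, in bytes            */

    /* Sanity check: IP header + TCP header + TCP data must add up to the IP total
       length, so the TCP header has to fit inside what the IP header claims.        */
    if (tcp_hdr_len < 20 || ihl * 4 + tcp_hdr_len > ip_total)
        return 0;

    /* TCP payload length = ip_total - ihl * 4 - tcp_hdr_len; pass it on for analysis. */
    return 1;
}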
Some IDSs such as Snort (covered in Chapter 10) go even further in packet decoding in that they allow checksum tests to determine whether the packet header contents coincide with the checksum value in the header itself. Checksum verification can be done for one, or any combination of, or all of the IP, TCP, UDP, and ICMP protocols. The downside of performing this kind of verification is that today’s routers frequently perform checksum tests and drop packets that do not pass the test. Performing yet another checksum test within an IDS or IPS takes its toll on performance and is, in all likelihood, unnecessary. (Despite this, the number of IP fragments per day is large).
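For reference, the Internet checksum computation itself is straightforward. The sketch below verifies an IPv4 header checksum; a header whose ones'-complement sum, with the stored checksum field included, comes out to 0xFFFF is valid:

/* Verify an IPv4 header checksum (RFC 1071 style); returns 1 if valid. A sketch. */
#include <stdint.h>
#include <stddef.h>

int ip_header_checksum_ok(const uint8_t *ip_hdr, size_t hdr_len /* bytes */)
{
    uint32_t sum = 0;

    /* Sum the header as big-endian 16-bit words, checksum field included. */
    for (size_t i = 0; i + 1 < hdr_len; i += 2)
        sum += ((uint32_t)ip_hdr[i] << 8) | ip_hdr[i + 1];

    /* Fold the carries back into the low 16 bits (ones'-complement addition). */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);

    /* A correct header sums to 0xFFFF once the stored checksum is included. */
    return sum == 0xFFFF;
}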
Once each packet is decoded, it is often stored either by saving its data to a file or by assimilating it into a data structure while, at the same time, the data are cleared from memory. Storing data to a file (such as a binary spool file) is rather simple and intuitive because “what you see is what you get.” New data can simply be appended to an existing file or a new file can be opened, and then written to.
But writing intrusion detection data to a file also has some significant disadvantages. For one thing, it is cumbersome to sort through the great amount of data likely to accumulate in one or more files to find particular strings of interest or to perform data correlation, as discussed in Chapter 12. Additionally, the amount of data likely to be written to a hard drive or other storage device presents a disk space management challenge. An alternative is to set up data structures, one for each protocol analyzed, and overlay these structures on the packet data by creating and linking pointers to them.
Taking this latter approach is initially more complicated, but it makes accessing and analyzing the data much easier. Still another alternative is to write to a hash table to condense the amount of data substantially. You could, for example, take a source IP address, determine how many different ports that address has connected to, gather any other information that might be relevant to detecting attacks, and then hash the data. The hashed data can serve as a shorthand for events that detection routines can later access and process.
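A minimal sketch of that idea follows. The table size, hash function, and collision handling here are arbitrary illustrative choices rather than anything a particular product uses; the point is simply that a compact per-source record can stand in for a large volume of raw packets:

/* Condensing per-source activity into a small hash table (illustrative sketch only). */
#include <stdint.h>

#define TABLE_SIZE 4096

struct src_entry {
    uint32_t src_ip;                 /* source address (0 = empty slot)             */
    uint8_t  ports_seen[8192];       /* bitmap: one bit per possible destination port */
    unsigned distinct_ports;         /* how many different ports were touched        */
};

static struct src_entry table[TABLE_SIZE];

/* Record "src_ip connected to dst_port" and return the running distinct-port count. */
unsigned record_connection(uint32_t src_ip, uint16_t dst_port)
{
    unsigned idx = (src_ip * 2654435761u) % TABLE_SIZE;   /* Knuth-style hash */

    /* Linear probing; collisions simply walk forward to the next free slot. */
    while (table[idx].src_ip != 0 && table[idx].src_ip != src_ip)
        idx = (idx + 1) % TABLE_SIZE;

    struct src_entry *e = &table[idx];
    if (e->src_ip == 0)
        e->src_ip = src_ip;

    unsigned byte = dst_port / 8, bit = dst_port % 8;
    if (!(e->ports_seen[byte] & (1u << bit))) {
        e->ports_seen[byte] |= (uint8_t)(1u << bit);
        e->distinct_ports++;         /* detection routines can later flag sources
                                        that touch many ports (a likely scan)     */
    }
    return e->distinct_ports;
}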
Decoding “makes sense” out of packets, but this, in and of itself, does not solve all the problems that need to be solved for an IDS/IPS to process the packets properly. Packet fragmentation poses yet another problem for IDSs and IPSs. A reasonable percentage of network traffic consists of packet fragments with which firewalls, routers, switches, and IDSs/IPSs must deal. Hostile fragmentation, packet fragmentation used to attack other systems or to evade detection mechanisms, can take several forms:
Evading Intrusions
As new and better systems to detect and prevent intrusions emerge, the black-hat community is devoting more effort to discovering more effective ways to defeat them. Many network-based IDS and IPS tools sift through a plethora of data trying to identify signatures of known exploits in the data. Evasion focuses on fooling signature-based attack detection by changing the form of an attack. Fragroute is a tool that works in this manner: it breaks packets into tiny fragments in extremely unusual ways before transmitting them across the network. A network-based IDS or IPS collects these fragments, and then tries to decode and reassemble them before detection routines analyze their data, but it will not be able to do so properly. Older IDSs in particular are vulnerable to fragroute attacks.
Although many types of packet-fragmentation attacks exist, most of them are now passé. Most OS vendors developed patches for the vulnerabilities on which fragmentation attacks capitalized soon after those vulnerabilities were made public. Meanwhile, new releases of operating systems have been coded so they are not vulnerable to these attacks. The same applies to IDSs, IPSs, and firewalls, all of which are likely to detect ill-formed fragments but not process them any further. So, although knowing about packet-fragmentation attacks and their potential perils is important, nothing special normally must be done to defend systems against them, provided you use recent versions of IDSs/IPSs and network devices.
A critical consideration in dealing with fragmented packets is whether only the first fragment will be retained or whether the first fragment, plus the subsequent fragments, will be retained. Retaining only the first fragment is more efficient. The first fragment contains the information in the packet header that identifies the type of packet, the source and destination IP addresses, and so on, information that detection routines can process later. Having to associate subsequent fragments with the initial fragment requires additional resources. At the same time, however, although subsequent fragments often contain little of value to an IDS or IPS, this is not true for many types of attacks, such as those in which many HTTP GET command options are entered and the packets are then broken into multiple fragments. Combining fragments is necessary if you want a more thorough analysis of intrusion detection data.
Fragment reassembly can be performed in a number of ways:
Stream reassembly means taking the data from each TCP stream and, if necessary, reordering it (primarily on the basis of packet sequence numbers), so that the result matches both what the transmitting host sent and what the receiving host saw. This requires determining when each stream starts and stops, something that is not difficult given that TCP communications between any two hosts begin with a SYN packet and end with either a RST (reset) or a FIN/ACK packet.
Stream reassembly is especially important when data arrive at the IDS or IPS in a different order from their original one. This is a critical step in getting data ready to be analyzed because IDS recognition mechanisms cannot work properly if the data taken in by the IDS or IPS are scrambled. Stream reassembly also facilitates detection of out-of-sequence scanning methods.
According to RFC 793, FIN packets should be sent only while a TCP connection is being closed. If a FIN packet is sent to a closed TCP port, the server should respond with an RST packet. So, stream reassembly is critical in recognizing situations such as these (for example, when an ACK packet is sent for a session that was never started). Additionally, determining which of two hosts has sent traffic to the other is a critical piece of information needed by analyzers.
Stream reassembly results in knowing the directionality of data exchanges between hosts, as well as when packets are missing (in which case a good IDS/IPS will report this as an anomaly). The data from the reassembled stream are written to a file or data structure, again, either as packet contents or byte streams, or are discarded.
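The core of the reordering step can be sketched as follows. This fragment tracks one direction of one TCP connection, keeps out-of-order segments sorted by sequence number, and hands contiguous data to a detection callback; it deliberately omits the overlap, retransmission, and timeout policies discussed next:

/* Sketch of per-stream reordering by TCP sequence number (one direction of one
   connection; real reassembly also tracks ACKs, windows, and overlap policy). */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct segment {
    uint32_t seq;                /* sequence number of the first payload byte */
    uint32_t len;                /* payload length                            */
    uint8_t *data;
    struct segment *next;        /* list kept sorted by seq                   */
};

struct stream {
    uint32_t next_seq;           /* next byte we expect to hand to detection  */
    struct segment *pending;     /* out-of-order segments waiting their turn  */
};

/* Insert an arriving segment into the stream's sorted pending list. */
void stream_add(struct stream *s, uint32_t seq, const uint8_t *data, uint32_t len)
{
    struct segment *seg = malloc(sizeof *seg);
    seg->seq = seq;
    seg->len = len;
    seg->data = malloc(len);
    memcpy(seg->data, data, len);

    struct segment **p = &s->pending;
    while (*p && (*p)->seq < seq)
        p = &(*p)->next;
    seg->next = *p;
    *p = seg;
}

/* Hand contiguous, in-order data to the detection routines; gaps (missing packets)
   leave data pending and would be reported as an anomaly by a good IDS/IPS.      */
void stream_flush(struct stream *s, void (*deliver)(const uint8_t *, uint32_t))
{
    while (s->pending && s->pending->seq == s->next_seq) {
        struct segment *seg = s->pending;
        deliver(seg->data, seg->len);
        s->next_seq += seg->len;
        s->pending = seg->next;
        free(seg->data);
        free(seg);
    }
}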
Stream reassembly may sound simple but, in reality, it is rather complicated because many special conditions must be handled. Policies based on system architecture usually dictate how stream reassembly occurs under these conditions.
From an IDS/IPS point of view, it is critical to know what the policy for overlapping fragments is on each target host, for example, whether only the first fragment or all fragments are retained. One host's TCP/IP stack may drop overlapping fragments, whereas another may attempt to process them. Retransmissions in TCP connections pose yet another problem. Should data from the original transmission or from the subsequent retransmission be retained?
The issue of whether the target host handles data in a SYN packet properly is yet another complication in stream reassembly. According to RFC 793, data can be inserted into a SYN packet, although this is not usually done. This also applies to FIN and RST packets. Should these data be included in the reassembled stream? Failure to include them could mean the detection routines to which the stream will be sent could miss certain attacks.
The packet timeout is another issue. What if a packet from a stream arrives after the timeout? How does the reassembly program handle this? Once again, the program doing the session reconstruction needs to know the characteristics of the target host to understand what this host saw. You might think the session has been properly reassembled when, in fact, it has not. The saving grace is that you can assume something is wrong whenever you see overlapping fragments. Reacting to overlapping fragments is usually unnecessary; simply flagging the fact that they have been sent is sufficient, at least for intrusion-detection purposes.
A type of stream reassembly with UDP and ICMP traffic can also be done but, remember, both these protocols are connectionless and sessionless and, thus, do not have the characteristics TCP stream reassembly routines use. Some IDSs/IPSs make UDP and ICMP traffic into “pseudosessions” by assuming that whenever two hosts are exchanging UDP or ICMP packets with no pause of transmission greater than 30 seconds, something that resembles the characteristics of a TCP session (at least to some degree) is occurring. The order of the packets can then be reconstructed. This approach to dealing with UDP and ICMP traffic is based on some pretty shaky assumptions but, nevertheless, useful analyses could be subsequently performed on the basis of data gleaned from these reassembled pseudosessions.
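A sketch of the pseudosession bookkeeping might look like the following, with the 30-second silence threshold taken from the description above and everything else an illustrative simplification:

/* Sketch of grouping UDP/ICMP packets into "pseudosessions": packets between the
   same pair of hosts with no gap longer than 30 seconds are treated as one unit. */
#include <stdint.h>
#include <time.h>

#define PSEUDO_TIMEOUT 30   /* seconds of silence that closes a pseudosession */

struct pseudosession {
    uint32_t host_a, host_b;     /* the two endpoints (order-normalized)   */
    time_t   last_seen;          /* timestamp of the most recent packet    */
    unsigned packet_count;
};

/* Returns 1 if this packet continues the existing pseudosession, 0 if it starts
   a new one (the caller would then close out and analyze the old one).          */
int pseudosession_update(struct pseudosession *ps, uint32_t src, uint32_t dst,
                         time_t now)
{
    uint32_t a = src < dst ? src : dst;   /* normalize so direction does not matter */
    uint32_t b = src < dst ? dst : src;

    if (ps->packet_count > 0 &&
        ps->host_a == a && ps->host_b == b &&
        now - ps->last_seen <= PSEUDO_TIMEOUT) {
        ps->last_seen = now;
        ps->packet_count++;
        return 1;
    }

    /* Gap exceeded 30 seconds (or different host pair): start a fresh pseudosession. */
    ps->host_a = a;
    ps->host_b = b;
    ps->last_seen = now;
    ps->packet_count = 1;
    return 0;
}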
Note
In general, the current generation of IDSs and IPSs actively reassemble IP fragments and TCP streams. Accordingly, certain types of fragmentation attacks that used to be able to evade these systems’ recognition capabilities no longer work. Other attacks are based on how differing operating systems put fragments back together, but even these types of attack are being thwarted by IDSs and IPSs that are aware of the flavor of the OS at the destination and that reassemble the stream appropriately for each particular OS.
Stateful inspection of network traffic is a virtual necessity whenever the need to analyze the legitimacy of packets that traverse networks presents itself. As mentioned previously, attackers often try to slip packets they create through firewalls, screening routers, IDSs, and IPSs by making the fabricated packets (such as SYN/ACK or ACK packets) look like part of an ongoing session or like one being negotiated via the three-way TCP handshake sequence, even though such a session was never established.
IDSs and IPSs also have a special problem in that if they were to analyze every packet that appeared to be part of a session, an attacker could flood the network with such packets, causing the IDSs and IPSs to become overwhelmed. An IDS evasion tool called stick does exactly this.
Current IDSs and IPSs generally perform stateful inspection of TCP traffic. These systems generally use tables in which they enter data concerning established sessions, and then compare packets that appear to be part of a session to the entries in the tables. If no table entry for a given packet can be found, the packet is dropped. Stateful inspection also helps IDSs and IPSs that perform signature matching by ensuring this matching is performed only on content from actual sessions. Finally, stateful analysis can enable an IDS or IPS to identify scans in which OS fingerprinting is being attempted. Because these scans send a variety of packets that do not conform to RFC 793 conventions, the scans “stand out” in comparison to established sessions.
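A stripped-down sketch of such a session table follows. It keys sessions on the source and destination addresses and ports, admits new entries only on a SYN, and drops anything that claims to belong to a session the table does not know about; production systems track far more state (sequence numbers, timers, teardown), but the principle is the same:

/* Sketch of stateful inspection: only packets that belong to a tracked session
   (or that legitimately open one with a SYN) are passed on for further analysis. */
#include <stdint.h>
#include <string.h>

#define SESSIONS 8192

struct session_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

static struct session_key sessions[SESSIONS];
static int in_use[SESSIONS];

static unsigned hash_key(const struct session_key *k)
{
    return (k->src_ip ^ k->dst_ip ^ ((unsigned)k->src_port << 16) ^ k->dst_port)
           % SESSIONS;
}

/* Returns 1 if the packet should be processed further, 0 if it should be dropped
   (for example, a forged SYN/ACK or ACK that matches no established session).     */
int stateful_check(const struct session_key *k, int is_syn)
{
    unsigned idx = hash_key(k);

    if (in_use[idx] && memcmp(&sessions[idx], k, sizeof *k) == 0)
        return 1;                 /* part of a tracked session                  */

    if (is_syn) {                 /* a new connection attempt: start tracking it */
        sessions[idx] = *k;
        in_use[idx] = 1;
        return 1;
    }

    return 0;                     /* claims to be mid-session but is unknown     */
}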
Earlier in this chapter, you learned that part of the internal information flow within an IDS and IPS includes filtering packet data according to a set of rules. Filtering is essentially a type of firewalling, even though it is relatively rudimentary. But, after stateful inspection of traffic is performed, more sophisticated firewalling based on the results of the inspection can be performed. While the primary purpose of filtering is to drop packet data that are not of interest, the primary purpose of firewalling after stateful inspection is to protect the IDS or IPS itself. Attackers can launch attacks that impair or completely disable the capability of the IDS or IPS to detect and protect. The job of the firewall is to weed out these attacks, so attacks against the IDS or IPS do not succeed. Amazingly, a number of today’s IDSs and IPSs do not have a built-in firewall that performs this function. Why? Because a firewall limits performance.
Figure 7-1 shows how the various types of information processing covered so far are related to each other in a host that does not support bpf. The NIC collects packets and sends them to drivers that interface with the kernel. The kernel decodes, filters, and reassembles fragmented packets, and then reassembles streams. The output is passed on to applications.
Figure 7-1: Information processing flow without bpf
Figure 7-2 shows how information processing occurs when an operating system supports bpf. In this case, a program such as libpcap performs most of the work that the kernel performs in Figure 7-1, and then passes the output to bpf applications.
Figure 7-2: Information processing flow with bpf
We turn our attention next toward how IDSs and IPSs detect exploits. After defining various kinds of attacks that IDSs and IPSs are designed to detect and prevent, we consider several matching methods: signature matching, rule matching, profiling, and others.
The number of exploits with which an IDS or IPS potentially must deal is in the thousands. Snort, a tool discussed in Chapter 10, has over 2000 exploit signatures, for example. Although describing each exploit is not possible in this book, the SANS Institute (http://www.sans.org) has helped the user community become acquainted with the most frequently exploited vulnerabilities by publishing the “SANS Top 20 Vulnerabilities” at http://isc.sans.org/top20.html. Consider the top 20 vulnerabilities at press time.
Top Vulnerabilities in Windows Systems
A list of Windows systems’ top vulnerabilities follows:
Top Vulnerabilities in Unix Systems
A list of Unix systems’ top vulnerabilities follows:
Although these 20 vulnerabilities are currently the most exploited ones, an IDS or IPS that recognized nothing more than the exploits for these vulnerabilities would be a dismal failure. But, because of the frequency with which these vulnerabilities are exploited in real-life settings, failure to recognize any of these 20 vulnerabilities would also be catastrophic.
The SANS Top 20 List: How Current?
The SANS Top 20 List has proven extremely useful, but it is not always updated as quickly as it needs to be. At press time, for example, the SANS Top 20 List did not include an at-the-time new, serious vulnerability in Windows systems that the MSBlaster worm and its variants exploited. The problem is in an RPC function that deals with TCP/IP message exchanges, particularly in the manner it handles improperly formed messages. The Distributed Component Object Model (DCOM) interface with RPC listens for client requests for activating DCOM objects via TCP port 135. An attacker could forge a client RPC request with a group of specially crafted arguments in a packet, causing the server that receives this packet to execute the arguments with full privileges. This vulnerability will undoubtedly be included in the next SANS Top 20 List.
We’ll start with the simplest type of matching, signature-based matching.
How Signature Matching Works
As discussed previously, a signature is a string that is part of what an attacking host sends to an intended victim host that uniquely identifies a particular attack. In the case of the exploit for the ida script mapping buffer overflow vulnerability (see the sidebar “How Vulnerabilities Are Exploited”), the string .ida? is sufficient to distinguish an attempt to exploit the buffer overflow condition from other attacks, such as exploiting a buffer overflow condition in the idq script mapping.
Signature matching means input strings passed on to detection routines match a pattern in the IDS/IPS’s signature files. The exact way an IDS or IPS performs signature matching varies from system to system. The simplest, but most inefficient, method is to use fgrep or a similar string search command to compare each part of the input passed from the kernel to the detection routines to lists of signatures. A positive identification of an attack occurs whenever the string search command finds a match.
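A sketch of this simplest approach follows. The signature strings are illustrative only; real signature sets number in the thousands, which is exactly why this linear, fgrep-style scan does not scale:

/* The simplest (and slowest) form of signature matching: scan each payload for
   every known attack string, fgrep-style. A sketch, not how production IDSs work. */
#include <string.h>

static const char *signatures[] = {
    ".ida?",                     /* ida script mapping buffer overflow attempt   */
    "/default.ida?NNNNNNNN",     /* Code Red style request (illustrative string) */
    0
};

/* Returns the index of the first matching signature, or -1 if none matches. */
int match_signatures(const char *payload)
{
    for (int i = 0; signatures[i] != 0; i++)
        if (strstr(payload, signatures[i]) != 0)
            return i;            /* positive identification: raise an alert */
    return -1;
}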
How Vulnerabilities Are Exploited
How are vulnerabilities exploited? Consider the first vulnerability from the SANS Top 20, buffer overflows in IIS script mappings. One of the script mappings that the IIS Indexing Service Application Programming Interface (ISAPI) uses is ida (Internet data administration). This script mapping has an unchecked buffer when it is encoding double-byte characters. Whoever wrote this script mapping did not consider what would happen if this kind of input were sent to it. Ida.dll should not accept this kind of input at all. An attacker can exploit this problem in an IIS server that has not been patched by sending the following input to the server:
GET /default.ida?NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNnnnnnnnn%u9090%u6858%ucbd3%u7801% %u9090%u6858%ucbd3%u7801%%u9090%u6858%ucbd3%u7801%%u9090%u9090%u8190%u 00c3%u0003%u8b00%u531b%u53ff%u0078%u0000%u00=a HTTP/1.0
240 or more Ns (or any other character; the specific character does not matter) overflow the buffer, at which point the remaining input spills over into memory, where the commands (the portion of the previous input after the string of Ns; note that they are in Unicode escape format) will be executed. A similar vulnerability also exists in the Internet data query (idq) script mapping. Interestingly, these vulnerabilities are the ones exploited by the Code Red worm family.
More sophisticated types of signature matching exist, however. For example, Snort, an IDS covered in Chapter 10, creates a tree structure of attack patterns and uses a search algorithm that results in only patterns relevant to the particular packets being matched in a particular branch of the tree being used in the matching process. This produces a far more efficient search process.
Evaluation of Signature Matching
Signature-based matching is not only simple, it is also intuitive. People who understand little about intrusion detection quickly understand that certain input patterns, if picked up by an IDS or IPS, indicate an attack has occurred. But few additional advantages of signature-based matching exist.
Some of the major limitations of signature-based matching include the following:
quote site exec exec echo toor::0:0::/:/bin/sh >> /etc/passwd
vipw /etc/passwd.adjunct
can constitute attacks, but may sometimes also constitute legitimate usage (for example, by system administrators). Signatures are all-or-none in nature—in and of themselves, they reveal virtually nothing about the context of the input. Any time one of the predesignated signatures comes along, a signature-based IDS or IPS issues an alarm.
We’ll next turn our attention to rule matching, another matching method.
How Rule Matching Works
Rule-based IDSs and IPSs are, as their name implies, based on rules. These types of IDSs hold considerable promise because they are generally based on combinations of possible indicators of attacks, aggregating them to see if a rule condition has been fulfilled.
Signatures themselves may constitute one possible indication. In some cases (but not usually), a signature that invariably indicates an attack may be the only indicator of an attack that is necessary for a rule-based IDS or IPS to issue an alert. In most cases, though, particular combinations of indicators are necessary. For example, an anonymous FTP connection attempt from an outside IP address may not cause the system to be suspicious at all. But, if the FTP connection attempt is within, say, 24 hours of a scan from the same IP, a rule-based IDS should become more suspicious. If the FTP connection attempt succeeds and someone goes to the /pub directory and starts entering cd .., cd .., cd .., a good rule-based IDS or IPS should go crazy. This is because what we have here is most likely a dot-dot attack (in which the intention is to get to the root directory itself) with the major antecedent conditions having been present. This example is simple, yet it is powerful. Real rule-based systems generally have much more sophisticated (and thus even more powerful) rules.
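The following sketch shows how such a rule might be expressed in code. The indicators, time window, and scores are illustrative, not drawn from any actual product, but they show how separate observations accumulate toward a rule condition being fulfilled:

/* Sketch of a rule that correlates indicators over time: a port scan from an
   address, followed within 24 hours by an anonymous FTP login and repeated
   "cd .." requests, raises the suspicion level step by step. All thresholds
   here are illustrative.                                                      */
#include <time.h>

#define DAY_SECONDS (24 * 60 * 60)

struct host_state {
    time_t   last_scan;          /* when we last saw this host scanning us       */
    int      anon_ftp_open;      /* anonymous FTP session currently active       */
    unsigned dotdot_count;       /* consecutive "cd .." commands in that session */
};

/* Returns a suspicion score; the output routines might alert above, say, 2. */
int evaluate_ftp_rule(struct host_state *h, time_t now, int saw_scan,
                      int saw_anon_ftp, int saw_cd_dotdot)
{
    int score = 0;

    if (saw_scan)
        h->last_scan = now;

    if (saw_anon_ftp) {
        h->anon_ftp_open = 1;
        h->dotdot_count = 0;
        /* FTP attempt soon after a scan from the same address: mildly suspicious. */
        if (h->last_scan != 0 && now - h->last_scan <= DAY_SECONDS)
            score = 1;
    }

    if (saw_cd_dotdot && h->anon_ftp_open) {
        h->dotdot_count++;
        if (h->dotdot_count >= 3)
            score = 3;           /* likely dot-dot attack: the rule has been met */
    }

    return score;
}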
Evaluation of Rule Matching
The main advantages of rule matching are as follows:
The Pros and Cons of Protocol Analysis
Using rules based on protocol behavior has become increasingly popular (especially in IDSs) over the years, and for good reason. Deviations from expected protocol behavior (each protocol is supposed to behave according to RFC specifications, although, as mentioned before, these specifications are not always as “air tight” as they should be) are one of the most time-proven and reliable ways of detecting attacks, regardless of the particular signature involved. Many consecutive finger requests from a particular IP address are, for example, almost always an indication of an attack, as is a flood of SYN packets from a particular host. Excessively large packet fragments and improperly chunked data sent to a web server are other indications of anomalous protocol behavior. Most of the systems with the highest overall detection proficiency use a variety of protocol analysis methods. On the downside, protocol analysis can result in a substantially elevated false alarm rate if a network is not working properly. For example, a malfunctioning router may spray packets, causing protocol anomaly routines to go berserk.
The main limitations of rule matching include the following:
Next we’ll look at profile-based matching methods.
How Profile-Based Matching Works
Information about users’ session characteristics is captured in system logs and process listings. Profiling routines extract information for each user, writing it to data structures that store it. Other routines build statistical norms based on measurable usage patterns. When a user action occurs that deviates too much from the normal pattern, the profiling system flags the event and passes the necessary information on to output routines. For example, if a user normally logs in from 8:00 A.M. to 5:30 P.M. but, then, one day logs in at 2 A.M., a profile-based system is likely to flag this event.
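A sketch of this kind of profiling for a single metric, the hour at which a user logs in, follows. The baseline size and the three-standard-deviation threshold are arbitrary illustrative choices:

/* Sketch of profile-based matching for one metric, login hour: keep a running
   mean and variance per user, and flag logins that fall too many standard
   deviations away from the norm.                                              */
#include <math.h>

struct login_profile {
    double mean_hour;      /* average hour-of-day at which this user logs in  */
    double m2;             /* running sum of squared deviations (Welford)     */
    long   samples;
};

/* Returns 1 if this login hour deviates enough from the profile to be flagged. */
int profile_check_and_update(struct login_profile *p, double hour)
{
    int flagged = 0;

    if (p->samples > 10) {                      /* only judge once a baseline exists */
        double stddev = sqrt(p->m2 / (p->samples - 1));
        if (stddev > 0 && fabs(hour - p->mean_hour) > 3.0 * stddev)
            flagged = 1;                        /* e.g., a 2 A.M. login for a 9-to-5 user */
    }

    /* Welford's online update of the mean and variance with the new observation. */
    p->samples++;
    double delta = hour - p->mean_hour;
    p->mean_hour += delta / p->samples;
    p->m2 += delta * (hour - p->mean_hour);

    return flagged;
}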
Evaluation of Profile-Based Matching
Profile matching offers the following major advantages:
Some major downsides of profile-based intrusion detection include the following:
Many other types of matching or, more properly, detection methods, too numerous to cover here, are also used. Tripwire-type tools compare previous and current cryptochecksum and hash values to detect tampering with files and directories. Another method uses Bayes’ theorem, which allows computation of the probability of one event given that another has occurred, providing a way of calculating the probability that a given host has been successfully attacked given certain indications of compromise.
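As a small illustration of the Bayesian approach, the sketch below computes the probability that a host is compromised given that a particular indicator has fired; the prior and the detection and false-alarm rates are made-up numbers chosen only to show the arithmetic:

/* Bayes' theorem applied to an indicator of compromise:
   P(compromise | indicator) =
       P(indicator | compromise) * P(compromise) / P(indicator).
   The numbers below are purely illustrative priors, not measurements.          */
#include <stdio.h>

double posterior(double p_ind_given_comp, double p_comp, double p_ind_given_clean)
{
    double p_clean = 1.0 - p_comp;
    /* Total probability of seeing the indicator at all. */
    double p_ind = p_ind_given_comp * p_comp + p_ind_given_clean * p_clean;
    return (p_ind_given_comp * p_comp) / p_ind;
}

int main(void)
{
    /* Suppose 1% of hosts are compromised, the indicator fires on 90% of compromised
       hosts, and on 5% of clean hosts (the false-alarm rate). The result is about 0.15,
       a reminder that a single indicator rarely proves compromise on its own.         */
    printf("P(compromised | indicator) = %.3f\n", posterior(0.90, 0.01, 0.05));
    return 0;
}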
Malicious code is so prevalent, and so many different types of malicious code exist, that antivirus software alone cannot deal with the totality of the problem. Accordingly, another important function of intrusion detection and intrusion prevention is detecting the presence of malicious code in systems. The next section discusses this function.
According to Edward Skoudis in Malware: Fight Malicious Code (Addison Wesley, 2003), major types of malicious code include the following:
IDSs and IPSs generally detect the presence of malicious code in much the same manner as these systems detect attacks in general. This is how these systems can detect malicious code:
Although IDSs and IPSs can be useful in detecting malicious code, relying solely on these kinds of systems to do so is unwise. Instead, they should be viewed more as an additional barrier in the war against malicious code than anything else. Why? First and foremost, other types of detection software (at least antivirus software for Windows and Macintosh systems) are more directly geared to serving this kind of function. Antivirus vendors generally do a better job of writing malicious code detection routines for these systems in the first place. The situation with Unix, Linux, and other systems is quite different, however. In these cases, IDSs and IPSs should be considered one of a few first-line defenses against malicious code, because antivirus software for these systems is not considered as necessary as it is for Windows systems and is not as freely available. This is not to imply that malware detection software for these systems is useless. The chkrootkit program, for example, is simple to use and quite effective in detecting the presence of rootkits on Unix and Linux systems. It can even detect when something in a system is not behaving correctly in a way that might indicate the presence of a rootkit. Additionally, network encryption presents a huge obstacle to the detection of malicious code on the systems on which this code runs. Finally, symptoms of possible malicious code attacks often turn out to be nothing more than false alarms. Suppose, for example, an IDS or IPS detects that a host has received packets on UDP port 27374. This could indicate the host is running the SubSeven Trojan horse, but it could also simply mean a network program has sent packets to this port. That program may transmit packets to a different port next time and to still another port the time after that. The behavior of many network applications with respect to ephemeral ports is, at best, weird.
The bottom line is this: IDSs and IPSs can help in numerous ways in the war against malicious code. Many of the indications of incidents jibe with indications of the presence of malicious code. In and of themselves, however, IDSs and IPSs are not likely to be sufficient.
Once detection routines in an IDS or IPS have detected some kind of potentially adverse event, the system needs to do something that, at a minimum, alerts operators that something is wrong or, perhaps, goes farther by initiating evasive action that results in a machine no longer being subjected to attack. Normally, therefore, calls within detection routines activate output routines. Alerting via a pager or mobile phone is generally trivial to accomplish, usually through a single command such as the
rasdial <phone_number>
command in Windows systems. Additionally, most current IDSs and IPSs write events to a log that can easily be inspected. Evasive action is generally considerably more difficult to accomplish, however. The following types of evasive actions are currently often found in IDSs and IPSs:
A major limitation to taking evasive action (other than the fact that, with UDP traffic, this is for all practical purposes impossible) is that the action may not be appropriate. Only heaven knows the number of packets transmitted daily over the Internet that have spoofed source IP addresses. Blocking traffic from these apparently hostile addresses can block legitimate connections, causing all kinds of trouble. Additionally, evasive action by IPSs can readily result in DoS within systems. IPSs can prevent systems from processing commands and requests that may be necessary in certain operational contexts, but that are identical or similar to those that occur during identified attacks. Critics of intrusion prevention technology are, in fact, quick to point out that IPSs are ideally suited to the purposes of insiders (and possibly also outsiders) intent on causing massive DoS attacks.
Most of the currently available IDSs and IPSs have little or no capability of monitoring their own integrity. This potentially is an extremely serious problem given the importance that intrusion detection and intrusion protection play in so many organizations. An attacker who wants to avoid being noticed can break into the host on which an IDS or IPS runs, and then corrupt the system, so it will not record the actions of the attacker (and also possibly anyone else!). Or, the attacker can send input that causes an IDS or IPS to process it improperly or that results in DoS. Countermeasures against these kinds of threats include the following:
This chapter has covered the topic of IDS and IPS internals. These systems almost always process information they receive in a well-defined order, starting with packet capture, filtering, decoding, storage, fragment reassembly, and then stream reassembly. The packets are then passed on to detection routines that work on the basis of signatures, rules, profiles, and possibly even other kinds of logic. IDSs and IPSs can also detect the presence of malicious code, although antivirus software (if available for a particular operating system) is likely to be the more logical first line of defense against this kind of code. Detection routines can call output routines that perform a variety of actions, including alerting operations staff, resetting connections, shunning IP addresses that appear to be the source of attacks, and/or modifying intrusion prevention policy for systems. IDSs and IPSs are a likely target of attacks, so they need to have adequate defenses. Defensive measures include filtering out any input that could cause the IDS/IPS to become dysfunctional, terminating further processing of input if that input could cause partial or full subversion of the IDS or IPS, shunning ostensibly malicious IP addresses, and incorporating an internal watchdog function into the system.