Endpoint Agents


Imagine a guard who is assigned to secure the entrance to a building. When someone approaches a protected resource, the guard begins an access control process: he stops the person and asks for some form of identification. After the necessary information has been gathered, the guard follows policy and decides whether the person can enter. The policy might say, for example, that employees of ACME Incorporated can enter the building between the hours of 8 A.M. and 5 P.M. (0800 and 1700). After the person has been granted or denied access, the guard simply waits for the next person to approach so that the process can be repeated.

Essentially, HIPS agents apply a similar access control process to computers, as illustrated by Figure 6-1. The process is activated when an operation occurs on a system and can be divided into the following phases:

  • Identifies the type of resource being accessed "Is the resource being accessed the entrance to the building or the elevator?"

  • Gathers data about the operation "Who is this person, who do they work for, and does the picture on the ID look similar to the person?"

  • Determines the state of the system "Is it between the hours of 8 A.M. and 5 P.M. (0800 and 1700)?"

  • Consults security policy "Is this person permitted to access the resource?"

  • Takes action "The person can enter."
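
The five phases above can be sketched as a single decision function. This is a minimal illustration only; the resource types, policy entries, and state values below are invented for the example, not drawn from any particular product.

```python
# Minimal sketch of the five-phase HIPS access control process.
# Resource types, policy entries, and state values are hypothetical.

def access_control(operation):
    # Phase 1: identify the type of resource being accessed
    resource_type = operation["resource_type"]        # e.g. "file", "network"

    # Phase 2: gather data about the operation
    process = operation["process"]

    # Phase 3: determine the state of the system
    state = operation.get("state", "normal")          # e.g. "under_attack"

    # Phase 4: consult the security policy
    policy = {
        ("file", "trusted.exe"): "allow",
        ("file", "unknown.exe"): "deny",
    }
    action = policy.get((resource_type, process), "deny")
    if state == "under_attack":
        action = "deny"   # a stricter policy applies while under attack

    # Phase 5: take action (here, simply report it)
    return action

print(access_control({"resource_type": "file", "process": "trusted.exe"}))  # allow
```

Like the guard, the function simply waits for the next operation: each request is evaluated independently against the policy and the current state.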

Figure 6-1. Access Control Process


Identifying the Resource Being Accessed

The first phase in the access control process is to identify what type of resource is being accessed. This determination triggers the data gathering phase and changes the type or amount of data to be gathered. If the building entrance is the resource in question, the guard might need to ascertain only the requestor's name. The elevator might require additional information, such as the requestor's company, so that the guard can determine which bank of elevators the requestor should use.

Buildings have thousands of resources. Luckily, not all of them are equally important in the context of security. The resources that are most important are those that attackers could use to gain access to, modify, or damage the building. A catalog of a building's resources could include the flowers near the building, doorways, elevators, windows, fire escapes, and postal address. Attackers are not likely to use the flowers or postal address to attack the building.

In that sense, computer systems are similar to buildings. A HIPS product that identified and protected every computer system resource would certainly be secure, but it would also be far too cumbersome. To be more efficient, products worry only about the system resources that are the most appealing to attackers. The tough part is to correctly identify the most important resources.

A good way to characterize a resource's level of appeal is to conduct a high-level analysis of attacks that target hosts. Commonalities between them can shed light on the subject and help identify important system resources. For example, if different types of attacks tend to use the same sets of resources, those resources are probably more appealing than less commonly used resources.

The first step in the analysis looks at what attacks such as viruses, worms, Trojans, and malicious mobile code actually do. To be successful, attacks must accomplish a set of ordered tasks. This is called the lifecycle of an attack, and it has five phases, as shown in Figure 6-2:

  1. Probe The attack looks for a vulnerability.

  2. Penetrate The attack uses a vulnerability it finds to compromise the host.

  3. Persist The attack installs something on the system.

  4. Propagate The attack uses the compromised host to attack other systems.

  5. Paralyze Damage occurs, either through malicious intent or through the huge amounts of network traffic generated by propagation.

Figure 6-2. Lifecycle of an Attack


Note

Technically, viruses and worms are mobile because they can move from system to system. They could be called malicious mobile code, but in this case, the term refers specifically to dangerous ActiveX and Java programs.


The second part of the analysis matches what the attack does in each phase with the resources it needs to accomplish its activity. For example, the probe phase cannot succeed if the attack cannot access the host via the network. To persist, an attack must be able to modify files, memory, system service configuration, and so on. The propagation phase requires the network once again, or some other kind of media such as floppy disks, compact discs, and so on.

This mapping of lifecycle phases to required resources yields five critical resource categories. HIPS products should be most concerned with these categories:

  • Network

  • Memory

  • Application execution

  • Files

  • System configuration
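
The mapping from lifecycle phases to required resources can be written down directly. The sketch below is one plausible rendering of the analysis above; the exact sets are a simplification for illustration, not any product's actual coverage list.

```python
# Hedged sketch: mapping attack lifecycle phases to the resources each
# phase requires. The sets below are a simplification for illustration.
from collections import Counter

LIFECYCLE_RESOURCES = {
    "probe":     {"network"},
    "penetrate": {"network", "memory"},
    "persist":   {"files", "system configuration", "application execution"},
    "propagate": {"network"},
    "paralyze":  {"files", "network"},
}

# Resources required by the most phases are the most appealing to attackers.
counts = Counter(r for needed in LIFECYCLE_RESOURCES.values() for r in needed)
print(counts.most_common(1)[0][0])  # network
```

Counting which resources recur across phases makes the network's prominence obvious, which is why it heads the list of critical resource categories.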

Some HIPS products are able to identify attempts to access all the crucial resource sets, although some cover only portions of the list. Furthermore, each product might identify only particular types of access requests. This section delves into these five categories, touches briefly on some less critical but still important resource sets, and uses real HIPS products as examples.

Network

Before a system can be compromised, an attacker must "find a way" to it by probing for its existence and for vulnerabilities it might have. Barring physical access, in which case you have a problem HIPS can't fix, the only way to probe a system is via the network. Because the network is the only way to probe the system, it is the first resource HIPS protects.

Note

Vulnerabilities are weaknesses in a target that can be used to an attacker's advantage. Exploits take advantage of vulnerabilities. For example, a front door protected by a flimsy lock is vulnerable. The vulnerability is exploited by breaking the lock and opening the door.


Several important resource subcategories exist within the broader network resource type. The subcategories are generally in step with the relevant layers of the Open Systems Interconnection (OSI) model:

  • Application data (Layer 7)

  • Establishment of connections from one host to another (Layer 5)

  • Network packet headers (Layers 3 and 4)

Not only is the network used to probe and deliver an attack to the host, but it's also the only way to use one compromised system to attack another. The network is so important that some products protect it to the exclusion of the other critical resource sets. eEye Blink, for example, focuses only on the network and application execution (see the "Application Execution" section later in this chapter).

The McAfee Entercept and Cisco Security Agent (CSA) also cover the network. They both identify inbound connection requests, outbound connection requests, packet headers, and in some cases, packet contents. One difference between the two is that CSA also identifies the number of simultaneously open TCP connections.

Memory

If the attacker or malicious code has probed and found a way to access the system, the next step is to penetrate, or deliver an attack payload. The path of least resistance is to use a pre-existing pathway by attaching the payload to an e-mail message, placing it on a network share, sending it with an instant messenger program, or forcing an Internet browser to download it. One problem with this delivery mechanism is that it often requires the user to activate the payload. History has shown that it isn't hard to use social engineering to trick the user into activating the payload. However, an attack has a higher chance of success if it doesn't have to rely on the user at all.

Social Engineering

Social engineering exploits weaknesses in people, rather than weaknesses in computer systems. It's often used to convince computer users to share their passwords or other confidential information with people they should not. For example, an evil-doer pretends to be a help desk technician calling to perform routine maintenance on the target's system. The pretend technician asks the target for the password to access the system, and often the target shares that information without validating the person's identity.

Another great social engineering example is the "I Love You" worm. The worm payload appeared as an attachment to an e-mail message. Recipients of the message were tricked into invoking the payload by the body of the message which said, "Kindly check the attached love letter coming from me." Naturally, many recipients invoked the attachment hoping to read a sweet love note. Instead, they were infected with a worm.


One commonly used method to deliver a payload without involving the user is the buffer overrun exploit. Buffer overruns, also called buffer overflows, compromise the system memory used by a legitimate running process. The payload replaces the memory used by the legitimate process and is automatically invoked using the privileges of that process.

Buffer Overruns

Simply put, a buffer overrun happens when an unexpected amount of data is delivered to a vulnerable process running in memory. The memory used by the vulnerable process is replaced by an attack payload. The system automatically invokes the payload, and the payload has the same system privileges as the process it replaced.

Numerous flavors of buffer overrun exist, but the easiest kind to describe is the stack overflow. Stack memory can be thought of as a bucket of memory that programs use to store instructions that will be run by the operating system (OS). When a program needs to use stack memory, it takes what it needs by temporarily reserving a slice of the memory, also known as a buffer, and then inserts the instructions. The OS completes the instructions, and then moves on to the next buffer. It refers to a value called a return to know which buffer to run next. Figure 6-3 illustrates this process.
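
The buffer-and-return mechanism can be modeled with a toy simulation. This is not real memory corruption; the stack is just a Python list, with a fixed-size buffer followed by a return slot, and the "vulnerable write" simply lacks a length check.

```python
# Toy simulation of a stack buffer overrun (not real memory corruption).
# The stack is modeled as a fixed-size buffer followed by a return slot
# that tells the OS which instructions to run next.

BUFFER_SIZE = 8
stack = ["?"] * BUFFER_SIZE + ["legit_return"]   # last slot is the return

def write_to_buffer(data):
    # A vulnerable write: no length check, so extra items spill past the
    # buffer and overwrite the return slot.
    for i, byte in enumerate(data):
        stack[i] = byte

# Deliver more data than the buffer holds; the last item lands on the return.
write_to_buffer(list("AAAAAAAA") + ["attacker_return"])
print(stack[BUFFER_SIZE])   # attacker_return
```

After the oversized write, the return slot no longer points at legitimate instructions; it points into data the attacker supplied, which is exactly the condition Figure 6-4 depicts.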

Figure 6-3. Stack Memory Buffers and Returns


Figure 6-4 shows how a buffer overrun occurs when an attacker delivers, usually via the network, enough data to fill the buffer and overwrite the return. The return now points back into the buffer consisting of the data the attacker delivered. The data contains malicious instructions, which the operating system runs automatically.

Figure 6-4. Buffer Overruns



Buffer overrun vulnerabilities and exploits are so commonplace that every major HIPS product protects memory access vigorously. Entercept, for example, closely monitors attempts to access memory and identifies attempts to access other system resources from memory. CSA's approach is similar. eEye identifies memory access attempts by peering inside network data streams to identify patterns indicative of a memory access attempt.

Application Execution

After the attack has delivered and activated its payload, the next step in its lifecycle is to persist, or install itself on the system. Some attacks are not persistent, but worms and Trojans in particular install themselves so that they run whenever the system is running. Installation can be successful only if files and/or configuration settings are modified. The attack often runs itself or other programs to access those resources.

Many products, such as ISS Proventia Desktop, CSA, McAfee Entercept, and eEye Blink, notice when an executable file attempts to become a running process. Entercept and CSA in particular also track the invoking process. CSA adds a level of granularity by being able to identify a process attempting to spawn a child process.
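
The parent/child relationship that such products track can be illustrated with a simple wrapper that records the invoking (parent) process before launching the child. Real agents intercept this at the OS level, not in application code; the wrapper and log format here are invented for illustration.

```python
# Loose illustration of tracking process spawning: a wrapper records the
# invoking (parent) process before the child is launched.
import os
import subprocess
import sys

spawn_log = []

def monitored_spawn(argv):
    # capture who is doing the spawning, and what is being spawned
    spawn_log.append({"parent_pid": os.getpid(), "child_argv": argv})
    return subprocess.run(argv, capture_output=True)

result = monitored_spawn([sys.executable, "-c", "print('child ran')"])
print(result.stdout.decode().strip())  # child ran
```

A HIPS rule could then allow or deny the spawn based on which parent is asking, which is the granularity CSA's parent-process tracking provides.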

Note

A child process is a new process that is created by an existing process. The original process is called the parent process.


Note

It might seem strange to characterize application execution as a resource. However, programs do not run by themselves. They are invoked by other processes. So in that sense, the application being invoked is a resource for the invoking process.


Files

Things start to get interesting now. The attacker or malicious code has found the system, delivered the payload, and started to persist by installing itself. Malicious code can be installed on a system in only a few ways. A kernel module can be loaded, startup files can be modified, or files might be written to disk. The most effective place to write files is where they can be available for execution every time the system boots. Any directory that is part of the default path will do. Favorites include X:\windows\system32 and X:\windows on Windows OSs, and /usr and /usr/bin on UNIX OSs.

Note

Kernel modules are pieces of code that extend kernel functionality but are not actually a part of the kernel itself. They are usually file system or device drivers.


Files are also an important resource during the paralyze phase of the lifecycle. The data they contain might be important. It could be confidential information, critical system files, or log files that identify the intruder. Reading, modifying, or deleting these types of files can severely damage the system and the organization to which the confidential information belongs.

Therefore, HIPS products should monitor file read attempts as well as file writes. As an example, CSA identifies directory write attempts, file reads, and file writes. McAfee Entercept takes the same approach but also offers fine-grained coverage by distinguishing among write, rename, change attribute, create, modify permissions, move, link, and unlink operations.

System Configuration

Application execution and file modification are two parts of the persistence process. The final part is modification of the system configuration. For example, a Trojan might add itself to the Windows startup folder, the Windows Registry run keys, or .ini files so that it runs every time the system is started. UNIX startup .rc (run commands) files can also be targeted. The attacker might modify the Windows Registry, UNIX .conf files, or security countermeasure settings to weaken operating system security. In some cases, a worm or Trojan can even disable security products, such as antivirus and personal firewalls, or even the HIPS agent itself.
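
This persistence technique is what agents watch for when they monitor configuration writes. The sketch below simulates that monitoring with a dict standing in for the Windows Registry; real products intercept the OS-level write instead, and the key path shown is just the well-known Run key used as an example.

```python
# Hedged sketch: flagging writes to auto-start configuration. The registry
# is modeled as a plain dict; real agents intercept the OS-level write.

AUTOSTART_KEYS = {r"HKLM\Software\Microsoft\Windows\CurrentVersion\Run"}

def registry_write(registry, key_path, name, value, log):
    # flag the write if it targets an auto-start location, then apply it
    if key_path in AUTOSTART_KEYS:
        log.append(f"autostart write: {name} = {value}")
    registry.setdefault(key_path, {})[name] = value

reg, log = {}, []
registry_write(reg, r"HKLM\Software\Microsoft\Windows\CurrentVersion\Run",
               "updater", r"C:\temp\evil.exe", log)
print(log[0])  # autostart write: updater = C:\temp\evil.exe
```

A policy could treat such writes as allowed for trusted installers but denied (or queried) for everything else.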

Note

The most crucial access attempts are the attempts to access the HIPS agent itself. While they are running, HIPS agents must be able to identify attempts to modify their own files, services, running processes, and configuration settings. What good is intrusion prevention if it can be disabled or modified by an attack?


CSA and Entercept are two examples of products that protect the system configuration by protecting the Windows Registry. CSA identifies any type of write activity. Entercept monitors additional activities such as read, create, delete, modify, change permissions, enumerate, monitor, restore, replace, and load.

Additional Resource Categories

The network, memory, application execution, files, and system configuration are certainly the most critical system resources to protect. Many other resources are less critical, but still important because they can be useful to attackers. Knowing what resources a product has on its "coverage list" is an important step toward understanding the product's capabilities.

Here is a brief list of some of the additional resources for which HIPS products can identify access attempts:

  • OS kernel The kernel is the central portion of the operating system. It can be extended by loading a piece of code, usually a type of driver, separately from the main body of the kernel itself. One category of attack, called a rootkit, is known for loading itself as a kernel module.

  • OS events Operating system logs often contain events that can be of interest for forensic or troubleshooting purposes. To make sure that the events in the log cannot be modified, some HIPS products capture events as they are written to the log and store them elsewhere.

  • Windows Clipboard The clipboard is not something that malicious code generally uses; however, it is a way to "get around" restrictions placed on applications that are permitted to access a set of data. An application with permission can copy the sensitive data to the clipboard, and then someone without access to the data can retrieve it from the clipboard.

  • COM component access The Microsoft Component Object Model (COM) allows programs to interact with each other easily. COM is used, for example, to copy an Excel spreadsheet into a Word document. Malicious scripts sometimes exploit COM objects such as Outlook.application to send e-mail.

  • Devices Devices such as keyboards, microphones, and cameras are of particular interest to attackers. An attacker can intercept communications between the keyboard and the OS in an effort to capture passwords and other sensitive information. Some sophisticated attacks also take control of cameras or microphones to spy on the user.

  • Symbolic Links Symbolic links are UNIX resources similar to Windows shortcuts. They are special types of file that point to another file or folder. Attackers can use symbolic links to gain access to confidential or important system information.

Gathering Data About the Operation

The next task in the access control process is to gather data about the operation. The operation has been identified as being something worthy of closer examination, but no details have been ascertained as yet. To return to the security guard analogy, consider that the guard knows that someone is trying to use the elevator but doesn't know anything about the person or exactly how the elevator will be used.

To get the details, the guard must know how to get them and what details to get. For example, the guard might ask a set of questions such as, "Who are you?" and "Where are you going?" That's one way to get details. Another approach might be to inspect an identification badge without asking questions. In either case, the guard must also know what data to get from the badge or what questions to ask.

This portion of the book examines how HIPS products gather data and what data they typically gather.

How Data Is Gathered

Data gathering methods differ from product to product. None of the approaches is necessarily better than the others, but each has positives and negatives associated with it. To overcome any negatives and provide more comprehensive protection, most products implement more than one data gathering method. The four most common methods are as follows:

  • Kernel modification

  • System call interception

  • Virtual OSs

  • Network traffic analysis

Note

It might seem like a good idea to install multiple HIPS products so that you can gather data using all of the possible methods. Unfortunately, HIPS products do not usually work well together, and having more than one on a system will likely cause system instability.


Kernel Modification

The kernel modification method is used by trusted OS products such as Sun Microsystems' Trusted Solaris, Security Enhanced Linux (SE Linux) created by the National Security Agency, and PitBull by Argus Systems Group. System objects such as users, processes, files, network interfaces, and host IP addresses are labeled. These labels contain security-related attributes and are called domains.

The OS kernel is replaced or modified so that any time one object requests access to another, the security-related attributes are captured before the operation is allowed. For example, if Process A requests access to File A, the domains associated with each object are captured as the request is processed by the kernel. Figure 6-5 illustrates the kernel modification approach.

Figure 6-5. Kernel Modification


Kernel modification is an older approach that works well with traditional access control models such as mandatory access control (MAC) and role-based access control (RBAC). (See the section, "Access Control Matrix" later in this chapter for more details about MAC and RBAC.) Also, subversion of kernel modifications is difficult because they are tightly integrated with the OS.

One of the downsides of kernel modification is that the new or modified OS kernel might be incompatible with third-party software. If you discover an incompatibility, you have to wait for the HIPS vendor to release a fix before the third-party software can be used. Additionally, the vendor might have to release a new product version in response to OS updates, which can delay the update process.

System Call Interception

A system call is a request a process makes to the operating system kernel when it wants to access an OS resource. A set of interceptors, sometimes called shims, is installed as part of the endpoint agent. The shims sit between processes running on the system and the objects they might attempt to access. When a process tries to access a protected resource, the shim intercepts the system call before the OS kernel receives it. Information such as the process name, object name, access type, and access time is captured. Figure 6-6 shows a graphical representation of this process.

Figure 6-6. System Call Interception


Note

In the world of carpentry, shims are tapered pieces of material used to fill space between things for support or leveling. In the computer world, shims can be thought of as tapered software components inserted between two other software components.


This is a commonly used method for data collection and is implemented in products such as McAfee Entercept, CSA, and Sana Security Primary Response. Part of the reason for system call interception's popularity is that it is easier to implement than kernel modifications. It is also less subject to third-party software incompatibilities, although they can still occur. One downside of system call interception is that it is not as tightly bound to the OS and opportunities to "get around" it might exist.

A close examination of the CSA explains how system call interception is implemented. During the agent installation, the shims are inserted into the operating system in such a way that they can intercept attempts to access important resources. Important resource categories were listed earlier in the chapter.

In Windows, for example, four shims are installed:

  • CSATdi Tdi stands for Transport Driver Interface, the interface between the network protocols and the application programming interfaces above them. In the Windows networking model, Tdi corresponds with the transport layer of the OSI model. Essentially, this shim intercepts network connection requests to and from applications running on the system.

  • CSAFile This shim captures information about file read/write operations. It uses a Microsoft Installable File System filter.

  • CSAReg Registry write actions are intercepted with this shim.

  • CSACenter Handles system API calls such as downloading and invoking ActiveX controls, dangerous system calls from stack or heap memory, keyboard IRQ hooking, and media device hooking. It is the most important component of the agent, as it performs several duties beyond system call interception including event management, correlation, and rule enforcement.

Practically speaking, these interceptors install in the system as device drivers and load at the same time as other devices. Figure 6-7 shows the list of loaded drivers on a Windows XP system that is protected by CSA.

Figure 6-7. CSA Drivers


Virtual Operating Systems

The endpoint agent monitors the system for any operation in which an application tries to write an executable file to disk. Before the write operation is allowed, the executable is temporarily placed in a virtual copy of the operating system. This virtual OS is sometimes called a sandbox, because it's a safe place to "play."

The executable runs in the sandbox, and any malicious actions it might attempt can be observed before they occur on the real OS (see Figure 6-8). The virtual OS monitor captures activities such as modifying other files, making network connections, and modifying configuration files. Internet Security Systems Proventia Desktop and Finjan Vital Security for Clients use virtual OSs.

Figure 6-8. Virtual OSs


Virtual OSs are potentially a secure way to gather data. Assuming that the executable code is correctly identified and can be placed inside the virtual OS before it runs, the data is gathered without any danger to the actual OS. However, the virtual OS must conform closely with the actual OS in order for the data that is gathered to be accurate.
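
The sandbox idea reduces to "apply the executable's actions to a copy of the OS state and inspect them there." The sketch below invents a tiny action format to show that shape; a real virtual OS replicates files, registry, network, and much more.

```python
# Hedged sketch of the sandbox idea: actions run against a virtual copy of
# the OS state, so they can be observed without touching the real system.
import copy

real_os = {"files": {"hosts": "127.0.0.1 localhost"}, "connections": []}

def run_in_sandbox(actions):
    sandbox = copy.deepcopy(real_os)   # virtual copy of the OS state
    observed = []
    for kind, target in actions:
        if kind == "write":
            sandbox["files"][target] = "?"        # happens only in the copy
        elif kind == "connect":
            sandbox["connections"].append(target) # happens only in the copy
        observed.append((kind, target))
    return observed

suspect = [("write", "hosts"), ("connect", "10.0.0.5:445")]
print(run_in_sandbox(suspect))   # actions observed; real_os is untouched
```

The accuracy caveat in the text shows up here too: if the copy diverges from the real OS, the observed behavior may not match what the executable would actually do.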

Network Traffic Analysis

Figure 6-9 shows how in network traffic analysis all network data to and from the protected host is inspected by the endpoint agent. Network packets are re-assembled and analyzed before they are delivered up or down the TCP/IP stack. In most cases, the agent understands commonly used protocols such as HTTP, Simple Mail Transport Protocol (SMTP), FTP, and so on. eEye Digital Security Blink relies primarily on network traffic analysis for data gathering, although many other HIPS products, such as Entercept and ISS Proventia Desktop, also make use of some network traffic analysis.

Figure 6-9. Network Traffic Analysis


The problem with network traffic analysis is that some traffic cannot be analyzed. Encrypted data, for example, cannot be examined until it has been decrypted. Products that rely solely on this data gathering method must find a way to decrypt the traffic so that it can be examined. However, network traffic analysis is the only way to stop an attack before it arrives on the system. System call interception, virtual OSs, and kernel modification gather data after a potential attack has arrived.
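
The reassemble-then-inspect step can be sketched in a few lines. The segment format, pattern, and threshold below are illustrative only; real agents parse full protocols, not a single byte pattern.

```python
# Minimal sketch of network traffic analysis: segments are reassembled
# into a stream and inspected before delivery up the stack.

def reassemble(segments):
    # order segments by sequence number and join their payloads
    return b"".join(payload for _, payload in sorted(segments))

def looks_malicious(stream):
    # an unusually long run of identical bytes is a classic sign of a
    # buffer overrun payload in delivered data (threshold is illustrative)
    return b"A" * 64 in stream

segments = [(1, b"GET /"), (2, b"A" * 100), (0, b"HTTP ")]
stream = reassemble(segments)
print(looks_malicious(stream))  # True
```

Note that this inspection only works on plaintext; as the text points out, encrypted traffic must be decrypted before any such pattern can be seen.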

What Data Is Gathered

A wise G.I. Joe character once said, "Knowing how to collect data about an operation is only half the battle." The other half is to know what data to collect. Exactly which details of the operation are relevant depends on the type of resource that's accessed. For example, the source IP address would not be a relevant piece of information if the type of resource being accessed was a file. The same goes for file name, if the network is being accessed.

For the most part, all HIPS products collect similar data for each type of resource. Table 6-1 shows a few examples.

Table 6-1. Data Collected per Resource Type

Type of Resource              Data Collected
----------------------------  --------------------------------------------------------------
All                           Time, host identification, access token where applicable, credentials where applicable
Network packet inspection     Source IP, destination IP, packet details, source port, destination port
Network connection request    Process name, source IP, destination IP, source port, destination port, transport, operation (connect or accept)
File access                   Process name, file path, file name, operation (read, write, write directory)
Registry access               Process name, key path, key name, key value, key type
Application execution         Process name, process path, target process name, target process path
Kernel protection             Kernel module name, module hash, code pattern
System event log              Event source, priority, facility (UNIX), event ID (Windows), message pattern (UNIX)
Memory                        Process name, function call, buffer return address, buffer contents, target process where applicable


Determining the State

The security guard is almost halfway through the access control process. All the relevant data has been compiled about the operation itself, but the state of the system can alter the outcome of the request. For example, under ordinary circumstances, employees of the ACME Company are allowed to enter the building whenever they want. However, if the building is on fire, they are not.

In practical terms, state conditions determine when a particular security policy is in place. "If the building is not on fire, the normal employee entry policy is enforced. If the building is on fire, the fire employee entry policy is active." State conditions make HIPS policies more restrictive under some circumstances and less restrictive under others.

HIPS products commonly use at least one, if not all, the following state types:

  • Location

  • User

  • System

Location State

In this case, location refers to the location from which the system connects to the network. "In the office" and "Out of the office" are obviously useful locations to define. Corporate networks are usually more secure and trustworthy than the Internet or home networks. A more permissive security policy can be active while the host is in the office, and a restrictive policy can be automatically activated as soon as the location state changes to out of the office.

More fine-grained location state conditions could include the following:

  • Connected via virtual private network (VPN)

  • Austin office

  • Europe

  • Wireless

The location of the system is usually dependent on a combination of variables such as the following:

  • Currently assigned IP address

  • Assigned DNS suffix

  • The availability of the management server

  • Type of network interface being used (wired or wireless)

  • MAC address

  • Status of VPN client

  • IP address of DHCP server
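
Combining such variables into a named location state can be sketched as a small rule chain. The variable names, suffix, and matching order below are hypothetical; each product exposes its own criteria and precedence.

```python
# Hedged sketch: deriving a location state from observed variables.
# Variable names and matching rules are hypothetical.

def location_state(env):
    if env.get("vpn_connected"):
        return "vpn"
    if env.get("dns_suffix") == "corp.example.com" and env.get("mgmt_server_reachable"):
        return "in_office"
    if env.get("interface") == "wireless":
        return "wireless"
    return "out_of_office"   # default to the most restrictive state

print(location_state({"dns_suffix": "corp.example.com",
                      "mgmt_server_reachable": True}))   # in_office
```

Defaulting to the most restrictive state when nothing matches is a deliberate fail-safe choice: an unknown network is treated like an untrusted one.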

The criteria that can be used to define a location vary from product to product. As an example, Figure 6-10 shows the CSA location state configuration screen.

Figure 6-10. CSA Location State Configuration


User State

User state is fairly straightforward. User or group names are attached to policies. User states are useful when a particular policy should be overruled for a group of users. Usually, users should not be able to disable the HIPS agent. But what if the system is having difficulty and the help desk technician needs to temporarily disable the agent protection to troubleshoot? You could make an exception that is applicable only when a user from the group "Help Desk" is logged on. You could also apply the exception to administrative users, as shown in Figure 6-11.

Figure 6-11. CSA User State Configuration


Of course, user state conditions rely a great deal on the accuracy of the corporate directory service. Also, strong authentication controls for users who have the ability to stop the agent should be in place wherever possible. Making HIPS policies less secure is a privilege that must be guarded carefully.

System State

System state is more complicated than the other two states and is something of a catch-all for states that do not fall under the location or user categories. Essentially, it is any previously observed activity or activities that are indicative of an overall condition. That is a vague description, so perhaps the best way to clarify is to give a few examples.

If a host has been repeatedly pinged, port scanned, and connected to by an untrusted host, you can safely assume that it is the target of an attack reconnaissance effort. System state could be set to "Under Attack" and more rigid policies activated until the system is no longer under attack. In a similar vein, a state called "Currently Mapping Network" could be in place if a network scanning process such as Nmap or NBTscan is running and the host is making numerous outbound network connections.
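
A state condition like "Under Attack" amounts to counting probe events and comparing against a threshold. The event kinds and threshold below are invented for illustration.

```python
# Hedged sketch of a system state condition: repeated probe events from
# one host push the state to "under_attack". Threshold is illustrative.
from collections import Counter

def system_state(events, threshold=3):
    # count probe-type events (pings, port scans) per source host
    probes = Counter(src for kind, src in events if kind in ("ping", "port_scan"))
    if any(count >= threshold for count in probes.values()):
        return "under_attack"
    return "normal"

events = [("ping", "10.1.1.9"), ("port_scan", "10.1.1.9"), ("ping", "10.1.1.9")]
print(system_state(events))  # under_attack
```

While the state is "under_attack", the agent would activate its more rigid policy set, reverting once probe activity subsides.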

Most HIPS products do not support these types of sophisticated system state conditions. For those that do, what follows is a list of potentially useful ones:

  • Security level Some HIPS agent user interfaces (UIs) allow users to set their own security level. The UI might, for example, have a slide bar with off, low, medium, or high choices. When the slide bar is set to medium, policies whose state set contains medium are activated. When the slide bar is set to high, more restrictive policies are enforced.

  • Rootkit detected This state is applied when a driver or kernel module (see the sidebar concerning rootkits) attempts to load after the system has booted.

  • Installation process detected Less restrictive policies can be activated when a software installation is in process. Conditions indicative of a running installation might be a setup.exe or install.exe process and the user responding affirmatively when asked if an installation is occurring.

  • System booting An endpoint is generally more vulnerable while it is booting because security countermeasures might not be loaded.

Rootkits

A rootkit is a type of Trojan horse that intercepts data from terminals, the network, keyboards, and in some cases multimedia devices such as cameras. One of the distinguishing characteristics of rootkits is that they hide their processes, logs, and logins so that they are more difficult to detect. Some rootkits contain backdoor software that allows attackers to remotely access the host on which the kit is installed.

Two different types of rootkit exist: application and kernel. Application rootkits replace existing application executables with fakes. Kernel rootkits load new code into the operating system kernel.


Consulting the Security Policy

Now that the security guard has information about the person attempting to access the building, it's finally time to consult the security policy. The policy should contain a list of criteria required for entry and items that deny entry. The guard matches the captured information with an item on the policy list, assesses the state of the system, and then takes the action associated with the policy object that matched.

You can take many different policy approaches. One possibility is to admit only people who work in the building. A different implementation might say that anyone can enter as long as they do not bring a bomb along. Ideally, an employee who would ordinarily be admitted should not be admitted if the guard sees that person carrying a bomb.

Chapter 2, "Signatures and Actions," defines the main categories of signature and explains some of the positive and negative factors associated with each. This section uses the word rule to describe the criteria by which HIPS decisions are made, but the word signature could also be used. The word policy describes a collection of rules. Rules and policies replace signatures and signature sets simply because those words more aptly describe the way most HIPS tools operate. You might notice some overlap, but the intent of this section is to give specific rule examples in the host context. This section also examines any policy types that do not fall under the categories listed in Chapter 2.

Another important point is that few HIPS products use just one flavor of policy. Several approaches are combined so that the negatives associated with one are overcome by another, and the combination of policy types makes the product suitable for use in more situations. You have five essential HIPS policy types:

  • Anomaly-based

  • Atomic rule-based

  • Pattern-based

  • Behavioral

  • Access control matrix

Anomaly-Based

As defined in Chapter 2, anomaly-based policies are based on deviations from a known and established baseline of typical user traffic and operations. The policy is built when the product monitors the operations performed by users and processes on a protected endpoint for some amount of time. At the end of the learning period, it adds anything it saw to the good and normal activity list. Any subsequent activity the product sees that is not on the good and normal list is denied because different is bad. This is sometimes called a white list, because all activities that are not explicitly permitted are denied.

Sana Security Primary Response is the only HIPS product that relies solely on an anomaly-based policy, although others such as eEye Digital Security Blink use it to some extent. During its learning period, Primary Response monitors the entire host and "learns," for example, that Internet Information Server (IIS), the process created when inetinfo.exe is invoked, is running on the system. Primary Response "sees" all the file write, file read, Registry modify, and network connection operations that the process performs. It detects that IIS has read index.html and adds that action to the list of acceptable and normal actions in the security policy. The list might look similar to Table 6-2.

Table 6-2. Learned Activity List

| Process Name | File Reads | File Writes | Registry Writes | Inbound Connections | Outbound Connections |
|---|---|---|---|---|---|
| Inetinfo.exe | \inetpub\index.html | \inetpub\log\log.txt | \HKLM\software\IIS\config\value | TCP\80 | TCP\1433 |
|  | \inetpub\wwwroot\site1\site1.html | \inetpub\tmp\1.tmp |  | TCP\443 | TCP\1434 |
|  | \inetpub\log\log.txt |  |  | TCP\8080 |  |
| Outlook.exe | \application data\user.pst | \application data\user.pst | \HKLM\software\outlook\config | TCP\25 | TCP\110 |
|  | \*.wab | \*.wab |  |  |  |

   

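The learning-period white list can be sketched as a set of observed (process, operation, target) tuples. This is an illustrative model of the anomaly-based approach, not Primary Response's actual data format:

```python
class AnomalyPolicy:
    """Learn (process, operation, target) tuples during a learning period,
    then deny anything not on the learned white list."""

    def __init__(self):
        self.learning = True
        self.whitelist = set()

    def observe(self, process, operation, target):
        event = (process, operation, target)
        if self.learning:
            # Learning period: everything seen is treated as good and normal.
            self.whitelist.add(event)
            return "permit"
        # Enforcement: anything not explicitly learned is denied.
        return "permit" if event in self.whitelist else "deny"

policy = AnomalyPolicy()
policy.observe("inetinfo.exe", "file_read", r"\inetpub\index.html")
policy.learning = False  # end of the learning period
```

The weakness the text alludes to is visible here: any legitimate activity that happened not to occur during the learning period is denied once enforcement begins.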

Atomic Rule-Based

Atomic rule-based policies are simply sets of regulations governing the activities of users and processes. Atomic rules contain only one triggering criterion and are composed of five parts:

  • Type Type identifies the type of resource the rule intends to protect. A File Access Control rule type, for example, indicates that the rule is invoked when a file access request is identified.

  • Action The action that the product takes when the rule is triggered. Log event is one example of an action.

  • Application Class Application class refers to the process to which the rule is applied. Usually, they are tied to an executable name. For example, the Web Browsers application class might contain the processes created when iexplore.exe, netscape.exe, opera.exe, or mozilla.exe run. The All Applications application class contains all running processes, regardless of the executable that was invoked to create them.

  • Directive The kind of access request. The verb varies by rule type; in a File Access Control rule it could be read, write, rename, modify attribute (such as "read-only" or "archive"), or a combination of all four. A Network Access Control rule offers make a connection and accept a connection as its verb choices.

  • Object The target of the access request. It contains things such as IP address, file, or Registry values.
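The five parts above can be sketched as a simple data structure with a matching function. The field values are illustrative and mirror the first row of Table 6-3, not any product's rule format:

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class AtomicRule:
    """The five parts of an atomic rule, as described in the text."""
    rule_type: str   # Type, e.g. "File Access Control"
    action: str      # e.g. "Deny", "Permit", "Log Event"
    app_class: set   # Application Class: executables the rule applies to
    directive: str   # e.g. "write"
    objects: set     # Object: target patterns, e.g. file globs

def evaluate(rule, process, directive, target):
    """Return the rule's action if the single triggering criterion matches, else None."""
    if process in rule.app_class and directive == rule.directive:
        if any(fnmatch(target, pattern) for pattern in rule.objects):
            return rule.action
    return None

# Example: web servers may not write HTML files.
deny_html_writes = AtomicRule(
    rule_type="File Access Control",
    action="Deny",
    app_class={"inetinfo.exe", "apache.exe"},  # "Web Servers"
    directive="write",
    objects={"*.html"},
)
```

Because each rule has exactly one triggering criterion, a real agent would evaluate a whole list of such rules per access request and apply the action of the first (or most specific) match.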

McAfee Entercept, CSA, ISS Proventia Desktop, and Finjan Vital Security for Clients all use some atomic rules. To make it clear exactly what atomic rules look like, Table 6-3 shows a few examples from CSA.

Table 6-3. Atomic Rule Examples

| Type | Action | Application Class | Directive | Object |
|---|---|---|---|---|
| File Access Control | Deny | Web Servers (inetinfo.exe, apache.exe) | Write | HTML files (*.html) |
| Registry Access Control | Permit | Installation Applications (setup.exe, install.exe) | Write | Windows run keys (HKLM\software\microsoft\windows\currentversion\run, runonce, runonceex) |
| Network Access Control | Log Event | Web Browsers (iexplore.exe, mozilla.exe, netscape.exe, firefox.exe) | Make a connection | HTTP (TCP/80, TCP/443) |
| Application Control | Deny | All Applications (*.exe) | Execute | Command shells (cmd.exe, bash, csh, command.exe) |


Pattern-Based

Pattern-based policies differ from atomic rule-based policies primarily in the specificity of their triggering criteria. Atomic rules have broadly defined triggering criteria, such as any process trying to modify a system executable. Pattern-based rules, also known as signatures, fire on far more specific criteria. For example, a signature might trigger when it sees a string of data being delivered via the network that carries a known attack payload.

McAfee Entercept is a fine example of a product that combines two policy approaches to make up for the deficiencies of each. One of the difficulties with atomic rules is that they can stop an attack but cannot determine exactly what attack they have stopped. Also, atomic rules can suffer from high false positive rates. On the other hand, pattern-based policies cannot identify an attack pattern that has not been seen already.

Entercept combines both atomic rules and patterns. The atomic rules can stop new and unknown attacks. After a pattern is available for an attack, Entercept can identify it specifically and avoid false positives. This is called a hybrid approach.
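The hybrid approach can be sketched as a two-stage check: specific patterns first, so a known attack is named precisely, then broad atomic rules as a backstop for unknown attacks. The pattern data below is illustrative:

```python
# Known attack patterns: specific byte strings mapped to attack names (illustrative data).
PATTERNS = {
    b"GET /scripts/..%255c..": "IIS directory traversal",
}

def inspect(payload, violates_atomic_rule):
    """Hybrid check: patterns identify known attacks by name and avoid
    false positives; atomic rules catch new attacks without naming them."""
    for pattern, attack_name in PATTERNS.items():
        if pattern in payload:
            return ("deny", attack_name)
    if violates_atomic_rule:
        return ("deny", "unknown attack (atomic rule)")
    return ("permit", None)
```

The trade-off in the text is captured by the two branches: the pattern branch is precise but blind to anything unseen, while the atomic-rule branch stops novel attacks but cannot say what was stopped.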

Behavioral

Behavioral policies also make use of rules, but the rules are more involved: a rule contains more than one activity that must occur before a match can be made. Instead of saying that iexplore.exe and netscape.exe may not invoke executable code, a behavioral rule could say that "Web Browsers" cannot invoke executables. The rule is tripped only if a process both is a web browser and is attempting to invoke executable code.

The process is categorized as a web browser by remembering, or keeping state, on its prior activities. For example, if any process on the system makes an outbound HTTP or HTTPS connection, it is "tagged" as a web browser. It is then subject to the rule that prevents web browsers from invoking executables.

CSA is one product that uses behavioral rules extensively. To help clarify the differences between behavioral rules and other types, Table 6-4 shows a few examples from CSA.

Table 6-4. Behavioral Rule Examples

| Type | Action | Application Class | Verb | Object |
|---|---|---|---|---|
| Network Access Control | Add process to application class "Web Browsers" when | Any application | Make a connection | HTTP (TCP/80, TCP/443) |
| File Access Control | Deny | Web browsers (dynamic app class defined by previous Network Access Control rule) | Write | System Executables (\windows\system32\*.exe) |
| Network Access Control | Add process to application class "Network Server Applications" when | Any application | Accept a connection | TCP/*, UDP/* |
| Application Control | Deny when | Network applications (dynamic app class defined by previous Network Access Control rule) | Attempt to invoke | Command Shells (cmd.exe, bash, csh, command.exe) |


Note that half of these rules do not have an enforcement action. Instead, the process triggering the rule is "tagged" with a description such as Network Server by adding the process to the Network Servers Application class. The actual enforcement is performed by other rules that apply to the Network Servers Application class. This way, you have an implied "If I see X activity and I see Y activity, then trigger" statement in the enforcement rule.
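The tagging-plus-enforcement pairing can be sketched as an agent that keeps per-process state. The class names mirror the text; the method names and port list are illustrative, not any product's API:

```python
class BehavioralAgent:
    """Keep state on process behavior: a tagging rule adds processes to a
    dynamic application class; an enforcement rule then applies to that class."""

    def __init__(self):
        self.app_classes = {}  # process name -> set of dynamic class names

    def on_outbound_connect(self, process, port):
        # Tagging rule: any process making an HTTP(S) connection
        # is added to the "Web Browsers" dynamic application class.
        if port in (80, 443):
            self.app_classes.setdefault(process, set()).add("Web Browsers")

    def on_file_write(self, process, path):
        # Enforcement rule: web browsers may not write executables.
        # This only fires if the process was PREVIOUSLY tagged (activity X)
        # AND is now writing an executable (activity Y).
        if "Web Browsers" in self.app_classes.get(process, set()) and path.endswith(".exe"):
            return "deny"
        return "permit"
```

The implied "if X then Y" statement from the text is visible in the enforcement check: the deny requires both the earlier tagging event and the current write attempt.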

Access Control Matrix

The products that collect data using kernel modifications almost exclusively use access control matrixes. An access control matrix lists the labels given to users, processes, and resources on the system. Matrixes are a formal and well-tested way to implement policy.

You find two different types of access control matrix. The first is mandatory access control (MAC), which is used by Argus Systems PitBull and Trusted Solaris. This policy is based on the concept of least privilege. Least privilege means that a user or process should have access only to the resources needed to do its job. A MAC matrix, like the one shown in Table 6-5, has the user and process labels in the columns, and the rows contain the resources. A user or process is able to access only the resources in its row.

Table 6-5. Example Mandatory Access Control Matrix

| Resource | User | Process |
|---|---|---|
| Resource A | Y | Y |
| Resource B | N | N |
| Resource C | Y | N |
| Resource D | N | N |

A slightly more permissive type of matrix is the role-based access control (RBAC). SE Linux uses a combination of Type Enforcement, which is a flavor of MAC, and RBAC. RBAC assigns each user a set of roles and each role is authorized to access a set of objects.
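Both matrix flavors reduce to simple lookups. The sketch below illustrates the two checks with invented labels, users, and objects; it is not how PitBull, Trusted Solaris, or SE Linux store their policies:

```python
# MAC: a subject (user or process column) may access only the
# resources marked "Y" in its column; everything else is denied.
MAC_MATRIX = {
    "Resource A": {"user": True,  "process": True},
    "Resource B": {"user": False, "process": False},
}

def mac_allows(subject, resource):
    """Least privilege: absent entries default to deny."""
    return MAC_MATRIX.get(resource, {}).get(subject, False)

# RBAC: each user holds a set of roles; each role is authorized
# to access a set of objects.
USER_ROLES = {"alice": {"webadmin"}}
ROLE_OBJECTS = {"webadmin": {"/var/www", "/etc/httpd"}}

def rbac_allows(user, obj):
    return any(obj in ROLE_OBJECTS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

The "slightly more permissive" nature of RBAC shows in the indirection: granting a role grants every object that role is authorized for, rather than one matrix cell at a time.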

Taking Action

The type of resource being accessed is identified, which in turn defines what data should be gathered about the operation. The data is then gathered, the policy is consulted, and it is finally time to take action and enforce the decision listed in the policy. In the security guard example, the action might be to allow access, deny access, or some middle ground such as "detain for questioning."

For HIPS agents, the most obvious actions are permit or deny. However, like the security guard, the agent can take other actions. Not all products support the same actions, but here is a list of some of the possibilities:

  • Permit Allow the activity to occur.

  • Deny Do not allow the activity to occur.

  • Log Event This action is used in conjunction with permit or deny. For example, the activity might be allowed, but an event is also logged.

  • Drop packet Discard the network packet that triggered the rule or matched the pattern. Agents that are capable of network traffic inspection often have this action available.

  • Shun host Drop all network traffic from a particular host or set of hosts, and neither accept connections from nor make connections to them.

  • Query the user Ask the user whether the action should be allowed. An activity that is benign when the user performs it deliberately can be malicious when something automates it without the user's knowledge. Software installation, for example, is fine if the user is actually installing software but dangerous if something or someone malicious is installing software behind the scenes. In a situation like that, it's helpful to ask the user, "Are you installing software?"
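The action list above can be tied together in a small dispatch sketch. The action names and the user-notification hook are illustrative, not any product's interface:

```python
def enforce(action, event, notify_user=None):
    """Map a policy decision to an enforcement behavior (hypothetical agent hook).
    Returns True if the activity is allowed to proceed."""
    if action == "permit":
        return True
    if action == "deny":
        return False
    if action == "log":
        # Log Event is combined with permit or deny; here it permits and logs.
        print(f"event logged: {event}")
        return True
    if action == "query_user":
        # Ask the user, e.g. "Are you installing software?"
        # notify_user is a callback returning the user's yes/no answer.
        return bool(notify_user and notify_user(event))
    raise ValueError(f"unknown action: {action}")
```

Actions such as drop packet and shun host would live at the network-inspection layer rather than in a per-operation dispatcher like this one, which is why they are omitted from the sketch.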




Intrusion Prevention Fundamentals
ISBN: 1587052393