Agents are relatively new in intrusion detection and prevention, having been developed in the mid-1990s. As mentioned previously in this chapter, their primary function is to analyze input provided by sensors. Although many definitions exist, we'll define an agent as a software component that runs independently and analyzes the input sensors provide.
Our definition states that agents run independently. This means that if one agent crashes or is impaired in some manner, the others will continue to run normally (although they may not receive as much data as before). It also means that agents can be added to or removed from an IDS or IPS as needed; in a small intrusion-detection or intrusion-prevention effort, perhaps only a few agents will be deployed.
Although each agent runs independently on the host on which it resides, agents often cooperate with each other. Each agent may receive and analyze only one part of the data regarding a particular system, network, or device; however, agents normally share the information they have obtained with one another over the network using a particular communication protocol. When an agent detects an anomaly or policy violation (such as a brute-force attempt to su to root, or a massive flood of packets over the network), in most cases it immediately notifies the other agents of what it has found. This new information, combined with the information another agent already has, may lead that agent to report that an attack on another host has also occurred.
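The exact message format and protocol vary by product, but the idea of an agent broadcasting a structured alert to its peers can be sketched as follows. All names here (`AgentAlert`, its fields, the JSON wire format) are illustrative assumptions, not any vendor's actual protocol:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical alert record an agent might broadcast to peer agents.
@dataclass
class AgentAlert:
    source_agent: str   # agent that detected the event
    host: str           # host on which the event occurred
    event: str          # e.g. "brute-force su attempt"
    severity: int       # 1 (low) .. 5 (critical)

def encode_alert(alert: AgentAlert) -> bytes:
    """Serialize an alert for transmission to peer agents."""
    return json.dumps(asdict(alert)).encode("utf-8")

def decode_alert(payload: bytes) -> AgentAlert:
    """Reconstruct an alert received from a peer agent."""
    return AgentAlert(**json.loads(payload.decode("utf-8")))

alert = AgentAlert("agent-7", "db01", "brute-force su attempt", 4)
wire = encode_alert(alert)
assert decode_alert(wire) == alert
```

A receiving agent would combine such alerts with its own local observations before deciding whether to raise an alarm of its own.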
Agents sometimes generate false alarms, too, thereby misleading other agents, at least to some degree. The problem of false alarms is one of the most persistent difficulties in intrusion detection and prevention.
The use of agents has proven to be one of the greatest breakthroughs in intrusion detection and prevention. Advantages include the following:
Adaptability Having a number of small agents means that any one of them can be modified or replaced without disrupting the rest of the IDS or IPS.
Efficiency The simplicity of most agent implementations keeps their demands on system resources modest.
Resilience Agents can and do maintain state information even if they fail or their data source fails.
Independence Agents are implemented to run independently, so if you lose one or two, the others will not be affected.
Scalability Agents can readily be added to an IDS or IPS, allowing the deployment to grow with the environment it monitors.
Mobility Some agents (believe it or not) may actually move from one system to another; agents might even migrate around networks to monitor network traffic for anomalies and policy violations.
There are some drawbacks to using agents, too:
Resource allocation Agents cause system overhead in terms of memory consumption and CPU allocation.
False alarms False alarms from agents can cause a variety of problems.
Time, effort, and resources needed Agents must be modified according to an organization's requirements, they must be maintained and updated, and staff must be trained to deploy and operate them.
Potential for subversion A compromised agent is a serious liability, because it can feed misleading data to the other agents and to the manager, undermining the IDS or IPS as a whole.
At a bare minimum, an agent needs to include the following:
A communications interface used to communicate with other components of IDSs and IPSs
A listener that waits in the background for data from sensors and messages from other agents and then receives them
A sender that transmits data and messages to other components, such as other agents and the manager component, using established means of communication, such as network protocols
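The three bare-minimum components above can be sketched in a few lines. This is a toy model under stated assumptions: `MinimalAgent`, its queues, and the `failed_logins` analysis rule are all hypothetical, with in-memory queues standing in for real network communication:

```python
import queue

class MinimalAgent:
    """Sketch of the bare-minimum agent components described above:
    a listener that receives sensor data and peer messages, and a
    sender that forwards results to other components."""

    def __init__(self, name: str):
        self.name = name
        self.inbox = queue.Queue()    # stands in for a listening socket
        self.outbox = queue.Queue()   # messages bound for peers/manager

    def receive(self, message: dict) -> None:
        """Listener: accept data from sensors or other agents."""
        self.inbox.put(message)

    def process_one(self) -> None:
        """Analyze one queued item and, if noteworthy, notify peers."""
        message = self.inbox.get()
        if message.get("failed_logins", 0) >= 5:   # toy analysis rule
            self.outbox.put({"from": self.name,
                             "alert": "possible brute-force attempt",
                             "detail": message})

agent = MinimalAgent("agent-1")
agent.receive({"host": "web01", "failed_logins": 7})
agent.process_one()
print(agent.outbox.get()["alert"])   # possible brute-force attempt
```

A production agent would replace the queues with the network protocols mentioned earlier and the toy rule with real analysis logic.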
Agents can also provide a variety of additional functions. Agents can, for example, perform correlation analyses on input received from a wide range of sensors. In some agent implementations, the agents go further still, initiating responses such as alerting or blocking on their own.
Although the types of additional functions that agents can perform may sound impressive, "beefing up" agents to do more than simple analysis is not necessarily wise. The more an agent does, the more complex it becomes, the more resources it consumes, and the more attractive a target it presents.
Decisions about deployment of agents are generally easier to make than decisions concerning where to deploy sensors. Each agent can and should be configured to the operating environment in which it runs. In host-based intrusion detection, each agent generally resides on the host it monitors.
In network-based intrusion detection, agents are generally placed in two locations:
Where they will be most efficient Placing agents close to the sensors whose data they analyze reduces network traffic and the delay between detection and analysis.
Where they will be sufficiently secure Security of agents is our next topic, so suffice it to say here that placing agents in secure zones within networks, or at least behind one or more firewalls, is essential.
Finally, tuning agents is a complicated issue. It is highly desirable that each agent produce as high a hit rate (positive recognition rate) as possible, while also producing as low a false-alarm rate as possible. When agents are first deployed, however, they usually perform far from optimally, yielding output with excessively high false-alarm rates. Fortunately, it is possible to reduce the false-alarm rate by eliminating certain attack signatures from an analyzer, or by adjusting the statistical criteria for an attack to be more stringent.
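The trade-off involved in tightening those statistical criteria can be made concrete with a small sketch. The anomaly scores and threshold below are invented for illustration; real agents use far richer criteria:

```python
def evaluate_threshold(scores_attack, scores_benign, threshold):
    """Compute hit rate and false-alarm rate for a given anomaly
    threshold: events scoring at or above it are flagged as attacks."""
    hits = sum(s >= threshold for s in scores_attack)
    false_alarms = sum(s >= threshold for s in scores_benign)
    return hits / len(scores_attack), false_alarms / len(scores_benign)

# Toy anomaly scores: real attacks tend to score high, benign traffic low,
# but the two distributions overlap, which is why tuning is hard.
attack_scores = [0.9, 0.8, 0.7, 0.6]
benign_scores = [0.1, 0.2, 0.3, 0.6, 0.7]

# A permissive threshold catches every attack but raises false alarms;
# a stricter one trades a few misses for a much lower false-alarm rate.
print(evaluate_threshold(attack_scores, benign_scores, 0.5))   # (1.0, 0.4)
print(evaluate_threshold(attack_scores, benign_scores, 0.65))  # (0.75, 0.2)
```

Tuning an agent amounts to choosing the operating point on this curve that best fits the organization's tolerance for misses versus false alarms.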
The threat of subversion of agents is a major issue. Agents are typically much more capable than sensors, so an attacker who gains control of one gains a powerful foothold inside the IDS or IPS itself.
Fortunately, the way agents are typically deployed provides at least some level of defense against attacks directed at them. Agents usually run on hardened internal hosts, and because they are distributed and independent, compromising one agent does not compromise the rest.
Nevertheless, agents need to be secured by doing many of the same things that must be done to protect sensors—hardening the platform on which they run, ensuring that they can be accessed only by authorized persons, and so on. Here are a few guidelines:
Dedicate the hardware platform Dedicating the hardware platform on which agents run to agent functionality is essential. If other applications run on the same platform as an agent, an attacker may be able to exploit a vulnerability in one of those applications to gain control of the platform, and thus of the agent itself.
Encrypt and sign traffic Because of the high importance of agent security, encrypting all traffic between agents, and possibly also between agents and other components, is advisable. Including a digital signature that authenticates the sending agent also helps, guarding against spoofed or tampered messages.
Filter input Additionally, to guard against denial-of-service attacks, filters that prevent excessive and repetitive input from being received should be deployed. Many vendor agent implementations have this kind of filtering capability built in.
Other interesting approaches to agent security include using APIs (application programming interfaces) to control data transfer between agents. In this approach, one of the most important considerations is sanitizing the data transferred between agents, so that specially crafted input cannot exploit vulnerabilities to gain control of agents or the platforms on which they run.
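Sanitization of inter-agent data usually means strict validation: reject anything that does not match the expected schema rather than trying to repair it. The schema below (`source`, `event`, `severity` fields and the allowed event names) is a hypothetical example, not a real agent API:

```python
# Hypothetical schema for inter-agent messages; anything that does not
# match exactly is rejected before it reaches the agent's logic.
ALLOWED_EVENTS = {"brute-force", "port-scan", "packet-flood"}

def sanitize_message(raw: dict) -> dict:
    """Validate a message passed between agents, rejecting unexpected
    fields, types, and values rather than attempting to repair them."""
    if set(raw) != {"source", "event", "severity"}:
        raise ValueError("unexpected or missing fields")
    if not isinstance(raw["source"], str) or len(raw["source"]) > 64:
        raise ValueError("bad source")
    if raw["event"] not in ALLOWED_EVENTS:
        raise ValueError("unknown event type")
    if not isinstance(raw["severity"], int) or not 1 <= raw["severity"] <= 5:
        raise ValueError("severity out of range")
    return raw

ok = sanitize_message({"source": "agent-2", "event": "port-scan", "severity": 3})
try:
    sanitize_message({"source": "agent-2", "event": "drop table", "severity": 3})
except ValueError as e:
    print("rejected:", e)   # rejected: unknown event type
```

Because the check is a whitelist rather than a blacklist, crafted payloads aimed at a parsing vulnerability are dropped by default instead of being passed through to the receiving agent.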