1.1 Software Threats and the Internet

     

Because you're reading this book, it's likely that you're responsible for the management of one or more sensitive hosts. If that's the case, you're aware that the threat level for Internet-based attacks has increased rapidly over the last several years and continues to do so. One authoritative barometer of this trend is the number of incident reports logged by the Computer Emergency Response Team Coordination Center (CERT/CC) of Carnegie Mellon University's Software Engineering Institute. Table 1-1 shows the number of incident reports for 2000 through 2003. During this four-year period, incident reports increased at an average annual rate of almost 85 percent. That is, the number of incidents has roughly doubled each year. If this rapid rate of increase continues, the year 2010 will see over 10 million incident reports.

Table 1-1. CERT/CC incident reports [1]

    Year    Reports
    ----    -------
    2000     21,756
    2001     52,658
    2002     82,094
    2003    137,529


[1] Source: http://www.cert.org/stats/cert_stats.html.

Of course, the number of incident reports is an indirect rather than direct measure of the threat level. So some might argue that the threat level is unchanged, and the increase in incident reports is due to system administrators reporting a greater proportion of incidents.

Insider Threats

Not all threats arise from software or the Internet. So-called insider threats, which come from local-area networks or proprietary wide-area networks, can present even more serious risks. Insiders often attack systems by means other than software vulnerabilities. For instance, employees in two work groups may collude to falsify database records to steal from their employer. Such threats generally cannot be prevented by purely technical means. Gartner research has estimated that 70 percent of security incident costs are related to breaches committed by insiders (Securing the Enterprise: The Latest Strategies and Technologies for Building a Safe Architecture, Gartner, 2003, available at http://www4.gartner.com/5_about/news/sec_sample.pdf).


While available evidence does suggest that system administrators have historically been reluctant to report incidents and have become less reluctant lately, evidence also indicates that the threat level is substantial and is rising rapidly. As an information assurance researcher, I monitor several class-C networks for familiar and novel attacks. My data shows that a typical host on these networks is subject to attack every few seconds. An unprotected host can succumb to attack in less time than it takes to install a typical operating system or software patch. Therefore, those for whom the confidentiality, integrity, and availability of information are important must invest significant effort to protect their hosts, especially those that connect to the Internet.

To effectively protect hosts against threats, it's important to understand the nature of the threats and why they are increasing. Three of the most significant factors that have led to the increased level of software threats are software complexity, network connectivity, and active content and mobile code.

1.1.1 Software Complexity

Because the human intellect is finite, software developers commit errors and leave omissions when implementing software systems. The resulting defects cause software systems to behave in unwanted or unanticipated ways when they are executed under untested or unforeseen conditions. Attackers can often exploit such misbehaviors to compromise systems. As a general principle, the more complex a system, the greater the intellectual demands its implementation imposes upon its developers. Hence, complex systems tend to have relatively large numbers of defects and to be more vulnerable to attack than smaller, simpler systems. Modern software systems, such as operating systems and standard applications, are large and complex. The Linux operating system, for instance, contains over 30 million source lines of code. And Red Hat Linux 7.1 was 60 percent larger than Red Hat Linux 6.2, which was released about one year earlier. [2] Therefore, contemporary systems are generally vulnerable to a variety of attacks and attack types, as explained in the following sections of this chapter.

[2] Source: http://www.dwheeler.com/sloc/.

1.1.1.1 Network connectivity

A second factor contributing to increased software threats is increased network connectivity and, in particular, the Internet itself. Connectivity provides a vector whereby attacks successfully launched against one networked host can be launched against others. The Internet, which interconnects the majority of networks in existence, is the ultimate attack vector. The recent popularity of consumer access to the Internet compounds the threat, since the computers of most consumers are not hardened to resist attack. Unsecured hosts easily fall prey to viruses and worms, many of which install backdoors or Trojan horses that enable compromised systems to be remotely accessed and controlled. Attackers can launch attacks by using these compromised hosts, thereby hiding their identity from their victims and from law enforcement. Many attackers operate across international borders, which complicates the work of law enforcement. Because law enforcement generally has been ineffective in identifying and apprehending all but a handful of notorious computer criminals, attackers have believed themselves to be beyond the reach of prosecution and have acted out their whims and criminal urges with impunity. The recent advent of wireless connectivity exacerbates the risks, as several of the security facilities commonly used on wireless networks implementing the IEEE 802.11 standard, such as Wired Equivalent Privacy (WEP), have turned out to be flawed and therefore vulnerable to attack.

1.1.1.2 Active content and mobile code

A third factor contributing to increased software threats is the use of active content and mobile code. Active content refers to documents that can trigger actions automatically, without the intervention, or possibly even the awareness, of their user. Ordinary ASCII-encoded documents are not active in this sense. However, a variety of modern document types can include active content, such as Adobe PDF documents, MS Office documents, Java applets, and web pages containing JavaScript code or using browser plug-ins. Even PostScript documents, which are widely thought to be safe, can contain active content. The danger of active content is that users generally perceive documents as benign, passive entities. However, malicious active content can compromise a user's computer as easily as any other form of malicious code. Opening, or even merely selecting and previewing, a document containing malicious active content may enable the malicious code to compromise a user's computer.

Cybercriminals Think Themselves Safe

One of my research projects involves the use of honeypots to study computer attacks and attackers. A honeypot is a specially instrumented system that is left open to attack. You can learn more about them at http://www.honeynet.org.

In 2003, I monitored intruders on one of my honeypots, who were discussing the likelihood of their apprehension and prosecution. In response to concerns expressed by one attacker, another, whom I'll call Peer, responded as follows:

Peer: well.... didn't give a ***. I'm not in US

Peer: and frankly my country doesn't have a cyberlaw :P

The final two characters in Peer's response, :P, are an Internet Relay Chat (IRC) device intended to represent the appearance of sticking out one's tongue, a common gesture of disdain.


Mobile code is code designed to be transported across a network for execution on remote hosts. Mobile code is often designed to extend the capabilities of software programs and, because of users' desires for flexible and convenient software, has become ubiquitous. Email clients and web browsers, for example, accept and process a wide variety of mobile code types, including Java and JavaScript programs, Microsoft ActiveX controls, and others.

Unfortunately, active content and mobile code provide more than flexibility and convenience to users: they provide attackers with a flexible and convenient attack vector. Many Internet attacks take the form of active content or mobile code delivered via email. When a user views an email message containing malicious code, the malicious code may seize control of the user's computer. Especially sophisticated malicious code may not even require user action. Such code may be capable of compromising a vulnerable computer in a fraction of a second, without presenting the computer's user with an opportunity to refuse the code permission to execute or even receive notification of the event.

For more information on malicious mobile code in the context of Microsoft Windows, see Malicious Mobile Code (O'Reilly).


1.1.2 Privilege Escalation

Most common operating systems, including Microsoft Windows and Unix/Linux, provide multiple levels of authorization, thereby restricting the operations that some programs or users are permitted to perform. Multiple levels of authorization act as bulwarks against the damage done when a program is compromised. Many common operating systems have two primary levels of authorization: one for ordinary users and one for the system administrator. A handful of operating systems, such as those used on PDAs and small computing devices, do not impose any such restrictions.

Restricting programs to the few functions they need to perform is called the principle of least privilege. Operating systems that lack multiple levels of authorization cannot implement the principle of least privilege and are therefore inherently quite insecure. When an attacker compromises a program running under a single-level operating system, the attacker gains the ability to perform any operation of which the system is capable. However, an attacker who compromises a program on a system that has multiple levels of authorization obtains only the privilege to perform those operations for which the program is authorized. If the program performs tasks related to system administration, the attacker may gain wide-ranging privileges. However, if the program performs relatively mundane tasks, the attacker may achieve relatively little beyond gaining the ability to disrupt operation of the compromised program. Nevertheless, an attacker who compromises even a program that confers few privileges may achieve a significant victory, because the attacker can use the privileges conferred by the program as a beachhead from which to attack programs conferring additional or greater privileges. Alternatively, the attacker may intentionally disrupt operation of the compromised program in what is called a denial of service.

The Apache OpenSSL Attack

A popular Internet attack during 2002 and 2003 was the Apache OpenSSL attack, directed against the Apache web server. Most users configure Apache to run as an ordinary user, rather than as the system administrator. So, attackers who successfully exploited a web server using the Apache OpenSSL attack generally obtained only limited privileges. However, at the time of the attack's popularity, Linux systems were vulnerable to a second attack, one targeting the ptrace facility used to trace and debug processes. Unlike the Apache web service, which is available to remote users, the ptrace facility is available only to local users. Successful compromise of an Apache web server enabled attackers to access the ptrace facility and exploit a ptrace defect that conferred full system administration privileges.


1.1.3 The Patch Cycle and the 0-Day Problem

When a software vendor learns that one of its products is vulnerable to attack, the vendor will generally issue a patch. Users can install the patch, which modifies the vulnerable product in a way intended to eliminate, or at least mitigate, the vulnerability. Occasionally, a patch alleged to eliminate a vulnerability will fail to actually do so. Worse yet, occasionally a patch will introduce one or more new vulnerabilities. So patches are sometimes less than ideal solutions. But, as a means of defending against software attacks, patches suffer from a more fundamental flaw.

The essential problem with patches is that they are a reactive, rather than proactive, response. Patching is thus a continual process consisting of the following steps, known as the patch cycle :

  1. A vulnerability in a software product is discovered.

  2. The product's vendor prepares and publishes a patch for the vulnerability.

  3. Users acquire, authenticate, test, and install the patch.

It may seem odd that security researchers publish vulnerabilities rather than privately inform vendors of them, because publication of a vulnerability may help attackers discover a way to exploit it. Indeed, most security researchers do prefer to inform vendors of vulnerabilities privately rather than publicly. But many vendors consistently fail to release patches in a timely manner. And some vendors fail even to acknowledge in a timely manner vulnerability reports submitted privately by researchers. So, many security researchers believe that it's necessary to force vendors to fix their products and therefore elect to publish vulnerabilities. In an effort to avoid giving attackers the opportunity to exploit vulnerabilities, some researchers first notify the vendor privately and give it a chance to release a patch before the vulnerability is made public.

Vendors can supply patches only for known vulnerabilities, so a fully patched computer remains vulnerable to attacks that are unknown to the vendor. Moreover, vendors require time to produce patches even for known vulnerabilities. So fully patched computers also remain vulnerable to known attacks for which vendors have not yet released patches. The interval between publication of a vulnerability and availability of a related patch is a time of especially high vulnerability. During the interval, vendors race to produce effective patches, while attackers race to produce effective exploits. This race generally favors the attackers, who do not have to test and analyze their exploits the same way that vendors must test and analyze their patches. So publication of a vulnerability amounts to initiation of a countdown to the widespread availability and use of exploits targeting the vulnerability.

Moreover, vulnerabilities are sometimes privately known and exploited well in advance of their publication. Vulnerabilities for which no patch is yet available are known as 0-day vulnerabilities or simply 0-days ("oh days"). The same term is often used to refer to attacks that target 0-day vulnerabilities. Attacks that target 0-days are a particularly potent form of attack, because even systems whose administrators have assiduously kept current with all vendor patches are vulnerable to them. Fortunately, most attacks do not target 0-days. The National Institute of Standards and Technology (NIST) cites CERT data indicating that 95 percent of attempted network intrusions target vulnerabilities for which patches are available. [3] However, patching is ineffective against the remaining 5 percent of network attacks, which target 0-day vulnerabilities.

[3] Procedures for Handling Security Patches, NIST Special Publication 800-40, p. 2, available at http://csrc.nist.gov/publications/nistpubs/800-40/sp800-40.pdf.

1.1.4 Protecting Against 0-Days

Ordinary computer users may be content merely to patch their computers regularly, a practice that can protect them against 95 percent of attempted network intrusions. However, administrators of sensitive systems generally cannot afford to allow their systems to remain vulnerable to the 5 percent of attempted intrusions that target 0-day vulnerabilities. Although patching is, by definition, an ineffective defense against attacks targeting 0-day vulnerabilities, several types of defenses are more or less effective in protecting against them.

Defense by Layers

No software is known to be free of defects, and no means of producing defect-free software is known. Thus, no means of network or host defense that depends on the correct operation of software can be fully reliable. Hence, practical defense consists of implementing multiple defensive measures in hopes that if one defensive measure fails, one or more other measures will prove effective. This principle is known as defense by layers .

A corollary principle holds that imperfections in a defense mechanism do not preclude its use, since all defense mechanisms are considered to be imperfect. Instead, rational decisions concerning which defense mechanisms an organization should deploy are based on risk assessment and cost-benefit analysis.


1.1.5 Network and Host Defenses

Because hosts are generally subject to a variety of vulnerabilities for which no patch exists or has been installed, hosts must be protected against attack. Two basic sorts of defenses are employed:


Network defenses

A defensive facility that protects an entire network


Host defenses

A defensive facility that protects a single host

1.1.5.1 Network defenses

Network defenses are often more convenient to deploy than host defenses, because a single network defense facility defends all hosts on a network. Host defenses, in contrast, must be implemented on each host to be protected. The two most widely used network defenses are firewalls and network intrusion detection systems. Neither is generally effective in protecting against 0-day attacks.

Network firewalls

Firewalls restrict the traffic flowing into and out of a network. The most basic sort of firewall restricts traffic by IP address. More sophisticated firewalls allow only designated application-layer protocols or requests having a specified form. For instance, some firewalls can block web client access to malformed URLs of the sort often associated with attacks. However, most currently deployed firewalls do not examine the application layer of traffic. Such firewalls are generally ineffective in protecting against 0-day attacks launched against ports to which the firewall is configured to allow access.
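
To make the idea concrete, here is a minimal C sketch, certainly not production firewall code, of the address test at the heart of basic packet filtering. The allowed network, 192.0.2.0/24, is a placeholder drawn from the documentation address range; real firewalls, such as Linux's netfilter, apply administrator-supplied rules of this kind in the kernel.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Return nonzero if src_ip falls within the allowed 192.0.2.0/24 network. */
    static int permitted(const char *src_ip)
    {
        struct in_addr src, net;
        uint32_t mask = htonl(0xFFFFFF00);       /* /24 netmask, network byte order */

        if (inet_pton(AF_INET, src_ip, &src) != 1)
            return 0;                            /* unparseable address: drop */
        inet_pton(AF_INET, "192.0.2.0", &net);   /* allowed network (placeholder) */

        return (src.s_addr & mask) == (net.s_addr & mask);
    }

    int main(void)
    {
        const char *samples[] = { "192.0.2.45", "203.0.113.9" };

        for (int i = 0; i < 2; i++)
            printf("%-12s -> %s\n", samples[i],
                   permitted(samples[i]) ? "accept" : "drop");
        return 0;
    }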

Network intrusion detection and prevention systems

Intrusion detection systems don't prevent attacks from succeeding; they merely detect them. To do so, they monitor network traffic and generate an alert if they recognize an attack. They typically use a database of signatures or rules to recognize the attacks. Thus, an intrusion detection system may not generate an alert for a particular 0-day attack, since the attack may not match any rule or signature within the system's database. Some intrusion detection systems do not rely on a database of signatures or rules. Instead they alert the user to unusual traffic. However, anomaly-based intrusion detection systems are not yet in widespread use.
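
The following C sketch illustrates the signature-matching idea in miniature. The signature strings are invented stand-ins for the large rule databases that real systems such as Snort maintain, which is precisely why a 0-day payload may match nothing.

    #include <stdio.h>
    #include <string.h>

    /* Invented example signatures; real systems maintain thousands of rules. */
    static const char *signatures[] = {
        "/bin/sh",            /* shell string common in old exploit payloads */
        "GET /default.ida?",  /* request fragment associated with Code Red */
    };

    static void inspect(const char *payload)
    {
        for (size_t i = 0; i < sizeof(signatures) / sizeof(signatures[0]); i++)
            if (strstr(payload, signatures[i]))
                printf("ALERT: payload matched signature \"%s\"\n", signatures[i]);
    }

    int main(void)
    {
        inspect("GET /default.ida?NNNNNN");   /* matches a known signature */
        inspect("GET /index.html");           /* novel traffic: no alert */
        return 0;
    }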

An intrusion prevention system attempts to detect and prevent attacks. However, like anomaly-based intrusion detection systems, intrusion prevention systems are not yet in widespread use.

1.1.5.2 Host defenses

Host defenses may be more effective than network defenses in detecting or preventing 0-day attacks. Host defenses are more varied than network defenses. Some popular host defenses are:

  • Host firewalls

  • Host intrusion detection systems

  • Logging and auditing

  • Memory protection

  • Sandboxes

  • Access control lists

Host firewalls and intrusion detection systems

Firewalls and intrusion detection systems can be deployed on individual hosts as well as at the network level. Because host-based firewalls operate similarly to network-based firewalls, they are seldom more effective than network-based firewalls in protecting against 0-day attacks. Host-based intrusion detection systems are sometimes more effective in recognizing novel attacks than their network-based cousins. However, like their cousins, they detect rather than prevent attacks, so they are not an adequate solution to the 0-day problem.

Logging and auditing

Logs and other audit trails can provide indications or clues that an attack has succeeded. However, properly monitoring logs requires considerable effort, and many system administrators fail to take the time to regularly review logs. But even when logs are regularly monitored, they merely detect rather than prevent attacks.
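
As a small illustration of this kind of review, the C program below counts failed-login lines in an authentication log. The /var/log/auth.log path and the "Failed password" pattern follow common Linux and OpenSSH conventions, but both are assumptions that vary by distribution and service.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *logfile = "/var/log/auth.log";   /* assumed log location */
        char line[1024];
        long failures = 0;

        FILE *fp = fopen(logfile, "r");
        if (!fp) {
            perror(logfile);
            return 1;
        }

        /* Count lines recording failed login attempts. */
        while (fgets(line, sizeof(line), fp))
            if (strstr(line, "Failed password"))
                failures++;
        fclose(fp);

        printf("%ld failed login attempts recorded in %s\n", failures, logfile);
        return 0;
    }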

Memory protection

One technique that is often effective in protecting against 0-day attacks is memory protection. Here are some of the most popular memory protection schemes:


Stack canaries

Based on a concept originated by Crispin Cowan, a stack canary is a memory word containing a designated value, pushed onto the stack when a routine is called. Before control returns to the calling routine, the called routine verifies that the value of the stack canary has not been modified. Buffer overflow attacks that target the stack are likely to modify the value of the stack canary and therefore may be detected.
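
The C fragment below is a hand-rolled illustration of the idea only; in practice the canary is inserted and checked by code the compiler generates (for example, when building with GCC's -fstack-protector), and the sentinel value used here is arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CANARY 0xDEADBEEFu              /* arbitrary sentinel value */

    static void copy_input(const char *input)
    {
        unsigned int canary = CANARY;       /* sentinel placed among the locals */
        char buffer[16];

        /* A length-checked copy; an unchecked strcpy() here could overrun
         * buffer, clobber the canary, and be caught by the test below. */
        strncpy(buffer, input, sizeof(buffer) - 1);
        buffer[sizeof(buffer) - 1] = '\0';

        if (canary != CANARY) {             /* verify before returning */
            fprintf(stderr, "stack smashing detected\n");
            abort();
        }
        printf("copied: %s\n", buffer);
    }

    int main(void)
    {
        copy_input("hello, canary");
        return 0;
    }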


Nonexecutable stack

Buffer overflow attacks that target the stack generally inject code into the stack and compromise the target host by executing the injected code. Since most programs don't require that stack contents be executable, buffer overflow attacks can be complicated or even thwarted by preventing execution of code residing on the stack. Many common microprocessors, including those having the Intel x86 architecture, can be configured to prohibit execution of stack contents.
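
As a rough sketch, assuming a Linux system, the following C program maps a page of memory without PROT_EXEC; on a processor with the NX/XD bit, jumping into such a page raises SIGSEGV. The same policy, applied to stack pages by the kernel and linker (for example, via the GNU linker's -z noexecstack option), keeps code injected onto the stack from running.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = (size_t)sysconf(_SC_PAGESIZE);

        /* Writable but not executable: data can be stored here, not run. */
        unsigned char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        memset(page, 0x90, len);   /* fill with x86 NOPs, as injected code might */
        printf("page mapped read/write, but not execute, at %p\n", (void *)page);

        /* ((void (*)(void))page)();   would fault on an NX-capable system */

        munmap(page, len);
        return 0;
    }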


Random assignment of memory

Many exploits depend on knowledge of the specific memory locations occupied by the components of vulnerable programs. Specially modified compilers or loaders can randomize the addresses of memory into which program components are loaded, thereby breaking exploits that depend on fixed memory assignments.
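
A quick way to observe the effect, assuming a Linux system with address-space layout randomization enabled, is to run the small C program below several times and compare the printed addresses from one run to the next.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int   on_stack;                    /* stack location varies per run */
        void *on_heap = malloc(16);        /* heap location varies per run */
        static int in_data;                /* data-segment address; varies when the
                                              binary is built position-independent */

        printf("stack variable : %p\n", (void *)&on_stack);
        printf("heap allocation: %p\n", on_heap);
        printf("static variable: %p\n", (void *)&in_data);

        free(on_heap);
        return 0;
    }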

Well-designed and well-implemented memory protection schemes tend to be effective even against attacks on 0-day vulnerabilities. However, some specific implementations of memory protection schemes can be circumvented relatively easily. In other cases, such as that of Microsoft's "security error handler" function added to its C++ compiler, the scheme itself is the source of vulnerabilities. [4]

[4] See "Microsoft Compiler Flaw Technical Note," by Chris Ren, Michael Weber, and Gary McGraw, and "Cigital Warns of Security Flaw in Microsoft .NET Compiler," both available at http://www.cigital.com/news/index.php?pg=art&artid=70.

SELinux does not incorporate memory protection facilities. However, SELinux consistently interoperates well with such facilities. Therefore, SELinux users can generally employ memory protection features when their operating system provides them.

Sandboxes

Yet another approach to defending hosts against 0-day attacks is running programs, especially services, within contexts called sandboxes that limit their capabilities. Sandboxing is common under Unix and Unix-like operating systems such as Linux, which provide the chroot facility for creating such sandboxes. Sandboxing is also used for Java programs running within popular web browsers.

Sandboxing generally doesn't prevent the exploitation of an 0-day vulnerability. But, the attacker who successfully exploits an 0-day vulnerability in a sandboxed program gains access to only the capabilities afforded by the sandbox. Therefore, the sandbox limits the damage resulting from a successful attack.

However, sandboxes are software entities and thus just as imperfect as any other software: an attacker who gains access to a sandbox may be able to attack and escape it. In general, under Unix and Unix-like operating systems, it's possible for attackers to escape chroot sandboxes that contain programs running as the root user. However, sandboxes that contain programs running as a non-root user are less vulnerable. SELinux provides a special sort of sandbox, known as a domain, that is very difficult for attackers to escape, even if the domain contains programs running as the root user.
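
As a rough sketch, assuming the program is started as root and that the jail directory already exists, the following C code shows the usual sequence: enter the jail with chroot(2), then drop to an unprivileged user so that the root-only escape techniques just mentioned no longer apply. The jail path and the numeric IDs are placeholders; a real service would look them up rather than hard-code them.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *jail = "/var/jail/example";  /* hypothetical jail directory */
        uid_t unpriv_uid = 1001;                 /* placeholder unprivileged IDs */
        gid_t unpriv_gid = 1001;

        /* Enter the jail and make it the current directory. */
        if (chroot(jail) != 0 || chdir("/") != 0) {
            perror("chroot");
            return EXIT_FAILURE;
        }

        /* Drop group privileges before user privileges, and verify both. */
        if (setgid(unpriv_gid) != 0 || setuid(unpriv_uid) != 0) {
            perror("drop privileges");
            return EXIT_FAILURE;
        }

        /* From here on, the process sees only the jail's filesystem and has
         * no root privileges with which to escape it. */
        printf("running as uid %d inside %s\n", (int)getuid(), jail);
        return EXIT_SUCCESS;
    }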

Access-control lists

An especially flexible form of sandbox is provided by access-control mechanisms. In their simplest form, access-control mechanisms are found in every multiuser operating system, where they protect the files and resources owned by one user from unauthorized access by other users.

Access-control mechanisms are implemented by associating access-control lists (ACLs) with objects (e.g., files and directories), thereby limiting access to the protected objects. Essentially, the most familiar form of an ACL consists of three elements:

  • A list of operations

  • A list of subjects (users)

  • A mapping that specifies which subjects (users) are authorized to perform which operations on the protected object
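
The toy C sketch below models just those three elements. The users, operations, and object name are invented for illustration; real ACL enforcement happens in the kernel and filesystem, not in application code.

    #include <stdio.h>
    #include <string.h>

    struct acl_entry {
        const char *subject;      /* user the entry applies to */
        const char *operation;    /* operation that subject may perform */
    };

    /* ACL attached to one hypothetical protected object, "payroll.db". */
    static const struct acl_entry payroll_acl[] = {
        { "alice", "read"  },
        { "alice", "write" },
        { "bob",   "read"  },
    };

    static int acl_permits(const struct acl_entry *acl, size_t n,
                           const char *subject, const char *operation)
    {
        for (size_t i = 0; i < n; i++)
            if (strcmp(acl[i].subject, subject) == 0 &&
                strcmp(acl[i].operation, operation) == 0)
                return 1;
        return 0;                 /* no matching entry: access denied */
    }

    int main(void)
    {
        size_t n = sizeof(payroll_acl) / sizeof(payroll_acl[0]);

        printf("alice write payroll.db: %s\n",
               acl_permits(payroll_acl, n, "alice", "write") ? "allowed" : "denied");
        printf("bob   write payroll.db: %s\n",
               acl_permits(payroll_acl, n, "bob", "write") ? "allowed" : "denied");
        return 0;
    }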

By associating an ACL with a file, for example, you can specify the users permitted to access the file. The familiar Unix chmod command accomplishes exactly this result. Because files represent many sorts of system objects, such as devices and FIFOs, this simple mechanism enables system administrators and users to limit access to most system objects. ACLs can also specify access by subjects other than users, such as programs. Although several commercial operating systems based on Unix include ACLs, Linux does not. SELinux, on the other hand, goes beyond ACLs in providing a special type of access control known as mandatory access control (MAC). The following section explains MAC and contrasts it with the type of access control commonly used by Linux.

1.1.6 Discretionary and Mandatory Access Control

Most operating systems have a built-in security mechanism known as access control . Two main types of access control are commonly used:


Discretionary Access Control (DAC)

Discretionary access controls are specified by the owner of an object, who can apply, modify, or remove them at will.


Mandatory Access Control (MAC)

Mandatory access controls are specified by the system. They cannot be applied, modified, or removed by users, except perhaps by means of a privileged operation.

1.1.6.1 Discretionary access control

Linux employs discretionary access control. Under discretionary access control, a program runs with the permissions of the user executing it. For instance, if I log in as the user mccartyb and execute the program mutt to read my email, the program executes under my user ID and is capable of performing any operation that I'm permitted to perform. In particular, it can read and write files in my home directory and its subdirectories, such as the sensitive files holding SSH information. Of course, mutt doesn't need to access such files and generally wouldn't do so. But, by exploiting a vulnerability in mutt , an attacker may coerce mutt to access or modify sensitive files, thereby compromising the security of my user account.
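
To illustrate the point, the short C program below, a sketch rather than part of any real mail client, reports the user and group IDs it runs under and checks whether discretionary access control would let it read a private SSH key in the invoking user's home directory; the ~/.ssh/id_rsa path is merely a typical example of such a sensitive file.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Under DAC, this program holds whatever permissions the invoking
         * user holds: nothing more, but crucially nothing less. */
        printf("running with uid %d, gid %d\n", (int)getuid(), (int)getgid());

        /* Build a path to a sensitive file in the invoking user's home
         * directory; ~/.ssh/id_rsa is just a typical example. */
        const char *home = getenv("HOME");
        char path[4096];
        snprintf(path, sizeof(path), "%s/.ssh/id_rsa", home ? home : "");

        if (access(path, R_OK) == 0)
            printf("DAC would let this program read %s\n", path);
        else
            printf("no readable %s here (or no such file)\n", path);
        return 0;
    }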

Obviously, mutt doesn't need to be able to perform every operation that I'm permitted to perform. It has a well-defined purpose that requires only a handful of permissions, mostly related to network access. Granting mutt a broad array of permissions is inconsistent with the principle of least privilege. From the standpoint of the principle of least privilege, giving a program all the privileges of the user running the program is wretchedly excessive and highly risky.

Under discretionary access control, a compromised program jeopardizes every object to which the executing user has access. The risk is particularly great for programs that run as the root user, because the root user has unrestricted access to system files and objects. If an attacker can compromise a program running as the root user, the attacker can often manage to subvert the entire system.

Therefore, discretionary access control provides a rather brittle sort of security. When subjected to a sufficiently potent attack, discretionary access control shatters, giving the attacker a virtually free hand.

1.1.6.2 Mandatory access control

SELinux supplements the discretionary access control mechanism of Linux with mandatory access control. Under mandatory access control, each program runs within a sandbox that limits its permissions. A compromised program jeopardizes only the permissions available to the program. These are generally a small subset of all the permissions afforded the user executing the program.
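
The toy C sketch below models that idea only; it is not SELinux's policy language or API. The policy table stands in for rules fixed by the system, keyed by the program's domain and the object's type label rather than by the invoking user, and the domain and type names are invented for the example.

    #include <stdio.h>
    #include <string.h>

    struct mac_rule {
        const char *domain;      /* domain the program runs in */
        const char *type;        /* type label on the object */
        const char *operation;   /* operation the rule allows */
    };

    /* System-defined policy: the mail client's domain may touch only mail. */
    static const struct mac_rule policy[] = {
        { "mail_client_d", "mail_spool_t", "read"  },
        { "mail_client_d", "mail_spool_t", "write" },
        /* no rule lets mail_client_d touch ssh_key_t at all */
    };

    static int mac_allows(const char *domain, const char *type, const char *op)
    {
        for (size_t i = 0; i < sizeof(policy) / sizeof(policy[0]); i++)
            if (!strcmp(policy[i].domain, domain) &&
                !strcmp(policy[i].type, type) &&
                !strcmp(policy[i].operation, op))
                return 1;
        return 0;                /* anything not explicitly allowed is denied */
    }

    int main(void)
    {
        printf("mail client read mail spool: %s\n",
               mac_allows("mail_client_d", "mail_spool_t", "read") ? "allowed" : "denied");
        printf("mail client read ssh key   : %s\n",
               mac_allows("mail_client_d", "ssh_key_t", "read") ? "allowed" : "denied");
        return 0;
    }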

Generally speaking, mandatory access controls are much more effective than Unix-style discretionary access controls, for the following principal reasons:

  • Mandatory access controls are often applied to agents other than users, such as programs, whereas Unix-style discretionary access controls are generally applied only to users.

  • Mandatory access controls cannot be overridden by the owner of the object to which they apply.

  • Mandatory access controls may be applied to objects not protected by ordinary Unix-style discretionary access controls, such as network sockets and processes.

Thus, the mandatory access control facilities of SELinux provide stronger security than the discretionary access control facilities of Linux. Under SELinux, programs are generally assigned privileges according to the principle of least privilege; that is, they're generally granted permission to perform only a limited set of necessary operations. Therefore, an attacker who compromises a program running as the root user on an SELinux system does not generally gain an effective beachhead from which to successfully attack the entire system. Instead, the attacker gains control of only the compromised program and a handful of related operations.


