1.2. Identifying Risks


We are going to consider a variety of risks that we might face and then talk about how we can respond to them. In this chapter, we lay the groundwork for many general risks. In each of the application-specific chapters, we will identify application-specific risks that provide concrete examples of the concepts introduced here. We are not going to give you advice on how to identify lightning strikes as a risk and use FreeBSD or OpenBSD features to help protect your data against them. Instead, we think mostly about hackers and other malicious adversaries and how you can identify and assess the damage they can wreak on your infrastructure.

1.2.1. Attacks

An attack against a system is an intentional attempt to bypass system security controls or organizational policies in order to affect the operation of the system (an active attack) or to gain access to information (a passive attack). Attacks can be classified as insider attacks, in which someone within the organization who is authorized to access a system uses it in an unauthorized way, or outsider attacks, which originate outside the organization's security perimeter, perhaps on the Internet at large.

Attacks motivate system administrators, supervisors, and organizational leaders to care about security, but where's the damage in an attack? What kind of major assault does it take to wreak the havoc that gets all these people worked up? Despite our definition of attack, we still do not have enough information to determine, for a given system, where the risks lie.

In order for active and passive attacks to succeed, something must be at fault. Attacks necessarily leverage fundamental behavioral problems in software, improper configuration and use of software, or both. In this chapter, we examine these classes of attacks, including the special case of the denial of service (DoS) attack.

Remember that while attacks are necessarily intentional, system disruptions due to software failure or misconfiguration probably are not. In either case, the end result is the same: system resources are abused. In the latter case, however, this does not come as a consequence of someone's malevolence. Fortunately, defending against attacks tends to help defend against unintentional disruptions, too.


At a slightly higher level, it is also important to distinguish between attacks that originate locally on the system and those that may be carried out over a network. Keep in mind that, although this distinction is important when thinking about mitigating risk, local attacks can quickly become network-exploitable if the system can be breached over the network in any way.

In order to provide system security for a single DNS server, a firewall, or a farm of web servers, it is important to understand what you are actually defending against. By understanding the classes of attacks you will come under, and the technology behind these attacks, you will be in a better position to maintain a secure configuration.

1.2.2. Problems in Software

Attacks that exploit problems in software are the most common. Security forums such as Bugtraq and Full Disclosure focus on software problems. These are fairly high-volume lists, and over the years they have recorded thousands of vulnerabilities in software. The kinds of vulnerabilities recorded vary as widely as their associated impact.

In order to be able to understand a security advisory and react accordingly, it is imperative that you understand the common types of software-based vulnerabilities. This knowledge will prove useful when you build and configure your systems, resulting in a more well-thought-out deployment of your infrastructure.

1.2.2.1 Buffer overflows

Probably the most widely publicized software security flaw is the buffer overflow. Buffer overflows are discussed on highly technical security lists, in the mass media, and, as a result of this press coverage, by company executives. Buffer overflows are the result of (sometimes trivial) tactical coding errors that often could have been prevented. Exploits can be devastating. Buffer overflows are made even more dangerous when the software is reachable over the Internet. For instance, the Morris Worm (1988) used a buffer overflow in the fingerd daemon as one of its exploit mechanisms. The Code Red worm, which infected hundreds of thousands of computers in 2001, exploited a buffer overflow in Microsoft's IIS web server to do its damage.

So what is a buffer, and how does it overflow? A buffer is a portion of memory set aside by a program to store data. In languages like C, the buffer generally has some predefined size. The C program will allocate the exact amount of memory required to accommodate this buffer. The developer must ensure that the data put into the buffer never exceeds its capacity. If, through programming error and some unexpected input, too much data is inserted into the buffer, it will "overflow," overwriting adjacent memory locations that store other information.

The developer may feel that there is no pressing need to ensure the data being placed into a buffer will fit. After all, if it does not fit, the program will probably crash. The user did something stupid, and the user will pay for it. In Example 1-1, if the user types 20 or more characters in response to the gets(3) call, the buffer will not be able to store all the data (19 characters plus a terminating null is all it can hold).

Example 1-1. A simple buffer that can overflow
#include <stdio.h>

int main( ) {
        char buffer[20];

        gets(buffer);            /* gets(3) performs no bounds checking */
        printf("%s\n", buffer);
}

The excess data will be written sequentially in memory past the end of the buffer, into other parts of the process's memory space. There are not necessarily security ramifications in this case; the example is purely academic.

Taken at face value, running past the end of a buffer seems like a pretty innocuous problem. The data ends up in other parts of the process's memory space, corrupting the process's memory and eventually crashing the program, right? Well, attackers can use specially crafted data to intentionally overflow the buffer with instructions that will ultimately be executed on the victim host.

One popular attack strategy is to overwrite the return location of the function containing the exploitable buffer. Thus, at the end of the function, instead of continuing to execute the program right after the point where that function was called, execution continues from some other point in memory. With a little knowledge and effort, an attacker can first inject malicious code (e.g., shellcode) which, if executed, would give the attacker access to the system. Then, by overwriting the return location so that it points to the injected shellcode, the attacker ensures that when the function returns, the shellcode runs. This is only one example of how a buffer overflow can lead to additional access on the system.
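The root problem in Example 1-1 is the unbounded read, and the conventional fix is simply never to read more data than the buffer can hold. Here is a minimal sketch of ours (not code from the original example) using fgets(3), which takes the buffer size as an argument:

#include <stdio.h>

int main( ) {
        char buffer[20];

        /* fgets(3) reads at most sizeof(buffer) - 1 characters and
         * always null-terminates, so the buffer cannot overflow. */
        if (fgets(buffer, sizeof(buffer), stdin) != NULL)
                printf("%s", buffer);

        return 0;
}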

Fortunately, the BSD operating systems have been extensively audited to protect against these and similar software flaws. OpenBSD includes built-in protections that prevent programs from writing beyond their allocated memory space, thus preventing the execution of arbitrary code. This kind of protection can be very useful, since not all of the third-party programs you install on your systems will have undergone the kind of scrutiny the operating system went through.

For non-programmers, this might seem complicated and difficult to digest. Computer science geeks will probably find this very familiar. Application security experts and others well versed in exploiting software vulnerabilities may already be wondering why we are not covering heap-based overflows in addition to the trivial example above. For the curious, additional resources are plentiful online. To succeed as a security-minded system administrator, though, you need only understand buffer overflows to this level.

1.2.2.2 SQL injection

Although buffer overflow attacks get a great deal of attention, there are many other problems in software that give rise to vulnerabilities. With the increasing number of deployed web-based applications, attackers have become quite savvy at attacking web sites. Web-based attacks may target the web server software, a web-based application, or data accessed by the application. Some web-based attacks may even go right to the core operating system and attempt to exploit the host itself. The most common forms of web-based attacks, however, are those that lead to the manipulation of application data or escalation of application privileges.

Complex web applications often work directly with vast quantities of data. This is almost a guarantee when the web application is used by a financial institution or a data warehousing company but also common with smaller e-commerce sites, online forums, and so on. The only reasonable way to manage a large volume of data is by storing it in a database. When the application accesses the database, it must communicate using some defined database query language, most often Structured Query Language (SQL).

SQL is a tremendously easy language to learn, and web developers often feel comfortable working with it after only a few days or weeks of exposure. Hence, there is an abundance of web applications that dynamically construct SQL statements as part of their response to user requests. In general, these statements are constructed based on what a site visitor has selected, checked, and/or typed into form fields on a web page.

Unfortunately, developers are often unaware of how their SQL query can be abused if the input from the user is not properly sanitized. An attacker can often inject her own SQL statements as part of the input to the web application in an effort to completely alter the dynamically generated SQL query sent to the database. These queries may result in data being changed in the database or may give the attacker the ability to view data she is not authorized to view.

There are, of course, ways to defend against SQL injection attacks from within web applications. One common approach is to validate every value provided by the user: make sure it does not contain undesirable characters such as backticks, quotes, and semicolons, and ensure that the characters it does contain are appropriate for the field in question. To get around the problem completely, developers may be able to use stored procedures and avoid dynamically creating SQL.
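As a rough sketch of the validation approach (our illustration, not code from any particular web framework; the allowed character set would depend on the field being checked), a server-side routine in C might accept only known-good characters and reject everything else:

#include <ctype.h>
#include <stdio.h>

/* Return 1 if the value contains only characters we expect in a
 * username-style field; return 0 otherwise. Quotes, semicolons,
 * backticks, and anything else outside the allowlist are rejected. */
static int
is_safe_value(const char *value)
{
        size_t i;

        for (i = 0; value[i] != '\0'; i++) {
                unsigned char c = (unsigned char)value[i];
                if (!isalnum(c) && c != '_' && c != '-' && c != '.')
                        return 0;
        }
        return i > 0;   /* non-empty and clean */
}

int
main(void)
{
        printf("%d\n", is_safe_value("alice_01"));      /* 1: accepted */
        printf("%d\n", is_safe_value("x' OR '1'='1"));  /* 0: rejected */
        return 0;
}

An allowlist like this is generally preferable to trying to enumerate every dangerous character, since it fails safe when you forget one.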

1.2.2.3 Other software problems

We have only scratched the surface of application level vulnerabilities. What follows is a brief synopsis of additional software programming errors that have been known to lead to security problems.


Format string error

Format strings are used by some languages, notably C, to describe how data should be displayed (e.g., by printf(3) and its derivatives). Unfortunately, attackers can sometimes manipulate these format strings to write their own data to memory on the target host. This attack is like a buffer overflow in that it can allow an attacker to run his own code on the victim host. However, it does not actually overflow a buffer so much as abuse a feature of the programming language. (A short illustration appears after this list.)


Race conditions

Race conditions exist when there is contention for access to a particular resource. Race conditions are common software engineering problems, but they can also have security ramifications. For instance, assume an application wants to write password information to a file it creates. The application first checks to see whether the file exists. Then the application creates the file and starts to write to it. An attacker can potentially create a file after the application checks for the file's existence but before the application creates it. If the attacker does something interesting, like creating a symbolic link from this protected file to a world-readable file in another directory, sensitive password information (in this example) would be disclosed. This specific kind of race condition is referred to as a time-of-check-to-time-of-use, or TOCTTOU, vulnerability. (A sketch of the safe pattern appears after this list.)


Web-based attacks

There are a staggering number of web-based attacks available in the hacker's arsenal. SQL injection, and other code injection attacks, comprise one class of web-based attack. Through poor web application design, weak authentication, and sloppy configuration, a variety of other attacks are possible. Cross-site scripting (XSS) attacks attempt to take advantage of dynamic sites by injecting malicious data. These attacks often target site visitors; web developers and site administrators may be left unaware. Handcrafted URLs can sometimes bypass authentication mechanisms. Malicious web spiders can crawl the Web, signing your friends and family up for every online mailing list available. These types of attacks are so common that there are many security companies that focus specifically on web-based security.
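To make the format string problem above concrete, here is a minimal sketch of ours contrasting the dangerous call with the safe one; the sample input is arbitrary:

#include <stdio.h>

int
main(int argc, char *argv[])
{
        const char *input = (argc > 1) ? argv[1] : "%x %x %x";

        /* Dangerous: the untrusted string is used as the format itself,
         * so any %x, %s, or %n directives it contains are interpreted. */
        printf(input);
        printf("\n");

        /* Safe: a constant format string; the input is treated purely as data. */
        printf("%s\n", input);

        return 0;
}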
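The race condition described above can usually be closed by making the existence check and the file creation a single atomic operation. The following sketch (our illustration; the path and data are examples only) uses open(2) with O_CREAT and O_EXCL, which fails if the file, or a planted symbolic link, already exists at that path:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
        const char msg[] = "sensitive data\n";   /* placeholder contents */
        int fd;

        /* O_CREAT | O_EXCL makes "create only if it does not already exist"
         * atomic; a file or symlink planted between a separate check and
         * create cannot be silently followed. */
        fd = open("/var/run/example.secret", O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (fd == -1) {
                perror("open");
                exit(1);
        }

        write(fd, msg, sizeof(msg) - 1);
        close(fd);
        return 0;
}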

1.2.2.4 Protecting yourself

As a system administrator or security engineer, you generally do not have control over the security within the software installed on your servers.

FreeBSD and OpenBSD are open source operating systems, as is most of the application software they run. Although you certainly have the option of modifying the source code of applications, you probably do not want to do that. System administrators rarely have time to properly develop and maintain software. As a result, managing custom code will usually lead to software maintenance nightmares.


You may be able to choose one server product over another: while building a mail relay, you might consider Sendmail, Postfix, and qmail. To help guide your decision, you might want to evaluate the particular software's security track record. Be wary, for much like mutual funds, past trends are not a reliable indicator of future performance. With custom, internally developed code you usually have no options at all; your company has developed custom software and your systems must run it.

At this point, you may be wondering if there is any way to mitigate the risks associated with vulnerabilities in software. As it turns out, there is. Being aware of vulnerabilities is a good first step. Subscribe to mailing lists that disclose software vulnerabilities in a timely fashion. Have a response plan in place and stay on top of patching. Watch your systems, especially your audit trails, and be aware when your systems are behaving unnaturally. Finally, be security-minded in your administration of systems. What this entails is described later in this chapter and embodied by the rest of this book.

1.2.3. Denial of Service Attacks

DoS attacks are active: they seek to consume system resources and deny the availability of your systems to legitimate users. The root cause of a system or network being vulnerable to a DoS attack may be a software vulnerability, improper configuration and use, or both. DoS attacks can be devastating, and depending on how they are carried out, it can be very difficult to find the source. DoS attacks have a diverse list of possible targets.

1.2.3.1 Target: physical

DoS attacks can occur at the physical layer. In an 802.11 wireless network, an attacker can flood the network by transmitting garbage in the same frequency band as the 802.11 radios. A simple 2.4 GHz cordless phone can render an 802.11 network unusable. With physical access to an organization's premises, cutting through an Ethernet cable can be equally devastating on the wired side.

1.2.3.2 Target: network

At the data link and network layers, traffic saturation can interfere with legitimate communications. Flooding a network with illegitimate and constantly changing ARP requests can place an extreme burden on networking devices and confuse hosts. Attempting to push a gigabit of data per second through a 100 Mbps pipe will effectively overrun any legitimate network traffic. Too much traffic is perhaps the quintessential example of a DoS attack.

Network-level DoS attacks can stop nearly all legitimate incoming and/or outgoing traffic, thereby shutting down all services offered on that network. If a network-level DoS attack is conducted using spoofed source IP addresses, victims must often work with their ISPs to track down the source of the flood of data. Even so, this process is extremely time-consuming and may not even be possible, depending on the capabilities of the ISP. Worse yet, distributed denial of service (DDoS) attacks use different attacking hosts on different networks, making it nearly impossible to block all the sites participating in the attacks. DDoS attacks are difficult to defend against, especially at the host level.

1.2.3.3 Target: application

Even at the application level, a DoS attack can be devastating. These DoS attacks generally use up some finite resource on a host such as CPU, memory, or disk I/O. An attacker may send many application requests to a single host in order to make the application consume an excessive amount of system resources. She may instead exploit a bug in the code that, triggered just once, causes the application to spiral out of control or simply crash. Some services that fork a daemon for every new connection may be subject to a DoS if tens or hundreds of thousands of connections are made within a short period of time.

These DoS situations may not always come as a result of an attack. Sometimes, an unexpected and sudden increase in the number of legitimate requests can render a service unusable.

1.2.3.4 Protecting yourself

Physical and network-based DoS attacks are difficult to defend against and fall outside the scope of host-based security. At the operating system level, you can do little to mitigate the risks associated with these kinds of attacks. Generally, some form of physical access control helps with physical attacks, and specialized network devices like load balancers assist with network-based attacks.

IDS hosts may be used to help detect these kinds of attacks and automatically update firewall or router configurations to drop the traffic. Although this may protect one service, if the sheer volume of data is too much, blocking it at your firewall will not be useful. You will need to coordinate with your ISP.

Application-level attacks are something the security-minded system administrator can do something about. If you've been reading about the other forms of attacks, you might already have an idea of the kinds of mitigation techniques you can use against DoS attacks: secure architecture and build, controlled maintenance, and monitoring logs. Any mitigation techniques you have in place to protect against software vulnerabilities and avoid improper configuration and use will help make DoS attacks more difficult. Additional anomaly detection specific to identifying DoS attacks will go even further.
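One concrete host-level control is the operating system's per-process resource limits. As a hedged sketch (the limits and values below are arbitrary examples, and production daemons would more typically inherit limits from a login class in /etc/login.conf rather than hardcoding them), a service can cap its own CPU time and memory with setrlimit(2) so that a single runaway request cannot consume the whole host:

#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int
main(void)
{
        /* Example values only: 60 seconds of CPU time and a 64 MB data
         * segment per process. */
        struct rlimit cpu = { 60, 60 };
        struct rlimit mem = { 64UL * 1024 * 1024, 64UL * 1024 * 1024 };

        if (setrlimit(RLIMIT_CPU, &cpu) == -1)
                perror("setrlimit(RLIMIT_CPU)");
        if (setrlimit(RLIMIT_DATA, &mem) == -1)
                perror("setrlimit(RLIMIT_DATA)");

        /* ... the daemon's normal request handling would follow here ... */
        return 0;
}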

1.2.4. Improper Configuration and Use

Even if the software you are using is bulletproof, that does not mean that you are home free. Even good software can go bad when configured or used in an insecure fashion. Security options can often be disabled, user roles can be jumbled together allowing excessive access, and passwords can be sent across the network in the clear. High-quality software configured poorly can be as vulnerable as poor-quality software doing its best.

1.2.4.1 Sloppy application configuration

One of the most dangerous forms of improper use and configuration comes at the hands of the administrator. The security of a host is directly affected by the security of the applications running on that host. Careless configuration of installed services will almost certainly lead to trouble.

Let's say you need to build a mail server that supports user authentication so that your users can send mail from foreign networks. If you fail to provide an encrypted session over which the authentication information can travel, you will be exposing your organization's usernames and passwords.

You might also need to deploy a set of DNS servers. Without careful configuration restricting who is allowed to query the servers and how, you may be opening yourself up to denial-of-service attacks or worse. Again, careful configuration will help mitigate these risks.

Chapters 5-8 focus on providing common services from FreeBSD and OpenBSD systems. By understanding the application and the risks associated with providing service, you will be able to carefully configure and deploy these services safely.

1.2.4.2 Protecting yourself

Insecure configuration and use comes in many shapes and sizes. There are nearly an infinite number of ways to misconfigure a host or an application and compromise its security.

Again, as with application security, auditing is vital. But in this case, it is not simply to catch attackers. In order to maintain a controlled configuration on production hosts, you must have a configuration management process. At a technical level this means structured change control, and possibly even revision control procedures. In even more formal environments, a daily meeting where production changes are discussed and approved can go a long way toward keeping configuration changes sane. Auditing hosts that are under configuration management allows you to detect when changes have been made. Any unauthorized changes will be discovered and can be backed out before they cause security problems.

Change control procedures are more extensively discussed in Chapter 4.

1.2.4.3 Accounts and permissions

At the heart of the Unix security model are users and groups. The concept is pretty straightforward. Files and directories on a Unix filesystem are protected by user, group, and other (often referred to as world) permissions. The user represents the finest resolution of access control. Users can also be members of a group that has specific access rights over filesystem data. To make things more flexible, a user can be a member of multiple groups. Finally, the world permissions apply to all users on the system, regardless of group membership. Permissions are further broken down into access modes specific to the owner, the group, and world. Each class may have read (r), write (w), and/or execute (x) rights to the file. These filesystem permissions are likely familiar to anyone with a background in Unix operating systems. However, despite being well understood, filesystem permissions must be closely monitored.

One particularly dangerous set of permission bits ties the privileges of a running program to the ownership of its file. The set-user-identifier permission (setuid or suid) causes a program to assume the user ID of the owner of the file, not that of the person who executed the file. The setuid permission is represented by an s in place of the user execute bit. For instance, the traceroute(8) program included with FreeBSD 5.x is setuid by default.

-r-sr-xr-x   1 root  wheel     23392 Jun  4 21:57 traceroute

When a normal, non-root user runs traceroute, the process nevertheless runs as root. This is done because traceroute needs access to low-level network capabilities that a normal user does not have. Similarly, the set-group-identifier (setgid or sgid) permission causes a program to assume the group ID of the file's group owner.
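In concrete terms, "runs as root" means the process's effective UID differs from the real UID of the user who invoked it. A tiny sketch of ours that, if installed setuid root, would print the invoking user's uid alongside an effective uid of 0:

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        /* For a setuid-root binary run by an ordinary user, the real UID is
         * the invoking user's, while the effective UID, which governs most
         * access checks, is 0 (root). */
        printf("real uid: %ld, effective uid: %ld\n",
            (long)getuid(), (long)geteuid());
        return 0;
}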

While all this can be useful for making programs work in certain ways, it can lead to mistakes. The setuid bit should only be applied to programs that absolutely need it. setuid root files are often the target of attackers who try to make the program do something it should not. If a setuid program can be made to execute arbitrary code, or clobber files, it will do so as root. This could easily lead to escalated privileges on the system. Programs should never have setuid bits applied to them after the fact; the fact that they will be running setuid should be part of the application design process.

To find setuid and setgid files on your BSD system, run the following command:

% find / -type f \( -perm -2000 -o -perm -4000 \) -print

If non-root users on your system don't need to run these setuid executables, the setuid and/or setgid bit can often be safely removed. OpenBSD administrators will be happy to know that the OpenBSD team has spent considerable time reducing the number of setuid binaries on the system, and using privilege separation to mitigate the risks of the remaining ones.

The BSD-based operating systems also have a special group: wheel. In order for users to use the su(1) command to launch a root shell, they must be in the wheel group. Besides controlling access to the root account, the wheel group also owns several files and devices on the system. Be very careful about who you add to this group.

According to Eric Raymond's jargon file (http://www.catb.org/~esr/jargon/), the wheel group was derived from the term "big wheel," which means "a powerful person." When Unix hosts were less common and far more expensive, being in the wheel group was really a position of power. With the advent of free BSD-based operating systems that run on cheap x86-based hardware, being in the wheel group on your home PC does not seem like the honor it once was.


In general, permissions on files and directories should always be carefully controlled and audited. While debugging problems, administrators are often tempted to change the permissions on files or directories to 777 (everyone can read, write, and execute the file or change into the directory) to determine whether the problem is permissions-related. In some cases, especially in test environments, this may not be a terrible thing to do, as long as permissions are quickly restored to normal. Unfortunately, in many cases the administrator will have made several changes at once; when the application starts working again, he might be tempted to leave everything as it is. This can lead to a very dangerous situation, especially on production systems.

1.2.4.4 Passwords and other account problems

Beyond permissions, accounts themselves can be the source of security problems. First and foremost, weak passwords are a bane to any security-minded system administrator. Make an effort to enforce strong passwords on your systems and for your applications.

Strong Passwords

Strong passwords are generally at least eight characters in length and should contain a variety of letters (both lowercase and uppercase), numbers, and special characters. Avoid passwords that closely resemble dictionary words, proper nouns, or other words and numbers with any personal significance. Here are a few quick examples.

Donut is clearly a lousy password. The introduction of some number substitution in D0nut1 may make you feel better but will not really improve the strength of the password very much. The password sremrofsnart may look good at first glance, but it is merely transformers spelled backward. Likewise, substitution in sremrofsn@rt does not significantly improve the password. A mnemonic such as Tsij6wl! (This song is just 6 words long!) is a better all-round password than the others. Arbitrary choices of letters, numbers, and special characters that are easy to type are often also good candidates. Finally, passwords should be something easy enough to remember that they do not have to be written down.

Your goal in enforcing strong passwords that satisfy the aforementioned requirements is to prevent passwords from being cracked or guessed. To better understand this concept, explore password crackers and develop an understanding of how they operate. These are the tools that will be used against your password database, given the opportunity.


As time passes, even good passwords can lose strength, in a way: there is a continually increasing chance that a given password has been stolen. This can be accomplished by sniffing traffic on the network, watching users type it in, or compromising a host and running a password-cracking program against /etc/master.passwd. After some period of time, your password should be changed. For some organizations this period is annual; for others it is monthly.

Improper use of accounts is another common problem on production machines. Administrators will sometimes create a single account for many people to use. This happens often in technical support groups where turnover is high. Rather than finding an easier way to perform account maintenance, an administrator may make a single account for all tech support people and give everyone the password. It is also commonplace to use system accounts with a generic name such as www and let several web developers log in under this user ID.

The practice of assigning multiple human beings to one account on the system makes auditing impossible. There is now no way to know what human being performed what actions on a given host because, from the perspective of the operating system, there is only one user involved. Changing passwords on these kinds of accounts is also a hassle as it must be done whenever someone leaves, and everyone must be told the new password immediately.

We continue to discuss managing user permissions in Chapter 4.

1.2.5. Network Versus Local Attacks

We know that attacks can target faults in software, faults in configuration, or both, and this helps us understand how attacks might succeed, but it is also important to consider where attacks come from. A buffer overflow that allows for code execution on a host can lead to a host being compromised. However, if the buffer overflow vulnerability exists in some setuid binary that is only accessible to logged-in users, the risk associated with the vulnerability is severely limited. In fact, any vulnerability that requires local access is difficult to exploit unless would-be attackers can gain access to the host.

On the other hand, if a buffer overflow exists in software that is reachable from the network, ftpd for example, then the danger is much greater. Simply being connected to the Internet exposes this vulnerability to everyone on the Internet. The risk associated with remotely exploitable vulnerabilities is often so much greater than with local exploits that administrators tend to respond much more quickly to these kinds of problems.

Just because network-based attacks are more dangerous does not mean you should ignore, or even put off fixing, local vulnerabilities. System administrators, due to lack of time, carelessness, or both, sometimes patch the most critical vulnerabilities early and leave the rest "for later." Eventually, a few months or years down the road, this practice is likely to lead to a compromise. It may become possible to combine a vulnerability that had previously only been locally exploitable with another, remotely exploitable, vulnerability to compromise the system. An attacker may be able to break into the system through this service's vulnerability and gain a shell prompt. At this point, just one of those "minor," locally exploitable vulnerabilities is exactly what the attacker needs to gain escalated privileges and compromise the host. From there, other presumed minor vulnerabilities (in services accessible only if you are already behind the firewall, for instance) become prime targets and a means to compromise the rest of your network.

The lesson here is to follow your instincts in patching the most critical vulnerabilities first. However, if you fail to also patch minor software issues in a timely manner, they will be exactly what the attacker needs to own your network.

1.2.6. Physical Security

From boulders to armed guards, physical security can take many different forms. However, in most organizations, physical security and information security are controlled by two separate parts of the organization. Firewall administrators usually do not take care of giving out physical keys for the office doors. Sometimes it's tough to remember that physical security is an integral part of computer security.

If physical security breaks down, nearly all computer security constructs are rendered useless. An attacker who has physical access to a host has completely bypassed any network protections in front of the host. No one has invented a firewall (yet) that will detect a physical intruder, remove itself from a rack, and beat the intruder senseless.

Once physical security has been breached, an attacker can remove hard drives, or force a machine to boot to alternate media in order to subvert the core operating system. This will obviously cause a service interruption for the host, but a truly motivated attacker with physical access may not care about completely destroying a host in order to steal data or simply wreak havoc.

While this may seem abundantly obvious, security professionals often lose sight of the importance of physical security. We constantly weigh risks, decide which firewall configuration is best, or determine how best to handle groups on a server. However, decisions like that may be pointless if the data center holding your hosts is in an unlocked or an unattended building. Remember to take physical security into account when weighing risk. It would be a shame to get ipfw or pf configured on your BSD firewall only to see some guy running down the street with it the next day.

1.2.7. Summary

At this point, you should have a pretty good idea of the attacks your system will face after you attach it to a network. Throughout this book we point out how you can defend against these attacks: where you should go for software updates, how to keep track of and respond to security advisories, and how to be diligent and careful in your system administration practices. In the next chapter, we'll describe in detail some of the building blocks the BSD operating systems provide to further mitigate the risks of system compromise. Before we get there, though, we close the loop on the topic of risk.
