Security Zones


As you have already witnessed in one of the basic network design patterns, we might place external web servers onto the same network as public DNS servers and mail relays. This makes sense because we want to limit which resources an attacker can access directly if he succeeds at compromising one of the systems. We can use many techniques to achieve resource segmentation on different layers of defense, all of which share the underlying principle of security zones.

A security zone is a logical grouping of resources, such as systems, networks, or processes, that are similar in the degree of acceptable risk. For instance, we might place web servers in the same security zone as public DNS and mail relay servers because all these systems have to be accessible from the Internet and are not expected to store sensitive information. If, on the other hand, we use the mail server to host data that is more sensitive than what is stored on the public web and DNS servers, we would consider placing it into a separate network, thus forming another security zone. It is a common best practice for organizations to place critical systems, such as a company's Human Resources servers or a university's grade databases, behind internal firewalls.

The notion of a security zone is not limited to networks. It can be implemented to some extent by setting up servers dedicated to hosting similar applications. To create an effective design, we need to understand how to group resources into appropriate security zones. This approach mimics the design of a large ship that is split into multiple watertight compartments to resist flooding. If one of the sections is compromised, other areas retain a chance of maintaining their integrity.

A Single Subnet

Let's look at how we can create security zones within a single subnet by using servers that are dedicated to particular tasks as well as those that are shared among multiple applications. In an attempt to minimize the number of systems that need to be set up and maintained, designers are often tempted to create servers that aggregate hosting of multiple services. This configuration is often effective from a cost-saving perspective, but it creates an environment that is more vulnerable to intrusion or hardware failure than if each service were running on a dedicated server.

Consider a scenario in which a single Internet-accessible Linux box is used to provide DNS and email services. Because both of these services are running on the same server, an exploit against one of them could compromise the security of the other. For example, if we were using BIND 8.2.2, an unpatched "nxt overflow vulnerability" would allow a remote attacker to execute arbitrary code on the server with the privileges of the BIND process (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-1999-0833).

Hopefully, in this scenario, we already configured the BIND server to run as the limited user nobody; that way, the attacker would not directly gain root privileges through the exploit. Having local access to the system gives the attacker an opportunity to exploit a whole new class of vulnerabilities that would not be triggered remotely. For instance, if the mail-processing part of our server relies on Procmail, the attacker might be able to exploit the locally triggered "unsafe signal handling" vulnerability in Procmail 3.10 to gain root-level access to the server (http://www.securityfocus.com/bid/3071). If the vulnerable BIND application were not on this system, however, the attacker would not be able to take advantage of the Procmail vulnerability because only a local user can exploit it.

Security Zones Within a Server

What should you do if you do not have the budget to purchase servers that are dedicated to performing only one task each? In the previous example, the organization segments BIND from the rest of the server in a primitive manner by running BIND as the user nobody instead of root. That is a good start because it doesn't allow the attacker to immediately obtain administrative access to the system by compromising BIND. This technique of dedicating limited-access accounts to applications is appropriate in many circumstances on UNIX as well as Windows-based systems.
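For example, on a Linux system, such an account-based setup might look like the following sketch; the account name and the path to named are illustrative, and named's -u switch, which drops root privileges once the daemon has initialized, is present in both BIND 8 and BIND 9:

# Create a dedicated, unprivileged system account for the daemon
useradd -r -s /sbin/nologin named

# Start BIND so that it gives up root privileges after initialization
/usr/local/sbin/named -u named

A dedicated account is generally preferable to the shared nobody account because a compromise of one nobody-owned service does not then automatically grant access to files owned by every other service running as nobody.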

A more robust way of separating a daemon such as BIND from the rest of the system involves the use of the chroot facility, which is available on most UNIX operating systems. In a way, chroot allows us to set up multiple security zones within a single server by creating isolated subsystems within the server, known as chroot jails.

How Chroot Works

Relative isolation of the chroot jail is accomplished by changing the perspective of the "jailed" process on what its root directory is. Most applications that run on the server locate files with respect to the system's root file system, identified as /. A chroot-ed process considers its / to be the root directory of the jail and will not be able to access files above the jail's root directory. For example, BIND's core executable often resides in /usr/local/sbin/named and loads its configuration file from /etc/named.conf. If BIND is set up to operate in a chroot jail, located in /usr/local/bind-chroot, the named process will be started from /usr/local/bind-chroot/usr/local/sbin/named. This process will think it is accessing /etc/named.conf to load its configuration, although, in reality, it will be accessing /usr/local/bind-chroot/etc/named.conf.
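You can observe this change of perspective firsthand by entering the jail with the chroot command; this sketch assumes a shell and its libraries have already been copied into the jail described above:

# Outside the jail, the configuration file lives under the jail's tree
ls /usr/local/bind-chroot/etc/named.conf

# Enter the jail; /bin/sh here resolves to /usr/local/bind-chroot/bin/sh
chroot /usr/local/bind-chroot /bin/sh

# Inside the jail, the same file now appears as /etc/named.conf
ls /etc/named.conf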


A chroot-ed application is typically not aware that it is operating in an isolated environment. For the application to function properly, we need to copy the required system libraries and devices into the chroot jail because the application will not have access to OS components outside the jail. An attacker who exploits a chroot-ed process will have a hard time accessing resources outside the chroot jail because file system access will be severely limited, and the environment will not have most of the tools necessary to cause serious damage. Procedures for setting up chroot are available throughout the Web and are often specific to the application you are trying to isolate.
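As a minimal illustration of such a procedure, the jail from the earlier example might be populated along these lines on a Linux system; the exact set of libraries and device nodes varies with the operating system and the BIND build:

# Re-create the directory structure the daemon expects
mkdir -p /usr/local/bind-chroot/etc /usr/local/bind-chroot/dev
mkdir -p /usr/local/bind-chroot/lib /usr/local/bind-chroot/usr/local/sbin

# Copy the daemon and its configuration into the jail
cp /usr/local/sbin/named /usr/local/bind-chroot/usr/local/sbin/
cp /etc/named.conf /usr/local/bind-chroot/etc/

# ldd lists the shared libraries the binary needs; copy each into the jail
ldd /usr/local/sbin/named

# Create the device nodes the daemon expects, such as /dev/null
mknod /usr/local/bind-chroot/dev/null c 1 3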

Some applications rely on too many OS components to make setting up a chroot jail for them practical or beneficial. Additionally, in numerous documented cases, problems with the implementation or the configuration of a chroot-ed environment have allowed an attacker to break out of the chroot jail. For examples of such vulnerabilities, search for "chroot" in the Common Vulnerabilities and Exposures (CVE) database at http://cve.mitre.org.

Finally, not all operating systems provide chroot facilities. Such caveats make it difficult to set up foolproof isolation for security zones within a single server. However, dedicated system accounts and chroot-like facilities are effective at complementing other zoning techniques on the server and network levels.

Security Zones via Dedicated Servers

A more effective method of reliably separating one application from another involves dedicating a server to each application. (This technique is often considered to be among information security's best practices.) As in most designs that incorporate security zones, the purpose of dedicated servers is to help ensure that a compromise of one infrastructure component does not breach the security of the other. If an attacker exploits a vulnerability on one server, either in an application or an OS module, the other server still has a chance of withstanding an attack. This configuration slows down the attacker's progress, giving the system's administrator more time to detect and respond to the attack.

For example, many organizations need to maintain web and mail servers that are accessible from the Internet. Such web servers are often used to host the company's public website, which typically combines static and dynamically generated content. The mail server is generally used to accept email messages via SMTP, which are then delivered to the company's internal users. Many companies use web servers like the one in this example primarily for marketing purposes and do not store confidential information on the web server's file system. The mail server, on the other hand, might store confidential data in the form of sensitive email messages from the company's partners and clients. Therefore, it makes sense to split the two services into separate security zones, isolating applications that differ in their degree of acceptable risk. In many cases, business needs might allow us to purchase multiple servers but prohibit us from placing them on separate networks because of budget constraints. Setting up an additional network costs money and time that some organizations cannot justify spending.

Even if the company does not consider the mail service to be more confidential than the web service, it can justify splitting them into separate servers because web services tend to be more vulnerable than mail services. Functionality offered by web server applications tends to be more feature rich and less predictable than the functionality of mail applications. As a result, history shows that web services are exploited more frequently than mail services.

Note

As you might recall from Chapter 12, "Fundamentals of Secure Perimeter Design," risk is a function of the resource's data sensitivity and of the likelihood that it will be compromised. Because risk is the primary driving force behind the need for security zones, we must look at both sensitivity and vulnerability when deciding how to separate resources. Even if the data sensitivity of two services is the same, differences in the likelihood of a compromise can warrant placing them into different security zones.


Providing resource isolation solely through the use of dedicated servers is often sufficient when differences in the acceptable risk of resources are not significant. When risk levels diverge more widely, however, we might need to increase the extent of isolation the design provides. In the next section, we explore situations in which the nature of the resources, along with business needs, requires us to separate systems by using multiple subnets.

Multiple Subnets

Using multiple subnets provides a reliable means of separating resources because communications between systems on different subnets are regulated by devices that connect the subnets. Tools and expertise for implementing such segmentation are widely available. After all, much of perimeter defense concentrates on using routers and firewalls to control how traffic passes from one subnet to another.

In addition to creating security zones by enforcing access control restrictions on traffic across subnets, routers and firewalls limit the scope of network broadcast communications. Broadcasts can have significant effects on network performance as well as on resource security.

Broadcast Domains

A broadcast domain is a collection of network nodes that receive broadcast packets; it typically matches the boundaries of a subnet. Subnets can therefore be used in network design to limit the size of network broadcast domains. Splitting a network into two or more subnets decreases the number of hosts that receive network broadcasts because routing devices are not expected to forward broadcast packets. Broadcasts have security implications because they are received by all local hosts. Decreasing the size of a broadcast domain also brings significant performance advantages because network chatter is localized to a particular subnet, and fewer hosts per broadcast domain means fewer broadcasts.

A Doomed Network?

When the PC game Doom first came out, it quickly showed up on LANs throughout the world, from corporate networks to college computer labs. Doom was one of the earliest multiplayer shoot-'em-up games, and it allowed players to easily establish game sessions over the network. Network administrators quickly discovered the detrimental effects that Doom v1.1 had on a LAN's performance. It turned out that, probably in an unbridled enthusiasm to release the first version of Doom, its coders programmed the game to use broadcasts for all communications among players. In tests (yes, someone performed such tests), a four-player Doom session was shown to generate an average of 100 packets per second and increase the network load by 4%.1 If administrators couldn't ban Doom from the network, they had to rely on broadcast domain boundaries to prevent Doom communications from engulfing the whole network.


We mentioned network broadcasts in Chapter 6, "The Role of a Router," in the context of disabling propagation of broadcasts through a router. This was done primarily to prevent Smurf-type attacks, which could use a single packet sent to a broadcast address to elicit replies from multiple hosts on the network. ARP takes advantage of the ability of broadcasts to deliver packets to all hosts in the broadcast domain: when a system on the Ethernet segment does not know the MAC address of the host to which it wants to send an Ethernet frame, the sender issues an ARP request to the MAC address ff:ff:ff:ff:ff:ff, the Ethernet broadcast address. The Ethernet medium delivers this discovery packet to all hosts on the segment, but only the system that holds the sought-after IP address replies with its MAC address.
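You can watch this exchange with a sniffer; the following capture invocation and its abridged output are illustrative (the -e switch tells tcpdump to print the Ethernet header, which is where the broadcast destination appears):

# Watch ARP resolution on the local segment
tcpdump -n -e arp

# Abridged, illustrative output: the request is broadcast, the reply unicast
#   ... > ff:ff:ff:ff:ff:ff ... arp who-has 192.168.1.20 tell 192.168.1.142
#   ... arp reply 192.168.1.20 is-at 0:50:8b:e8:56:78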

Because ARP traffic travels without restraints within a broadcast domain, a malicious system could manipulate MAC-to-IP-address mappings of another host with relative ease. Most ARP implementations update their cache of MAC-to-IP-address mappings whenever they receive ARP requests or replies. As illustrated in Figure 13.1, an attacker who is on the broadcast domain could poison system A's cache by sending it a crafted ARP packet that maps host B's IP address to the attacker's MAC address. As a result, all traffic that system A tries to send using host B's IP address is redirected to the attacker. Tools such as Dsniff (http://www.monkey.org/~dugsong/dsniff/) and Ettercap (http://ettercap.sourceforge.net/) are available for free and are effective at automating such attacks. One way to defend against ARP cache poisoning is to enforce proper authentication using higher-level protocols, such as Secure Shell (SSH). Controlling the size of broadcast domains also limits a site's exposure to such attacks.

Figure 13.1. When performing an ARP cache poisoning attack, the attacker convinces system A to use the attacker's MAC address instead of system B's MAC address.
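Where hosts communicate with a small, fixed set of critical peers, another mitigation is to pin those MAC-to-IP-address mappings with static ARP entries, which most implementations will not overwrite in response to received ARP traffic. A minimal sketch, with made-up addresses:

# Inspect the current cache of MAC-to-IP-address mappings
arp -an

# Pin a critical mapping, such as the default gateway, so crafted
# ARP packets cannot change it
arp -s 192.168.1.1 00:50:8b:12:34:56

Static entries do not scale well, so they are usually reserved for a handful of high-value mappings.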


As you can see, IP communication in Ethernet environments is closely tied to MAC addresses. Systems can send network layer broadcasts by addressing IP datagrams to broadcast addresses such as 255.255.255.255, NET.ADDR.255.255, and so on. In this case, the underlying data link layer, such as Ethernet, is responsible for delivering the datagram to all hosts in the broadcast domain. Ethernet accomplishes this by setting the destination address of the Ethernet frame to ff:ff:ff:ff:ff:ff.
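A simple way to observe this translation is to ping the subnet's broadcast address while capturing traffic; the addresses are illustrative, and the -b switch is required by many Linux ping implementations before they will send to a broadcast destination:

# In one window, capture ICMP traffic along with its Ethernet headers
tcpdump -n -e icmp

# In another, send echo requests to the subnet's broadcast address;
# the capture should show frames destined for ff:ff:ff:ff:ff:ff
ping -b 192.168.1.255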

Note

On Ethernet-based TCP/IP networks, IP broadcasts are translated into Ethernet broadcasts to the ff:ff:ff:ff:ff:ff MAC address. As a result, all hosts in the broadcast domain receive broadcast datagrams, regardless of the operating system or the application that generated them.


Network layer broadcasts are frequently seen in environments that host Windows systems because Windows often relies on broadcasts to discover services on the network.

Note

To quiet a chatty Windows NetBIOS network, you can reduce broadcasts by configuring a WINS server. In Windows 2000 and later networks, the NetBIOS protocol can be disabled and DNS can be used as the sole means of name resolution. For information on NetBIOS name resolution, configuring WINS, or disabling NetBIOS functionality in Windows 2000 and later operating systems, take a look at http://www.microsoft.com/resources/documentation/Windows/2000/server/reskit/en-us/prork/prcc_tcp_gclb.asp.


The following network trace demonstrates such NetBIOS-over-TCP/IP (NBT) packets directed at all hosts on the local subnet 192.168.1.0 (tcpdump was used to capture this traffic):


192.168.1.142.netbios-ns > 192.168.1.255.netbios-ns: NBT UDP PACKET(137): QUERY; REQUEST; BROADCAST
192.168.1.142.netbios-ns > 192.168.1.255.netbios-ns: NBT UDP PACKET(137): QUERY; REQUEST; BROADCAST
192.168.1.142.netbios-ns > 192.168.1.255.netbios-ns: NBT UDP PACKET(137): QUERY; REQUEST; BROADCAST

If you fire up a network sniffer even on a relatively small subnet that hosts Windows systems, you are likely to see similar NBT broadcast datagrams at the rate of at least one per second. Because all nodes in the broadcast domain must process such datagrams, a system devotes CPU resources to processing broadcasts whether it needs to or not. We talk more about performance implications of network broadcasts in Chapter 17, "Tuning the Design for Performance." From a security perspective, broadcast communications are likely to leak information about the application that generated them because all hosts in the broadcast domain will be "tuned in." One purpose of splitting networks into smaller subnets is to limit the amount of traffic that each node processes due to broadcasts.
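A trace like the one shown earlier can be reproduced with a filter that matches NetBIOS name service traffic sent to the subnet's broadcast address; a broader filter for all broadcast frames gives a feel for the total load that every node on the segment must process (the addresses match the earlier example):

# NetBIOS name service (UDP port 137) queries to the local broadcast address
tcpdump -n udp port 137 and dst host 192.168.1.255

# All link layer broadcast frames, to gauge the overall broadcast load
tcpdump -n ether broadcast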

Security Zones via Subnets

In perimeter security, the most powerful devices for enforcing network traffic restrictions are located at subnet entry points and usually take the form of firewalls and routers. As a result, we frequently use subnets to create different security zones on the network. In such configurations, communications that need to be tightly controlled are most likely to cross subnets and be bound by a firewall's or a router's restrictions.

Consider the example illustrated in Figure 13.2. We separated the network into three security zones, each defined by a dedicated subnet.

Figure 13.2. Here, subnets create three security zones: the Public Servers zone, the Corporate Workstations zone, and the Corporate Servers zone.


In this scenario, we group resources based on their primary purpose because that maps directly to the sensitivity levels of the data the system maintains. The border firewall and the internal router allow us to control access to and from network resources based on the business requirements for each zone. The zones are defined as follows:

  • The Public Servers zone contains servers that provide information to the general public and can be accessed from the Internet. These servers should never initiate connections to the Internet, but specific servers might initiate connections to the Corporate Servers zone using approved protocols and ports.

  • The Corporate Servers zone contains the company's internal servers that internal users can access from the Corporate Workstations zone. The firewall should severely restrict the servers' ability to initiate connections to other zones.

  • The Corporate Workstations zone contains internal desktops and laptops that can browse the Internet using approved protocols and ports and can connect to the Corporate Servers zone primarily for file and print services.

Access control lists (ACLs) on the internal router are set up to let only Windows network traffic from corporate workstations access the servers. (In this example, the servers are Windows based. For UNIX, you would allow Network File System (NFS), Line Printer (LPR), and related protocols.) In this scenario, we do not have business requirements for the corporate servers to initiate connections to the Internet. If the servers have peering relationships with external partner sites, the router's ACLs need to be tuned appropriately. Additionally, the organization's security policy in this example does not allow servers to download OS and software patches from external sites. Instead, patches are retrieved and verified in the Corporate Workstations zone before they are applied to relevant servers.
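The exact ACL syntax depends on the filtering platform. If the internal router were, say, a Linux box filtering with iptables, the intent of these restrictions might be sketched as follows; the subnet numbers (192.168.2.0/24 for the Corporate Workstations zone, 192.168.3.0/24 for the Corporate Servers zone) are assumptions made for illustration:

# Windows file and print sharing from workstations to servers:
# NetBIOS session service (TCP 139), SMB over TCP (445),
# and NetBIOS name/datagram services (UDP 137-138)
iptables -A FORWARD -s 192.168.2.0/24 -d 192.168.3.0/24 -p tcp -m multiport --dports 139,445 -j ACCEPT
iptables -A FORWARD -s 192.168.2.0/24 -d 192.168.3.0/24 -p udp -m multiport --dports 137,138 -j ACCEPT

# Everything else between the two zones is dropped
iptables -A FORWARD -s 192.168.2.0/24 -d 192.168.3.0/24 -j DROP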

The firewall is configured to allow only inbound traffic from the Internet that is destined for systems in the Public Servers zone on the HTTP, DNS, and SMTP ports. These servers are not allowed to initiate connections that cross security zone boundaries except when relaying mail to the internal mail server.
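Continuing with the same hypothetical iptables notation (eth0 as the Internet-facing interface, 192.168.1.0/24 as the Public Servers zone, and 192.168.3.25 as the internal mail server are all illustrative assumptions), the border firewall's policy might be sketched as follows:

# Inbound from the Internet, only to the Public Servers zone
iptables -A FORWARD -i eth0 -d 192.168.1.0/24 -p tcp --dport 80 -j ACCEPT  # HTTP
iptables -A FORWARD -i eth0 -d 192.168.1.0/24 -p tcp --dport 25 -j ACCEPT  # SMTP
iptables -A FORWARD -i eth0 -d 192.168.1.0/24 -p udp --dport 53 -j ACCEPT  # DNS

# Public servers may initiate only mail relay to the internal mail server
iptables -A FORWARD -s 192.168.1.0/24 -d 192.168.3.25 -p tcp --dport 25 -j ACCEPT
iptables -A FORWARD -s 192.168.1.0/24 -j DROP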

Systems in the Corporate Workstations zone are allowed to browse the Web using approved protocols, such as HTTP, HTTPS, FTP, and so on. (For tighter control, we might want to set up a proxy server to help enforce restrictions on outbound traffic.) Corporate users can also connect to the Corporate Servers zone in a manner controlled by the internal router. The workstations can connect to hosts in the Public Servers zone for remote administration using the SSH protocol.

So far, this example has focused on the high-level requirements for defining security zones and associated access control rules. Some additional details need to be addressed before you implement this design. Specifically, you need to pay close attention to how corporate and public systems resolve domain names and how inbound and outbound email relaying is configured.

What would happen if we hosted all corporate and publicly accessible systems on a single subnet, without defining multiple security zones? We would still be able to control how traffic traverses to and from the Internet because the Internet is considered a security zone, and we know that we have control over traffic that crosses zone boundaries. However, we would have a hard time controlling how internal systems interact with each other, primarily because internal traffic would not be crossing zone boundaries. (You can control intrazone traffic if the subnet is implemented as a VLAN, which we discuss in the "Private VLANs" section of this chapter.)

The reality is that setting up multiple security zones on the network is expensive. It requires additional networking gear, such as routers and switches, and it significantly complicates ACL maintenance on all access enforcement devices. That is partly why we are rarely able to provision a dedicated subnet for each core server of the infrastructure.

Also, it is generally much easier to justify separating publicly accessible servers from internal systems than splitting internal systems into workstation- and server-specific zones. In terms of risk, public servers are accessible to everyone on the Internet and are more likely to be compromised than internal servers. This difference in vulnerability levels often serves as the primary factor for separating public and internal resources into different security zones. The distinction between internal servers and workstations is often not as clear cut, but it still exists because workstations are more likely to be infected with malicious software as a result of a user's actions. Each organization must decide how much it is willing to invest into resource separation given its budget and business objectives.


