We would like to spend a few pages discussing design elements that are not only commonly used, but are also representative of architectures that take into account resource separation. This section talks about setting up a mail relay to help you secure your organization's email link to the Internet. We also explore a DNS configuration known as Split DNS, which is very useful for mitigating risks associated with running a publicly accessible DNS server. Finally, we discuss ways of applying resource separation techniques to secure client stations. These scenarios are meant to demonstrate practical uses of resource separation. They will help you make decisions regarding the extent of separation that is appropriate and feasible for your organization.
A mail relay is one means to help secure your environment's email functionality. Mail can be passed into your environment using a properly configured external mail relay server to forward inbound messages to a separate internal mail system. To accomplish this, you install mail-relaying software on a bastion host that is accessible from the Internet. You then configure the relay to forward all inbound messages to the internal mail server, which, in turn, delivers them to the organization's internal users. Splitting the mail server into two components allows you to place them into separate security zones.
Justifying Mail Server Separation
Instead of implementing a store-and-forward configuration to separate mail functions into two components, we could have used a single mail server to serve internal users and accept email from the Internet. This setup eliminates the cost of deploying and maintaining an additional host, but it increases the risk that an external attacker might gain access to the company's sensitive information. As we have seen throughout this chapter, it often makes sense to separate public systems from internal ones to sandbox an attacker who gains access to an Internet-accessible server. Additionally, modern mail systems such as Microsoft Exchange, which are commonly used within a company, are complex and feature-rich. Hardening such software so that it is robust enough to be accessible from the Internet is often difficult.
Splitting the public-facing component of the server from the internal mail distribution system allows us to place these resources into separate security zones. This offers many benefits over a configuration that integrates the two components into a single system:
Implementing a Mail Relay
A common configuration for implementing a mail relay is illustrated in Figure 13.3. As you can see, we have placed the mail-forwarding agent into the Public Servers zone, which was set up as a screened subnet. The internal mail server was placed in the Corporate zone to be used by internal users when sending and receiving email messages. To ensure that messages from the Internet are delivered to the mail relay server, the organization's DNS configuration sets the mail exchange (MX) record for the company's domain to point to the relay server.
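In BIND-style zone file syntax, such an MX arrangement might look like the following fragment; the hostnames and addresses here are hypothetical placeholders, not values from this scenario:

```
; Fragment of the public example.com zone (BIND syntax, hypothetical names)
; Internet mail is directed to the relay in the Public Servers zone;
; the internal mail server is deliberately not published here.
example.com.        IN  MX  10  relay.example.com.
relay.example.com.  IN  A   192.0.2.25   ; mail relay on the screened subnet
```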
Figure 13.3. When you are implementing a mail relay, you can place the mail-forwarding agent in the Public Servers zone.
The primary function of the mail relay is to receive messages from the outside and forward them to the internal mail server. To further isolate the internal mail server from the Internet, you might want to route outbound messages through the mail relay as well. Making outbound connections is not as risky as accepting inbound ones, but fully separating the internal server from the outside helps decrease the likelihood that it will be adversely affected by a system on the Internet. In this configuration, the internal mail server accepts messages from internal workstations and servers and forwards those that are Internet-bound to the mail relay in the Public Servers zone.
One of the advantages of splitting Internet-facing mail functionality away from the internal mail server is that it allows us to use different software packages for each component of the mail infrastructure. For instance, a Microsoft Exchange server might have the desired functionality for an internal server, but you might consider it too feature-laden for a simple mail-forwarding agent. In that case, you might want to use software you feel more comfortable locking down, such as Sendmail, Postfix, or Qmail, to implement the mail relay.
Using products from different vendors for public and internal servers decreases the chances that a vulnerability in one product affects all systems. At the same time, it increases the number of software packages you need to maintain and monitor.
Specifics for configuring a mail relay to forward inbound messages to the internal server and outbound messages to the appropriate system on the Internet differ with each software vendor. In most cases, you need to specify the following parameters on the mail relay:
You should also consider implementing masquerading features on the relay server, especially if multiple internal servers need to send outbound mail through the relay. Mail masquerading rewrites headers of outbound messages to remove the name of the originating host, leaving just the organization's domain name in the From field.
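As an illustration, a Postfix-based relay could express these settings roughly as follows. The directives shown (relay_domains, transport_maps, mynetworks, and masquerade_domains) are real Postfix parameters, but the domain names and addresses are hypothetical, and you should verify the details against the documentation for your Postfix version:

```
# /etc/postfix/main.cf on the mail relay (sketch, hypothetical names)

# Accept Internet mail for our domain only
relay_domains = example.com

# Hand accepted messages to the internal mail server
# (the transport map would contain a line such as:
#   example.com  smtp:[mail.internal.example.com])
transport_maps = hash:/etc/postfix/transport

# Permit only the internal mail server to relay outbound mail through us
mynetworks = 192.0.2.10/32

# Rewrite outbound headers so that only the domain name appears
masquerade_domains = example.com
```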
The Enterprise editions of Microsoft Exchange 2000 Server and Exchange Server 2003 support a distributed topology that allows you to set up a front-end server that acts as a relay for mail-related communications and forwards them to the back-end server that actually maintains users' mailboxes. Specifying that an Exchange server should be a front-end server is a matter of opening the Properties of the desired server object in Exchange System Manager and selecting the This Is a Front-End Server option.
Microsoft recommends that the front-end server be fully configured before placing it into the DMZ or a screened subnet:
Future changes to the front-end server might, therefore, require temporary changes to the firewall's policy to allow RPC traffic for the period when the front-end server is being reconfigured. Alternatively, you can set up an IPSec channel between the administrative workstations and the front-end server to tunnel the RPC traffic in a secure manner.
If you are setting up an Exchange front-end server, be sure to follow the Exchange lockdown instructions described at http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/febetop.mspx.
If you are interested only in relaying SMTP and do not require the POP, IMAP, and Outlook Web Access (OWA) functionality of Microsoft Exchange, you could use the SMTP Virtual Server built in to Microsoft's Internet Information Services (IIS) as the mail relay. Such a configuration will generally cost less to deploy because you are not required to purchase the Enterprise Edition of Microsoft Exchange 2003 Server. As shown in Figure 13.4, the SMTP component of IIS offers highly configurable mail-relaying functionality, and it can be set up with most of the other functionality built in to IIS disabled. (Be sure to lock down IIS appropriately; it has a history of security compromises.)
Figure 13.4. When configuring an IIS SMTP virtual server, you can set options that specify how the system relays mail, authenticates users, and communicates with other network components.
Administrators who are not experienced in hardening Windows-based servers will probably prefer to use a UNIX system as the mail relay server. However, if you specialize in setting up and maintaining Microsoft Windows servers, you will probably benefit from using the operating system you know best. You need to strike a balance between using software that you know and deploying software from multiple suppliers across your security zones. If your organization is relatively small, you will probably benefit from not overloading your support staff with maintaining mail software from multiple vendors. Larger enterprises are more likely to benefit from using specialized software for different components of the mail system.
Splitting email functionality into two servers allows you to apply different levels of hardening to each system. The mail relay should be configured as a bastion host, stripped of all OS components and applications not required for forwarding SMTP messages. The internal mail server does not need to be hardened to the same degree because it does not communicate with hosts on the Internet. This is often advantageous. The internal mail server might need to integrate with the internal user management system, such as Microsoft Active Directory, whereas the mail relay does not need to be aware of any such nuances of internal infrastructure.
The DNS service, which maps hostnames to IP addresses, and vice versa, is a principal component of many networks. In this section, we examine a Split DNS configuration, which is also sometimes called Split Horizon DNS. This is a relatively common design pattern that calls for separating the DNS service into two components: one that is available to external Internet users, and another that is used internally within the organization.
One of the purposes of Split DNS is to limit what information about the network's internal infrastructure is available to external users. If an Internet-accessible DNS server hosts your public and internal records, an external attacker might be able to query the server for hostnames, addresses, and related DNS information of your internal systems. The attacker can issue targeted lookup requests for a specific domain, hostname, or IP address, or attempt to retrieve the complete DNS database through a zone transfer.
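To see whether your own servers expose this information, you can attempt a zone transfer against them with a standard DNS tool such as dig; the server and domain names below are hypothetical:

```
# Request a full zone transfer (AXFR) from a name server
dig @ns1.example.com example.com axfr
```

A properly configured public server should refuse the transfer for any host that is not one of its secondaries.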
Another purpose of Split DNS is to decrease the likelihood that critical internal resources will be affected by a compromised DNS server. Earlier in the chapter, we looked at how a buffer overflow vulnerability allowed an attacker to gain shell access on a server running BIND. Many such vulnerabilities have been found in DNS software over the past few years, as evidenced by postings to vulnerability forums and databases.
Justifying DNS Server Separation
When deciding where to place DNS servers and whether to split DNS servers into multiple security zones, consider two primary types of users of DNS services:
DNS servers catering to different audiences vary in the sensitivity of data they require for useful operation. Specifically, publicly accessible DNS servers do not have to be aware of the hostname-to-IP mappings of systems that cannot be reached from the Internet. Also, DNS servers differ in the likelihood that they will be compromised, depending on whether they can be accessed from the Internet. Differences in risks associated with different types of DNS servers point to the need to separate DNS resources into multiple security zones.
DNS Spoofing Attacks
DNS is an attractive target for spoofing, or poisoning, attacks, through which attackers attempt to propagate incorrect hostname-to-IP-address mappings to a DNS server. Such attacks, in various forms, have been known to affect DNS software from most vendors.
Because DNS queries are typically submitted over UDP, servers cannot rely on the transport protocol to maintain the state of the DNS connection. Therefore, to determine which response matches which query, DNS servers embed a numeric query ID into the DNS payload of the packet. If an attacker is able to predict the query ID the DNS server used when directing a recursive query to another DNS server, the attacker can craft a spoofed response that might reach the asking server before the real one does. The DNS server usually believes the first response it receives, discarding the second one as a duplicate. Consequently, the host that uses the DNS server to look up the spoofed domain record is directed to the IP address of the attacker's choice. Predicting DNS query IDs was relatively easy in older versions of DNS software because the software tended to simply increment the ID by one after each query.
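The query ID is a 16-bit field at the very start of the DNS header, so an attacker who can predict it needs to forge only a handful of packets. The following Python sketch packs a minimal DNS header to illustrate where the field sits and contrasts sequential IDs with randomly chosen ones; it illustrates the wire format only and is not an attack tool:

```python
import random
import struct

def dns_header(query_id):
    """Pack a minimal 12-byte DNS query header: ID, flags (RD set),
    one question, and zero answer/authority/additional records."""
    return struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)

# Older resolvers simply incremented the ID, so observing one query
# reveals the next; an off-path attacker can race a forged response.
sequential_ids = [1000 + i for i in range(3)]

# Modern resolvers draw the ID at random from the full 16-bit space,
# forcing the attacker to guess among 65,536 values.
random_ids = [random.randrange(0x10000) for _ in range(3)]

for qid in sequential_ids + random_ids:
    header = dns_header(qid)
    # The query ID is the first 16-bit field of the wire-format header.
    assert struct.unpack(">H", header[:2])[0] == qid
```

Randomizing the query ID raises the bar but does not eliminate spoofing; source-port randomization and restricting who may query the server remain important complementary defenses.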
Another variation of the DNS spoofing attack is effective against servers that happily cache a DNS mapping even if they received it as additional information in response to a query that was completely unrelated to the spoofed record. By default, the DNS server software that comes with Windows NT and 2000 is vulnerable to this attack unless you explicitly set the Registry value HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\DNS\Parameters\SecureResponses to the REG_DWORD value of 1.5 On Windows 2000 and 2003, this Registry value can also be defined using the DNS Management Console by checking the Secure Cache Against Pollution check box in the properties of the server's object.6 As shown in Figure 13.5, this check box is not set by default in Windows 2000 (though it is in Windows 2003). Microsoft DNS is not the only software that might be vulnerable to such attacks; older versions of BIND were susceptible to similar spoofing attacks as well.
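On systems where the reg command is available (it is built in to Windows XP and 2003 and ships with the Windows 2000 Support Tools), the same value can also be set from the command line; this is a sketch, so verify the path and service name on your system:

```
rem Enable "Secure Cache Against Pollution" and restart the DNS service
reg add HKLM\System\CurrentControlSet\Services\DNS\Parameters /v SecureResponses /t REG_DWORD /d 1
net stop dns
net start dns
```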
Figure 13.5. The Secure Cache Against Pollution check box (shown in Windows 2000) is not enabled by default.
Implementing Split DNS
The implementation of Split DNS is relatively straightforward when it comes to servicing inbound DNS requests from the Internet. As shown in Figure 13.6, the external DNS server is located in the Public Servers zone, which is typically set up as a screened subnet or a DMZ. The external server's database only contains information on domains and systems of which the outside world should be aware. Records that need to be accessible only by internal users are stored on the internal DNS server. This design decreases the possibility that an attacker can obtain sensitive information by querying or compromising the external DNS server.
Figure 13.6. In Split DNS configurations, public and internal DNS records are hosted using two servers, each located in different security zones.
The internal DNS server, in addition to maintaining authoritative records for internal systems, needs to handle requests from internal hosts for DNS information about systems on the Internet. How does the internal server answer queries about external systems for which it does not have authoritative information? We can configure the internal server to forward such queries to another DNS server that performs the recursive query. Frequently, the DNS server located in the Public Servers zone plays this role. Alternatively, we can configure the internal DNS server to perform recursive queries.
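With BIND, for example, the forwarding variant can be sketched in the internal server's named.conf along these lines; the address of the external server is a hypothetical placeholder:

```
// named.conf on the internal DNS server (sketch)
options {
    // Send queries we are not authoritative for to the external
    // DNS server in the Public Servers zone...
    forwarders { 192.0.2.53; };
    // ...and never contact Internet hosts directly.
    forward only;
};
```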
Consider a scenario in which the internal DNS server forwards queries for Internet records to our external server. In this case, the internal DNS server is maximally isolated from the Internet because, in addition to never accepting connections from external hosts, it never initiates connections to systems on the Internet. The external DNS server, of course, needs to be configured to accept recursive queries only if they come from the internal DNS server. Unfortunately, in this scenario, the internal server relies on the server in the Public Servers zone to handle such requests. That is the same server we deem to be under increased risk because it accepts DNS requests from external hosts. If an attacker compromises the external DNS server or manages to spoof its DNS records, the internal DNS server might receive fabricated answers to its queries.
An alternative configuration permits the internal DNS server to perform recursive queries, in which case it initiates DNS connections to hosts on the Internet. In this scenario, the queries do not have to go through a server in the Public Servers zone, which bypasses a potentially weak link in the DNS resolution process. Unfortunately, by allowing the internal server to make connections to systems on the Internet, we increase the possibility that the internal server is directly affected by an external system. For example, a malicious DNS server could exploit a buffer overflow condition by carefully crafting a response to a DNS query. An attacker could use the server's ability to initiate connections to the Internet to establish a covert channel for communicating across the network's perimeter.
The server that makes outbound connection requests is at increased risk of being directly affected by an attack. Besides exploiting vulnerabilities such as buffer overflows, attackers could exploit DNS-specific weaknesses of the server (for example, by poisoning the server's DNS cache in response to a query). You can protect yourself against known vulnerabilities of this sort by staying current with the latest version of your DNS software and by configuring it in accordance with the vendor's and the industry's best practices. You also need to ensure that additional defense mechanisms are in place to mitigate the risks associated with unknown attacks.
If you use a server in the Public Servers zone to process outbound DNS requests, you are not protected against attacks such as DNS cache poisoning because spoofed information might be propagated to the internal DNS server. After all, a DNS server in the Public Servers zone is more likely to be compromised because it accepts requests from external hosts and is located on a subnet with other publicly accessible servers.
If you are willing to accept the risk that a compromise to the external DNS server might impact the ability of your internal users to resolve Internet hostnames, consider relaying outbound DNS requests through the server in the Public Servers zone. In a best-case scenario, you would actually use three DNS servers: one for servicing external users, one for answering queries for internal domains, and one for performing recursive queries about Internet systems. Unfortunately, this alternative is relatively expensive to set up and maintain and is quite uncommon.
As we mentioned previously, resources should be placed together based on their level of acceptable risk. One example of a set of resources in most business environments that share a similar acceptable risk is the client network, where end-user workstations and their ilk reside. When local area networks (LANs) first started springing up, it was not uncommon for servers and clients to share the same flat network structure. However, as networks became more complicated and businesses needed outside users to contact their servers, it became apparent that in most environments, servers needed to be split off into their own security zones. Now, in a world filled with Internet worms, mail-transported viruses, and the like, it is often more likely for a client to propagate a virus than a server. Because this places clients at a similar level of acceptable risk, it makes sense for them to share a security zone of their own. Clients differ in their level of exposure, ranging from LAN-connected desktops to wandering laptops, VPN and dialup remote users, and, finally, wireless clients. In this section, we discuss the advantages of separating these client types into their own zones for the benefit of your organization.
LAN-Connected Desktops
LAN-connected desktops usually have the lowest risk of any of our network's clients. Although most such clients have Internet access and the potential to propagate viruses and the like, at least we as administrators have the capability to force these stations to subscribe to our organization's security policy. We can verify that they are properly patched, with up-to-date virus definitions, and locked down to the best of our abilities, even creating automated processes that verify that all is well on a daily basis.
Despite all our efforts as administrators, the client PC can still be a liability to our network's security. By keeping clients in a separate security zone from internal and/or Internet-accessible servers, we limit the chances of having our clients affect the availability of our server networks. Although placing a firewall between our clients and servers requires additional administration to maintain the firewall policy, it provides a "chokepoint" for controlling communications between them and helps mitigate client risks.
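On a Linux-based internal firewall, such a chokepoint might be sketched with iptables rules along these lines; the interface names, addresses, and permitted services are hypothetical and would be replaced with your own policy:

```
# Default-deny between the client zone (eth0) and server zone (eth1)
iptables -P FORWARD DROP

# Allow clients to reach only the services they need
iptables -A FORWARD -i eth0 -o eth1 -d 10.1.1.25 -p tcp --dport 25 -j ACCEPT   # internal mail server
iptables -A FORWARD -i eth0 -o eth1 -d 10.1.1.53 -p udp --dport 53 -j ACCEPT   # internal DNS server

# Permit return traffic for established sessions; log everything else
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -j LOG --log-prefix "client-zone-drop: "
```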
Wandering Laptops, VPN and Dialup Users
A more complicated challenge in many network environments is the client we can't control. Whether physically located at our site or connecting in through a dialup connection or the Internet, it can be very difficult to force these hosts to subscribe to our security policy. It is a best practice to segregate all remote users into their own security zone, preferably behind a firewall. This will prevent these hosts from contacting resources they shouldn't, while logging their access to resources they should. Though dividing these clients into a separate security zone affords an additional level of protection, it does not confirm that the hosts follow your security policy. How can you be sure that the sales rep who is plugging in to a conference room network jack has the latest virus updates? Or that a VPN or dialup user has carefully patched all known vulnerabilities for her version of operating system? One answer is an initiative like Cisco's Self-Defending Network, as mentioned in Chapter 24, "A Unified Security Perimeter: The Importance of Defense in Depth." It uses Network Admission Control to confirm that any host that connects to your network meets certain criteria (patched, up-to-date virus definitions, certain version of the OS, and so on) before allowing access. This can even be applied to a host plugging in to a random jack in your office. For more information on NAC and the Self-Defending Network, check out Chapter 10, "Host Defense Components."
The Wireless Client
The wireless client is by far the most vulnerable of the clients in this section. It has all the vulnerabilities and concerns of the other client types, plus it is exposed to possible anonymous connections that require no physical access. Depending on your requirements, you might consider limiting the way wireless nodes communicate with one another. Alternatively, in a more cost-effective manner, you might group wireless nodes with similar security risks into their own security zones. As shown in Figure 13.7, we group all wireless laptops into a single security zone because, for the purposes of this example, our laptops do not significantly differ in acceptable risk exposure. (For this example, each laptop is as likely to be compromised as the others because of its use and configuration, and each laptop contains data of similar sensitivity.) At the same time, we consider wireless nodes to be more vulnerable to attacks than hosts on the wired segment and therefore decide to separate wireless and wired systems by placing them into different security zones.
Figure 13.7. To accommodate differences in risk, we isolate wireless systems by placing them into a dedicated security zone.
In this scenario, wireless and wired machines are hosted on different subnets: one representing the Corporate zone, and another representing the Wireless zone. We use an internal firewall to control the way that traffic passes between the two subnets. The firewall ensures that even if an attacker is able to gain Layer 2 access to the Wireless zone or its hosts, her access to the Corporate zone will be restrained by the firewall's rule set. (Additional defense mechanisms will still need to be in place to protect against attacks that use protocols the firewall doesn't block.) Even traffic that passes the firewall will be logged, giving us an audit trail of any attempted attacks. We could have used a router instead of a firewall, which would also segregate broadcast domains and dampen the attacker's ability to perform attacks against systems in the Corporate zone. We elected to use a firewall because its access restriction mechanisms are generally more granular than those of a router; however, if your budget does not allow you to deploy an internal firewall, a properly configured router might provide a sufficient level of segmentation. As always, this decision depends on the requirements and capabilities of your organization.
This separation is imperative, not only because of the client issues of the laptops themselves, but also because of the vulnerabilities inherent in wireless technology. Wireless is the first Layer 2 network medium for which we need to worry about attackers having remote anonymous access. Having wireless connectivity available outside of your physical perimeter is like having a live network jack outside of your building! Hence, the call for all the additional security mechanisms employed with wireless networks. In any event, by dividing all wireless connectivity into its own zone, a "wireless DMZ," Layer 3 access controls can be applied to wireless hosts connecting to your wired network. With new security concerns such as wireless DoS threatening us on the horizon, adding a chokepoint between your wireless connectivity and the rest of your network is crucial to a secure network.
Note that wireless access points typically have hub-like characteristics. This means that any wireless node that gains Layer 2 access to the access point might be able to promiscuously monitor network traffic on all ports of the access point. Placing a firewall between wireless and wired hosts does not protect you against such attacks because the firewall can only control traffic that crosses the security zones you defined. To mitigate the risks of wireless-to-wireless attacks, you would probably need to employ personal firewalls, coupled with robust VPN solutions such as IPSec to encrypt and authenticate wireless traffic, in a manner similar to protecting wired traffic that travels across potentially hostile networks. For more information on wireless security, refer to Chapter 14, "Wireless Network Security."