Chapter 13. DNS Security

   

DNS is complex and can be difficult to understand. This complexity is compounded by often conflicting advice on how DNS should be managed. Most of this advice is accurate, depending on your needs, but it is important to understand that not all of it applies equally to all situations.

As with any other part of the network, there are several aspects of DNS security that need to be addressed:

  • The domain name

  • The authoritative DNS server

  • Individual zone files

  • The caching DNS server

Before digging into DNS security strategies, it is important to know a little of the history of DNS. When the Internet was still a DARPA project called ARPANET, administrators realized they needed an easy way for machines to find one another by name. To solve this problem, a file called hosts.txt was created and stored on a server run by the InterNIC. The purpose of the hosts.txt file was to map host names to addresses, allowing servers connected to the DARPA network to talk to each other. Administrators would download the hosts.txt file from the InterNIC machines every night so they always had the latest information.

NOTE

The hosts.txt file obviously did not scale well, but it became an ingrained part of most operating systems. Chances are that if you search your machine for a hosts file, you will find a remnant of it.


In the 1980s ARPANET adopted TCP/IP as its official protocol suite, and in 1983 Jon Postel released RFC 880, containing a proposed plan for developing DNS. Paul Mockapetris also introduced RFCs 882 and 883, outlining a domain name infrastructure. The Information Sciences Institute (ISI), part of the University of Southern California, was tapped in RFC 990 to manage the root name servers, and SRI International was tapped to manage the first top-level domains (TLDs), which included .arpa, .com, .edu, .org, .mil, and .gov.

NOTE

The first .com domain ever registered was symbolics.com on March 15, 1985.


In 1992 the Defense Information Systems Agency (DISA) transferred control of the .com, .org, .edu, .gov, and .net domains to the National Science Foundation (NSF). In 1993 the NSF outsourced control of these domains to Network Solutions, a division of SAIC. Network Solutions began charging for domain names in 1995. When the contract between the NSF and Network Solutions expired in 1998, a new organization, the Internet Corporation for Assigned Names and Numbers (ICANN), was formed to open up registration for TLDs. ICANN also coordinates the generic top-level domain (gTLD) and country code top-level domain (ccTLD) systems, and it is responsible for ensuring that the root name servers function properly.

The DNS architecture is often compared to a tree. While that analogy is not too far off, it does not go far enough. To get a better idea of how DNS works, think of the ugliest tree on the face of the earth. The tree has hundreds of limbs growing off the trunk, each spreading out in a different direction. The branches are often ensnarled, and each limb appears to have a different type of leaf. Some branches have maple leaves, others have oak leaves, and still others look as if they belong to weeping willows. Every sort of leaf, flower, or fruit imaginable hangs off this tree. If you have no trouble picturing this tree, then DNS should be a snap.

The DNS tree starts with the root domain, which is written as a single dot ("."). All other domains stem from there, so the gTLD .com is really "com." with a trailing dot representing the root; the same applies to .edu, .net, and so on. From the TLDs spring domain names, such as example.com, and from the domain names stem fully qualified domain names (FQDNs), such as www.example.com.
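To make the hierarchy concrete, here is a tiny, purely illustrative sketch (in Python, chosen here only for illustration) that splits an FQDN into its labels and prints each level of the tree, from the TLD down to the host:

    # Illustrative only: walk an FQDN from the TLD down to the host name.
    fqdn = "www.example.com."                  # the trailing dot is the root domain
    labels = fqdn.rstrip(".").split(".")       # ['www', 'example', 'com']
    for i in range(len(labels), 0, -1):
        # Prints com., then example.com., then www.example.com.
        print(".".join(labels[i - 1:]) + ".")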

Information about domain names is stored on name servers, and different name servers are responsible for different branches of the tree. The root domain is hosted on the root name servers. There are currently 13 root name servers spread throughout the world; their names, IP addresses, locations, and owners are listed in Table 13.1.

Table 13.1. The Root Name Servers

FQDN                  IP Address        Location             Owner
A.ROOT-SERVERS.NET    198.41.0.4        Herndon, VA          Network Solutions
B.ROOT-SERVERS.NET    128.9.0.107       Marina del Rey, CA   USC, ISI
C.ROOT-SERVERS.NET    192.33.4.12       Herndon, VA          PSINet (Cogent Communications)
D.ROOT-SERVERS.NET    128.8.10.90       College Park, MD     University of Maryland
E.ROOT-SERVERS.NET    192.203.230.10    Mountain View, CA    NASA
F.ROOT-SERVERS.NET    192.5.5.241       Palo Alto, CA        ISC
G.ROOT-SERVERS.NET    192.112.36.4      Vienna, VA           DISA
H.ROOT-SERVERS.NET    128.63.2.53       Aberdeen, MD         Army Research Laboratory
I.ROOT-SERVERS.NET    192.36.148.17     Stockholm, Sweden    NORDUnet
J.ROOT-SERVERS.NET    198.41.0.10       Herndon, VA          Network Solutions
K.ROOT-SERVERS.NET    193.0.14.129      London, England      RIPE
L.ROOT-SERVERS.NET    198.32.64.12      Marina del Rey, CA   ICANN
M.ROOT-SERVERS.NET    202.12.27.33      Tokyo, Japan         WIDE

The root name servers maintain information about all the ICANN-approved gTLDs and ccTLDs. That information is collected from a master database maintained by the authority for each TLD. For example, the .com gTLD is maintained by VeriSign Global Registry Services (GRS); the root name servers periodically retrieve data from the VeriSign GRS database so that information about the .com gTLD is available to everyone. The root name servers receive similar periodic updates from the authoritative registry servers for all the ICANN-approved TLDs.

The ICANN-approved label is important, because there is nothing preventing anyone else from starting a registry that competes with the ICANN-approved registries. In fact, every couple of years a new one seems to surface; New.net is the current example. These competing registries generally allow individuals or companies to register domains under different TLDs, such as .tech or .kids. The problem with a registry service not associated with ICANN is that the vast majority of people on the Internet will not be able to resolve its domain names, because most people query the root name servers for information. These domains are not ICANN-approved, so they are not part of the data stored on the root name servers. A name server administrator can adjust his or her name servers to query the servers maintained by the alternate registry, but most won't.

How does the whole process work? When a user wants to visit a domain name, for example, by typing www.example.com into a web browser, the request is first sent to a caching name server. The caching name server has a list of the root name servers, usually in a file called named.root, root.hints, or db.cache, with information similar to what is in Table 13.1. The caching name server queries one of the root name servers for information about the domain example.com. The root name server answers the caching name server with information about example.com; specifically, it tells the caching name server which name servers are authoritative for example.com.
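For reference, a root hints file is an ordinary DNS master-format file listing the names and addresses of the root servers. A trimmed excerpt built from the entries in Table 13.1 might look like the following (the full file lists all 13 servers):

    ;; Excerpt from a root hints file (named.root / root.hints / db.cache)
    .                        3600000      NS    A.ROOT-SERVERS.NET.
    A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
    .                        3600000      NS    B.ROOT-SERVERS.NET.
    B.ROOT-SERVERS.NET.      3600000      A     128.9.0.107
    ;; ...and so on, through M.ROOT-SERVERS.NET.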

Authoritative name servers, which are generally either run by an ISP or located on a company's premises, provide information about a domain name. Each domain should have at least two authoritative name servers, in two different locations. Both authoritative name servers hold the same information about the domain name; that information is stored in a zone file.
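As a rough illustration, a minimal zone file for example.com might contain an SOA record, the two NS records, and a handful of address records. The host names and addresses below are placeholders (drawn from the reserved documentation address blocks), not values from this chapter:

    $TTL 86400
    example.com.       IN  SOA  ns1.example.com. hostmaster.example.com. (
                                2002041501   ; serial
                                10800        ; refresh
                                3600         ; retry
                                604800       ; expire
                                86400 )      ; minimum TTL
    example.com.       IN  NS   ns1.example.com.
    example.com.       IN  NS   ns2.example.com.
    ns1.example.com.   IN  A    192.0.2.53       ; placeholder addresses
    ns2.example.com.   IN  A    198.51.100.53
    www.example.com.   IN  A    192.0.2.80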

The caching name server, having gotten the list of authoritative name servers from the root name servers, sends a query to one of the authoritative name servers for information about www.example.com. Assuming the authoritative name servers are configured correctly, the IP address for www.example.com is sent back to the caching name server, which passes it to the user who made the original query. This process is illustrated in Figures 13.1 and 13.2.

Figure 13.1. The first part of the DNS process: a network user queries a caching name server, which queries a root name server.


Figure 13.2. The caching name server uses the host information from the root name servers to query ns1.example.com, which returns an IP address for www.example.com. This address is passed on to the user who made the original query.

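To watch the first step of Figures 13.1 and 13.2 yourself, the following minimal sketch uses the third-party dnspython library (an assumption; install it with pip install dnspython) to ask a root server about www.example.com. The root server does not answer the question directly; it returns a referral toward the name servers responsible for the next level down:

    # Minimal sketch, assuming dnspython is installed. This performs only the
    # first, iterative query of the lookup -- it is not a complete resolver.
    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT = "198.41.0.4"  # A.ROOT-SERVERS.NET, from Table 13.1

    query = dns.message.make_query("www.example.com.", dns.rdatatype.A)
    query.flags &= ~dns.flags.RD                  # iterative query: don't request recursion
    response = dns.query.udp(query, ROOT, timeout=5)

    print("Answer section:    ", response.answer)      # usually empty at the root
    print("Authority section: ", response.authority)   # NS records -- the referral
    print("Additional section:", response.additional)  # addresses of those name servers

A real caching name server simply repeats this step, following each referral until it reaches the authoritative name servers for example.com and receives the address of www.example.com.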

The distributed, tree-like nature of DNS is its greatest asset. Information is stored redundantly: there are 13 root name servers with identical information, at least two authoritative name servers for each domain, and, in most companies, at least two caching name servers. This type of data replication provides a robust infrastructure, giving DNS a high level of availability. For the most part, even if multiple servers in the DNS process fail, a user will still be able to reach the intended destination.

It is extremely important to remember that distribution is crucial to the security of DNS. It is essential that authoritative name servers reside on two different networks, preferably in two different geographic locations (i.e., not the same office). This is known as the two-network rule. Many companies have fallen victim to DNS attacks because their DNS servers were sitting on the same network segment. If an attacker cannot exploit a security hole within the DNS server, then he or she only needs to launch a DoS attack against that network segment. It is significantly more difficult to launch a DoS attack against DNS servers in different locations, especially if one of those servers is located within your ISP's data center.
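As a hypothetical illustration of the two-network rule, a domain's NS records might list one name server on the company's own network and one hosted in the ISP's data center, in a different address block. All names and addresses below are placeholders:

    ; Two authoritative name servers on two different networks
    example.com.       IN  NS  ns1.example.com.       ; on the company network
    example.com.       IN  NS  ns2.isp-example.net.   ; hosted at the ISP
    ns1.example.com.   IN  A   192.0.2.53             ; placeholder documentation address
    ; ns2.isp-example.net gets its address from the ISP's own zone, e.g., 198.51.100.53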

In addition to following the two-network rule, many DNS experts recommend running different operating systems on each server. The strategy is the same as the two-network rule: If an attacker is able to exploit a security weakness in one DNS server, the same weakness may not exist on the second server.

The distributed nature of DNS is also its greatest liability. With so many servers involved in the DNS process, there are many potential areas for security breaches:

  • How does the caching name server know that the root name server has the right information?

  • If it does get the right information, how does the caching name server know that the information from the authoritative name servers is accurate?

  • How does the user know that the information coming from the caching DNS server is the right information?

These liabilities, part of the challenge of securing DNS, exist in part because DNS, as a protocol, has been around for so long. DNS is 20 years old and was not originally designed with security in mind. Obviously, things have changed, and there are ways to secure DNS, but not all of these methods are in widespread use. Because DNS is crucial to the infrastructure of the Internet, it is important to ensure that all aspects of your DNS process are secure.

   

