Split DNS, NAT, and Network Hiding


Many companies don't want to share the complete DNS information for their internal network with the world, for security reasons. Another reason is that they might be using a NATing Internet gateway, so the complete DNS information contains hosts with IP numbers that are not routable on the Internet. They therefore make the complete DNS information unavailable from the outside. But they still want some hosts to be seen and used by the outside, so they provide that part of the DNS information to the world.

The network management principle is known as network hiding, and the accompanying DNS configuration is known as split DNS. Many will argue that network hiding is an exercise in futility: scores of things cross over from the internal network to the Internet that give away names and internal addresses of hosts on the inside. Something as simple as a mail header section contains oodles of information: hostnames, perhaps IP numbers, the name and version of MTA software, and maybe even details about how mail service and the network are structured on the inside. In Listing 8.1, the host usrlms004.prenhall.com is a hidden host: the name is not in DNS. But we still know its IP address, 168.146.69.20. And we know what MTA it runs, NPlex, which, from what I can tell after a short Web search, runs on Microsoft Windows and appears to have been discontinued by its maker, who has a Web site at http://www.isocor.ie. Another possible security breach occurs if Web browsers on the inside network have Java, ActiveX, or scripting enabled. In that situation, a simple little program can give away the same kinds of information to an outsider.

Listing 8.1 A Mail Header with a Story to Tell
 Received: (qmail 19833 invoked from network); 18 Jul 2000 18:31:03 -0000
 Received: from mail.linpro.no (HELO linpro.no) (195.0.166.2)
   by nfsd.linpro.no with SMTP; 18 Jul 2000 18:31:03 -0000
 Received: (qmail 13477 invoked by uid 5032); 18 Jul 2000 18:31:03 -0000
 Received: (qmail 13474 invoked from network); 18 Jul 2000 18:31:03 -0000
 Received: from usrlms006.prenhall.com (198.4.159.40)
   by mail.linpro.no with SMTP; 18 Jul 2000 18:31:03 -0000
 Received: from usrlms004.prenhall.com (168.146.69.20)
   by usrlms006.prenhall.com (NPlex 2.0.123) for janl@linpro.no;
   Tue, 18 Jul 2000 14:31:01 -0400
 Received: by usrlms004.prenhall.com (NPlex 2.0.119);
   Tue, 18 Jul 2000 14:32:19 -0400
 X400-Content-Identifier: Website
 Message-Id: <"/GUID:QDdUMkrJc1BGq7ADQt4IXPg*/G=Lucy/S=Sky/
              OU=mcp{095}exch/O=pearsontc/PRMD=pearson/
              ADMD=telemail/C=us/"@MHS>

With BIND 8 and earlier versions, split DNS requires two different sets of nameservers. BIND 9 implements a view feature that enables you to present the two views of the network from one set of nameservers (see Chapter 16 for more information about BIND 9).

Consider the following scenario. The principle is simple: penguin.bv wants to set up a firewall with private IP numbers on the inside. It is going to set up internal DNS servers for the inside hosts and external DNS servers for the rest of the world. The external DNS will hold only the addresses of the mail, FTP, Web, and DNS servers. The internal DNS will have all that, plus all the internal hostnames. The global DNS will have knowledge of only the external penguin.bv DNS servers, and will never query any servers but those about penguin.bv names. That way, no one on the outside can get information about the inside (see Figure 8.2).

Figure 8.2. Internal queries go to the internal server, which knows everything about the private network and resolves queries about the Internet in the normal manner. Queries from the Internet are answered by the external DNS, which has only limited knowledge of the penguin.bv zones.

[Figure 8.2: graphics/08fig02.gif]
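
To make the difference between the two views concrete, here is a minimal sketch of what the external penguin.bv zone file might contain: only the few hosts the outside needs to reach. The records and addresses are illustrative inventions, not the book's actual example files; the internal zone file would contain all this plus every inside host.

 ; pz/penguin.bv -- external view, a hypothetical sketch
 $TTL 86400
 @       IN  SOA ns.penguin.bv. hostmaster.penguin.bv. (
                 2000071801 ; serial
                 86400      ; refresh
                 7200       ; retry
                 3600000    ; expire
                 172800 )   ; minimum
         IN  NS  ns.penguin.bv.
         IN  NS  ns2.penguin.bv.
         IN  MX  10 mail.penguin.bv.
 ns      IN  A   10.0.0.2
 ns2     IN  A   10.1.0.2
 mail    IN  A   10.0.0.4
 www     IN  A   10.0.0.5
 ftp     IN  A   10.0.0.6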

Inside the firewall, all the hosts are set up to query the internal DNS servers. These resolve queries in the normal manner: They answer queries about penguin.bv hosts from their own zone files, and when the query is about something on the outside, the query proceeds from the root up, as always. Inside a large organization, you will set up either satellite cache servers or more slaves for the zones. If cache servers are set up, they must be set to forward only to the authoritative servers so that queries about the internal zone are resolved from an internal zone file:

 options {
     …
     // Forward only to internal authoritative servers
     forward only;
     forwarders {
         10.0.0.2;
         10.1.0.2;
     };
     …
 };

Why Forward-Only?

If the cache servers are set up to forward first or without forwarding, they either partially or completely fail to work. When set to forward first, BIND evaluates the answers from the forwarded-to host and discounts them entirely if they seem unreliable; suddenly, internal resolving could fail for no obvious reason. If you use forward only, BIND slaves itself to your authoritative servers and never fails to ask them.

If the cache does not ask the internal authoritative servers for names in the penguin.bv zone, it has no other way to resolve them than to consult its cache and root.hints and work from the root up. The query then ends up at the external DNS servers, because these are the only penguin.bv servers that can be found through the global DNS. The external DNS servers have no knowledge of the internal names, so the query fails, which is not what we wanted. Forward only is therefore obligatory on the nonauthoritative servers.

The external DNS servers are the ones penguin.bv registers with its TLD registrar; therefore, only the external servers are listed in the TLD, and only they can be found by anyone outside penguin.bv asking for them. Because the external DNS servers contain zones that are a subset of the real zones, listing only those hosts that need to be known by the outside, they cannot give any secrets away. The external DNS servers are also configured in a completely normal manner. All in all, split DNS is quite simple: The only differences between the internal and external servers are the contents of their zone files and whether they are registered with the TLD registrar. The only magic involved is that the internal hosts and caches must query only the internal servers, something that is easily arranged.
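
Arranging it is usually just a matter of resolver configuration on the inside hosts. A minimal sketch, assuming the internal servers from the forwarders example earlier:

 # /etc/resolv.conf on an inside penguin.bv host
 search penguin.bv
 nameserver 10.0.0.2
 nameserver 10.1.0.2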

Split DNS on a Firewall

Firewalls, NAT, and split DNS are different sides of the same story. Quite often, when you first get a firewall, you start doing network hiding and need a split DNS setup. It is tempting not only to use the firewall as a DNS proxy, but also to serve both the internal and external zones from it. Doing so makes things simpler and reduces the number of machines required. Fortunately, this is possible, using two instances of BIND and the listen-on option. It is, however, much simpler to do with BIND 9's views feature, which I mentioned earlier in the chapter, and you will soon see why.

Of the two instances of BIND, one is for the inside, listening on the inside network interface, and one is for the outside, listening on the outside interface. If the penguin.bv firewall has 192.168.55.1 as its inside address and 10.0.0.2 as its outside address, you configure the internal instance like this:

 acl internalifs { // Internal interfaces
     192.168.55.1;
 };
 options {
     …
     listen-on port 53 { internalifs; };
     …
 };

The external configuration simply replaces the address (and name) in the interface ACL. Otherwise, each BIND is configured as described previously: the external with the reduced zones and the internal with the complete zones. This enables you to get two nameservers for the price of one.
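
In other words, the external instance might use a configuration along these lines, a sketch that simply mirrors the internal one with the outside address given above:

 acl externalifs { // External interfaces
     10.0.0.2;
 };
 options {
     …
     listen-on port 53 { externalifs; };
     …
 };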

Several interfaces might be counted as internal or external, depending on your network. They should all be listed in the appropriate ACL; then it matters little whether the clients consistently use only one address for each purpose.

That is not the end of it, however. As I said previously, this should all be done inside a chroot jail, and, to add a twist, preferably one chroot jail for each instance of BIND. Each jail contains a separate named.conf and the zone files to accompany it. The chroot setup and scripts must be modified accordingly. This is a difficult bit to get right because a simple substitution of the ndc and named executables, such as I describe in Chapter 15, will not work. Dave Lugo has made some notes about how to do this at http://www.etherboy.com/dns/chrootdns.html. They are a bit Linux-specific, but replacing the init script, and using only the init script to manipulate named (and not ndc), is not a bad idea at all. All manual, simple invocations of named and ndc would fail in any case, and having two sets of ndc and named wrappers is a bit messy.
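
As a rough sketch of what starting the two jailed instances could look like (the jail paths are hypothetical; -t chroots named, -c names the configuration file as seen inside the jail, and -u drops root privileges):

 # Start one named per jail, each with its own named.conf and zone files
 named -u named -t /chroot/named-internal -c /etc/named.conf
 named -u named -t /chroot/named-external -c /etc/named.conf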

With BIND 9, you simply run one BIND and configure two different views that clients see based on their IP addresses. The 192.168.55 network is internal, and everything else is external:

 acl internal {
     192.168.55/24;
     localhost;
 };
 view internal_penguin {
     match-clients { internal; };
     zone "penguin.bv" {
         type master;
         file "pz-internal/penguin.bv";
         // Other zone statements here
     };
     // Other internal zones here
 };
 view external_penguin {
     match-clients { any; };
     zone "penguin.bv" {
         type master;
         file "pz/penguin.bv";
         // Other zone statements here
     };
     // Other external zones here
 };

When a query arrives, the views are checked in order for a match against the origin address of the query. In the previous configuration, any host not matching the first, internal view falls through to the second view, which matches any origin address. This is quite a bit neater than the configuration required for the two different BIND 8 instances outlined earlier.

Large Networks and Split DNS

In Chapter 3, I describe forwarding, and inside a large network, using forward only to a few resolving, recursive servers is an obvious solution. As the network grows, you can add levels of forwarding (but not many), such as is used in Australia, also described in Chapter 3. This has some scaling advantages because the chance of finding a cached reply increases as the query proceeds along the forwarding chain. The disadvantage, though, is that all queries that cannot be satisfied from a cache end up at one of the resolving servers at the end of the forwarding chain. This is contrary to how DNS usually works: forwarding centralizes resolving rather than distributing it as intended. With a large enough network, the load on the central forwarding servers can become considerable. It can be eased by having several resolving servers and listing them in the forwarders statement in random order in the satellite servers. Adding resolving servers with full resolving powers and access to the Internet scales linearly, if you manage to distribute the load evenly.
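
For example, two satellite caches might list the same resolving servers rotated differently, so that their first choices differ; the third server here is a hypothetical addition to the two used throughout this chapter:

 // named.conf on satellite A
 options {
     forward only;
     forwarders { 10.0.0.2; 10.1.0.2; 10.2.0.2; };
 };

 // named.conf on satellite B: same servers, rotated
 options {
     forward only;
     forwarders { 10.1.0.2; 10.2.0.2; 10.0.0.2; };
 };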

With this setup, you can still delegate subdomains to other servers and perform the usual tasks for large networks. But performance can become less than optimal. If you do it incorrectly, queries from within a subdomain about a name in that same subdomain proceed all the way to the central forwarding servers before being resolved in the normal manner by the subdomain's own authoritative nameserver. To do it correctly, the hosts within a delegated subdomain should query their own nameservers, which are authoritative for the zone and therefore capable of resolving names in the subdomain right away instead of forwarding the query.
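
A sketch of such a subdomain nameserver, with a hypothetical subdomain name and master address: because BIND answers zones it holds authoritatively without consulting forwarders, only queries for outside names follow the forwarding chain.

 // named.conf on a dept.penguin.bv nameserver (hypothetical subdomain)
 options {
     forward only;
     forwarders {
         10.0.0.2;
         10.1.0.2;
     };
 };
 zone "dept.penguin.bv" {
     type slave;
     file "sz/dept.penguin.bv";
     masters { 192.168.55.1; }; // hypothetical zone master
 };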

At some point, the pressure of the query concentration that forwarding causes can become unmanageable. At that point, it's time to implement internal root servers. In Chapter 11, "DNS on a Closed Network," I describe such private, internal root servers. An internal root setup has the same scaling properties as DNS as used on the Internet, which is pretty good. The disadvantage, or what used to be the disadvantage, of an internal root setup is that it cannot resolve names outside your private network: the Internet DNS effectively becomes unreachable. Two ways exist to work around this, though, and the second is my favorite.

Using Application Proxies

One way, which fits in well with strict security policies, is to use application proxies that have access to the Internet DNS namespace for all traffic to the outside. For example, the organization is likely to have one or several Web proxy machines through which all Web content is fetched. These Web proxies can be configured to use an Internet-aware DNS server rather than the internal DNS. Web proxies such as Squid even have configuration file options to override /etc/resolv.conf so that they can resolve things independently of the rest of the applications on the machine. The same can be done with mail and other application proxies set up around the perimeter of the internal network. Careful tricks with wildcard MX records and other records as needed can be used with this setup to achieve pretty much anything.
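
The wildcard MX trick, in sketch form: on a closed network with internal roots (see Chapter 11), a wildcard MX in the internal root zone can direct mail for any otherwise unknown domain to a perimeter mail relay, which does have real Internet DNS access. The relay name here is hypothetical, and its A record would live in the penguin.bv zone as usual.

 ; Fragment of an internal root zone file (illustrative):
 ; mail for any domain not known internally goes to the relay
 *   IN  MX  10 mailgw.penguin.bv.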

Using BIND

This method works only with BIND 8.2 and later versions, which introduced a new zone type, forward, that enables you to override the global forwarding policy for specific domains. This is exactly what we want.

First, you install an internal root setup exactly as described in Chapter 11, with internal hint files and everything. Then, by default, you forward all queries to an Internet-aware resolving server. As before, the configuration is

 options {
     // Forward to servers with Internet root.hints files
     forward only;
     forwarders {
         10.0.0.2;
         10.1.0.2;
     };
 };

Next, you override that forwarding for the local domains, penguin.bv and the reverse zones:

 zone "penguin.bv" {     type forward; }; zone "10.in-addr.arpa" {     type forward; }; 

This cancels the global forwarding policy, forcing normal, non-forwarded, hint-using resolution of these domains and their subdomains. The scalability problem of a forwarding hierarchy is solved. Of course, you can add as many forward zones as you need as the corporation grows into a multinational conglomerate, acquiring companies that are folded into the network. However, this is not a perfect solution, because maintaining and distributing the forwarding exception list can grow into a tedious task.
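
For instance, if penguin.bv acquired a hypothetical walrus.bv whose 172.20/16 network is folded into the net, two more exception zones would be added:

 zone "walrus.bv" {
     type forward;
 };
 zone "20.172.in-addr.arpa" {
     type forward;
 };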

BIND 9.0.0 does not support forward zones, but a future version will.

Now that forward zones are available, the disadvantages that formerly dissuaded people from using internal roots have either vanished or become so small that I would prefer to set up internal roots, rather than organize a forwarding hierarchy, when the need for scaling arises. It is less work and will not need heavy reorganization in any foreseeable future.


