Firewalls Are Policy

The yin and yang of perimeter security policy can be referred to as access and control. When you come to fully understand these, it is hard to think of an access control list (ACL) in the same way. The point of a network is to provide access. Access pertains to accessibility: providing service, performance, and ease of use. Control focuses on denial of unauthorized service or access: separation, integrity, and safety. At one point as a community, we thought that two basic perimeter policy models existed:

  • Everything is denied except that which is specifically permitted.

  • Everything is permitted except that which is specifically denied.

That sounds good, but it is bogus. In truth, one policy exists:

  • Everything is denied except that which is specifically permitted or that which gets in anyway.

Let's illustrate this with the simple case of making the access control decision on the destination port. For example, if the destination is TCP port 27374 (the default port for SubSeven 2.2), and 27374 isn't on the allow list, control is applied, and the packet is dropped. Internally, what is happening? The firewall scoots to the second 16-bit field in the TCP header and grabs the destination port to compare it to its access control policy. What if the packet is a fragment? Only the first fragment has protocol information. Let's say this is the runt, the last fragment of the original datagram, but it arrives first. We aren't going to make an access control decision on 27374; it isn't there. To be sure, we can make many decisions when dealing with a fragment:

  • Consult our state table to see if this is part of an existing connection.

  • Buffer the fragment, reassemble the datagram, and then make the access control decision.

  • Let the fragment through, but engage rate limiting to minimize harm.

  • If outbound ICMP unreachables are disabled, let the fragment through.

  • Drop the fragment and make the sender retransmit.
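To see why the port check fails on a bare fragment, here is a minimal sketch in Python. The hand-built IPv4 packets are illustrative, not captured traffic; only the layout of the fragment-offset field and the TCP destination port matters here.

```python
import struct

def tcp_dest_port(ip_packet: bytes):
    """Return the TCP destination port, or None if it isn't present.

    The destination port is the second 16-bit field of the TCP header,
    but only the first fragment of a datagram carries that header.
    """
    ihl = (ip_packet[0] & 0x0F) * 4                 # IP header length in bytes
    frag_field = struct.unpack("!H", ip_packet[6:8])[0]
    fragment_offset = frag_field & 0x1FFF           # low 13 bits, in 8-byte units
    if fragment_offset != 0:
        return None                                 # non-first fragment: no TCP header here
    # Destination port is bytes 2-3 of the TCP header.
    return struct.unpack("!H", ip_packet[ihl + 2 : ihl + 4])[0]

# First fragment: 20-byte IP header, MF flag set, offset 0, then TCP src/dst ports.
first = (bytes([0x45]) + bytes(5) + struct.pack("!H", 0x2000)
         + bytes(12) + struct.pack("!HH", 1234, 27374))
# The runt: MF clear, nonzero offset; its payload is not a TCP header.
runt = (bytes([0x45]) + bytes(5) + struct.pack("!H", 0x00B9)
        + bytes(12) + b"not a TCP header")

print(tcp_dest_port(first))   # 27374 -> the deny rule can match
print(tcp_dest_port(runt))    # None  -> nothing to compare against the ACL
```

When the runt arrives first, the filter has no port 27374 to match against, which is exactly the dilemma the list above tries to resolve.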

Firewall rules look simple when we are looking at a Check Point FireWall-1 GUI, but underneath there might be many complex decisions and assumptions. Complexity is the enemy of enforceable, consistent policy. Sometimes the result is that we are actually granting access when we think we are applying control. This is a case of unenforceable policy.

Access and Security Illustrated

Several years ago, I was talking with a perimeter analyst who was responsible for the site security policy for the Naval Space Command. He was overseeing the installation of a new application gateway firewall, a Sidewinder from Secure Computing. He wasn't sure what his organization should set for a security policy, so he decided to block everything and sit by the phone to see who called. This succeeded handily in providing security, but it fell a bit short in providing access. To this day, when I am reviewing an architecture, I cannot help but remember this approach.

Active Policy Enforcement

You can argue with your security officer or your boss, but you can't argue with the firewall. The firewall is a genuine policy-enforcement engine, and like most policy enforcers, it is none too bright. Much of this chapter is devoted to unenforceable policy. We are going to show that sometimes the problem is the firewall's limitations, but sometimes the firewall doesn't stand a chance. If you believe that a firewall can protect you, by the end of this section you should have some serious doubts. Often, the firewall is unable to enforce the site's policy; if you do not have defense in depth, you are running at high risk.

Unenforceable Policy

One thing you should try to be sensitive to is unenforceable policy. We will first paint the picture clearly with nontechnical organizational examples, but then show how it is possible to create situations in which policy is unenforceable with perimeter systems. At one time, the U.S. Government had a policy that mandated "no personal use of Government computers." During the time of mainframes (ancient computers that were the size of a house with less processing power than an Intel 386), that was probably enforceable.

Times changed. By 1985, almost all technical workers had at least one personal computer or workstation on their desktops. The world had changed, but the policy hadn't. When people have their own operating systems and file systems, the rule of no personal use is unenforceable. Have you ever known you needed to finish that documentation, but on your way to bring up the word processor, clicked your email icon to check your mail for a second and neglected to get back to the document for an hour? Or brought up Google to look up one thing, saw something else that looked interesting, and never found the original fact? With tools like this, is "no personal use" possible? No way! That becomes an unenforceable policy: policy that is written but cannot be enforced. Unenforceable policy, whether unrealistic administrative policy or failed perimeter policy enforcement, is not a good thing.

Unofficial Official Policy

I still remember working for the Defense Mapping Agency, now the National Imagery and Mapping Agency (NIMA). Just before the Christmas holidays, we used to load a game of Star Trek on the Digital PDP-11/70s for about two days. The game was primitive by today's standards, but these were huge graphics terminals used to edit maps. Playing Star Trek on these computers was the coolest thing I had ever seen. After the Christmas party, we would remove the game and search the file system for any file that was the same size as the game, in case it had been copied and renamed.

I asked the system administrator, "Why can't we leave it on there for slow nights when we get our work done early?" He informed me that the head of the topography department was a powerful and authoritarian man and I really didn't want to cross him. He created an unofficial policy that we could play Star Trek for two days, but it had to be completely removed after the Christmas party. This is known as an administrative control.

Administrative controls don't work. During the years I was at the Mapping Agency, the game would pop up now and again. We would find the game and remove it, but there was no way to actively and consistently enforce the Star Trek policy.

The Effect of Unenforceable Policy

If you have an unenforceable administrative policy, then people are encouraged to either ignore it or push the rules. In fact, one of the reasons that attacks are so widespread is that many laws against them are virtually unenforceable, especially because some courts have ruled that the reconnaissance phase, scanning, is legal. Another classic unenforceable policy is a requirement to report all malicious code infections. After the problem is cleaned up, the tendency is to move on with life. The security office has no way of knowing how many infections the organization has. One Navy group went to a central antivirus console so that infections were automatically reported by the workstations; it went from seven reports the year before to more than 1,000 with the console. As a general rule, any policy that does not have both a method to collect information and controls (the tools we use for enforcement) is probably unenforceable.

If You Were Told to Enforce "No Personal Use," Could You Do It?

I was once asked this question. It would be hard to get to a 100% solution, but I could block all incoming or outgoing traffic that wasn't to or from a .mil, .gov, or .int (NATO) address, and that would take care of a lot. This is some serious control!
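As a sketch of that blocking approach, the matching logic might look like the following. The hostname-suffix check is illustrative only; a real perimeter filter would work on addresses and reverse DNS, both of which can be evaded, which is why this gets you toward, but never to, a 100% solution.

```python
# Domains the hypothetical "no personal use" filter would permit.
ALLOWED_SUFFIXES = (".mil", ".gov", ".int")

def permitted(hostname: str) -> bool:
    """Allow traffic only to or from .mil, .gov, or .int hosts."""
    return hostname.lower().rstrip(".").endswith(ALLOWED_SUFFIXES)

print(permitted("www.nima.mil"))      # True  -> access granted
print(permitted("www.example.com"))   # False -> control applied
```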

In the case of no personal use, just like our simple example of making the access control decision on the destination port, we have those complicated cases such as fragmentation to deal with. Users might have the following types of questions:

  • What if my wife sends me an email? Is it okay to read it?

  • Can I check on my stocks at lunch?

The answer is, "Yes, these things are okay." The U.S. Government has retreated to a position called "limited personal use," which is enforceable through a number of firewalls and other perimeter tools. In essence, a limited personal use policy says that you can use your Government computer for personal use: don't ask, don't tell, and don't overdo the personal use. Don't send chain letters, fundraise, or pass sexual, racist, or illegal files.

If you were assigned to enforce limited personal use, could you do it? Subscription-based services that maintain sets of banned URLs are available. You can load a set for sites that are banned because they have sexual content, another set for hate speech, and so on. These go by names such as CYBERsitter and Net Nanny and are available for desktops and firewalls. They are known to be inflexible; they tend to apply control when they should be allowing access. For a while, it was a common sport on the Internet to make these tools look bad because they stopped web queries for "breast feeding" and so forth. They also sometimes allow access when they should apply control, such as when they don't know a URL and the site is a bit cagey about its content. That is why you have to pay the money for the subscription; if you want the tool to work, you have to keep it up to date. Most K-12 school systems employ these tools on their perimeters, and there we see one of the most extreme examples of the harmful effect of unenforceable policy: kids become heroes by going through an open proxy to download porn directly to the library workstations. The good news is that progress is on the horizon, with content-aware tools such as MIMEsweeper for Web from Clearswift, but these tools are expensive and come with their own headaches. Are you starting to believe that complexity is the enemy of enforceable, consistent policy?
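The over- and under-blocking behavior is easy to reproduce with a toy keyword filter. The banned-word list and URLs below are made up for illustration and do not reflect any real product's list.

```python
BANNED_KEYWORDS = {"breast", "hate"}      # crude subscription-style word list

def blocked(url: str) -> bool:
    """First-generation filtering: block if any banned word appears in the URL."""
    u = url.lower()
    return any(word in u for word in BANNED_KEYWORDS)

# Over-blocking: control applied where access should be allowed.
print(blocked("http://health.example/breast-feeding-tips"))   # True
# Under-blocking: a cagey site the list doesn't know sails through.
print(blocked("http://unknown-new-site.example/adult"))       # False
```

Both failure modes follow directly from matching on a static list rather than on content, which is why the subscription updates matter so much.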

We have gone from administrative controls, such as manually searching for banned software, to perimeter tools that have protection-oriented security controls, such as blocking banned URLs. In our next section, we explore the ways we can create or find unenforceable policy in the perimeter. These problem vectors include coding errors, lack of understanding what the policy should be, web tunneling, email attachments, disks in briefcases, and every kind of backdoor you can imagine.

Vectors for Unenforceable Policy

If unenforceable policy is a problem because it enables people to access things that we would prefer to control, then we want to minimize it. On the organizational, administrative level, we can review our policies to see if they meet the criteria of good policy that we discuss later in this chapter. On the technical side, we can use tools such as PacketX and hping2 to throw crazy packets at the perimeter and see what gets through. What kind of packets? Try short fragments, or TCP packets with odd code combinations or every possible option set. This can alert us to how the assumptions and design decisions underneath the rules we are able to write actually behave. In addition to a firewalking-type assessment, it is a good idea to ask yourself what vectors might allow unenforceable policy to manifest itself. We are the most likely culprits: sometimes we forget how firewall rules are processed, or we add them willy-nilly.

Unwittingly Coding Unenforceable Policy

Have you ever heard the saying, "I know it is what I asked for, but it isn't what I wanted!"? This happens to firewall administrators, the folks who write firewall rules, all too often. Many times we get what we asked for, but not what we wanted, when our firewall has complex rules. After all, even a seemingly simple rule set carries underlying assumptions, so a complex set of rules makes it pretty likely that a firewall administrator might accidentally arrange the rules in such a way that the firewall cannot enforce the policy the administrator thinks he has defined. This is the reason you hear recommendations such as "never have more than 20 rules." That sounds good, but what if you live in the real world? You might need a bit more than 20 rules.

Firewall administrators become aware of unwittingly coding unenforceable firewall policy when they run into their first case of self-inflicted denial of service. Such denial of service often happens simply because we fail to create a match before the default deny-all style rule. The following are some examples of common errors you might make, with the first example showing the incorrect way to allow HTTP and FTP traffic:

 allow tcp from any to any 80
 allow tcp from any to any 21
 deny tcp from any to any

The classic mistake here is forgetting FTP's data channel on port 20. That is easy, and in a three-rule set, we pick it up in seconds. In a 40-rule set, however, it might not be so easy.

Another simple mistake you might make is to write a broad match before a deny. The administrator intends to stop HTTP and FTP, but he writes an allow rule first and the intended deny rules are never processed. This is easy to see in a three-rule set, but it is much harder in a large rule set.

 allow tcp from any to any
 deny tcp from any to any 80
 deny tcp from any to any 21

If you have a fairly large rule set, pour a cup of coffee, sit down, pay close attention to the keyword any, and ensure that you know exactly what kind of matching your firewall has (best fit, or the first or last rule to match wins). You are off to the races!
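Both mistakes come down to match order. A toy first-match-wins evaluator (a sketch, not any real firewall's engine) makes the shadowing and the FTP data-channel gap visible:

```python
def first_match(rules, proto, port):
    """First-match-wins evaluation, the way many packet filters work."""
    for action, r_proto, r_port in rules:
        if r_proto == proto and r_port in (port, "any"):
            return action
    return "deny"                       # implicit default deny

# Mistake 1: the FTP data channel (port 20) was never allowed.
ftp_rules = [("allow", "tcp", 80), ("allow", "tcp", 21)]
print(first_match(ftp_rules, "tcp", 20))   # "deny" -- FTP transfers break

# Mistake 2: a broad match before the denies shadows them completely.
broken = [
    ("allow", "tcp", "any"),
    ("deny",  "tcp", 80),
    ("deny",  "tcp", 21),
]
print(first_match(broken, "tcp", 80))      # "allow" -- the deny is never reached
```

In a 40-rule set, either of these would be far harder to spot than it is in this three-rule toy.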

No Up-front Policy

The simple mistakes we just examined are why firewall books and instructors always stress that the first thing to do is to examine your site's policy and then create the rule set. If you just turn the firewall on and start adding rules, it is pretty easy to stuff an allow after a deny, or vice versa. It really pays off to write a rule set from the ground up. If you are not comfortable with the policy-first methodology we show in this book, create your own rule set, test it, test it some more, and stick with it. However, even with good rules that are properly organized, a policy can be subverted or made unenforceable through those two Mack-truck-sized holes living at TCP ports 80 and 25.

TCP Port 80

Most of us configure our firewalls to allow outbound port 80 (HTTP, or the World Wide Web). If you go to your favorite search engine and do a search on "tunnel port 80," you will find enough to curl your hair. From GNU httptunnel to custom web tunnels to emerging Internet standards, an abundance of tools and techniques is available to encapsulate any kind of network traffic imaginable in packets that appear to be HTTP. Applications such as instant messaging (IM) and peer-to-peer (P2P) file sharing clients can typically use a variety of ports, including port 80, so that they can find a way out through firewalls.

Many client applications and tunneling tools aren't just using port 80; they are actually encoding their traffic in HTTP with GET, PUT, and POST requests and markup language tags. Can the fake or encapsulated traffic be detected? Sometimes it can, but it is pretty difficult, and keyword searches or content inspectors are the best shot. This is a case where your organizational policy really matters. Either you are going to allow HTTP tunneling or you are not. Tunneling is usually for the purpose of evading the firewall, so let's say you don't. If you do catch someone, then your organizational policy needs to state clearly that the individual's head will be mounted on a pole outside the main entrance of the building as a deterrent to others. Port 80 tunneling generally requires intent by someone on the inside; email, however, is the most amazing policy destruction technology of all time.
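As a sketch of why detection is so difficult, consider the crudest possible content check on port 80: does the stream open like an HTTP request? The method list and sample bytes here are illustrative, and the punch line is that tools which speak genuine HTTP pass this check by construction.

```python
HTTP_METHODS = (b"GET ", b"POST ", b"PUT ", b"HEAD ", b"OPTIONS ")

def looks_like_http(first_bytes: bytes) -> bool:
    """Crude port-80 sanity check: does the stream open with an HTTP request line?"""
    return first_bytes.startswith(HTTP_METHODS)

print(looks_like_http(b"GET /index.html HTTP/1.1\r\n"))   # True
print(looks_like_http(b"SSH-2.0-OpenSSH_8.9\r\n"))        # False: a raw protocol on 80
# A tunnel that wraps its payload in well-formed HTTP still passes:
print(looks_like_http(b"POST /exfil HTTP/1.1\r\n"))       # True
```

Catching the third case requires inspecting what is inside the HTTP, which is exactly where the keyword searches and content inspectors come in.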


Email

The primary policy problems with email include users sending sensitive information or binary attachments, automated forwarding, and over-responsive email clients.

Sensitive Information

I did a project for the U.S. Military once in which I collected nothing but the source, destination, and subject lines of outbound email for a month. I ran that through a keyword search for sensitive technologies. I will never forget watching the color drain from the face of a battle-tested senior officer as I showed him the results. Fortunately, it was only a 4.4-billion-dollar-a-year weapons program; it would be a real shame if we were talking serious money. This organization had an unenforceable policy: "Never send sensitive information unencrypted over the Internet." However, these merry tricksters didn't give their users any way to encrypt; they were against Pretty Good Privacy (PGP), and they had been implementing Public Key Infrastructure (PKI) for about five years.
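A sketch of that kind of metadata sweep follows; the records and keyword list are hypothetical and nothing here comes from the actual project.

```python
# Hypothetical log: (source, destination, subject) of outbound mail.
outbound = [
    ("analyst@agency.example", "friend@example.com", "lunch Friday?"),
    ("eng@agency.example", "partner@example.com", "guidance system test results"),
]

SENSITIVE = {"guidance", "telemetry", "warhead"}   # illustrative watch words

# Flag any message whose subject line mentions a sensitive technology.
hits = [rec for rec in outbound
        if any(word in rec[2].lower() for word in SENSITIVE)]

for src, dst, subj in hits:
    print(f"FLAG: {src} -> {dst}: {subj}")
```

Even a sweep this naive, run against a month of real traffic, can drain the color from a senior officer's face.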

As email has become a primary means of communication, we have become more familiar with it and less aware of the risks. As a rule of thumb, before an employee has finished drinking his first cup of coffee, he will attach and send any file you ask for and never remember that he did it.

Don't you just love those cute greeting card programs that people send back and forth? Ever wonder if they might do more than whir or chirp? Malicious materials in email can be detected by content scanners at the perimeter, especially antivirus software. (Some organizations use two types of scanners, because one scanner may pick up a virus and the other may miss it.) The Royal Canadian Mounted Police has a handle on binary attachments. Whether documents or programs, the Royal Canadian Mounted Police refuses them all and sends polite notes from its perimeter saying it doesn't accept attachments. Most of us lean way too far in the direction of access over control when it comes to email.

Outlook is the quintessential unenforceable policy engine; if it receives an email from some unknown party, it happily accepts the email's programming instructions. If someone is running Outlook internally, it is probably impossible to secure the perimeter.

Let's say you are in some super-duper secure place, such as the CIA or NSA. In the above-ground world, some wacky macro virus like Melissa variant 2,000,012 is jumping from address book to address book, and suddenly the same behavior starts on your classified network that is air-gapped from the Internet! What happened? It's a good bet that infected media is being passed among systems.

Lessons That Melissa Taught Us

Before Melissa and Lovebug, not everyone understood how dangerous Outlook's behavior was. I still remember the day I saw the future and shook my head in disbelief. A friend was testing a personal firewall. Someone had sent her an email message with a URL. Outlook kindly retrieved the URL as soon as she highlighted the message so she didn't have to wait for the picture of a flower to which the URL pointed. When Outlook tried to get the flower, her ZoneAlarm alerted. I asked myself, "If Outlook will do that, what else will it do?" Even today as I write this, years after many crazy security flaws and Microsoft Office macro exploits, the answer seems to be, "Anything the sender wants it to." In this form-over-function world, I suppose organizations will continue to choose HTML-aware, macro-extendable programs such as Outlook, but I could live with plain, printable ASCII text in email if I had to.

Very Large, Very High-Latency Packets

When we do site security assessments, one of the things we like to do is the hurricane backup test. The idea is simple: A Category 5 hurricane is expected to hit the site in 35 hours. Senior management directs that they get a backup of all the data out to a safe location in advance of the hurricane. After some initial scurrying, they start to run backups and load backup tapes. A classic old trick is to wait until they are about done loading and then ask, "Did you get all the data?" They usually nod yes. "What about the data on the users' local drives?" "That's the user's responsibility," they reply. "We back up the servers." "Ummm, and where will you be without your top salesman's contact list, or your CIO's notes?" In general, there is hopping around and a discussion of running to Costco to buy all the Zip drives and disks they have. After some flapping, with the hurricane now only hours away, we have to leave with whatever backups we have.

The first time we did this and watched the van loaded with all the tapes head off to discover whether cold backup sites really work, the guy standing next to me commented, "Wow, when disks are in motion, you can think of them as very large, very high-latency packets!" As a security analyst, this is a significant vector to defeat our perimeter's active policy enforcement. VLVHLPs fit in shirt pockets, briefcases, and any number of other form factors. One thing is certain: Every variation of sneaker net (physically moving data around on foot or by vehicle) has the capability to evade perimeter defenses. In fact, several of us on the team have worked in a number of secure facilities where disks are supposed to be registered as they go in and out, and there are supposedly spot checks, but in 20 years, we have never been stopped.

When the terrorists attacked the World Trade Center on September 11, 2001, several cold site vans were circling the blocks as administrators raced to get the VLVHLPs out of the trade center and surrounding buildings. We live in a time of increasing threat. If we are responsible for business continuity, we should think in a far shorter time horizon than 35 hours.


Backdoors

Backdoors make our security policy unenforceable by evading our perimeter defenses. Everyone knows that modems can breach perimeter defenses, especially when they are connected to operating systems that support IP forwarding. Many countermeasures are available for this, ranging from proactively scanning your phone lines to using digital phone lines. Wireless adds a whole new dimension of challenge; cell phones can surf the web, forward faxes, and connect laptops to the Internet. With 802.11 in wide use, organizations would be wise to walk their physical perimeters looking for signals. Don't be fooled by the box that claims the devices are only good for about a hundred meters; people with amplifiers and modified antennas can get significant range. You can run these tests yourself with a free copy of NetStumbler and an inexpensive wireless network card. We need to think about our perimeter as a physical bubble drawn around our physical assets and be ready to test for breaches in a number of ways, from physical access caused by disks moving across the perimeter to RF signal. Access and security do not just apply to the computer or network, but to the organization as a whole. This is why the one true firewall maxim is, "Everything is denied except that which is specifically permitted or that which gets in anyway."

At this point, you should have a lot to think about. If you think of other vectors for unenforceable policy, we would love to hear from you! It is important to be alert for the situations in which policy cannot be enforced. These situations have a tendency to lead to chaos; they encourage people to either ignore the rules or push the envelope on the rules.

    Inside Network Perimeter Security (2nd Edition)
    ISBN: 0672327376
    Year: 2005
    Pages: 230
