Security Configuration Myths

Security configuration changes and guides have been around for about 10 years in the Windows world, longer in other areas. The original Windows NT 4.0 guides published by the U.S. National Security Agency and SANS were basically just lists of changes, with a little bit of rationale behind each setting, but no overall cohesiveness. They were a response to a demand for what we call the "big blue 'secure-me-now' button." The problem is that such a button does not exist. If it did, the vendor would ship it.

There is a lot at stake in security configuration guidance, and it is easy to understand why people are clamoring for it. Everyone can see the benefit in turning on some setting and blocking an attack. In some environments, applying such guidance is not even optional: a system must be configured in accordance with some security configuration or hardening guide to be compliant with security policy. In other environments, security configuration guidance is strongly encouraged. Before you start making security tweaks, however, we believe it is very important that you understand some of the fundamental problems with them. These are what we call the myths.

Before we start sounding like we hate security guides (which we do not), let us point something out: the authors have taken part in authoring, co-authoring, or editing almost all the commonly available guides for Windows in the past 10 years. Guides are valuable, done right. To do them right, however, you must understand what they cannot do. That is why the myths are important.

WARNING: This section is somewhat (OK, very) cynical. Take it with a grain of salt and laugh at some of the examples we give. Do not lose sight, however, of the message we are trying to get across. These are myths, and you need to be careful not to fall into the trap of believing them. If you can avoid that, you can focus your efforts on the things that make a real difference instead of being lured into staring at a single tree and failing to see the security forest, like so many others.


Myth 1: Security Guides Make Your System Secure

Hang on, why is this a myth? Is not the basic purpose of a security guide to make you secure? Yes, that is the general idea. However, remember the definition of secure from Chapter 1, "Introduction to Network Protection"? The term secure connotes an end state. We will never actually get there. Security is a process, to be evaluated on a constant basis. There is nothing that will put you into a "state of security." Unfortunately, many people (surely none of you readers, though) seem to believe that if you just apply some hardening guide, your system will now be secure. This is a fallacy for several reasons.

First, consider any of the recent worms: Sasser, Slammer, Blaster, Nimda, Code Red, ILOVEYOU, and friends, etc., etc., ad infinitum ad nauseam. Not a single one of them would have been stopped by any security settings. That is because these worms all exploited unpatched vulnerabilities (on unpatched systems). While most of the guides tell you that you need the patches applied, we have seen many systems that had the guides installed and whose owners therefore believed the patches were less important. If you are unsure of which patches to install, the proper answer is "all of them." Ideally, however, you should have more of a process around patch management; turn to Chapter 3, "Rule Number 1: Patch Your Systems," for a lengthy discussion of that. Few settings can prevent your network from getting attacked through unpatched vulnerabilities.

Second, recall how the network in Chapter 2, "Anatomy of a Hack: The Rise and Fall of Your Network," was attacked. Would a guide have stopped that attack? No. A guide would have made a few of the attacker's steps more difficult, but none of them would have been stopped outright. For instance, a security guide might have disabled anonymous enumeration, so we would have had to use a domain account instead (which we already had). A guide might also have turned off storage of LM hashes, which would have made cracking passwords much harder. However, as pointed out in Chapter 11, "Passwords and Other Authentication Mechanisms: The Last Line of Defense," cracking passwords is, strictly speaking, unnecessary. That's it! That is all the guides would have helped with. None of the other methods of attack would have been stopped by what the security guides typically change.

This is largely because security guides are meant to be simple, whereas sophisticated attacks are complex. Security guides provide a great starting point, but to really improve your security, you need to do a lot more. Generally, you need complex measures to stop complex attacks, and complex measures do not package well in the form of a security template.

A security guide does not make your system secure. At best, it provides an additional bit of security on top of the other things you have already done, or will do, to the system, as explained in other chapters. At worst, it compromises your security. For instance, a guide may very well compromise the availability portion of the Confidentiality-Integrity-Availability triad by destabilizing the system.

Myth 2: If We Hide It, the Bad Guys Will Not Find It

If only we had a dime for every time we have seen someone try to hide their system. Hiding the system rarely helps. Some examples are in order. Some people advocate turning off SSID broadcast in wireless networks. Not only does this mean you now have a network that is not compliant with the standard, your clients will also prefer a rogue network with the same name over the legitimate one. Oh, and it takes only a few minutes to find the network anyway, given the proper tools. Another example is changing the banners on your Web site so the bad guys will not know it is running IIS. First, it is relatively simple to figure out what the Web site is running anyway. Second, most of the bad guys are not smart enough to do that, so they just try all the exploits, including the IIS ones. Yet another one is renaming the Administrator account. It is a matter of a couple of API calls to find the real name. Our favorite is when administrators use Group Policy to rename the Administrator account. They now have an account called Janitor3 with a comment of "Built-in account for administering the computer/domain." That is not likely to fool anyone.
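
For the curious, here is a minimal sketch, in C, of what those "couple of API calls" might look like. The built-in Administrator always carries relative identifier (RID) 500 in the machine's account domain, regardless of its display name, so the program below (link with netapi32.lib and advapi32.lib) simply asks Windows what that account is called today.

    /*
     * Illustrative only: find the real name of the built-in Administrator,
     * no matter what it has been renamed to. The built-in Administrator
     * always has RID 500 in the machine's account domain, so we build that
     * SID and translate it back to a name.
     * Link with netapi32.lib and advapi32.lib.
     */
    #define _WIN32_WINNT 0x0501
    #include <windows.h>
    #include <lm.h>
    #include <stdio.h>

    int main(void)
    {
        USER_MODALS_INFO_2 *info = NULL;
        BYTE sid[SECURITY_MAX_SID_SIZE];
        DWORD cbSid = sizeof(sid);
        WCHAR name[256], domain[256];
        DWORD cchName = 256, cchDomain = 256;
        SID_NAME_USE use;

        /* Get the SID of the local machine's account domain. */
        if (NetUserModalsGet(NULL, 2, (LPBYTE *)&info) != NERR_Success)
            return 1;

        /* Build the well-known SID <machine domain>-500. */
        if (!CreateWellKnownSid(WinAccountAdministratorSid,
                                info->usrmod2_domain_id, sid, &cbSid))
        {
            NetApiBufferFree(info);
            return 1;
        }

        /* Translate the SID back to whatever name it carries right now. */
        if (LookupAccountSidW(NULL, sid, name, &cchName,
                              domain, &cchDomain, &use))
        {
            wprintf(L"Built-in Administrator is currently named %ls\\%ls\n",
                    domain, name);
        }

        NetApiBufferFree(info);
        return 0;
    }

Group Policy can rename the account all day long; the RID, and therefore this lookup, never changes.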

Renaming or hiding things is, generally speaking, much more likely to break applications than it is to actually stop an attack. Attackers know that administrators rename things and will go look for the real name first. Poorly written applications assume the Program Files directory is in a particular place, that the Administrator account has a particular name (which already varies by region), and so on. Those applications will now break. Arguably, they were already broken, but the net result is that they no longer function.

Myth 3: The More Tweaks, the Better

Security guides contain a lot of settings, and why not? There are a lot to choose from. Windows Server 2003 contains 140 security settings in the Group Policy interface, and that does not count ACLs, service configuration, encrypting file system (EFS) policies, IPsec policies, and so on. The "best" configuration of these for every environment is nebulous at best. Therefore, a number of people take the approach that if you just make more changes, you will be more secure. We distinctly remember a headline from late summer 2003 (in the northern hemisphere). It read "Dell Will Sell Systems That Are Secure by Default." Dell had just announced they would start selling Windows 2000 systems configured with the CIS Level 1 benchmark direct from the factory. The article went on to point out that this guide applies "over 50 security changes significantly improving the default security of Windows 2000."

Well, there were a couple of problems with that statement. First, the benchmark only made 33 changes, not "over 50." Second, only three of them had any impact on security at all. And third, although Dell may have configured some security settings on the system, it was being sold without the latest service pack slipstreamed, which would seem, at least to us, to be a basic requirement for security. Do not get us wrong; it is encouraging to see vendors step back, look at older operating systems, and evaluate whether they can be made more secure than what was considered prudent when they were first released several years ago. The problem, however, is that, first, this was presented as a way to get a "secure" system, when there is obviously no such thing. Second, the vendor had missed many of the basic requirements for a protected system.

Many settings people make have no real impact on security. Consider, for instance, the "Restrict floppy access to locally logged on user only" setting. It ensures that remote users cannot access any floppy disks via the network. However, this setting works if and only if (IFF) a user is currently logged on to the system hosting the floppy (otherwise, the setting does not take effect), and a share has been created for the floppy disk (not done by default), and the ACL on the share specifies that the remote user can get to it, and the system has a floppy drive in the first place, and there is a disk in it. Most systems sold today do not even have a floppy disk drive, not to mention how unlikely the other requirements are to occur together. We are inclined to say that this setting has no impact on security whatsoever.

We are also very fond of the NetworkHideSharePasswords and NetworkNoDialIn settings that several of the guides have advocated for years. The former is designed to ensure that when you set a share password, it is obscured in the user interface dialog, if you are running Windows 95. The setting has not worked since then. (Windows NT, including Windows 2000, Windows XP, and Windows Server 2003, has never supported share passwords.) Of course, even on Windows 95, the setting would have been much more effective had it been spelled correctly (network\hidesharepasswords). The latter setting, also misspelled, controlled modem dial-in permissions, also on Windows 95. In spite of the fact that these settings have never worked on any Windows NT-based operating system, there are still "security auditors" running around explaining to management that the security guys are not doing their job unless these two settings are configured, on Windows 2000 and even Windows XP. Far too often, the guides we see are taken directly from obsolete and technically inaccurate documents for other, obsolete, operating systems. Then they are made a requirement by people who do not understand security or the operating system they are trying to protect. Actually designing security to a threat model seems to be a luxury when it is so much easier to just charge exorbitant consulting fees for parroting back what someone else, who also did not understand the product, claimed was correct.

There are some basic ground rules:

  • Requiring settings that are already set by default does not improve security.

  • Settings that only modify behavior already blocked elsewhere do not improve security (although in some cases defense in depth is appropriate so long as you do not break required functionality in the process).

  • Settings that destabilize the system do not improve security.

  • Misspelled settings do not improve security.

  • Settings that do not work on the relevant product do not improve security.

If you are one of the unfortunate people who get evaluated based on the number of settings you make, go ahead and make a bunch of these meaningless changes. Heck, invent a few of your own (everyone else seems to). Here are a few you could use without breaking anything:

  • HKLM\Software\Microsoft\Windows NT\CurrentVersion\DisableHackers=1 (REG_DWORD)

  • HKLM\Wetware\Users\SocialEngineering\Enabled=no (REG_SZ)

  • HKCU\Wetware\Users\CurrentUser\PickGoodPassword=1 (REG_BINARY)

  • HKLM\Hardware\CurrentSystem\FullyPatched=yes (REG_SZ)

  • HKLM\Software\AllowBufferOverflows=no (REG_SZ)

Make sure you set proper ACLs on them, too. This way you can show that you are actually doing much more than anyone else. If you also create a pie chart showing how much you are improving return on investment (ROI) with your careful management of security, your promotion into useless management overhead (UMO) is a virtual certainty!
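
If you want to automate your ascent, here is a tongue-in-cheek sketch in C that creates the first of those values. The value name is, of course, made up (that is the point); only the registry API calls are real, and writing under HKLM requires administrative rights.

    /*
     * Tongue-in-cheek sketch: create the (entirely fictional) DisableHackers
     * value so it shows up in your next audit. The value is meaningless;
     * the registry calls are real. Needs admin rights; link with advapi32.lib.
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        DWORD disableHackers = 1;   /* the big blue "secure-me-now" button */

        if (RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                            L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                            0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL)
                != ERROR_SUCCESS)
            return 1;

        if (RegSetValueExW(key, L"DisableHackers", 0, REG_DWORD,
                           (const BYTE *)&disableHackers,
                           sizeof(disableHackers)) == ERROR_SUCCESS)
            wprintf(L"DisableHackers=1. Hackers disabled. Promotion pending.\n");

        RegCloseKey(key);
        return 0;
    }

Needless to say, none of this does anything for your security; it just pads the checklist.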

Meanwhile, the rest of us will focus on actually improving security through designing security measures to a threat model.

Myth 4: Tweaks Are Necessary

Some people consider tweaks a necessity, claiming that you cannot have a secure (read "protected") system without making a bunch of tweaks. This is an oversimplification. Tweaks are for blocking things you cannot block elsewhere. For instance, if you have two systems on a home network behind a firewall, or a corporate system whose IPsec policies only allow it to request and receive information from a few well-managed servers, tweaks are mostly unnecessary. Those systems will be perfectly fine without them.

Even on highly exposed systems, most of the tweaks are not necessary. In eWeek's Open Hack IV competition in 2002 (see http://msdn.microsoft.com/library/en-us/dnnetsec/html/openhack.asp), we built what was probably the most protected network we have ever built. In all, we made only four Registry tweaks, a couple of ACL changes, and set a password policy. The rest of the protection for those systems was based on proper network segmentation (see Chapter 8, "Security Dependencies"), a solid understanding of the threats (Chapter 9, "Network Threat Modeling"), turning off unneeded services, hardening Web apps (see Writing Secure Code, 2nd Edition, by Howard and LeBlanc, MS Press, 2003), and properly protecting the SQL and Web servers (see Chapter 14, "Protecting Services and Server Applications"). Of course, this was a specialized system with very limited functionality, but it still shows that less is often more.

Proper understanding of the threats and realistic mitigation of those threats through a solid network architecture is much more important than most of the security tweaks we turn on in the name of security.

Myth 5: All Environments Should At Least Use <Insert Favorite Guide Here>

One size does not fit all. Every environment has unique requirements and unique threats. If there truly were a guide for how to secure every single system out there, the settings in it would be the default. The problem is that when people make these statements, they fail to take into account the complexity of security and system administration. As mentioned in Chapter 1, administrators usually get phone calls only when things break. Security breaks things; that is why some security-related settings are turned off by default. To be able to protect an environment, you have to understand what that environment looks like, who is using it and for what, and which threats they have decided need to be mitigated. Security is about risk management, and risk management is about understanding and managing risks, not about making a bunch of changes in the name of making changes, solely to justify one's own existence and paycheck.

At the very least, an advanced system administrator should evaluate the security guide or policy that will be used and ensure that it is appropriate for the environment. Certain tailoring to the environment is almost always necessary. These are not things that an entry-level administrator can do, however. Care is of the essence when authoring or tailoring security policies.

Myth 6: "High Security" Is an End Goal for All Environments

High security, in the sense of the most restrictive security possible, is not for everyone. As we have said many times by now, security will break things. In some environments, you are willing to break things in the name of protection that you are not willing to break in others. Had someone told you on September 10, 2001, that you needed to arrive at the airport three hours ahead of your flight to basically be strip-searched and have your knitting needles confiscated, you would have told them they were insane. High security (to the extent that airport security is truly any security at all and not just security theater) is not for everyone, and in the world we lived in until the morning of September 11, 2001, it was not for us. After planes took to the skies again, few people questioned the need for more stringent airport security.

The same holds true of information security. Some systems are subjected to incredibly serious threats. If they get compromised, people will die, nations and large firms will go bankrupt, and society as we know it will collapse. Other systems are protecting my credit card numbers, for which I am liable up to $50 if they get compromised. The protective measures that are used on the former are entirely inappropriate for the latter; however, we keep hearing that "high security" is some sort of end goal toward which all environments should strive. These types of statements are an oversimplification that contributes to the general distrust and disarray in the field of information security today.

Myth 7: Start Securing Your Environment by Applying a Security Guide

You cannot start securing anything by making changes to it. Once you start changing things, the environment changes, and the assumptions you started with are no longer valid. We have said this many times, but to reiterate: security is about risk management; it is about understanding the risks and concrete threats to your environment and mitigating those. If the mitigation steps involve taking a security guide and applying it, so be it, but you do not know that until you have analyzed the threats and risks.

Myth 8: Security Tweaks Can Fix Physical Security Problems

A fundamental concept in information security states that if the bad guys have physical access to your computer, it is not your computer any longer! Physical access will always trump software security, eventually. We have to qualify the statement, however, because certain valid software security steps will prolong the time until physical access breaches all security. Encryption of data, for instance, falls into that category. However, many other software security tweaks are meaningless. Our current favorite is the debate over USB thumb drives. In a nutshell, after the movie The Recruit, everyone woke up to the fact that someone can easily steal data on a USB thumb drive. Curiously, this only seems to apply to USB thumb drives, though. We have walked into military facilities where they confiscated our USB thumb drives but let us in with 80 GB IEEE 1394 hard drives. Those are apparently not as bad.

One memorable late evening, one author's boss called him frantically, asking what to do about this problem. The response: head on down to your local hardware store, pick up a tube of epoxy, and fill the USB ports with it. While you are at it, fill the IEEE 1394 (FireWire), serial, parallel, SD, MMC, memory stick, CD/DVD-burner, floppy drive, Ethernet jack, and any other orifices you see on the back, front, top, and sides of the computer, monitor, keyboard, and mouse with it, too. You will also need to make sure nobody can carry the monitor off and make a photocopy of it. You can steal data using all of those interfaces.

The crux of the issue is that as long as there are these types of interfaces on the system, and bad guys have access to them, all bets are off. There is nothing about USB that makes it any different. Sure, the OS manufacturer can put a switch in that prevents someone from writing to a USB thumb drive. That does not, however, prevent the bad guy from booting to a bootable USB thumb drive, loading an NTFS driver, and then stealing the data.

In short, any software security solution that purports to be a meaningful defense against physical breach must persist even if the bad guy has full access to the system and can boot into an arbitrary operating system. Registry tweaks and file system ACLs do not provide that protection. Encryption does. Combined with proper physical security, all these measures are useful. As a substitute for physical security, they are usually not.
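
To make the contrast concrete, here is a minimal sketch, in C, of the kind of measure that does persist when an attacker boots the machine into another operating system: turning on EFS for a file. The path is a made-up example; EncryptFileW is the Win32 call behind the "Encrypt contents to secure data" check box.

    /*
     * Sketch: protection that survives an offline attack. EncryptFileW()
     * turns on EFS for a file, so the bytes on disk stay encrypted even if
     * the disk is read from another operating system. The file path below
     * is purely illustrative. Link with advapi32.lib.
     */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LPCWSTR file = L"C:\\Data\\payroll.xls";   /* hypothetical example */

        if (EncryptFileW(file))
            wprintf(L"%ls is now EFS-encrypted; an NTFS ACL alone would not "
                    L"survive an offline read.\n", file);
        else
            wprintf(L"EncryptFile failed with error %lu\n", GetLastError());

        return 0;
    }

The protection comes from the cryptography and the user's key, not from the operating system being around to enforce an ACL, which is exactly what a physical-access scenario takes away.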

Myth 9: Security Tweaks Will Stop Worms/Viruses

Worms and viruses (hereinafter collectively referred to as malware) are designed to cause the maximum amount of destruction possible. Therefore, they try to hit the largest number of vulnerable systems and, hence, tend to spread through one of two mechanisms: unpatched/unmitigated vulnerabilities and unsophisticated users. Although there are some security tweaks that will stop malware (Code Red, for instance, could have been stopped by removing the indexing service extension mappings in IIS), the vast majority of them cannot be stopped that way because they spread through the latter vector. Given the choice between dancing pigs and security, users will choose dancing pigs every single time. Given the choice between pictures of naked people frolicking on the beach and security, roughly half the population will choose the naked people frolicking on the beach. Couple that with the fact that users do not understand our security dialogs, and we have a disaster. If a dialog asking the user to make a security decision is the only thing standing between the user and the naked people frolicking on the beach, security does not stand a chance.

Myth 10: An Expert Recommended This Tweak as Defense in Depth

This myth has two parts. Let us deal with the defense-in-depth aspect first. As we discussed in Chapter 1, defense in depth is a reasoned security strategy that applies protective measures in multiple places to prevent unacceptable threats. Unfortunately, far too many people today use the term defense in depth to justify security measures that have no other realistic justification. Typically, this happens because of the general belief in myth 3 (more tweaks are better). By making more changes, we show the auditors that we are doing our job, and therefore they chalk us up as having done due diligence.

This shows an incredible immaturity in the field, much like what we saw in Western "medicine" in the Middle Ages. Medics would apply cow dung, ash, honey, beer, and any number of other things, usually in rapid succession, to wounds to show that they were trying everything. Today, doctors (more typically nurses, actually) clean the wound, apply a bandage and potentially an antibiotic of some kind, and then let it heal. Less is very often more, and using defense in depth as a way to justify unnecessary and potentially harmful actions is inappropriate.

The first part of this statement is one of our favorites. As a society, we love deferring judgment to experts because, after all, they are experts and know more than we do. The problem is that the qualification process for becoming an expert is somewhat, shall we say, lacking. We usually point out that the working definition of a security expert is "someone who is quoted in the press." Based on the people we often see quoted, and our interaction with those people, that belief seems justified. It is no longer actions that define an expert, just reputation; and reputation can be assigned. Our friend Mark Minasi has a great statement that we have stolen for use in our own presentations: to be a security consultant, all you have to know is four words: the sky is falling. Having been security consultants and seen what has happened to the general competence level in the field, this statement certainly rings true. There are many, many good security consultants, but there are also many who do not know what they need to know and, in some cases, fail to recognize that and then charge exorbitant amounts of money to impart their lack of knowledge and skills on unsuspecting customers.


