Security Principles to Live By

This section of the chapter describes concepts to keep in mind when you design and build your application. Remember: security is not something that can be isolated in a certain area of the code. Like performance, scalability, manageability, and code readability, security awareness is a discipline that every software designer, developer, and tester has to know about. After working with various development organizations, we've found that if you keep the following design security principles sacrosanct, you can indeed build secure systems:

  • Establish a security process

  • Define the product security goals

  • Consider security as a product feature

  • Learn from mistakes

  • Use least privilege

  • Use defense in depth

  • Assume external systems are insecure

  • Plan on failure

  • Fail to a secure mode

  • Employ secure defaults

  • Remember that security features != secure features

  • Never depend on security through obscurity

Numerous other bumper sticker words of wisdom could be included in this list, but we'll focus on these because we've found them to be among the most useful.

Establish a Security Process

Until you define a process for designing, coding, testing, deploying, and fixing systems in a secure manner, you will find you spend an inordinate amount of time on the last aspect of the process: fixing security bugs. Establishing a process is important because secure systems encompass a security process as well as the product itself. You cannot have a secure product without a security process. Issues to consider include how secure systems are designed, developed, tested, documented, audited, and controlled. The control consideration should cover both management control and revision control of specifications, code, documentation, and tests.

Define the Product Security Goals

Defining security goals should be done in a way that requires as little product-specific knowledge as possible and includes whatever product-specific knowledge is required. You should create a document that answers such questions as the following:

  • Who is the application's audience?

  • What does security mean to the audience? Does it differ for different members of the audience? Are the security requirements different for different customers?

  • Where will the application run? On the Internet? Behind a firewall? On a cell phone?

  • What are you attempting to protect?

  • What are the implications to the users if the objects you are protecting are compromised?

  • Who will manage the application? The user or a corporate IT administrator?

  • What are the communication needs of the product? Is the product internal to the organization or external, or both?

  • What security infrastructure services do the operating system and the environment already provide that you can leverage?

Much of this information can be gleaned from the threat model, which is covered in this chapter in "Security Design by Threat Modeling."

Consider Security as a Product Feature

If performance is not a feature of your product, you'll probably create a slow product. Likewise, if security is not a design feature, you will produce an insecure product. As I mentioned earlier, you cannot add security as an afterthought.

Recently I reviewed a product that had a development plan that looked like this:

Milestone 0: Designs Complete

Milestone 1: Add core features

Milestone 2: Add more features

Milestone 3: Add security

Milestone 4: Fix bugs

Milestone 5: Ship product

Do you think this product's team took security seriously? I knew about this team because of a tester who was pushing for security designs from the start and wanted to enlist my help to get the team to work on it. But the team believed it could pile on the features and then clean up the security issues once the features were done. The problem with this approach is that adding security at M3 will probably invalidate some of the work performed at M1 and M2, and some of the bugs found during M3 will be hard to fix and, as a result, will remain unfixed, making the product vulnerable to attack.

important

Adding security later often requires architectural changes, not just simple code changes or small design changes. It can be difficult to get such deep changes implemented at later points in the product cycle. Changes must be hacked in, leading to a much more cumbersome and frequently still insecure application.

This story has a happy ending: the tester contacted me before M0 was complete, and I spent time with the team helping them to incorporate security designs into the product during M0 and to weave the security code into the application during all milestones, not just M3. For this team, security became a feature of the product, not a stumbling block and something to tack on as time permitted. Also interesting was the number of security-related bugs in the product: there were very few compared with the products of other teams who added security later, simply because the product features and the security designs protecting those features were symbiotic. They were designed and built with both in mind from the start.

important

Security should be a design consideration of your product. Make sure the designs and specifications outline the security requirements and threats to your system.

Learn from Mistakes

We've all heard that what doesn't kill you makes you stronger, but I swear that in the world of software engineering we do not learn from mistakes readily. This is also true in the world of security. Some of my favorite quotations regarding learning from past mistakes include

History is a vast early warning system.

Norman Cousins (1915-1990), American editor, writer, and author

Those who cannot remember the past are condemned to repeat it.

George Santayana (1863-1952), Spanish-born American philosopher and writer

There is only one thing more painful than learning from experience and that is not learning from experience.

Archibald MacLeish (1892-1982), American poet

If you find a security problem in your software or in your competitor's products, learn from the mistake. Ask questions like these:

  • How did the security error occur?

  • Is the same error replicated in other areas of the code?

  • How could we have prevented this error from occurring?

  • How do we make sure this kind of error does not happen in the future?

Approach every bug as a learning opportunity. Unfortunately, in the rush to get products to market, we tend to overlook this important step, and so we see the same security blunders occur time and again. Failure to learn from mistakes increases the probability that you will make the same costly mistake again.

A Hard Lesson in Learning

About three years ago, an obscure security bug was found in a product I was close to. Once the fix was made, I asked the product team some questions, including what had caused the mistake. The development lead indicated that the team was too busy to worry about such a petty, time-wasting exercise. During the next year, outside sources found three similar bugs in the product. Each bug took about 100 man-hours to remedy.

I presented this to the new development lead (the previous lead had moved on) and pointed out that if four similar issues were found in the space of one year, it would be reasonable to expect more. He agreed, and we spent four hours determining what the core issue was. The issue was simple: some developers had made some incorrect assumptions about the way a function was used. So we looked for similar instances in the entire code base, found four more, and fixed them all. Next we added some debug code to the function that would cause the application to stop if the false assumption condition arose. Finally, we sent e-mail to the entire development organization explaining the issue and the steps to take to make sure the issue never occurred again. The entire process took less than 20 man-hours.

The issue is no longer an issue. The same mistake is sometimes made, but the team catches the flaw quickly because of the newly added error-checking code. Finding the root of the issue and spending time to rectify that class of bug would perhaps have made the first development lead far less busy!
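The debug code the team added might have looked something like this minimal C sketch. The function and the "false assumption" (a caller passing a NULL or empty name) are illustrative, not the product's actual code:

```c
#include <assert.h>
#include <string.h>

/* Illustrative function: developers assumed callers always pass a
   non-NULL, non-empty name. The assertion stops the application in
   debug builds the moment that assumption is violated, so the flaw
   is caught in testing rather than shipped. */
size_t NameLength(const char *name)
{
    assert(name != NULL && name[0] != '\0');  /* flag the false assumption early */

    if (name == NULL)  /* defensive fallback for release builds */
        return 0;

    return strlen(name);
}
```

The assertion costs nothing in release builds but turns a silent wrong assumption into an immediate, diagnosable stop during development.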

tip

As my dad once said to me, "You can make just about any mistake once. But you'd better make sure you learn from it and not make the same mistake again."

Use Least Privilege

All applications should execute with the least privilege to get the job done and no more. I often analyze products that must be executed in the security context of an administrative account or, worse, as a service running as the Local System account when, with some thought, the product designers could have not required such privileged accounts. The reason for running with least privilege is quite simple. If a security vulnerability is found in the code and an attacker can inject code into your process (or run a Trojan horse or virus), the malicious code will run with the same privileges as the process. If the process is running as an administrator, the malicious code runs as an administrator. This is why we recommend people do not run as a member of the local administrators group on their computers, just in case a virus or some other malicious code executes.

Stepping onto the "Logged On as Admin" Soapbox

Go on, admit it: you're logged on to your computer as a member of the local administrators group, aren't you? I'm not. I haven't been for over two years, and everything works fine. I write code, I debug code, I send e-mail, I sync with my Pocket PC, I create documentation for an intranet site, and I do myriad other things. To do all this, you don't need admin rights, so why run as an admin? If I want to do something special that requires admin privileges, I either use the runas command or provide a shortcut on the desktop and check the Run As Different User option (Microsoft Windows 2000) or the Run With Different Credentials option (Windows XP) on the Properties page of the shortcut. When I run the application, I enter my local administrator username and password. That way only the application I'm using runs as an admin. When the application closes, I'm not an admin any more. You should try it; you will be much safer from attack!

When you create your application, write down what resources it must access and what special tasks it must perform. Examples of resources include files and registry data; examples of special tasks include the ability to log user accounts on to the system or debug processes. Often you'll find you do not require many special privileges to get any tasks done. Once you have a list of all your resources, determine what might need to be done with those resources. For example, a user might need to read and write to the resources but not create or delete them. Armed with this information, you can determine whether the user needs to run as an administrator to use your application. The chances are good that she does not.

For a humorous look at the principle of least privilege, refer to "If we don't run as admin, stuff breaks" in Appendix D, "Lame Excuses We've Heard." Also, see Chapter 5, "Running with Least Privilege," for a full account of how you can often get around requiring dangerous privileges.

tip

If your application fails to run unless the user (or service process identity) is an administrator or the system account, determine why. Chances are good that elevated privileges are unnecessary.

Use Defense in Depth

Defense in depth is a straightforward principle: imagine your application is the last application standing, and every defensive mechanism protecting you has been destroyed. Now you must protect yourself. For example, if you expect a firewall to protect you, build the system as though the firewall has been compromised.

Play along for a moment. Your users are the noble family of a castle in the 1500s, and you are the captain of the army. The bad guys are coming, and you run to the lord of the castle to inform him of the encroaching army and of your faith in your archers, the castle walls, and the castle's moat. The lord is pleased. Two hours later you ask for an audience with the lord and inform him that the marauders have broken the defenses and are inside the outer wall. He asks how you plan to further defend the castle. You answer that you plan to surrender because the bad guys are inside the castle walls. A response like yours doesn't get you far in the armed forces. You don't give up; you keep fighting until all is lost or you're told to stop fighting.

It's the same in software. Just because some defensive mechanism has been compromised doesn't give you the right to concede defeat. This is the essence of defense in depth: at some stage you have to defend yourself. Don't rely on other systems to protect you; put up a fight because software fails, hardware fails, and people fail. Defense in depth helps reduce the likelihood of a single point of failure in the system.

important

Always be prepared to defend your application from attack because the security features defending it might be annihilated. Never give up.

Assume External Systems Are Insecure

Assuming external systems are insecure is related to defense in depth: the assumption is actually one of your defenses. Consider any data you receive from a system you do not have complete control over to be insecure and a source of attack. This is especially important when accepting input from users. Until you can prove otherwise, all external stimuli have the potential to be an attack.

Here's a variant: don't assume that your application will always communicate with an application that limits the commands a user can execute from the user interface or Web-based client portion of your application. Many server bugs take advantage of the ease of sending malicious data to the server by circumventing the client altogether.

Plan on Failure

As I've mentioned, stuff fails and stuff breaks. In the case of mechanical equipment, the cause might be wear and tear, and in the case of software and hardware, it might be bugs in the system. Bugs happen; plan on them occurring. Make security contingency plans. What happens if the firewall is breached? What happens if the Web site is defaced? What happens if the application is compromised? The wrong answer is, "It'll never happen!" It's like having an escape plan in case of fire: you hope never to have to put the strategy into practice, but if you do, you have a better chance of getting out alive.

tip

Death, taxes, and computer system failure are all inevitable to some degree. Plan for the event.

Fail to a Secure Mode

So, what happens when you do fail? You can fail securely or insecurely. Failing to a secure mode means the application has not disclosed any data that would not be disclosed ordinarily, that the data still cannot be tampered with, and so on. Or you can fail insecurely such that the application discloses more than it should or its data can be tampered with. The former is the only proposition worth considering: if an attacker knows that he can make your code fail, he can bypass the security mechanisms because your failure mode is insecure.

Also, when you fail, do not issue huge swaths of information explaining why the error occurred. Give the user a little bit of information, enough so that the user knows the request failed, and log the details to some secure log file, such as the Windows event log.
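A sketch of this split between the terse user message and the detailed secure log might look like the following C fragment. The function and message text are illustrative; on Windows the detail would go to the event log, and here a FILE stream stands in for it:

```c
#include <stdio.h>

/* Report a failure two ways: a generic message for the user, and the
   full diagnostic detail written only to a log the user cannot read. */
void ReportFailure(int err, const char *detail,
                   char *userMsg, size_t userMsgLen, FILE *secureLog)
{
    /* Terse message for the user: enough to know the request failed,
       nothing about why. */
    snprintf(userMsg, userMsgLen, "Request failed. Contact your administrator.");

    /* Full detail goes only to the secure log for the administrator. */
    fprintf(secureLog, "error %d: %s\n", err, detail);
}
```

The point of the split is that an attacker probing the system sees only the generic message, while the administrator still has everything needed to diagnose the failure.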

For a microview of insecure failing, look at the following (pseudo)code and see whether you can work out the security flaw:

DWORD dwRet = IsAccessAllowed(...);
if (dwRet == ERROR_ACCESS_DENIED) {
    // Security check failed.
    // Inform user that access is denied.
} else {
    // Security check OK.
    // Perform task.
}

At first glance, this code looks fine, but what happens if IsAccessAllowed fails? For example, what happens if the system runs out of memory, or object handles, when this function is called? The user can execute the privileged task because the function might return an error such as ERROR_NOT_ENOUGH_MEMORY.

The correct way to write this code is as follows:

DWORD dwRet = IsAccessAllowed(...);
if (dwRet == NO_ERROR) {
    // Security check OK.
    // Perform task.
} else {
    // Security check failed.
    // Inform user that access is denied.
}

In this case, if the call to IsAccessAllowed fails for any reason, the user is denied access to the privileged operation.

A list of access rules on a firewall is another example. If a packet does not match a given set of rules, the packet should not be allowed to traverse the firewall; instead, it should be discarded. Otherwise, you can be sure there's a corner case you haven't considered that would allow a malicious packet, or a series of such packets, to pass through the firewall. The administrator should configure firewalls to allow only the packet types deemed acceptable through, and everything else should be rejected.
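The default-deny rule matching described above can be sketched in a few lines of C. The rule structure and its fields are illustrative; a real firewall matches on far more than protocol and port:

```c
/* Default deny: a packet passes only if it matches an explicit allow
   rule; anything that matches no rule is dropped. */
typedef struct {
    int proto;   /* e.g., 6 = TCP, 17 = UDP */
    int port;
} Rule;

int PacketAllowed(int proto, int port, const Rule *allow, int nRules)
{
    for (int i = 0; i < nRules; i++) {
        if (allow[i].proto == proto && allow[i].port == port)
            return 1;    /* explicit match: allow */
    }
    return 0;            /* no rule matched: default deny */
}
```

Note that the failure mode of this loop is denial: forgetting a rule blocks legitimate traffic (an annoyance), whereas a default-allow loop that forgets a rule admits an attack.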

Another scenario, covered in detail in Chapter 12, Securing Web-Based Services, is to filter user input looking for potentially malicious input and rejecting the input if it appears to contain malevolent characters. A potential security vulnerability exists if an attacker can create input that your filter does not catch. Therefore, you should determine what is valid input and reject all other input.
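An allowlist check of this kind might look like the following C sketch. The notion of a "valid username" (letters, digits, underscore, at most 32 characters) is an illustrative policy, not a universal rule:

```c
#include <string.h>

/* Define what valid input looks like and reject everything else,
   rather than trying to enumerate every malicious character. */
int IsValidUsername(const char *s)
{
    size_t len = strlen(s);
    if (len == 0 || len > 32)
        return 0;

    for (size_t i = 0; i < len; i++) {
        char c = s[i];
        int ok = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
                 (c >= '0' && c <= '9') || c == '_';
        if (!ok)
            return 0;    /* anything outside the allowlist is rejected */
    }
    return 1;
}
```

Because the check names only what is acceptable, characters the author never thought of — shell metacharacters, quotes, control codes — are rejected automatically instead of slipping through a blocklist.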

An excellent discussion of failing securely is found in "The Protection of Information in Computer Systems," by Jerome Saltzer and Michael Schroeder, available at web.mit.edu/Saltzer/www/publications/protection.

Another way to help reduce the risk of security vulnerabilities is to be secure out of the box so that little work is required by the administrator to secure your application. Not only fail to a secure mode, but also design a secure-by-default system. That's next!

Employ Secure Defaults

Employing secure defaults is one of the most difficult yet important goals for an application developer. You need to choose the appropriate features for your users (hopefully, the feature set is based on user feedback and requirements) and make sure these features are secure. The less often used features should be off by default to reduce potential security exposure. If a feature is not running, it cannot be vulnerable to attack. I generally apply the Pareto Principle, otherwise known as the 80-20 rule: which 20 percent of the product is used by 80 percent of the users? The 20 percent feature set is on by default, and the 80 percent feature set is off by default with simple instructions and menu options for the enabling of features. ("Simply add a DWORD registry value, where the low-order 28 bits are used to denote the settings you want to turn off" are not simple instructions!) Of course, someone on the team will demand that a rarely used feature be turned on by default. Often you'll find the person has a personal agenda: his mom uses the feature, he designed the feature, or he wrote the feature.
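In code, the 80-20 split can be made explicit by starting from everything-off and enabling only the core feature set. The feature names below are illustrative:

```c
#include <string.h>

/* Secure defaults: optional features stay off unless the administrator
   explicitly enables them. */
typedef struct {
    int coreFileSharing;   /* the "20 percent" feature most users need */
    int remoteAdmin;       /* rarely used: off by default */
    int legacyProtocol;    /* rarely used: off by default */
} Config;

Config DefaultConfig(void)
{
    Config c;
    memset(&c, 0, sizeof c);   /* start with everything off... */
    c.coreFileSharing = 1;     /* ...then enable only the core feature */
    return c;
}
```

Starting from zero and opting features in means a feature added later ships disabled unless someone consciously decides otherwise, which is exactly the default posture you want.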

Some months back I performed a security review for a development tool that was a few months from shipping. The tool had a really cool feature that would install and be enabled by default. After the development team had spent 20 minutes explaining how the feature worked, I summed it up in one sentence: "Anyone can execute arbitrary code on any computer that has this software installed." The team members muttered to one another and then nodded. I said, "That's bad!" and offered some advice about how they could mitigate the issue. But they had little time left in the development cycle to fix the problem, so someone responded, "Why don't we ship with the feature enabled and warn people in the documentation about the security implications of the feature?" I replied, "Why not ship with the feature disabled and inform people in the documentation about how they can enable the feature if they require it?" The team's lead wasn't happy and said, "You know people don't read documentation until they really have to! They will never use our cool feature." I smiled and replied, "Exactly! So what makes you think they'll read the documentation to turn the feature off?" In the end, the team pulled the feature from the product, a good thing because the product was behind schedule!

Another reason for not enabling features by default has nothing to do with security: performance. More features means more memory used; more memory used leads to more disk paging, which leads to performance degradation.

important

As you enable more features by default, you increase the potential for a security violation, so keep the enabled feature set to a minimum. Unless you can argue that your users will be massively inconvenienced by a feature being turned off, keep it off and provide an easy mechanism for enabling the feature if it is required.

Backward Compatibility Will Always Give You Grief

Backward compatibility is another reason to ship secure products with secure defaults. Imagine your application is in use by many large corporations, companies with thousands, if not tens of thousands, of client computers. A protocol you designed is insecure in some manner. Five years and nine versions later, you make an update to the application with a more secure protocol. But the protocol is not backward compatible with the old version of the protocol, and any computer that has upgraded to the current protocol will no longer communicate with any other version of your application. The chances are slim indeed that your clients will upgrade their computers anytime soon, especially as some clients will still be using version 1, others version 2, and so on, and a small number will be running the latest version. Hence, the weak version of the protocol lives forever!

tip

Be ready to face many upgrade and backward compatibility issues if you have to change critical features for security reasons.

Backward Incompatibility: SMB Signing and TCP/IP

Consider the following backward compatibility problem at Microsoft. The Server Message Block (SMB) protocol is used by file and print services in Windows and has been used by Microsoft and other vendors since the LAN Manager days of the late 1980s. A newer, more secure version of SMB that employs packet signing has been available since Microsoft Windows NT 4 Service Pack 3 and Windows 98. The updated protocol has two main improvements: it supports mutual authentication, which closes man-in-the-middle attacks, and it supports message integrity checks, which prevent data-tampering attacks. Man-in-the-middle attacks occur when a third party between you and the person with whom you are communicating assumes your identity to monitor, capture, and control your communication. SMB signing provides this functionality by placing a digital signature in each SMB packet, which is then verified by both the client and the server.

Because of these security benefits, SMB signing is worth enabling. However, when it is enforced, only computers employing SMB signing can communicate with one another when using SMB traffic, which means that potentially all computers in an organization must be upgraded to signed SMB, a nontrivial task. There is the option to attempt SMB signing when communication between two machines is established and to fall back to the less secure nonsigned SMB if that communication fails. However, this means that an attacker can force the server to use the less secure SMB rather than signed SMB.

Another example is that of Transmission Control Protocol/Internet Protocol (TCP/IP), which is a notoriously insecure protocol. Internet Protocol Security (IPSec) remedies many of the issues with TCP/IP, but not all servers understand IPSec, so it is not enabled by default. TCP/IP will live for a long time, and TCP/IP attacks will continue because of it.

Remember That Security Features != Secure Features

When giving secure coding and secure design presentations to software development teams, I always include this bullet point on the second or third slide:

Security Features != Secure Features

This has become something of a mantra for the Secure Windows Initiative team. We use it to remember that simply sprinkling some magic security pixie dust on an application does not make it secure. We must all be sure to include the correct features and to employ the correct features correctly to defend against attack. It's a waste of time using Secure Sockets Layer/Transport Layer Security (SSL/TLS) to protect a system if the client-to-server data stream is not what requires defending. (By the way, one of the best ways to employ correct features correctly is to perform threat modeling, our next subject.)

Another reason that security features do not necessarily make for a secure application is that those features are often written by the security-conscious people. So the people writing the secure code are working on security features rather than on the application s core features. (This does not mean the security software is free from security bugs, of course, but chances are good the code is cleaner.)

Never Depend on Security Through Obscurity

Always assume that an attacker knows everything that you know; assume the attacker has access to all source code and all designs. Even if this is not true, it is trivially easy for an attacker to determine obscured information. Other parts of this book show many examples of how such information can be found.

Three Final Points

First, if you find a security bug, fix it and go looking for similar issues in other parts of the code. You will find more like it. And don't be afraid to announce that a bug has been found and fixed. Covering up security bugs leads to conspiracy theories! My favorite quote regarding this point is from Martialis:

Conceal a flaw, and the world will imagine the worst.

Marcus Valerius Martialis, Roman poet (c. A.D. 40-c. A.D. 104)

Second, if you find a security bug, make the fix as close as possible to the location of the vulnerability. For example, if there is a bug in a function named ProcessData, make the fix in that function or as close to the function as feasible. Don't make the fix in some faraway code that eventually calls ProcessData. If an attacker can circumvent the system and call ProcessData directly, or can bypass your code change, the system is still vulnerable to attack.
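A C sketch makes the point concrete. The validation limits here are illustrative; what matters is that the check lives inside ProcessData itself, so it holds even when a caller's validation is bypassed:

```c
#include <stddef.h>

/* Fix at the vulnerable code, not in a distant caller: ProcessData
   validates its own input, so every path that reaches it - including
   ones an attacker finds that skip the caller's checks - is covered. */
int ProcessData(const char *data, size_t len)
{
    if (data == NULL || len == 0 || len > 1024)
        return -1;           /* reject bad input right here */

    /* ... process the data ... */
    return 0;
}
```

Had the length check been placed only in the usual caller, any new or unexpected caller of ProcessData would reintroduce the vulnerability.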

Third, if there is a fundamental reason why a security flaw exists, fix the root of the problem. Don't patch it over. Over time patchwork fixes become bigger problems because they often introduce regression errors. As the saying goes, "Cure the problem, not the symptoms."



Writing Secure Code, Second Edition
ISBN: 0735617228