Design Phase

As with all software development, it's important to get security right during the design phase. No doubt you've seen figures showing that a bug costs ten times more time, money, and effort to fix in the development phase than in the design phase, ten times more again in the test phase than in the development phase, and so on. In my experience, this is true. I'm not sure about the exact cost estimates, but I can safely say it's easier to fix something if it doesn't need fixing because it was designed correctly in the first place. The lesson is to get your security goals and designs right as early as possible. Let's look at some details of doing this during the design phase.

Security Questions During Interviews

Hiring and retaining employees is of prime importance to all companies, and interviewing new hires is an important part of the process. You should determine a candidate's security skill set from the outset by asking security-related questions during interviews. If you can identify candidates with good security skills during the interview process, you can fast-track them into your company.

Remember that you are not interviewing candidates to determine how much they know about security features. Again, security is not just about security features; it's about securing mundane features.

During an interview, I like to ask the candidate to spot the buffer overrun in a code example drawn on a whiteboard. This is very code-specific, but developers should know a buffer overrun when they see one.
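
To give you a flavor, here's a minimal sketch of the kind of code I might put on the whiteboard:

    #include <string.h>

    /* A classic stack-based buffer overrun. */
    void CopyName(const char *input) {
        char name[16];

        /* BUG: no bounds check. If input holds more than 15
           characters, strcpy (which also copies the terminating
           null) writes past the end of name and corrupts the stack. */
        strcpy(name, input);
    }

A good candidate spots the unbounded strcpy immediately and suggests a remedy, such as validating the input length before copying or using a counted copy that guarantees null termination.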

More Info
See Chapter 5, "Public Enemy #1: The Buffer Overrun," for much more information on spotting buffer overruns.

Here's another favorite of mine: "The government lowers the cost of gasoline; however, it places a tracking device on every car in the country and tracks mileage so that it can bill drivers based on distance traveled." I then ask the candidate to assume that the device uses GPS (the Global Positioning System) and to discuss some of these issues:

  • What are the privacy implications of the device?

  • How can an attacker defeat this device?

  • How can the government mitigate the attacks?

  • What are the threats to the device, assuming that each device has embedded secret data?

  • Who puts the secrets on the device? Are they to be trusted? How do you mitigate these issues?

I find this a useful exercise because it helps me ascertain how the candidate thinks about security issues; it sheds little light on the person's security features knowledge. And, as I'm trying hard to convince you, how the candidate thinks about security issues is more important when building secure systems. You can teach people about security features, but it's hard to train people to think with a security mind-set. So, hire people who can think with a hacking mind-set.

Another view is to hire people with a mechanic's mind-set: people who can spot bad designs, figure out how to fix them, and often point out how they should have been designed in the first place. Hackers can be pretty poor at fixing things in ways that make sense for an enterprise that has to manage thousands of PCs and servers. Anyone can think of ways to break into a car, but it takes a skilled engineer to design a robust car and an effective car alarm system. You need to hire both hackers and mechanics!

More Info
For more on finding the right people for the job, take another look at "Interviewing Security People" in Chapter 1, "The Need for Secure Systems."

Define the Product Security Goals

You need to determine early who your target audience is and what their security requirements are. My wife has different security needs than a network administrator at a large multinational corporation. I can guess my wife's security needs, but I have no idea what a large customer requires until I ask. So, who are your clients, and what are their requirements? If you know your clients but not their requirements, you need to ask them! It's imperative that everyone working on a product understands the users' needs. Something we've found very effective at Microsoft is creating personas, or fictitious users who represent our target audience. Create colorful and lively posters of your personas, and place them on the walls around the office. When considering security goals, include each persona's demographics, roles during work and play, security fears, and risk tolerance in your discussions. Figure 2-3 shows an example persona poster.

By defining your target audience and the security goals of the application, you can reduce "feature creep," or the meaningless, purposeless bloating of the product. Try asking questions like "Does this security feature or addition help mitigate any threats that concern one of our personas?" If the answer is no, you have a good excuse not to add the feature because it doesn't help your clients. Create a document that answers the following questions:

  • Who is the application's audience?

  • What does security mean to the audience? Does it differ for different members of the audience? Are the security requirements different for different customers?

  • Where will the application run? On the Internet? Behind a firewall? On a cell phone?

  • What are you attempting to protect?

  • What are the implications to the users if the objects you are protecting are compromised?

  • Who will manage the application? The user or a corporate IT administrator?

  • What are the communication needs of the product? Is the product internal to the organization or external, or both?

  • What security infrastructure services do the operating system and the environment already provide that you can leverage?

  • How does the user need to be protected from his own actions?

Figure 2-3. A sample persona poster showing one customer type.

On the subject of the importance of understanding business requirements, ISO 17799, "Information Technology - Code of practice for information security management" (an international standard that covers organizational, physical, communications, and systems development security policy), describes security requirements in its introduction and in section 10.1, "Security requirements of systems," and offers the following in section 10.1.1:

Security requirements and controls should reflect the business value of the information assets involved, and the potential business damage, which might result from a failure or absence of security.

NOTE
ISO 17799 is a somewhat high-level document, and its coverage of code development is sketchy at best, but it does offer interesting insights and assistance to the development community. You can buy a copy of the standard from www.iso.ch.

More Info
If you use ISO 17799 in your organization, most of this book relates to section 9.6, "Application access control," section 10.2, "Security in application systems," and to a lesser extent 10.3, "Cryptographic controls."

Security Is a Product Feature

Security is a feature, just like any other feature in the product. Do not treat security as some nebulous aspect of product development, and don't treat it as a background task to be added only when it's convenient to do so. Instead, you should design security into every aspect of your application. All product functional specifications should include a section outlining the security implications of each feature. For ideas on how to consider security implications, go to www.ietf.org and look at any RFC created in the last couple of years; they all include a "Security Considerations" section.

Remember, nonsecurity products must still be secure from attack. Consider the following:

  • The Microsoft Clip Art Gallery buffer overrun that led to arbitrary code execution (www.microsoft.com/technet/security/bulletin/MS00-015.asp).

  • A flaw in the Solaris file restore application, ufsrestore, could allow an unprivileged local user to gain root access (www.securityfocus.com/advisories/3621).

  • The sort command in many UNIX-based operating systems, including Apple's OS X, could create a denial of service (DoS) vulnerability (www.kb.cert.org/vuls/id/417216).

What do all these programs have in common? The programs themselves have nothing to do with security features, but they all had security vulnerabilities that left users susceptible to attack.

NOTE
One of the best stories I've heard is from a friend at Microsoft who once worked at a company that usually focused on security on Monday mornings - after the vice president of engineering watched a movie such as "The Net," "Sneakers," or "Hackers" the night before!

I once reviewed a product that had a development plan that looked like this:

Milestone 0: Designs complete

Milestone 1: Add core features

Milestone 2: Add more features

Milestone 3: Add security

Milestone 4: Fix bugs

Milestone 5: Ship product

Do you think this product's team took security seriously? I knew about this team because of a tester who was pushing for security designs from the start and who wanted to enlist my help to get the team to work on it. But the team believed it could pile on the features and then clean up the security issues once the features were done. The problem with this approach is that adding security at M3 will probably invalidate some of the work performed at M1 and M2. Some of the bugs found during M3 will be hard to fix and, as a result, will remain unfixed, making the product vulnerable to attack.

This story has a happy conclusion: the tester contacted me before M0 was complete, and I spent time with the team, helping them incorporate security designs into the product during M0. I eventually helped them weave the security code into the application during all milestones, not just M3. For this team, security became a feature of the product, not a stumbling block. It's interesting to note how few security-related bugs the product had compared with the products of other teams that added security later, simply because the product features and the security designs protecting those features were symbiotic: the product was designed and built with both in mind from the start.

If you're tempted to follow this bad product team's example, remember the following important points:

  • Adding security later is wrapping security around existing features, rather than designing features with security in mind.

  • Adding any feature, including security, as an afterthought is expensive.

  • Adding security might change the way you've implemented features. This too can be expensive.

  • Adding security might change the application interface, which might break code that has come to rely on the current interface. The sketch following this list illustrates the point.
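
Here's a minimal, hypothetical sketch of that last point; the function names and the AuthToken type are invented for illustration:

    #include <stddef.h>

    /* Hypothetical type identifying the caller. */
    typedef struct AuthToken AuthToken;

    /* Version 1 shipped with no notion of who is asking: */
    int ReadRecord(int recordId, char *buf, size_t bufLen);

    /* Retrofitting an access check forces a breaking interface change;
       every existing caller must now be updated to supply a token: */
    int ReadRecordEx(const AuthToken *caller, int recordId,
                     char *buf, size_t bufLen);

Had the access check been designed in from the start, there would be one interface and no broken callers.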

IMPORTANT
Do not add security as an afterthought!

If you're creating applications for nonexpert users (such as my mom!), you should be even more careful with your designs up front. Even though such users require secure environments, they don't want security to "get in the way." For these users, security should be hidden from view, and that's a challenging goal, because information security professionals want to restrict access to resources while nonexpert users want transparent access. Expert users also require security, but they like to have buttons to click and options to select, so long as those options are understandable.

I was asked to review a product schedule recently, and it was a delight to see this:

Date            Product Milestone       Security Activities
Sep-1-2002      Project Kickoff         Security training for team
Sep-8-2002      M1 Start
Oct-22-2002                             Security-Focused Day
Oct-30-2002     M1 Code Complete        Threat models complete
Nov-6-2002                              Security Review I with Secure Windows Initiative Team
Nov-18-2002                             Security-Focused Day
Nov-27-2002     M2 Start
Dec-15-2002                             Security-Focused Day
Jan-10-2003     M2 Code Complete
Feb-02-2003                             Security-Focused Day
Feb-24-2003                             Security Review II with Secure Windows Initiative Team
Feb-28-2003     Beta 1                  Zero Priority 1 and 2 Security Bugs
Mar-07-2003     Beta 1 Release
Apr-03-2003                             Security-Focused Day
May-25-2003     M3 Code Complete
Jun-01-2003                             Start 4-week-long security push
Jul-01-2003                             Security Review III (including push results)
Aug-14-2003     Beta 2 Release
Aug-30-2003                             Security-Focused Day
Sep-21-2003     Release Candidate 1
Sep-30-2003                             Final Security Overview IV with Secure Windows Initiative Team
Oct-30-2003     Ship product!

This is a wonderful ship schedule because the team is building critical security milestones and events into its time line. The purpose of the security-focused days is to keep the team aware of the latest issues and vulnerabilities. A security day usually starts with training, followed by a day of design, code, test plan, and documentation reviews. Prizes are given for the "best" bugs and for the most bugs. Don't rule out free lattes for the team! Finally, you'll notice four critical points where the team goes over all its plans and status to see what midcourse corrections should be made.

Security is tightly interwoven in this process, and the team members think about security from the earliest point of the project. Making time for security in this manner is critical.

Making Time for Security

I know it sounds obvious, but if you're spending more time on security, you'll be spending less time on other features, unless you push out the product schedule or add more resources and cost. Remember the old quote, "Features, cost, schedule; choose any two." Because security is a feature, it has an impact on the cost or the schedule, or both. Therefore, you need to add time to the schedule, or adjust it, to accommodate the extra work. If you do this, you won't be "surprised" when new features require extra work to make sure they're designed and built securely.

Like any feature, the later you add it in, the higher the cost and the higher the risk to your schedule. Doing security design work early in your development cycle allows you to better predict the schedule impact. Trying to work in security fixes late in the cycle is a great way to ship insecure software late. This is particularly true of security features that mitigate DoS attacks, which frequently require design changes.
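
To see why, consider one common DoS mitigation: a per-client resource quota. The following is a minimal sketch (the names and the limit are hypothetical); notice that every code path that accepts new work must consult the quota, which is exactly why this kind of feature is painful to retrofit:

    #include <stdbool.h>

    #define MAX_CONN_PER_CLIENT 32   /* hypothetical quota */

    typedef struct ClientState {
        unsigned activeConnections;  /* connections this client holds */
    } ClientState;

    /* Returns false when the client is at its quota. Every entry point
       that accepts work must call this and handle rejection gracefully. */
    bool TryAcquireConnection(ClientState *client) {
        if (client->activeConnections >= MAX_CONN_PER_CLIENT)
            return false;
        client->activeConnections++;
        return true;
    }

    void ReleaseConnection(ClientState *client) {
        if (client->activeConnections > 0)
            client->activeConnections--;
    }

Bolting this on after the connection-handling code is written means revisiting every path that allocates the resource: a design change, not a spot fix.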

NOTE
Don't forget to add time to the schedule to accommodate training courses and education.

Threat Modeling Leads to Secure Design

We have an entire chapter on threat modeling, but suffice it to say that threat models help form the basis of your design specifications. Without threat models, you cannot build secure systems, because securing systems requires you to understand your threats. Be prepared to spend plenty of time working on threat models. They are well worth the effort.

Build End-of-Life Plans for Insecure Features

"Software never dies; it just becomes insecure." This should be a bumper sticker, because it's true. Software does not tire nor does it wear down like stuff made of atoms, but it can be rendered utterly insecure overnight as the industry learns new vulnerabilities. Because of this, you need to have end-of-life plans for old functionality. For example, say you decide that an old feature will be phased out and replaced with a more secure version currently available. This will give you time to work with clients to migrate their application over to the new functionality as you phase out the old, less-secure version. Clients generally don't like surprises, and this is a great way of telling them to get ready for change.

Setting the Bug Bar

You have to be realistic and pragmatic when determining which bugs to fix and which not to fix prior to shipping. In the perfect world, all issues, including security issues, would be fixed before you release the product to customers. In the real world, it's not that simple. Security is one part, albeit a very important part, of the trade-offs that go into the design and development of an application. Many other criteria must be evaluated when deciding how to remedy a flaw. Other issues include, but are not limited to, regression impact, accessibility to people with disabilities, deployment issues, globalization, performance, stability and reliability, scalability, backward compatibility, and supportability.

This may seem like blasphemy to some of you, but you have to be realistic: you can never ship flawless software, unless you want to charge millions of dollars for your product. Moreover, if you shipped flawless software, it would take you so long to develop the software that it would probably be outdated before it hit the shelves. However, the software you ship should be software that does what you programmed it to do and only that. This doesn't mean that the software suffers no failures; it means that it exhibits no behavior that could render the system open to attack.

NOTE
Before he joined Microsoft, my manager was one of the few people to have worked on the development team of a system designed to meet the requirements of Class A1 of the Orange Book. (The Orange Book was used by the U.S. Department of Defense to evaluate system security. You can find more information about it at http://www.dynamoo.com/orange.) The high-assurance system took a long time to develop, and although it was very secure, the project was canceled: by the time the system was completed, it was hopelessly out of date and no one wanted to use it.

You must fix bugs that make sense to fix. Would you fix a bug that affected ten people out of your client base of fifty thousand if the bug were very low threat, required massive architectural changes, and had the potential to introduce regressions that would prevent every other client from doing their job? Probably not in the current version, but you might fix it in the next version so that you could give your clients notice of the impending change.

I remember a meeting a few years ago in which we debated whether to fix a bug that would solve a scalability issue. However, making the fix would have rendered the product useless to Japanese customers! After two hours of heated discussion, the decision was made not to fix the issue directly but to provide a work-around and fix the issue properly in the following release. The software was not flawless, but it worked as advertised, and that's good enough as long as the documentation outlines the tolerances within which it should operate.

You must set your tolerance for defects early in the process. The tolerances you set will depend on the environment in which the application will be used and what the users expect from your product. Set your expectations high and your defect tolerance low. But be realistic: you cannot know all future threats ahead of time, so you must follow certain best practices, which are outlined in Chapter 3, to reduce your attack surface. Reducing your attack surface will reduce the number of bugs that can lead to serious security issues. Because you cannot know new security vulnerabilities ahead of time, you cannot ship perfect software, but you can easily raise the bug bar dramatically with some process improvements.

IMPORTANT
Elevation of privilege attacks are a no-brainer: fix them! Such attacks are covered in Chapter 4.

Security Team Review

Finally, once you feel you have a good, secure, and well-thought-out design, you should ask people outside your team who specialize in security to review your plans. Simply having another set of knowledgeable eyes look at the plans will reveal issues, and it's better to find issues early in the process than at the end. At Microsoft, it's my team that performs many of these reviews with product groups.

Now let's move on to the development phase.


