Appendix D

Lame Excuses We've Heard

Now we're going to take an irreverent look at some of the excuses we've heard over the years from developers, testers, and designers at various companies trying to weasel out of making security design changes or code fixes! The excuses are:

  • No one will do that!

  • Why would anyone do that?

  • We've never been attacked.

  • We're secure: we use cryptography.

  • We're secure: we use ACLs.

  • We're secure: we use a firewall.

  • We've reviewed the code, and there are no security bugs.

  • We know it's the default, but the administrator can turn it off.

  • If we don't run as administrator, stuff breaks.

Let's get started.

No one will do that!

Oh yes they will!

I once reviewed a product and asked the team whether it had performed buffer overrun tests on data the product received from a socket it opened. The team indicated that no, they had not performed such testing because no one would want to attack the server through the socket. Surely, no one would want to send malicious data at them in an attempt to attack their precious service! I reminded the team of the number of scripts available to attack various remote procedure calls (RPCs), Named Pipes, and sockets interfaces on numerous platforms and that these could be downloaded by script kiddies to attack servers on the Internet. I even offered to help their testers build the test plans. But no, they were convinced that no one would attack their application through the socket the application opened.

To cut a long story short, I created a small Perl script that handcrafted a bogus packet and sent it to the socket the product opened, thereby crashing their server! Not surprisingly, the team fixed the bug and added buffer overrun testing to their test plans!
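
That Perl script isn't reproduced here, but the idea is simple enough to sketch in a few lines of C. The host address, port, and payload below are hypothetical placeholders, and the original test was a short Perl script rather than this program; treat it as an illustration of the technique, not the actual exploit.

/* A minimal sketch of the missing test: connect to the service and send
   a deliberately malformed, oversized packet. The host and port are
   hypothetical placeholders; uses standard BSD sockets. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    const char *host = "192.168.0.10";   /* hypothetical test server */
    const unsigned short port = 5001;    /* hypothetical service port */

    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Handcraft a bogus packet: far longer than any legitimate request,
       filled with a recognizable byte so a crash dump shows where the
       unchecked input ended up. */
    static char payload[64 * 1024];
    memset(payload, 'A', sizeof(payload));
    send(s, payload, sizeof(payload), 0);

    close(s);
    return 0;
}

If the server falls over when this arrives, you have a buffer overrun to fix; either way, a test like this belongs in the test plan for every socket the product opens.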

The people in this group were not being glib; they were simply naive. Bad people attack computers, servers and desktops included, every day. If you don't think it will happen to you, think again!

Why would anyone do that?

This is a variation of the first lame excuse. And the answer is simple: because people are out there trying to get you, and they do it because they want to see you suffer! Seriously, some people enjoy seeing others discomfited, and some people enjoy vandalizing. We see it every day in the physical world. People scribble on the side of buildings, and, sadly, some people like to pick fights with others so that they can harm them. The same holds true in the digital world. The problem in the digital world is that potentially many thousands of would-be attackers can attack you anonymously.

To sum up: people attack computer systems because they can!

We've never been attacked.

When people say this, I add one word: yet! As they say in the investment world, "Past performance is no indication of future returns." This is also true in computer and software security. All it takes is one attacker finding a vulnerability in your product and making it known to others, and soon other attackers will start probing your application for similar issues. Before you know it, you have a half-dozen exploits that need fixing.

I spent some time working closely with some product developers who said they had never been attacked, so they didn't need to worry about security. Before they knew what hit them, they had seven security vulnerabilities in six months. They now have a small team of security people working on their designs and performing code reviews.

When I went to high school in New Zealand, there was a somewhat dangerous road leading up to my school. The school notified the local council that a pedestrian crossing should be built to allow pupils to cross the road safely. The council refused, citing that no one had been hurt and so there was no need to build the crossing. Eventually, a child was badly hurt, and the crossing was built. But it was too late: a child was already injured.

This excuse reminds me of getting people to perform backups. Most people do so only after they have lost data. As long as a person has lost no data, the person thinks he or she is safe. However, when disaster strikes, it's too late: the damage is done.

The moral of this story is that bad things do happen, and it's worthwhile taking preventive action as soon as possible. As my grandmother used to say to me, "An ounce of prevention is worth a pound of cure."

We're secure: we use cryptography.

Cryptography is easy from an application developer's perspective; all the hard work has been done. It's a well-understood science, and many operating systems have good cryptographic implementations. People make two major cryptographic mistakes, however:

  • They design their own encryption algorithms.

  • They store cryptographic keys insecurely.

If you've designed your own encryption algorithm, you're not using cryptography. Instead, you're using a poor substitute that probably will be broken.

If you insecurely store the keys used by the encryption system, you are also not using cryptography. Even the best encryption algorithm using the biggest keys possible is useless if the key is easily accessible by an attacker.

Don't create your own encryption algorithms. Use published protocols that have undergone years of public scrutiny.
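
To make both mistakes concrete, here is a sketch of what a home-grown "cipher" typically looks like. The code and key are invented for illustration; it compiles and "works," which is exactly why it fools people.

/* The classic anti-pattern: a repeating-key XOR "cipher" with the key
   hardcoded into the binary. Mistake #1: a home-grown algorithm that
   falls to simple frequency analysis. Mistake #2: a key an attacker can
   lift out of the executable with nothing more than a hex editor. */
#include <stddef.h>

static const unsigned char kKey[] = "s3cret";    /* insecurely stored key */

void ToyXorCipher(unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= kKey[i % (sizeof(kKey) - 1)];  /* repeating-key XOR */
}

The fix is not a cleverer loop; it's to call the platform's vetted cryptographic services (CryptoAPI on Windows) through published protocols such as SSL/TLS, and to keep keys in protected storage rather than in the source code.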

We're secure: we use ACLs.

Many resources in Windows NT, Windows 2000, and Windows XP can be protected using access control lists (ACLs). A good, well-thought-out ACL can protect a resource from attack. A bad ACL can lead to a false sense of security and, eventually, to attack.

On a number of occasions I've reviewed applications whose developers claimed they used ACLs. On closer investigation, I found that the ACLs were Everyone (Full Control). In other words, anyone (that's what Everyone means!) can do anything (that's what Full Control means!) to the object. So the application does indeed include an ACL, but Everyone (Full Control) should not be counted, because it's not secure.
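
One way to see the difference in code is with Windows security descriptor definition language (SDDL) strings. This sketch is illustrative; the function and the tighter ACL layout are my own invention, but the API and the SDDL syntax are standard on Windows 2000 and later.

/* The difference between "we have an ACL" and "we have a good ACL",
   expressed in SDDL. Everyone (Full Control) grants GENERIC_ALL (GA)
   to the Everyone SID (WD) and protects nothing. */
#define _WIN32_WINNT 0x0500
#include <windows.h>
#include <sddl.h>

/* The ACL that should not be counted as security: */
static const char *kWideOpen = "D:(A;;GA;;;WD)";  /* Everyone: Full Control */

/* A more defensible layout: SYSTEM and Administrators get full control,
   and Authenticated Users get read access only. */
static const char *kTighter = "D:(A;;GA;;;SY)(A;;GA;;;BA)(A;;GR;;;AU)";

PSECURITY_DESCRIPTOR BuildSd(const char *sddl)
{
    PSECURITY_DESCRIPTOR sd = NULL;
    if (!ConvertStringSecurityDescriptorToSecurityDescriptorA(
            sddl, SDDL_REVISION_1, &sd, NULL))
        return NULL;    /* caller can check GetLastError() */
    return sd;          /* free with LocalFree() when done */
}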

We're secure: we use a firewall.

This is another great excuse.

I've heard this from a number of Microsoft clients. After looking at a client's Web-based architecture, I realize the client has little in the way of security measures. However, the client informs me that they've spent lots of money on their firewall infrastructure and therefore they are safe from attack. Yeah, right! Firewalls are a wonderful tool, but they are only part of the overall security equation.

Further examination of their architecture shows that just about everything is Web-based. This is worrisome. A firewall is in place, but many attacks come through the HTTP port, port 80, which is wide open through the firewall. It doesn't matter that a firewall is in place: a multitude of attacks can pass straight through it and into the Web server!

The client then mentions that they can inspect packets at the firewall, looking for malicious Web-based attacks. Performance issues aside, I point out that an attacker can use SSL/TLS to encrypt the HTTP traffic; now the client cannot inspect the data at the firewall.

Firewalls are a wonderful tool when used correctly and as part of an overall security solution, but by themselves they don't solve everything.

We've reviewed the code, and there are no security bugs.

This is another of my favorite excuses. If you don't know what a security bug looks like, of course there are no security bugs! Can you certify a Boeing 747-400 for flight worthiness? Sure, we all can! It's got a bunch of wheels, two wings that droop a little (so they must be full of fuel), four engines, and a tail. It's good to go, right? Not by a long shot. There's a great deal more to check on any airplane to verify that it's safe, and it takes someone who knows what to look for to do the job correctly. The same holds true for reviewing code for security issues. You need to have the code reviewed by one or more people who understand how attackers attack code, what constitutes secure code, and what coding mistakes people make that lead to security vulnerabilities.
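
What does such a mistake look like? Here is a hypothetical fragment, not from any product in this story, of the kind of bug an untrained reviewer reads straight past because it "works" on every input the team ever tested:

#include <string.h>

/* The request arrives from the network: attacker-controlled input. */
void HandleRequest(const char *request)
{
    char name[32];
    strcpy(name, request);   /* BUG: no length check; any request longer
                                than 31 bytes overruns the stack buffer */
    /* ... use name ... */
}

A reviewer who knows what to look for insists on an explicit bound, for example strncpy(name, request, sizeof(name) - 1) followed by name[sizeof(name) - 1] = '\0', or on rejecting oversized input outright.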

I remember performing a code review for an unreleased product. The specifications looked good, the small team consisted of high-caliber developers, and the test plans were complete. Now it was time to look at the code itself. Before the meeting started, the lead developer told me that the meeting was a waste of time because they had already performed code reviews looking for security issues and had found nothing. I suggested we have the meeting anyway and decide whether to continue after forty-five minutes. Suffice it to say, I found about ten security bugs in twenty minutes, the meeting continued for its full three hours, and a lot of people learned a great deal that day!

There is a corollary to this lame excuse: open source. Now, I have no intention of getting into a religious debate about open-source code. But software being open source does not mean it is more secure; most people just don't know what to look for. Actively looking at source code is a good thing, so long as you know what to look for and how to fix it. This is part of what David and I do at Microsoft: we review lots of code, and we know what to look for. We also act like bulldogs and make sure the bugs are fixed! It's a fun job!

We know it's the default, but the administrator can turn it off.

OK, let's cut to the chase: administrators don't turn stuff off, for five reasons:

  • They often don't know what to turn off.

  • They don't know how to turn it off.

  • They don't know what will break if they do turn it off.

  • They have perfectly stable systems; why change things?

  • They have no time.

Which leaves only one viable solution: design, build, test, and deploy systems that have practical yet secure defaults. Turning on a feature that could render a system vulnerable to attack should be a conscious decision made by the administrator.

We learned this hard lesson in Microsoft Internet Information Services (IIS) 5; IIS 6 now has most features turned off by default. If you want to use many features, you have to enable them. This is totally reasonable in systems that are susceptible to attack: basically, any system that opens a socket!
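
In code, "secure by default" can be as simple as how you initialize your feature flags. The struct and field names below are invented for illustration:

/* Every optional, attack-surface-adding feature ships disabled; turning
   one on is a deliberate administrative act recorded in configuration. */
typedef struct {
    int remoteAdminEnabled;    /* network-facing; off by default */
    int legacyAuthEnabled;     /* weaker protocol; off by default */
    int sampleContentEnabled;  /* demo pages; off by default */
} Features;

/* The compiled-in default: nothing optional is on. Only an explicit
   entry in the administrator's configuration flips a flag, and an
   absent or unreadable configuration leaves the safest state intact. */
static const Features kDefaults = { 0, 0, 0 };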

If we don't run as administrator, stuff breaks.

I've had lots of conversations over the years that go like this. Me: "What stuff breaks?" Client: "Security stuff breaks!" Me: "What do you mean, 'security stuff breaks'?" Client: "If we don't run as admin, we get access denied errors." Me: "Do you think there's a good reason for that?"

This is an example of not understanding the principle of least privilege. If you get an "access denied" error, simply run the code as an administrator, or as the local system account, and the error goes away! Of course, this is rarely a good idea. Most day-to-day tasks do not require running as administrator. You should run with just the right privilege to get the job done and no more.
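
One practical step is to have your application notice at startup when it is running with more privilege than it needs and say so. This sketch uses the documented CheckTokenMembership pattern available on Windows 2000 and later; the function name is mine.

#define _WIN32_WINNT 0x0500
#include <windows.h>

/* Returns TRUE if the calling thread's token is a member of the local
   Administrators group, so a tool can warn (or refuse to run) instead
   of silently relying on excess privilege. */
BOOL RunningAsAdmin(void)
{
    SID_IDENTIFIER_AUTHORITY ntAuth = SECURITY_NT_AUTHORITY;
    PSID adminsSid = NULL;
    BOOL isMember = FALSE;

    if (AllocateAndInitializeSid(&ntAuth, 2,
            SECURITY_BUILTIN_DOMAIN_RID, DOMAIN_ALIAS_RID_ADMINS,
            0, 0, 0, 0, 0, 0, &adminsSid)) {
        /* A NULL token handle means "check the caller's own token." */
        if (!CheckTokenMembership(NULL, adminsSid, &isMember))
            isMember = FALSE;
        FreeSid(adminsSid);
    }
    return isMember;
}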

There is a side issue, however. Sometimes systems are written poorly, and people must run as administrator simply to get the job done. As software developers, we need to move away from this and support running with least privilege when nonprivileged tasks are performed. It's not that difficult to achieve, and it's a worthy goal!

I'm not a member of the Local Administrators group on my laptop and haven't been for two years. Granted, when I'm building a new machine, I do add myself to the Local Administrators group, and then I remove myself. Everything works fine. When I want to run an administrative tool, I simply run it with alternate credentials.


