Applications can do a lot of things that will degrade your security. In the end, we will not be able to find all of them, but there are a number of them that should be a cause for concern. In most cases, to perform a complete review, you need to contract an expert, or set of experts; if you find anything that looks blatantly suspicious, however, contact the software vendor. If they cannot satisfactorily address your questions, take your business, and your money, and go elsewhere.
Since we started the attack in Chapter 2 with a faulty database front-end application, let us take a look at one of those first. The main problem to afflict database front-end applications, including the Web application we saw in Chapter 2, is SQL injection.
As we saw in Chapter 2, an SQL injection bug can be devastating. The core problem with any SQL injection issue is poor input validation. Altogether too many programmers forget or ignore the first rule of security: All user input is evil until proven otherwise!
Trustworthy user input is input that you have actively determined to be trustworthy. As any administrator knows, anything that comes from a user must be considered evil and should be treated as such. Programmers often do this backward: they take the input and then try to prove that it is bad. We discussed the unicorns in Chapter 1. That discussion applies in spades here.
NOTE: You can never prove that something is bad. To do so you would have to enumerate all the possible ways something could be bad, and you will forget at least one.
In Writing Secure Code, 2nd Edition (Howard and LeBlanc, Microsoft Press, 2003), Michael Howard and David LeBlanc pointed out the Turkish I problem, which is worth repeating here.
Suppose that you have a Web application that takes URLs as input. You want to reject file URLs, so you write some code like this:
<%
if InStr(1, UCase(input), "FILE", vbTextCompare) > 0 then
    'error condition, we are getting hacked
else
    'do some sensitive operation
end if
%>
The problem with this is that you may not find all the file URLs. Turkish, and allegedly also Azerbaijani, has four different letter I's: i, I, ı (a dotless lowercase i), and İ (a dotted uppercase I). The comparison only catches the first two. A URL spelled with ı or İ drops into the else statement, the OS later translates those characters into the ordinary i and I, and you have now circumvented the check. The proper thing to do in this case would have been to look for the URLs you want to accept, not the ones you do not want. If you only want HTTP URLs, which is probably the case here, look for those and reject everything else. That will probably mean rejecting some valid input you had not thought of, but frankly, we would much rather take that problem than get ourselves hacked. Should you accidentally reject valid input, you will usually find out very quickly from your users and can add those cases to the allowed list.
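In C#, an allow-list version of that check might look something like the following sketch (ours, not the book's sample code). The question it asks is "is this one of the schemes I accept?" rather than "does this contain a substring I dislike?"

using System;

// Accept only http and https URLs; reject everything else, including
// input that does not even parse as an absolute URL.
static bool IsAcceptableUrl(string input)
{
    Uri uri;
    if (!Uri.TryCreate(input, UriKind.Absolute, out uri))
        return false;

    return uri.Scheme == Uri.UriSchemeHttp || uri.Scheme == Uri.UriSchemeHttps;
}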
In database applications, poor input validation opens the door to SQL injection attacks. Using an attack like this, an attacker can effectively rewrite the queries that run on the database server. These attacks are not unique to one type of database server or another; all database management systems are vulnerable to SQL injection if the front-end applications are not properly written.
In the application we saw in Chapter 2, the code used to query for the username and password looks like this. (Do not worry if you do not understand the code completely; its implications will become clear shortly.)
//Three mistakes in this statement alone:
SqlConnection conn = new SqlConnection();
conn.ConnectionString = "data source=PYN-SQL;" +
    "initial catalog=pubs;" +
    "user id=sa;" +
    "password=password;" +
    "persist security info=True;" +
    "packet size=4096";
This code just sets up the connection to the database server. There are three serious mistakes here. The first is in the line that says "data source": it specifies the data source in code rather than using a system Data Source Name (DSN), which means the connection parameters are hard coded in the application. If the file that holds these parameters is not adequately protected, the attacker can learn about the database server; here, for example, we find out that the server is named PYN-SQL. The next two mistakes are in the "user id" and "password" lines. First, the connection is made as a very privileged user: sa, the system administrator account. Second, that account has a really bad password of password.
"Safe" Programming LanguagesIt may be worthwhile to point out here that most of the code we are demonstrating in this chapter is written in C# using ASP.NET. This is not the typical way to do things in ASP.NET. In fact, you have to try pretty hard to screw up this bad. If you just follow the standard wizards for creating database connections in ASP.NET, it will not hard code the connection information in this way, but rather use a DSN. Obviously, if the programmer chooses to do it in the unsafe way shown here, however, there is nothing ASP.NET can do to save you. For information about how to do this better, see Chapter 14, "Protecting Services and Server Applications." Keep in mind, however, that safer functions, or even safer languages, do not necessarily mean you will have safer programmers. It just means they will have to work a bit harder to screw things up. |
Keep in mind where database credentials should not be stored. We mentioned that you should use a DSN. However, we have seen apps that put the credentials in a text file. Worse still, we once saw one that put them in a text file underneath the Web root, which means that any user on the Internet could simply request the text file and receive the database credentials in clear text.
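For contrast, here is a minimal sketch of a less fragile setup (our illustration, not the sample application's code; the "AppDb" entry and the low-privilege "webuser" login are hypothetical names). The connection string lives in configuration, and the account is anything but sa:

// Web.config (ASP.NET refuses to serve .config files to browsers):
//   <connectionStrings>
//     <add name="AppDb"
//          connectionString="data source=PYN-SQL;initial catalog=pubs;
//                            user id=webuser;password=..." />
//   </connectionStrings>

using System.Configuration;
using System.Data.SqlClient;

static SqlConnection OpenAppDb()
{
    // Read the connection string from configuration instead of hard coding it.
    string cs = ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;
    SqlConnection conn = new SqlConnection(cs);
    conn.Open();
    return conn;
}

(ConfigurationManager and the connectionStrings section are .NET 2.0 features; on .NET 1.1 the appSettings section serves the same purpose.)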
Now consider this code snippet. This is the code that actually processes the logon:
conn.Open();
//Don't do this at home folks: SQL Query Composition
string strQuery;
strQuery = "select * from Users where UserName = '" + username.Text +
    "' and Password ='" + password.Text + "';";
This code is even worse than the code that makes the connection. The username.Text and password.Text values are the form fields holding the username and the password. The code simply concatenates them into the query and passes the result to the database with no validation whatsoever! The attacker is free to send anything he wants to the database.
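The standard fix is to stop composing SQL out of strings altogether. Here is a minimal parameterized sketch of ours, reusing the table and column names from the vulnerable code above; the user input travels as data and can never change the structure of the query:

using System.Data.SqlClient;

SqlCommand cmd = new SqlCommand(
    "select * from Users where UserName = @user and Password = @pwd", conn);
// The values are bound as parameters, not pasted into the SQL text.
cmd.Parameters.AddWithValue("@user", username.Text);
cmd.Parameters.AddWithValue("@pwd", password.Text);
SqlDataReader reader = cmd.ExecuteReader();

(AddWithValue appeared in .NET 2.0; on earlier versions, Parameters.Add accomplishes the same thing.)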
Finding SQL injection problems is not always this straightforward, however. What if you cannot, or do not want to, read the source code? In that case, you should get familiar with SQL Profiler, which comes with your SQL Server installation. SQL Profiler is a tool that lets you see exactly what SQL Server sees. If we do not have the source code, we fire up Profiler and start a new trace. You need to configure the trace to look for something, so go to the Events tab and select some events that make sense. If your application uses stored procedures, select SP:StmtStarting under the Stored Procedures node. If it uses T-SQL statements, select SQL:StmtStarting under the TSQL node. If you are unsure which it uses, select both. If you have no idea what T-SQL is, hire a consultant; you need to understand a little bit about SQL to do this.
It is not a bad idea to also audit logon events, so you may want to leave those in. When you are done, you will have a dialog similar to Figure 16-1.
Now go to the Data Columns tab and select the columns you want in the output. If you are interested in which user context the queries execute under, select the DBUserName and/or TargetUserName columns. Otherwise, the default settings are mostly fine for our purpose. When you are done, click Run.
Go to the Web app and start generating queries. For instance, you may want to start with a legitimate query, such as the one in Figure 16-2.
When you run this query, you should see some output happen in SQL Profiler. If you have done everything correctly, you will see something like Figure 16-3.
To be able to pass SQL injection statements, the attacker needs to be able to pass certain characters. First, he may need to pass in single quotes to terminate a string. Second, he may need to pass in semicolons to terminate entire SQL statements. Comment characters, which in T-SQL are double dashes, are also useful, as are operators and SQL Server stored procedures such as xp_cmdshell. What you do now is play a little with these characters in the form and see what the database sees. The application may strip out single quotes, but what if you URL-escape them? A single quote is hex character 27, so try using %27 if the app throws away the single quote. Sometimes these escaped characters are unescaped before being sent to the database server. Use the Character Map tool (in your Accessories folder on the Start menu) to find the appropriate escape codes for things such as single quotes, double quotes, semicolons, dashes, and so on.
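The exact probes worth trying depend on the application, but a few illustrative values (examples of ours, not an exhaustive list) to type into a form field while watching Profiler might be:

// Illustrative probe strings; watch SQL Profiler to see which characters
// survive the trip to the database.
string[] probes = {
    "foo'",                            // does a lone single quote reach the database?
    "foo%27",                          // URL-escaped quote; is it unescaped on the server?
    "foo' OR 1=1;--",                  // tautology plus statement terminator and comment
    "foo'; exec xp_cmdshell 'dir';--"  // attempts to invoke a stored procedure
};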
If the input handling is done properly, the illegal characters will be stripped out before Profiler sees them. For instance, try something like what you see in Figure 16-4.
The result is shown in Figure 16-5 and should be self-explanatory.
As you can see in Figure 16-5, the database sees this query:
select * from Users where UserName = 'foo' OR 1=1;--' and Password ='';
Profiler is also nice enough to color code things for you, so you can plainly see that the stuff at the end ( ;--' and Password =''; ) is considered a comment. As we can see here, there is no input validation whatsoever. We can play with other characters as well, but in this case it is plain to see that the database will receive any query the attacker wants. This application is fundamentally flawed.
A note of caution is worthwhile here. We have seen Web applications that limit the amount of data a user can type into a form field. Field length limits like these are client-side attempts at input validation, and an attacker can, and will, trivially circumvent them by not using the Web application at all; attackers frequently use a custom program to send any parameters they want. Input validation needs to happen on a system you control (the server), not one the attacker controls (the client). Client-side input validation is mostly a convenience that avoids round-tripping data to the server for basic sanity checks; it does not obviate the need for server-side input validation.
WARNING: Client-side input validation is not a security feature. You must never rely on client-side input validation to keep you safe. An attacker will not use your application and therefore will not be bothered by your client-side input checks.
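On the server, the check can be as simple as an allow-list pattern applied before the value is used anywhere. The following is a sketch of ours; the character set and the 32-character limit are arbitrary examples, not requirements:

using System.Text.RegularExpressions;

// Allow only the characters a username legitimately needs; reject all else.
static bool IsValidUserName(string candidate)
{
    return Regex.IsMatch(candidate, @"^[A-Za-z0-9._-]{1,32}$");
}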
If you purchase Web applications, or if you are deploying Web applications from in-house developers, SQL Profiler may just have become your newest best friend. You can use it to double-check claims made by the developers and ensure that they really are telling you the truth. Remember, if SQL Profiler sees it, the database server sees it, and if the database server sees it and it is bad, you may have just been hacked.
For the interested reader, there is a wealth of information on SQL security on the Web. The OWASP project (http://www.owasp.org) is a project dedicated to Web application security and includes information on SQL injection and how to prevent it. SQL Security.com (http://www.sqlsecurity.com), run by Microsoft SQL Server MVP Chip Andrews, is a site dedicated to security in SQL Server.
One final word before we go on to the next topic: some developers will try to explain away SQL injection with claims such as "well, but we have secured the database." Any hardening of the database, including what we did in Chapter 14, is simply a band-aid on top of a known SQL injection problem. Although it is worthwhile as a defense-in-depth measure against the unknown, it should not be used as the primary defense strategy. If an app contains a SQL injection problem, it is unsafe. Period. It should not be used until it is fixed.
In a cross-site scripting attack, the Web server is not actually the victim. Rather, the victim is someone else. For instance, suppose that some bank has a cross-site scripting bug. An attacker can now lure a victim to click a link that goes to the bank, but that includes a script embedded in the link. When the victim clicks the link, the script executes as if it came from the bank, and has access to any data that the bank Web site would, such as cookies. The script could now take the content of the cookie and send it to the attacker.
Finding cross-site scripting problems is notoriously hard, particularly if you do not have access to the application source code. However, there are a couple of tell-tale signs. First, any time you see something you entered in a form or in a link parameter echoed back to the screen, you should be suspicious. In the Web application we showed earlier, we are clearly echoing the username to the screen. To see what else we echo, take the same approach we did for finding SQL injection problems: send bad input. This is the second clue. To perform a cross-site scripting attack, we need to send < and > characters. Will the app strip them out? Use something like what you see in Figure 16-6 to find out.
Figure 16-7 shows the result. The angle bracket went through!
As it turns out, however, there may still be something there to protect you. Try using this as the username instead: foo' OR 2>1;-- <script>alert('UR0wn3d!')</script>. If the cross-site scripting problem is unmitigated, we should now get an alert dialog when we open the page. However, in this particular case, we get what you see in Figure 16-8 instead.
This is really very cool! We did not implement any input validation. In fact, the cross-site scripting attack would have worked had the code been written in Active Server Pages or Java Server Pages instead. However, the .NET Framework automatically checks requests for cross-site scripting attacks and throws an error if it thinks it has found one. Yet another good reason to use the Framework. For more information on how it protects you against cross-site scripting attacks, see the .NET Framework SDK at http://www.microsoft.com/downloads/details.aspx?FamilyID=9b3a2ca6-3647-4070-9f41-a333c6b9181d.
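That request validation is a backstop, not a substitute for writing output correctly. A minimal sketch of the other half of the defense (our illustration; greetingLabel is a hypothetical control) HTML-encodes anything echoed back to the browser, so that < and > render as text rather than becoming markup:

using System.Web;

// Encode user-supplied data before writing it into the page.
string safeName = HttpUtility.HtmlEncode(username.Text);
greetingLabel.Text = "Welcome, " + safeName;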
Poor database security covers a number of different concepts. In Chapter 14, we talked about how to connect to the database server, how to harden it, and how to enumerate who has permissions. One issue we have not discussed, however, is encryption. Several people have recently asked us how to encrypt data in SQL Server. The answer is that you use the application to do that. SQL Server performs all data access in the context of the service account. That means that if you use the Encrypting File System (EFS) on the database files, for instance, you would have to make them available to the service account, and you really have not gained much. Data encryption is an application function. SQL Server takes in a blob and stores it. Whether the blob is a plaintext password or an encrypted one, for instance, is irrelevant to SQL Server. It will store it just fine in either case.
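To make the point concrete, here is a sketch of ours (not a SQL Server feature) of an application hashing a password with a per-user salt before it ever reaches the database; all SQL Server sees, and stores, is opaque bytes:

using System;
using System.Security.Cryptography;
using System.Text;

// Hash the password with a random salt in the application; store the salt
// and the hash in the database as binary columns.
static byte[] HashPassword(string password, byte[] salt)
{
    using (SHA256 sha = SHA256.Create())
    {
        byte[] pwd = Encoding.UTF8.GetBytes(password);
        byte[] salted = new byte[salt.Length + pwd.Length];
        Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
        Buffer.BlockCopy(pwd, 0, salted, salt.Length, pwd.Length);
        return sha.ComputeHash(salted);
    }
}

A purpose-built, deliberately slow password hash is preferable where one is available; the point here is only that the transformation happens in the application, not in the database.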
Authentication can be done in so many places. A full discussion of authentication can, and should, take up an entire book. However, the primary issues here are replay attacks and attacks against plaintext or poorly obfuscated credentials. Instead of trying to explain what all the possible ways to screw up are, it is easier to outline briefly what the right way to authenticate is.
First, an authentication sequence should be time-stamped to avoid replay attacks. It should also include information about the requested resource so that the sequence cannot be captured and used against a different resource. Both of these values need to be digitally signed, preferably using a public key from the authentication server, such as one you may obtain through an SSL channel. Encryption of these values is not important; it is their integrity that matters.
Second, an authentication sequence should always use some form of challenge-response to prove the identity of the user as opposed to sending the actual credentials across the wire. Preferably, the credentials used to generate the response token should not be the same as those used to verify it. This protects against use of the hashed credentials should they be stolen off the authentication server. Chapter 11, "Passwords and Other Authentication Mechanisms: The Last Line of Defense," addresses this at more length.
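To give those requirements a concrete shape, here is a rough sketch of ours (not a protocol to deploy as-is, and not the signed public-key variant described above): the server issues a nonce along with a timestamp and the resource name, and the client answers with a keyed hash over those values, so the secret never crosses the wire and a captured response is useless later or against another resource.

using System;
using System.Security.Cryptography;
using System.Text;

// Client side of a toy challenge-response: derivedKey is a secret derived
// from the user's credentials, never the password itself.
static byte[] ComputeResponse(byte[] derivedKey, byte[] serverNonce,
                              DateTime timestampUtc, string resource)
{
    string material = Convert.ToBase64String(serverNonce) + "|" +
                      timestampUtc.ToString("o") + "|" + resource;
    using (HMACSHA256 hmac = new HMACSHA256(derivedKey))
    {
        return hmac.ComputeHash(Encoding.UTF8.GetBytes(material));
    }
}

The server recomputes the same value, checks that the timestamp is recent and the resource matches, and only then considers the user authenticated.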
In general, if an accepted authentication protocol, such as NTLMv2 or, better yet, Kerberos, can be used, it should be. These protocols were developed by experts on the matter and are probably better than anything we could custom make for an application.
This is an admittedly brief discussion of authentication. For more details, refer to Matt Bishop's Computer Security: Art and Science (Addison-Wesley, 2002), which contains an excellent discussion on authentication.
Buffer overflows are a huge security problem today. A buffer overflow occurs when an application tries to stuff more data into a buffer than the buffer can hold. When this happens, the excess data lands somewhere on either the stack or the heap, depending on how the buffer was allocated. From there, an attacker can often use the buffer overflow to execute arbitrary code.
A buffer overflow that involves user input is particularly worrisome. A buffer overflow in a user application, such as most of the command-line tools, is not really a problem. For instance, we have received reports that if you pass a long server name to ftp.exe, it will overflow a buffer. Frankly, this is a code quality bug, not a security bug: if you manage to exploit it, you can only make ftp.exe run code as yourself. A buffer overflow is only a security bug if it allows you to run code as someone else; otherwise, it is merely a code quality bug.
If there were a foolproof way to find buffer overflows, we would tell you about it. However, there is not, and there are experts on the subject who are still learning. Howard and LeBlanc have an excellent discussion of buffer overflows in Chapter 5 of Writing Secure Code, 2nd Edition. We refer the interested reader to that book rather than try to reiterate what they say here. They also cover other similar types of problems, such as integer overflows, format string bugs, and so on.
Some applications contain unsafe security settings. Particularly worrisome are those that are set in an unsafe state by default. Any time you deploy an application, you should ask for information on the available security settings, their default values, and what will break when you turn them on. Invariably, something will break; otherwise the settings would be on by default. You should hold vendors accountable for producing this type of information on demand.
An example of this is the authentication options in SQL Server. By default, SQL Server 2000 will not accept SQL authentication; because of that, older applications that rely on it will break. Some products even have a security configuration guide that describes the available security settings and how to use them. All current versions of Windows, as well as Exchange 2003, have one, for example.
Any application of a nonadministrative nature that cannot run as a nonadministrator should be considered broken. Administrative privileges are needed to reconfigure the OS, add users, load and unload device drivers, and so on. They should not be needed to balance your checkbook. If the manufacturer claims that they are, return the application and ask for a full refund; that application is broken. Unfortunately, we will never get software that runs as a nonadministrator for nonadministrative operations unless the folks who pay money for those applications demand it.
A very large number of applications suffer from this problem. Many need to run as an administrator the first time they are executed but can run as a nonadmin after that. Although this will keep them from being Windows Logo certified, it is a more acceptable condition.
In Chapter 14, we showed a way to figure out whether an application that claims it needs admin privileges can actually run as a nonadmin. In many cases, it is possible and very worthwhile to do that. Keep in mind, however, that you may have to unlock too much. For instance, if the app needs write access to some binary, this could be used to compromise some other user. A rogue user could just replace the binary with a modified one to do his or her evil bidding.
Does the application store cleartext sensitive data or, worse yet, send cleartext data over the Internet? If it does, you have a problem. In many jurisdictions, you are now required to adequately protect customers' confidential information, and an application that fails to do so would probably put you in breach of that requirement.
To discover how the data is stored is relatively easy: just look at the data store and see what is there. To see how it traverses the network is a bit harder. The best way to find out is to break out a network sniffer. Ethereal is very good but can sometimes be difficult to configure. Microsoft's Network Monitor is a cinch to use but not quite as good. In addition, the version of Network Monitor that comes with Windows Server does not support promiscuous mode, so it will only log traffic to and from the machine where Network Monitor is running. To get promiscuous mode, you need the version that comes with Systems Management Server (SMS). If you do not have a copy of SMS, get a copy of Ethereal instead. It is free from http://www.ethereal.com.
Use the network sniffer to look at the data as it is going across the wire. If you can read it, you have a broken application. Keep in mind, however, that just because you cannot read it does not mean it is protected. Very often a programmer will obfuscate the data by running it through a base64 routine, or by XORing it with something. Neither of those is adequate protection. To protect the data, it must be encrypted, which brings us to the next topic.
If there is one thing that makes the little hairs on the back of our necks stand up, it is the statement "we do not trust any of the commercial crypto algorithms, so we invented our own." If you have a software vendor or programmer tell you that, run, do not walk, away from there. 99.9 times out of 100 they are using base64, XOR, ROT-13, or some other encoding mechanism. Collectively, these things fall under the term encraption . None of them provide sufficient protection. If an application needs to protect data under Windows, it should use the CryptoAPI with a strong protocol. AES is a good block cipher. RC4, properly used, is a reasonable stream cipher. For hashes, use nothing less than SHA-1 or SHA-256.
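By way of illustration only, here is a sketch of ours using the managed .NET wrappers over the platform's crypto support (Aes.Create is available in .NET 3.5 and later; earlier versions use RijndaelManaged). Encrypting a blob with AES before handing it to the database takes only a few lines; there is no excuse for XOR:

using System.IO;
using System.Security.Cryptography;

// Encrypt plaintext bytes with AES. The key must itself be protected,
// for example with DPAPI or the Credential Manager discussed next.
static byte[] EncryptBlob(byte[] plaintext, byte[] key, out byte[] iv)
{
    using (Aes aes = Aes.Create())
    {
        aes.Key = key;          // 16, 24, or 32 bytes
        aes.GenerateIV();
        iv = aes.IV;            // store the IV alongside the ciphertext
        using (MemoryStream ms = new MemoryStream())
        using (CryptoStream cs = new CryptoStream(ms,
                   aes.CreateEncryptor(), CryptoStreamMode.Write))
        {
            cs.Write(plaintext, 0, plaintext.Length);
            cs.FlushFinalBlock();
            return ms.ToArray();
        }
    }
}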
If the objective is to store passwords instead, the app may want to use the Credential Manager API. It is a set of APIs to store passwords for things such as Passport, other Windows systems, and so on. In actuality, it is just a thin wrapper on top of CryptoAPI, designed specifically for storing passwords.
Do not let programmers lure you into believing that they understand how to write cryptographic algorithms. The chances that they understand it better than the professionals are about the same as us winning the lottery, considering neither of us plays. Make them use proper existing algorithms, properly.
One of the biggest problems with software is the lack of a service level agreement (SLA). For instance, several vendors of large business software refuse to certify their software to run on patched systems. This leaves you with three options: (1) run it on unpatched systems and get yourself hacked, (2) patch the boxes anyway and risk breaking them and losing your expensive support contract, or (3) get your money back and go buy from a vendor that cares about your security.
For a critical security vulnerability, a vendor must certify its software on a patched system within hours, or a few days at most. For an important security vulnerability, certification must take no more than two weeks. If a vendor cannot live up to that kind of SLA, they are not taking your security seriously. If they will not exercise due diligence to protect your systems, they probably have not exercised due diligence in building their own software either, and you should reevaluate whether other vendors perform better.
Note here that we have heard stories of fingerpointing at government agencies, claiming that they are the ones responsible for certifying patches for special-purpose systems, such as medical systems. The government barely knows its own operations; it is totally unreasonable to expect it to test every special-purpose system against every patch. It must be the vendor's responsibility to test its software on patched platforms.
The last warning flag is an unbelievable claim. Many software vendors will claim things such as "our software makes your network secure," or "our software is secure," or "our software is unbreakable ." In the Old West, they called such claims "snake oil." They are untrue. There are several facts you need to consider about such claims.
No software is secure. To realize why, remember the unicorns.
No software can make a network secure. Again, the unicorns are important. However, also keep in mind what we said earlier about the IDS service. Sometimes software intended to secure the network makes it less secure instead.
No software can stop physical attacks. Software can make physical attacks more difficult, but the only way to stop physical attacks is to use physical security.
No software is unbreakable. See the first item in this list.
Although software may stop "all known attacks," is that really interesting? In most cases, patching your systems will accomplish the same thing. It is the unknown attacks that we have to worry about.
Software that uses "the strongest possible cryptography" usually does not. Make sure you understand not only what crypto it is using, but also for what and how.
Software written by "security experts" is usually not. Recall the basic definition of a security expert: someone who gets quoted in the press. If a company needs to advertise that it uses security experts to write its software, there is a really good chance that it would not recognize a real security expert should it happen to run across one.