The best compilation of common web application issues is maintained by the Open Web Application Security Project (OWASP). According to its website, it is "dedicated to enabling organizations to develop, purchase, and maintain applications that can be trusted." In short, it has a tremendous amount of information that will help you to develop a solid audit program for your web applications. OWASP is supported by companies such as VISA, Deloitte, and Foundstone.
The OWASP "top ten" have made their way into standards, such as the Payment Card Industry (PCI) standard, and these "top ten" are regarded as a set of minimum standards you should examine during an audit. Below you will find an example of how to go about an audit of the OWASP "top ten."
There are important caveats as you move through this audit program. Some of the following steps may be more important to you than other steps because of how your application is designed. We assume for the most part that there are interactions between the web server and the user, such as logging into the application or serving up user-requested data.
Keep in mind that the audience of this book varies greatly in technical abilities, and an attempt has been made to simplify as much as possible for the majority of the readers. You may want to visit http://www.owasp.org/index.php/OWASP_Top_Ten_Project to determine what scope and toolset make sense in your environment.
Information must be validated before being used by a web application. Failure to validate web requests subjects the web server to increased risk from attackers attempting to manipulate input data to produce malicious results.
Discuss with the web application developer or web administrator the methodology used for input validation for the application you are testing.
There are several tools that effectively act as a proxy and allow you to see much of the content posted from your client to the remote web server. One such tool is Paros Proxy, located at http://www.parosproxy.org.
Another method used by professional web testers is to understand the movement of data during a code review. This isn't something that should be taken lightly because it may be beyond the scope of what you are trying to accomplish. There is a tradeoff that you as an auditor are going to have to make regarding the amount of effort you put into this versus the cost of the data you are protecting.
In general, there are two approaches to validation: negative methods and positive methods. Negative methods focus on filtering out input based on what is known to be bad. The problem with negative filtering is that we don't know today what tomorrow's vulnerabilities and input methods will bring. Positive filtering is much more effective and involves validating the data based on what they should be. This is similar in approach to a firewall that denies everything except what should be accepted.
Common items for positive filtering include criteria you might find in a database or other places that accept data. These include criteria such as
Data type (e.g., string, integer, and real)
Allowed character set
Minimum and maximum length
Whether null is allowed
Whether the parameter is required or not
Whether duplicates are allowed
Specific legal values (e.g., enumeration)
Specific patterns (e.g., regular expressions)
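The criteria above can be combined into a simple positive-validation routine. The following is a minimal sketch in Python; the field rules, pattern, and function name are hypothetical illustrations, not part of OWASP's guidance.

```python
import re

# Hypothetical whitelist rule for a "username" field: character set and
# minimum/maximum length are validated positively -- anything that does
# not match the known-good pattern is rejected.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value):
    """Return True only if the input matches the known-good definition."""
    if not isinstance(value, str):  # data type check
        return False
    return USERNAME_PATTERN.fullmatch(value) is not None

# Known-good input passes; everything else -- including injection
# attempts -- is rejected without needing a list of "bad" characters.
print(validate_username("alice_01"))         # True
print(validate_username("bob; DROP TABLE"))  # False
```

Note that nothing here enumerates bad characters; the routine simply refuses anything outside the defined legal values, which is what makes the positive approach resilient to new attack strings.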
After a user is authenticated to the web server, the web server determines what kind of access the user should have and to what parts of the website the user should have access. Failure to enforce access controls (authorization) may allow an attacker to step out of authorized boundaries, accessing other users' data or administering unauthorized areas.
Discuss the policy requirements with the administrator. Failure to have a policy or other written documentation for a home-grown application is a red flag that suggests that access controls are not being enforced correctly. This is because access controls are complicated and difficult to get right without carefully documenting and thinking through your desired results. Typical methods for bypassing authorization include those shown in Table 8-3.
Cached or Insecure IDs
Many websites use some sort of key or ID stored on the client as a means of determining what rights the user has on the web server. If the user can guess or forge a token, then he or she may have free rein on the web server.
Some websites require certain checks before allowing a user to access content deeper in the site. If the checks are not enforced, then a user can access the content directly.
These attacks attempt to backtrack and go around normal permission sets to gain access to information or files not normally accessible.
Log and configuration files, among others, may have incorrect permissions and be accessible through the web interface. Correctly setting file permissions also can help in preventing other attacks.
Client Side Caching
Clients should not cache sensitive information such as credit card and personal data. Attackers can take advantage of users' cached data and maliciously reuse this information.
Account credentials and session tokens must be protected. Attackers who can compromise passwords, keys, session cookies, or other tokens can defeat authentication restrictions and assume other users' identities.
Discuss with the administrator the authentication mechanism used to authenticate users to the web application. The web application should have built-in facilities to handle the life cycle of user accounts. Verify that help-desk functionality, such as lost passwords, is handled securely. Walk through the implementation with the administrator, and then ask the administrator to demonstrate to you the functionality.
Table 8-4 contains a shortened list of guiding principles when it comes to checking the authentication mechanism used on a website. These are taken from OWASP's website, which contains several more principles, including coding examples.
When a user enters an invalid credential into a login page, don't reveal which item was incorrect. Instead, show a generic message such as "Your login information was invalid!"
Never submit login information via a GET request. Always use POST.
Use SSL to protect login page delivery and credential transmission.
Remove dead code and client-side viewable comments from all pages.
Do not depend on client-side validation. Validate input parameters for type and length on the server, using regular expressions or string functions.
Database queries should use parameterized queries or properly constructed stored procedures.
Database connections should be created using a lower-privileged account. Your application should not log into the database using sa or dbadmin.
One way to store passwords is to hash passwords in a database or flat file using SHA-256 or greater with a random salt value for each password.
Prompt the user to close his or her browser to ensure that header authentication information has been flushed.
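Two of the principles above, parameterized queries and salted password hashing, can be sketched together. This is an illustrative Python example using only the standard library; the schema and helper names are hypothetical, and a production system would use a dedicated password-hashing scheme on top of these ideas.

```python
import hashlib
import os
import sqlite3

def hash_password(password):
    """Hash a password with SHA-256 and a random per-password salt."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt.hex(), digest

def verify_password(password, salt_hex, stored_digest):
    """Recompute the salted hash and compare it with the stored value."""
    salt = bytes.fromhex(salt_hex)
    candidate = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return candidate == stored_digest

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, salt TEXT, hash TEXT)")

salt, digest = hash_password("s3cret!")
# Parameterized query: the driver binds the values safely; user input
# is never concatenated into the SQL string itself.
conn.execute("INSERT INTO users VALUES (?, ?, ?)", ("alice", salt, digest))

row = conn.execute("SELECT salt, hash FROM users WHERE name = ?",
                   ("alice",)).fetchone()
print(verify_password("s3cret!", row[0], row[1]))  # True
```

Because each password gets its own random salt, two users with the same password produce different stored hashes, which defeats precomputed lookup tables.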
Please visit http://www.owasp.org/index.php/Authentication for an excellent overview of authentication methods and the strengths and weaknesses of each.
Cross-site scripting (XSS) allows the web application to transport an attack from one user to another end user's browser. A successful attack can disclose the second end user's session token, attack the local machine, or spoof content to fool the user. Damaging attacks include the disclosure of end-user files, installation of Trojan horse programs, redirecting the user to some other page or site, and modifying the presentation of content.
Cross-site scripting attacks are very difficult to find, and although tools can help, they are notoriously inept at locating all the possible XSS combinations on a web application. By far the best method for determining if your website is vulnerable is a thorough code review with the administrator.
If you were to review the code, you would search for every possible path by which HTTP input could make its way into the output going to a user's browser. The key method used to protect a web application from XSS attacks is to validate every header, cookie, query string, form field, and hidden field. Drawing on the previous discussion of positive and negative validation measures, you should make sure to employ a positive validation method.
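In addition to positive input validation, one concrete defense is to encode user-supplied data before it is written into HTML output. The sketch below uses Python's standard library to illustrate the idea; in a real application, the templating layer would typically perform this escaping, and the function name here is hypothetical.

```python
import html

def render_comment(user_input):
    """Escape user input before embedding it in an HTML page, so any
    injected markup is displayed as text rather than executed."""
    return "<p>" + html.escape(user_input) + "</p>"

# A script injection attempt is neutralized into harmless text.
print(render_comment('<script>alert("xss")</script>'))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Escaping on output complements validation on input: even if a malicious string slips past the input filters, it cannot execute in the victim's browser.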
CIRT.net contains two tools, Nikto and a Nessus plugin, that you might be able to use to help you automate the task of looking for XSS vulnerabilities on your web server. Keep in mind that these tools are not as thorough as conducting a complete code review, but at least they can provide more information to those who don't have the skills, resources, time, or budget to conduct a complete review. Nikto, a tool from http://www.cirt.net/code/nikto.shtml, searches for literally thousands of known issues across hundreds of platforms. It's a powerful tool and should be part of your toolset. Scan items and plugins are updated frequently and can be updated automatically if desired. Commercial tools also are available that may help, such as Acunetix (http://www.acunetix.com). These tools may find well-known attacks, but they will not be nearly as good as performing a solid code review.
If you don't have the internal resources available to perform a code review, particularly on a home-grown application, and you believe that the data on the website warrant a deep review, then consider hiring third-party help. There are outfits such as FishNet Security (http://www.fishnetsecurity.com) that perform this kind of work.
Buffer overflows are quick to find their way into an exploit for web servers in general. You should make sure that all applicable patches covering buffer overflows are installed on the web server to protect your web applications.
Buffer overflows aren't something you typically find by looking through the code unless you are a professional hacker paid to do this. By far the easiest method to stay on top of buffer overflows is to stay on top of the patching cycle for your systems. You have patches for the operating system, web platform, and in many cases the web application that you need to research and verify.
Discuss the patching cycle of the web servers with the web administrator to ensure that any applicable web application patches have been installed. This sounds like a repeat of step 2 above for the web platform. However, in certain cases, commercial web applications require their own patches separate from the web platform. Ensure that all known patches are installed to protect the security of the web platform and web application.
Injection attacks allow a web client to pass data through the web server and out to another system. For example, in an SQL injection attack, SQL code is passed through the web interface, and the database is asked to perform functions out of bounds of your authorization. Several websites have coughed up credit-card and Social Security card information to hackers who have taken advantage of injection attacks.
Failure to realize the power of injection attacks and to review your systems for the likelihood of being exploited may result in the loss of critical and sensitive information.
Discuss injection attacks with the administrator to ensure that he or she understands how they work, and then ask how he or she is guarding against injection attacks. There aren't any tools that will review and discover every possible injection attack on your web application, but you still can defend against them. The defense methods are a repeat of what was covered in cross-site scripting:
Validate all input using positive validation methods.
Perform a code review if possible for all calls to external resources to determine if the method could be compromised.
Commercial tools are available that may help, such as Acunetix (http://www.acunetix.com). These tools are definitely powerful and may find well-known attacks, but they will not be as good as performing a solid code review.
Consider hiring third-party help if the application is particularly sensitive, you lack the resources, or you need to verify items such as regulatory compliance.
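The mechanics of an SQL injection, and of its defense, are easy to demonstrate side by side. The following Python sketch uses an in-memory SQLite database; the table and sample data are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, ssn TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice', '123-45-6789')")

# A classic injection string: closes the quote, then adds OR '1'='1'.
malicious = "nobody' OR '1'='1"

# Vulnerable: user input concatenated into the SQL string. The WHERE
# clause becomes always-true and every row is returned.
unsafe = conn.execute(
    "SELECT ssn FROM accounts WHERE user = '" + malicious + "'").fetchall()

# Defended: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT ssn FROM accounts WHERE user = ?", (malicious,)).fetchall()

print(unsafe)  # [('123-45-6789',)] -- sensitive data leaked
print(safe)    # [] -- no such user, injection neutralized
```

The vulnerable query leaks the Social Security number precisely as described above; the parameterized version returns nothing because no user is literally named "nobody' OR '1'='1".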
Improperly controlled error conditions allow attackers to gain detailed system information, deny service, cause security mechanisms to fail, or crash the server.
Improper error handling generally is a function of having detailed plans in place during development of the application to centralize and control all input methods. Ask the administrator how error handling was designed into the web application and how errors are handled internally as the application interfaces with other compartmentalized functions. For example, how would the web application handle an error generated by the database? Does it make a difference whether the database is hosted internally by the application as opposed to hosting the database externally on another server? How does the application handle input validation errors? What about username and password errors?
Error handling is often better controlled if it is centralized as opposed to compartmentalizing it across several interworking objects or components. If you are reviewing the code, the error handling should flow nicely and show structure. If the error handling looks haphazard and like an afterthought, then you may want to look much more closely at the application's ability to handle errors properly.
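The centralized pattern described above can be sketched briefly: every component routes failures through one handler that logs the detail server-side and returns only a generic message to the client. This Python example is a simplified illustration; the function names and the dictionary standing in for a database are hypothetical.

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("webapp")

def handle_error(exc):
    """Central error handler: log full detail for the administrator,
    but return only a generic message so internals are not leaked."""
    log.error("Internal error: %r", exc)  # detail stays in the server log
    return "An error occurred. Please try again."

def lookup_account(db, user):
    """Example component: database failures flow to the central handler."""
    try:
        return db[user]
    except KeyError as exc:  # e.g., a failed database lookup
        return handle_error(exc)

print(lookup_account({}, "alice"))  # An error occurred. Please try again.
```

Whether the database error, the validation error, or the login error occurred, the client sees the same generic response, while the log retains the structured detail an administrator needs.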
Web applications often want to obfuscate or encrypt data to protect the data and credentials. The challenge is that there are two parts to this scheme: the black box that does the magic and the implementation of the black box into your web application. These have proven difficult to code properly, frequently resulting in weak protection.
Begin the discussion with the web administrator by talking about the sensitivity of the data you want to protect. If the data are sensitive and not encrypted, then consider whether there are industry or regulatory drivers stating that the data must be encrypted, and note the issue. If data are encrypted, then discuss in detail with the developer or review documentation with the administrator to understand how the encryption mechanism was implemented into your web application. Ensure that the level of encryption is equivalent to the level of data you want to protect. If you have extremely sensitive data such as credit-card data, then you may want to have actual encryption instead of a simple algorithm that obfuscates the data.
Obfuscation simply means to find creative ways of hiding data without using a key. Encryption is considered to be much, much more secure than obfuscation. Encryption uses tested algorithms and unique keys to transform data into a new form in which there is little or no chance of recreating the original data without a key. Sound complicated? It's that much harder to defeat properly implemented encryption than it is to defeat obfuscation.
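The difference is easy to demonstrate. Base64 encoding is a common form of obfuscation: the output looks scrambled, but anyone who recognizes the encoding can reverse it instantly, with no key at all. The Python snippet below (with an invented sample value) makes the point.

```python
import base64

card_number = "4111-1111-1111-1111"  # hypothetical sample data

# "Obfuscated" data: looks scrambled, but no key is involved.
obfuscated = base64.b64encode(card_number.encode()).decode()
print(obfuscated)

# Any attacker who recognizes the encoding recovers the data at once.
recovered = base64.b64decode(obfuscated).decode()
print(recovered == card_number)  # True
```

Real encryption, by contrast, makes recovery depend on possession of a secret key, not on recognizing a transformation.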
It is possible for attackers to repeatedly connect to your web application and fill up all available resources until there is nothing left for legitimate users.
This is incredibly difficult to defend against, but there are a few things you can do. Discuss with the administrator how he or she is handling user resources. Ideally, authenticated users would have more resources, and visiting users would be limited in scope as to what they can access. Resource-intensive operations in some cases may be offloaded, such as database queries. In some cases, data may be cached so that the same operation isn't performed repeatedly, eating up your resources for an operation that just occurred.
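One of these ideas, capping how many requests a single source can make within a time window, can be sketched as a simple per-client rate limiter. This Python class is a minimal illustration; the limit and window values are arbitrary, and a production system would enforce this at the network or load-balancer layer as well.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per client within `window` seconds."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.requests = defaultdict(deque)  # client -> request timestamps

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.requests[client]
        while q and now - q[0] > self.window:  # drop expired entries
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject, preserving resources
        q.append(now)
        return True

# Three requests pass; the fourth and fifth from the same client are
# rejected, so one source cannot consume all available resources.
limiter = RateLimiter(limit=3, window=60.0)
results = [limiter.allow("10.0.0.1", now=float(i)) for i in range(5)]
print(results)  # [True, True, True, False, False]
```

The same structure extends naturally to the tiering described above: authenticated clients can be given a higher limit than anonymous visitors.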
There are also several application testers such as JMeter from Apache located at http://www.jakarta.apache.org/jmeter. Check out http://www.opensourcetesting.org/performance.php for a list of open-source stress-testing tools. Finally, make sure that your hardware and memory are sufficient for the web application.
This is a catch-all that addresses configuration management, the overarching concept of maintaining the secure configuration of the web server. Failure to maintain a secure configuration subjects the web server to lapses in technology or processes that affect the security of the web platform and web application.
Perform the web platform audit, and discuss any issues noted with the administrator. Determine if any of the issues noted are due to inadequate configuration management. Discuss the following with the administrator to ensure that proper configuration management controls are in place:
Security mailing lists for the web server, platform, and application are monitored.
The latest security patches are applied in a routine patch cycle under the guidance of written and agreed-to policies and procedures.
A security configuration guideline exists for the web servers in the environment and is strictly followed. Exceptions are carefully documented and maintained.
Regular vulnerability scanning from both internal and external perspectives is conducted to discover new risks quickly and to test planned changes to the environment.
Regular internal reviews of the server's security configuration are conducted to compare the existing infrastructure with the configuration guide.
Regular status reports are issued to upper management documenting the overall security posture of the web servers.
Having a strong server configuration standard is critical to a secure web application. These servers have many configuration options that affect security and are not secure out of the box. Taking the time to understand these options and how to configure them to your environment is fundamental to maintaining sound and secure web servers.