How to Approach Security Testing


Like any other form of testing, security testing involves determining who should do the testing and what activities they should undertake.

Who

Because security testing involves two approaches, the question of who should do it has two answers. Standard testing organizations using a traditional approach can perform functional security testing. For example, ensuring that access control mechanisms work as advertised is a classic functional testing exercise. Since we basically know how the software should behave, we can run some tests and make sure that it does.[4]

[4] This is not to trivialize the critical field of software testing. Testing is a difficult and painstaking activity that requires years of experience to do right.

On the other hand, traditional QA staff will have more difficulty performing risk-based security testing. The problem is one of expertise. First, security tests (especially those resulting in complete exploit) are difficult to craft because the designer must think like an attacker. Second, security tests don't often cause direct security exploit and thus present an observability problem. Unlike in the movies, a security compromise does not usually result in a red blinking screen flashing the words "Full Access Granted." A security test could result in an unanticipated outcome that requires the tester to perform further sophisticated analysis. Bottom line: risk-based security testing relies more on expertise and experience than we would like, and not testing experience but security experience.

The software security field is maturing rapidly. I hope we can solve the experience problem by identifying best practices, gathering and categorizing knowledge, and embracing risk management as a critical software philosophy.[5] At the same time, academics are beginning to teach the next generation of builders a bit more about security so that we no longer build broken stuff that surprises us when it is spectacularly exploited.

[5] The three pillars of software security.

How

Books, such as How to Break Software Security and Exploiting Software, help educate testing professionals on how to think like an attacker during testing [Whittaker and Thompson 2003; Hoglund and McGraw 2004]. Nevertheless, software exploits are surprisingly sophisticated these days, and the level of discourse found in books and articles is only now coming into alignment.

White and black box testing and analysis methods both attempt to understand software, but they use different approaches depending on whether the analyst or tester has access to source code. White box analysis involves analyzing and understanding both source code and the design. This kind of testing is typically very effective in finding programming errors (bugs when automatically scanning code and flaws when doing risk analysis); in some cases, this approach amounts to pattern matching and can even be automated with a static analyzer (the subject of Chapter 4). One drawback to this kind of testing is that tools might report a potential vulnerability where none actually exists (a false positive). Nevertheless, using static analysis methods on source code is a good technique for analyzing certain kinds of software. Similarly, risk analysis is a white box approach based on a thorough understanding of software architecture.
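In the simplest case, white box static analysis really does amount to pattern matching over source text. The toy scanner below (a sketch with invented names, nothing like a production static analyzer) flags lines that appear to call risky C string functions; note that naive matching is exactly where false positives creep in, which is why real tools parse the code rather than grep it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class ToyScanner {
    // Invented example: a trivially small "rule set" of risky C functions.
    static final Set<String> RISKY = Set.of("strcpy", "gets", "sprintf");

    // Return 1-based line numbers whose text appears to call a risky function.
    static List<Integer> scan(List<String> sourceLines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            for (String fn : RISKY) {
                if (sourceLines.get(i).contains(fn + "(")) {
                    hits.add(i + 1);
                    break;
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> src = List.of(
            "char buf[8];",
            "strcpy(buf, input);",
            "snprintf(buf, 8, \"%s\", input);");
        System.out.println(scan(src)); // prints [2]: only the strcpy line is flagged
    }
}
```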

Black box analysis refers to analyzing a running program by probing it with various inputs. This kind of testing requires only a running program and doesn't use source code analysis of any kind. In the security paradigm, malicious input can be supplied to the program in an effort to break it: if the program breaks during a particular test, then we might have discovered a security problem. Black box testing is possible even without access to binary code; that is, a program can be tested remotely over a network. If the tester can supply the proper input (and observe the test's effect), then black box testing is possible.
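The black box loop of supplying hostile input and watching for breakage can be sketched in a few lines. This is a minimal illustration with invented names, where a stand-in function plays the role of the system under test; a real harness would drive a live program over a network or local interface.

```java
import java.util.ArrayList;
import java.util.List;

public class BlackBoxProbe {
    // Stand-in "system under test": rejects empty and oversized commands.
    static void processCommand(byte[] input) {
        if (input == null || input.length == 0)
            throw new IllegalArgumentException("empty command");
        if (input.length > 16)
            throw new IllegalStateException("buffer overrun guard tripped");
    }

    // Feed each probe to the target and record which ones break it.
    static List<String> probe(byte[][] inputs) {
        List<String> failures = new ArrayList<>();
        for (byte[] in : inputs) {
            try {
                processCommand(in);
            } catch (RuntimeException e) {
                failures.add(e.getMessage());
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        // Boundary probes: empty, normal, and oversized input.
        byte[][] probes = { new byte[0], new byte[8], new byte[64] };
        System.out.println(probe(probes));
    }
}
```

Each recorded failure is only a lead, not a confirmed vulnerability; as the text notes, unanticipated outcomes require further analysis by hand.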

Any testing method can reveal possible software risks and potential exploits. One problem with almost all kinds of security testing (regardless of whether it's black or white box) is the lack of it: most QA organizations focus on features and spend very little time understanding or probing nonfunctional security risks. Exacerbating the problem, the QA process is often broken in many commercial software houses due to time and budget constraints and the belief that QA is not an essential part of software development.

Case studies can help make sense of the way security testing can be driven by risk analysis results. See the box An Example: Java Card Security Testing.

An Example: Java Card Security Testing

Doing effective security testing requires experience and knowledge. Examples and case studies like the one I present here are thus useful tools for understanding the approach.

In an effort to enhance payment cards with new functionality, such as the ability to provide secure cardholder identification or remember personal preferences, many credit-card companies are turning to multi-application smart cards. These cards use resident software applications to process and store thousands of times more information than traditional magnetic-stripe cards.

Security and fraud issues are critical concerns for the financial institutions and merchants spearheading smart-card adoption. By developing and deploying smart-card technology, credit-card companies provide important new tools in the effort to lower fraud and abuse. For instance, smart cards typically use a sophisticated crypto system to authenticate transactions and verify the identities of the cardholder and issuing bank. However, protecting against fraud and maintaining security and privacy are both very complex problems because of the rapidly evolving nature of smart-card technology.

The security community has been involved in security risk analysis and mitigation for Open Platform (now known as Global Platform, or GP) and Java Card since early 1997. Because product security is an essential aspect of credit-card companies' brand protection regimen, companies like Visa and MasterCard spend plenty of time and effort on security testing and risk analysis. One central finding emphasizes the importance of testing particular vendor implementations according to our two testing categories: adherence to functional security design and proper behavior under particular attacks motivated by security risks.

The latter category, adversarial security testing (linked directly to risk analysis findings), ensures that cards can perform securely in the field even when under attack. Risk analysis results can be used to guide manual security testing. As an example, consider the risk that, as designed, the object-sharing mechanism in Java Card is complex and thus is likely to suffer from security-critical implementation errors on any given manufacturer's card. Testing for this sort of risk involves creating and manipulating stored objects where sharing is involved. Given a technical description of this risk, building specific probing tests is possible.

Automating Security Testing

Over the years, Cigital has been involved in several projects that have identified architectural risks in the GP/Java Card platform, suggested several design improvements, and designed and built automated security tests for final products (each of which has multiple vendors).

Several years ago, we began developing an automated security test framework for GP cards built on Java Card 2.1.1 and based on extensive risk analysis results. The end result is a sophisticated test framework that runs with minimal human intervention and results in a qualitative security testing analysis of a sample smart card. This automated framework is now in use at MasterCard and the U.S. National Security Agency.

The first test set, the functional security test suite, directly probes low-level card security functionality. It includes automated testing of class codes, available commands, and crypto functionality. This test suite also actively probes for inappropriate card behavior of the sort that can lead to security compromise.
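A hedged illustration of what one such functional probe checks (plain Java with invented names, not a real card interface): an undefined instruction byte should come back with the ISO 7816 "INS not supported" status word (0x6D00), not with some undefined behavior.

```java
public class FunctionalProbe {
    // Toy stand-in for a card: only instruction 0x20 is implemented.
    static int transmit(byte cla, byte ins) {
        if (ins == (byte) 0x20) return 0x9000;   // success status word
        return 0x6D00;                           // ISO 7816: INS not supported
    }

    public static void main(String[] args) {
        // An unsupported instruction must fail cleanly with the defined SW.
        System.out.println(Integer.toHexString(transmit((byte) 0x00, (byte) 0xFF)));
        // prints 6d00
    }
}
```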

The second test set, the hostile applet test suite, is a sophisticated set of intentionally hostile Java Card applets designed to probe high-risk aspects of the GP on a Java Card implementation.

Results: Nonfunctional Security Testing Is Essential

Most (but not all) cards tested with the automated test framework pass all functional security tests, which we expect because smart-card vendors are diligent with functional testing (including security functionality). Because smart cards are complex embedded devices, vendors realize that exactly meeting functional requirements is an absolute necessity for customers to accept the cards. After all, they must perform properly worldwide.

However, every card submitted to the risk-based testing paradigm exhibited some manner of failure when tested with the hostile applet suite. Some failures pointed directly to critical security vulnerabilities on the card; others were less specific and required further exploration to determine the card's true security posture.

As an example, consider that risk analysis of Java Card's design documents indicates that proper implementation of atomic transaction processing is critical for maintaining a secure card. Java Card has the capability of defining transaction boundaries to ensure that if a transaction fails, data roll back to a pre-transaction state. In the event that transaction processing fails, transactions can go into any number of possible states, depending on what the applet was attempting. In the case of a stored-value card, bad transaction processing could allow an attacker to "print money" by forcing the card to roll back value counters while actually purchasing goods or services. This is called a "torn transaction" attack in credit-card risk lingo.
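The rollback semantics that a torn-transaction attack abuses can be sketched in plain Java (this is a simplified model with invented names, not the Java Card transaction API): the balance is snapshotted at transaction begin, and an abort must restore that snapshot. A card that skips the restore is the one that "prints money."

```java
public class TornTransactionDemo {
    static class StoredValue {
        private int balance;
        private int snapshot;

        StoredValue(int initial) { balance = initial; }

        void beginTransaction() { snapshot = balance; } // save pre-transaction state
        void debit(int amount)  { balance -= amount; }
        void commitTransaction() { snapshot = balance; }

        // Abort (e.g., card torn from the reader): roll back to the snapshot.
        void abortTransaction() { balance = snapshot; }

        int getBalance() { return balance; }
    }

    public static void main(String[] args) {
        StoredValue card = new StoredValue(100);
        card.beginTransaction();
        card.debit(40);            // goods change hands here...
        card.abortTransaction();   // ...then the transaction is torn
        // Correct rollback restores the pre-transaction balance, so the
        // attacker cannot keep both the goods and the stored value.
        System.out.println(card.getBalance()); // prints 100
    }
}
```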

When creating risk-based tests to probe transaction processing, we directly exercised transaction-processing error handling by simulating an attacker attempting to violate a transaction: specifically, transactions were aborted or never committed, transaction buffers were completely filled, and transactions were nested (a no-no according to the Java Card specification). These tests were not based strictly on the card's functionality; instead, security test engineers intentionally created them, thinking like an attacker given the results of a risk analysis.
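The nesting case, for instance, can be expressed as a small conformance check. This sketch (plain Java, invented names) encodes the rule that a second transaction begin while one is open must be rejected, and reports whether the implementation under test does so.

```java
public class NestedTransactionTest {
    // Toy transaction implementation that follows the no-nesting rule.
    static class Tx {
        private boolean open;

        void begin() {
            if (open) throw new IllegalStateException("nested transaction");
            open = true;
        }
        void commit() { open = false; }
    }

    // Returns true if the implementation properly rejects a nested begin.
    static boolean rejectsNesting(Tx tx) {
        tx.begin();
        try {
            tx.begin();        // second begin while a transaction is open
            return false;      // no exception: a specification violation
        } catch (IllegalStateException e) {
            return true;       // properly rejected
        }
    }

    public static void main(String[] args) {
        System.out.println(rejectsNesting(new Tx())); // prints true
    }
}
```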

Several real-world cards failed subsets of the transaction tests. The vulnerabilities discovered as a result of these tests would allow an attacker to terminate a transaction in a potentially advantageous manner, a critical test failure that wouldn't have been uncovered under normal functional security testing. Fielding cards with these vulnerabilities would allow an attacker to execute successful attacks on live cards issued to the public. Because of proper risk-based security testing, the vendors were notified of the problems and corrected the code responsible before release.


Coder's Corner

Let's take a look at one of the tests that we built for Java Card security testing. This test set as a whole probes whether shareable objects behave properly on a card.

First, the interface specification:

    package tests.config1.jcre.JcreTest010_1;

    import javacard.framework.Shareable;
    import ssg.framework.*;

    public interface shareableInterface extends Shareable {
        public void shareObject();
    }


This little glob of code implements the shared interface and sets up the test harness.


    package tests.config1.jcre.JcreTest010_1;

    import javacard.framework.*;
    import ssg.framework.*;

    public class JcreTest010_1a extends Applet implements shareableInterface {

        static byte[] shareableObjectBuffer;

        private JcreTest010_1a() {
            shareableObjectBuffer = new byte[10];
            for (byte i = 0; i < 10; i++)
                shareableObjectBuffer[i] = 0x11;
            register();
        }

        public void shareObject() {
            for (byte i = 0; i < 10; i++)
                shareableObjectBuffer[i] = 0x22;
        }

        public void testFunc() {
            for (byte i = 0; i < 10; i++)
                shareableObjectBuffer[i] = 0x33;
        }

        public Shareable getShareableInterfaceObject(AID client_aid, byte parameter) {
            /* for (byte i = 0; i < 10; i++)
                   shareableObjectBuffer[i] = 0x33; */
            return (this);
        }

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new JcreTest010_1a();
        }

        public void process(APDU apdu) {
            byte[] apdu_buffer = apdu.getBuffer();
            apdu.setOutgoing();
            apdu.setOutgoingLength((short) 10);
            Util.arrayCopy(shareableObjectBuffer, (short) 0, apdu_buffer, (short) 0, (short) 10);
            for (byte i = 0; i < 10; i++)
                shareableObjectBuffer[i] = 0x11;
            apdu.sendBytes((short) 0, (short) 10);
        }
    }


Then we can run tests like this. (I show you only one of the five tests related to shareable interfaces just to keep things simple.)


    package tests.config1.jcre.JcreTest010_2;

    import javacard.framework.*;
    import ssg.framework.*;
    import tests.config1.jcre.JcreTest010_1.*;

    public class JcreTest010_2a extends Applet {

        byte[] serverAID = {74, 99, 114, 101, 84, 101, 115, 116, 48, 49, 48, 49, 97};
        byte[] AIDValue;

        private JcreTest010_2a() {
            AIDValue = new byte[16];
            register();
        }

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new JcreTest010_2a();
        }

        public void process(APDU apdu) {
            AID serverAIDObject = JCSystem.lookupAID(serverAID, (short) 0, (byte) serverAID.length);
            if (serverAIDObject == null)
                ISOException.throwIt(ISO7816.SW_WRONG_P1P2);          // 0x6B00
            if ((serverAIDObject.equals(serverAID, (short) 0, (byte) serverAID.length)) == false)
                ISOException.throwIt(ISO7816.SW_CORRECT_LENGTH_00);   // 0x6C00
            shareableInterface sio = (shareableInterface)
                (JCSystem.getAppletShareableInterfaceObject(serverAIDObject, (byte) 0));
            if (sio == null) {
                byte length = serverAIDObject.getBytes(AIDValue, (short) 0);
                byte[] apdu_buffer = apdu.getBuffer();
                apdu.setOutgoing();
                apdu.setOutgoingLength((short) length);
                Util.arrayCopy(AIDValue, (short) 0, apdu_buffer, (short) 0, (short) length);
                apdu.sendBytes((short) 0, (short) length);
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);   // 0x6D00
            }
            sio.shareObject();
        }
    }
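As an aside, the hard-coded serverAID bytes in the client applet are simply the ASCII name of the server applet. A quick decode in plain Java (outside the card; this helper is not part of the test framework) confirms which applet the lookup targets:

```java
public class AidDecode {
    // Interpret each AID byte as an ASCII character.
    static String decode(byte[] aid) {
        StringBuilder sb = new StringBuilder();
        for (byte b : aid) sb.append((char) (b & 0xFF));
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] serverAID = {74, 99, 114, 101, 84, 101, 115, 116, 48, 49, 48, 49, 97};
        System.out.println(decode(serverAID)); // prints JcreTest0101a
    }
}
```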


What we found in practice on one of the many real cards we tested was that the shareable interface tests all worked fine. What failed was the test teardown procedure that tries to leave the card in the same state as when we started. When this failed, we did some investigation by hand and uncovered some interesting issues.


There is no silver bullet for software security; even a reasonable security testing regimen is just a start. Unfortunately, security continues to be sold as a product, and most defensive mechanisms on the market do little to address the heart of the problem, which is bad software. Instead, they operate in a reactive mode: Don't allow packets to this or that port, watch out for files that include this pattern in them, throw partial packets and oversized packets away without looking at them. Network traffic is not the best way to approach the software security predicament because the software that processes the packets is the problem. By using a risk-based approach to software security testing, testing professionals can help solve security problems while software is still in production.

Of course, any testing approach is deeply impacted by software process issues. Because of eXtreme Programming's (XP) "test first" philosophy, adopting a risk-based approach may be difficult if you are in an XP shop. See the following box, eXtreme Programming and Security Testing.




Software Security: Building Security In
ISBN: 0321356705
Year: 2004
Pages: 154
Authors: Gary McGraw
