People's natural inclination when approaching code review is to structure it like a waterfall-style development process: start with a structured design review phase and adhere to a formal process, including DFDs and attack trees. This type of approach should give you all the information you need to plan and perform an effective targeted review. However, it doesn't necessarily result in the most time-effective identification of high- and intermediate-level design and logic vulnerabilities, because it overlooks a simple fact about application reviews: The time at which you know the least about an application is the beginning of the review.

This statement seems obvious, but people often underestimate how much one learns over the course of a review; it can be a night-and-day difference. When you first sit down with the code, you often can't see the forest for the trees. You don't know where anything is, and you don't know how to begin. By the end of a review, the application can seem almost like a familiar friend. You probably have a feel for the developers' personalities and can identify where the code suffers from neglect because everyone is afraid to touch it. You know who just read a book on design patterns and decided to build the world's most amazing flexible aspect-oriented turbo-logging engine, and you have a good idea which developer was smart enough to trick that guy into working on a logging engine.

The point is that the time you're best qualified to find more abstract design and logic vulnerabilities is toward the end of the review, when you have a detailed knowledge of the application's workings. A reasonable process for code review should capitalize on this observation. A design review is exceptional for starting the process, prioritizing how the review is performed, and breaking up the work among a review team. However, it's far from a security panacea. You'll regularly encounter situations, such as the ones in the following list, where you must skip the initial design review or throw out the threat model because it doesn't apply to the implementation:
Accepting all the preceding points, performing a design review and threat model first, whenever realistically possible, is still encouraged. If done properly, it can make the whole assessment go more smoothly.

Avoid Drowning

This process has been structured based on extensive experience in performing code reviews. Experienced auditors (your authors in particular) have spent years experimenting with different methodologies and techniques, and some have worked out better than others. However, the most important thing learned from that experience is that it's best to use several techniques and switch between them periodically, for the following reasons:
Iterative Process

The method for performing the review is a simple, iterative process. It's intended to be used two or three times over the course of a work day. Generally, this method works well because you can switch to a less taxing auditing activity when you start to feel as though you're losing focus. Of course, your work day, constitution, and preferred schedule might prompt you to adapt the process further, but this method should be a reasonable starting point.

First, you start the application review with an initial preparation phase, in which you survey what information you have available, make some key decisions about your audit's structure, and perform design review if adequate documentation is available and you deem it to be time effective. After this initial phase, the cycle has three basic steps:
These three steps are repeated until the end of the application review phase, although the selection of auditing strategies changes as the assessment team develops a more thorough understanding of the codebase.

Initial Preparation

You need to get your bearings before you can start digging into the code in any meaningful way. At this point, you should have a lot of information, but you probably don't know exactly where to start or what to do with it. The first decision to make is how you're going to handle the structure of your review. If you don't have much documentation, your decision is simple: You have to derive the design from the implementation during the course of your review. If you have adequate documentation, you can use it as a basic roadmap for structuring your review.

There are three generalized approaches to performing an assessment: top-down, bottom-up, and hybrid. The first two are analogous to the types of component decomposition in software design. As in software design, the approach is determined by your understanding of the design at a particular level.

Top-Down Approach

The top-down (or specialization) approach mirrors the classical waterfall software development process and is mostly an extension of the threat-modeling process described in Chapter 2, "Design Review." For this approach, you begin from your general knowledge of the application contained in your threat model. You then continue refining this model by conducting implementation assessments along the security-relevant pathways and components identified in your model. This approach identifies design vulnerabilities first, followed by logical implementation vulnerabilities and then low-level implementation vulnerabilities. This technique is good if the design documentation is completely accurate; however, any discrepancies between the design and the implementation could cause you to ignore security-relevant code paths. In practice, these discrepancies are probable, so you need to establish some additional checks for assessing the pathways your model identifies as not relevant to security.

Bottom-Up Approach

The bottom-up (or generalization) approach mirrors the other classic software-factoring approach. The review proceeds from the implementation and attempts to establish the lowest-level vulnerabilities first. A valuable aspect of this approach is that it allows you to build an understanding of the application by assessing the codebase first. You can then develop the higher-level threat models and design documentation later in the review process, when your understanding of the application is greatest. The disadvantage is that this approach is slow. Because you're working entirely from the implementation first, you could end up reviewing a lot of code that isn't security relevant. However, you won't know that until you develop a higher-level understanding of the application.

As part of a bottom-up review, maintaining a design model of the system throughout the assessment is valuable. If you update it after each pass through the iterative process, you can quickly piece together the higher-level organization. This design model doesn't have to be formal. On the contrary, it's best to use a format that's easy to update and can capture potentially incomplete or incorrect information. You can opt for DFD sketches and class diagrams, or you can use a simple text file for recording your design notes, as in the example that follows. Choose the approach you consider appropriate for the final results you need.
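As one illustration, a rough text-file entry for a hypothetical module might look like the following sketch. The module name, functions, and observations are invented for this example; the point is that entries can be tentative, half-wrong, and cheap to revise:

session.c -- appears to own login tokens (verify)
  - tokens created in handle_login(); generation looks time-based, possibly predictable
  - consumed by the HTTP dispatcher and the admin console; trust boundary between the two unclear
  - open question: is the token validated before or after the request is logged?

Wrong guesses in notes like these cost little, because each pass through the iterative process gives you a chance to correct them.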
Hybrid Approach

The hybrid approach is simply a combination of the top-down and bottom-up methods, but you alternate your approach as needed for different portions of an application. When you lack an accurate design for the basis of your review (which happens more often than not), the hybrid approach is the best option. Instead of proceeding from a threat model, you use the information you gathered to try to establish some critical application details. You do this by performing an abbreviated modeling process that focuses on identifying the following high-level characteristics (from the design review process):
These points might seem nebulous when you first encounter a large application, but that's why you can define them broadly at first. As you proceed, you can refine your understanding of these details. It can also help to get a few complete design reviews under your belt first. After all, it's good to know how a process is supposed to work before you try to customize and abbreviate it.

Plan

In the planning phase, you decide which auditing strategy to use next. These auditing strategies are described in detail and evaluated based on several criteria in "Code-Auditing Strategies," later in this chapter. However, you need to understand some general concepts first, described in the following sections.

Consider Your Goals

Typically, you have several goals in an application assessment. You want to discover certain classes of implementation bugs that are easy to find via substring searches or the use of tools, especially bugs that are pervasive throughout the application. Cross-site scripting and SQL injection are two common examples. You might analyze one or two instances in detail, but the real goal here is to be as thorough as possible and come up with a list that developers can use to fix them all in one mass effort. You also want to discover implementation bugs that require a reasonable degree of understanding of the code, such as integer overflows, where you have to know what's going on at the assembly level but don't necessarily have to know what the code is trying to do at a higher level of abstraction. As your understanding develops, you want to discover medium-level logic and algorithmic bugs, which require more knowledge of how the application works. You also want to discover higher-level cross-module issues such as synchronization and improper use of interfaces. If you're using a top-down approach, you might be able to ascertain such vulnerabilities working solely from design documentation and developer input. If you're using a bottom-up or hybrid approach, you will spend time analyzing the codebase to create a working model of the application design, be it formal or informal.
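To make the first of these goals concrete, the substring sweep mentioned above can start out as little more than a recursive pattern search over the source tree. The following Python sketch is a minimal illustration of the idea; the patterns and file extensions are assumptions chosen for this example, and a real sweep would be tuned to the codebase and the bug classes you're hunting:

import os
import re

# Hypothetical starter patterns; extend these for the codebase under review.
PATTERNS = {
    "unchecked string function": re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\("),
    "possible dynamic SQL": re.compile(r"(SELECT|INSERT|UPDATE|DELETE)\b.*%s", re.IGNORECASE),
}

def scan_tree(root, extensions=(".c", ".h", ".cpp")):
    """Walk a source tree and print every line matching a risky pattern."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as source:
                for lineno, line in enumerate(source, 1):
                    for label, pattern in PATTERNS.items():
                        if pattern.search(line):
                            print(f"{path}:{lineno}: {label}: {line.strip()}")

if __name__ == "__main__":
    scan_tree(".")

Every hit from a sweep like this is a lead rather than a confirmed vulnerability, but the output is exactly the kind of comprehensive list a development team can work through in one mass effort.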
Pick the Right Strategy

The "Code-Auditing Strategies" section later in this chapter describes a number of options for proceeding with your review. Most of these strategies work toward one or more goals at the same time. It's important to pick strategies that emphasize the perspective and abstraction level of the part of the review you're focusing on. Your planning must account for the stages at which a strategy is best applied. If you can perform a true top-down review, your progression is quite straightforward, and your strategies proceed from the application's general design and architecture into specific implementation issues. However, in practice, you can almost never proceed that cleanly, so this section focuses on considerations for a hybrid approach. The beginning of a hybrid review usually focuses on the simpler strategies while you try to build a more detailed understanding of the codebase. As you progress, you move to more difficult strategies that require more knowledge of the implementation but also give you a more detailed understanding of the application logic and design. Finally, you should build on this knowledge and move to strategies that focus on vulnerabilities in the application's high-level design and architecture.

Create a Master Ideas List

As the review progresses, you need to keep track of a variety of information about the code. Sometimes you can lose track of these details because of their sheer volume. For this reason, maintaining a separate list of ways you could exploit the system is suggested. This list isn't detailed; it just includes ideas that pop into your head while you're working, which often represent an intuitive understanding of the code. So it's a good idea to capture them when they hit you and test them when time is available.

Pick a Target or Goal

Reviewing a large codebase is overwhelming if you don't have some way of breaking it up into manageable chunks. This is especially true at the beginning of an assessment, when you have so many possible approaches and don't yet know what's best. So it helps to define targets for each step and proceed from there. In fact, some code-auditing strategies described in this chapter require choosing a goal of some sort. So pick one that's useful for identifying application vulnerabilities and can be reasonably attained in a period of two to eight hours. That helps keep you on track and prevents you from getting discouraged. Examples of goals at the beginning of an assessment include identifying all the entry points in the code and making lists of known potentially vulnerable functions in use (such as unchecked string manipulation functions). Later goals might include tracing a complex and potentially vulnerable pathway or validating the design of a higher-level component against the implementation.

Coordinate

When reviewing a large application professionally, you usually work with other auditors, so you must coordinate your efforts to prevent overlap and make the most efficient use of everyone's time. In these situations, it helps if the module coupling is loose enough that you can pick individual pieces to review. That way, you can make notes on the potential vulnerabilities associated with a set of module interfaces, and one of your co-auditors can continue the process by reviewing those interfaces in his or her own analysis. Unfortunately, divisions aren't always clean, and you might find yourself reviewing several hundred KLOC of spaghetti code; splitting up the work in these situations might not be possible. If you can, however, you should work closely with other auditors and communicate often to prevent too much overlap. Fortunately, a little overlap can be helpful, because some degree of double coverage is beneficial for identifying vulnerabilities in complex code. Remember to maintain good notes and keep each other informed of your status; otherwise, you can miss code or take twice as long on the application.

You also need to know when coordinated work just isn't possible, particularly for smaller and less modular applications. With these applications, the effort of coordination can be more work than the review itself. There's no way to tell you how to make this call, because it depends on the application and the team performing the review. You have to get comfortable with the people you work with and learn what works best for them and you.

Work

The actual work step involves applying the code-auditing strategies described in this chapter. This explanation sounds simple, but a lot of effort goes into the work step. The following sections cover a handful of considerations you need to remember during this step.

Working Papers

Regulated industries have established practices for dealing with working papers, which are simply notes and documentation gathered during an audit.
The information security industry isn't as formalized, but you should still get in the habit of taking detailed assessment notes. This practice might seem like a nuisance at first, but you'll soon find it invaluable. The following are a few reasons for maintaining good working papers:
Knowing the value of notes is one thing, but every auditor has his or her own style of note taking. Some reviewers are happy with a simple text file; others use spreadsheets that list modules and files. You can even create detailed spreadsheets listing every class, function, and global object. Some reviewers develop special-purpose integrated development environment (IDE) plug-ins with a database back end to help in generating automated reports.

In the end, how you take notes isn't as important as what you're recording, so here are some guidelines to consider. First, your notes should be clear enough that a peer could approximate your work if you aren't available. Analogous to comments in code, clear and verbose notes aren't just for knowledge transfer; they also prove useful when you have to revisit an application you haven't seen in a while. Second, your notes must be thorough enough to establish code coverage and support any findings. This guideline is especially important for a consultant dealing with clients; however, it is valuable for internal reviews as well.
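As a simple illustration, a working-papers entry that meets both guidelines might look something like the following. The file names, functions, and candidate finding are invented for this example; what matters is that coverage and conclusions are both recorded:

Day 3 -- src/http/parse.c reviewed in full; src/http/util.c skimmed (quoting helpers only)
  - parse_header(): header length is read into a signed int before the bounds check;
    possible integer underflow -- candidate finding, needs a test case
  - util.c base64 routines not yet reviewed; revisit
  - HTTP module coverage roughly half complete

An entry at this level of detail lets a peer approximate your work and, months later, lets you establish exactly what was and wasn't covered.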
Don't Fall Down Rabbit Holes

Sometimes you get so caught up in trying to figure out a fascinating technical issue that you lose track of what your goal is. You want to chase down that complex and subtle vulnerability, but you risk neglecting the rest of the application. If you're lucky, your trip down the rabbit hole at least taught you a lot about the application, but that won't matter if you simply run out of time and can't finish the review. This mistake is fairly common for less experienced reviewers. They are so intent on finding the biggest show-stopping issue they can that they ignore much of the code and end up with little to show for a considerable time investment. So make sure you balance your time and set milestones that keep you on track. This might mean you have to ignore some interesting possibilities to give your client (or employer) good coverage within your deadline. Make note of these possible issues, and try to return to them if you have time later. If you can't, be sure to note their existence in your report.

Take Breaks as Needed

Your brain can perform only so much analysis, and it probably does a good chunk of the heavy lifting when you aren't even paying attention. Sometimes you need to walk away from the problem and come back when your subconscious is done chewing on it. Taking a break doesn't necessarily mean you have to stop working. You might just need to change things up and spend a little time on some simpler tasks you would have to do anyway, such as applying a less taxing strategy or adding more detail to your notes. This "break" might even be the perfect time to handle some minor administrative tasks, such as submitting the travel expense reports you put off for the past six months. However, sometimes a break really means a break. Get up from your chair and poke your head into the real world for a bit.

Reflect

In the plan and work steps, you've learned about the value of taking notes and recording everything you encounter in a review. In the reflect step, you should take a step back and see what you've accomplished. It gives you an opportunity to assess and analyze the information you have without getting tripped up by the details, and it enables you to make clearer plans as your review continues.

Status Check

Look at where you are in this part of your review and what kind of progress you're making. To help you determine your progress, ask yourself the following questions:

Of course, this list of questions isn't exhaustive, but it's a good starting point. You can add more questions based on the specific details of your review. Include notes about possible questions on your master ideas list, and incorporate them into your status check as appropriate.

Reevaluate

Sometimes plans fail. You might have started with the best of intentions and all the background you thought you needed, but things just aren't working. For example, you started with a strict top-down review and found major discrepancies between the design and the actual implementation, or your bottom-up or hybrid review is way off the mark. In these cases, you need to reevaluate your approach and see where the difficulties stem from. You might not have chosen the right goals, or you could be trying to divide the work in a way that's just not possible. The truth is that your understanding of an application should change a lot over the course of a review, so don't be bothered if a change in your plan is required.

Finally, don't mistake a lack of identified vulnerabilities for a weakness in your plan. You could be reviewing a particularly well-developed application, or the vulnerabilities might be complex enough that finding them requires a detailed understanding of the application. So don't be too quick to change your approach, either.

Peer Reviews

Getting input from another code auditor, if possible, is quite valuable. When you look at the same code several times, you tend to get a fixed picture in your head of what it does and how it works. A fresh perspective can help you find things you might not have seen otherwise, because you hadn't thought of them or simply missed them for some reason. (As mentioned, glancing over a few lines of code without fully considering their consequences can be easy, especially during all-night code audits!) If you have another code reviewer who's willing to look over some of your work, by all means, compare notes. An interesting exercise is for both of you to look at the same code without discussion, and then compare what you came up with. This exercise can highlight any inconsistencies in how either of you thinks the code works. Usually, peer reviewing isn't feasible for double-checking your entire audit because, basically, it means doing the audit twice. Therefore, peer reviews should focus on parts of the code that are particularly complex or where you might not be certain of your work.