Practice 5. Test Your Own Code


Scott Will and Ted Rivera

Defensive coding techniques and good development test practices will help you produce higher-quality code in shorter periods of time and with fewer defects.

Problem

A primary motivation for writing this book is set forth in the Preface:

We want to reduce the initial anxiety and cost associated with taking on a software improvement effort by providing an easy and unintrusive path toward improved results, without overwhelming you and your team. At the same time we want to show early results that keep the momentum going, thus maintaining the interest and commitment of everyone involved.

Any software improvement or development effort also needs to focus on the outlook and habits of the individual developer. The problem is that many improvement efforts tend to overlook this key item, partly because too many developers think that their job is merely to write the code and that it's then the tester's job to find defects. However, this mindset is the antithesis of agility, and it results in the need for additional effort later in the development cycle.

By adopting simple defect prevention and early defect detection techniques, developers can produce immediate improvements in code quality and productivity, which in turn create momentum for wider adoption of these techniques as well as other improvement initiatives.

This practice focuses on simple techniques that all developers can use to prevent defects from occurring and to help find defects before the code is given to a test team.

Background

It's a good thing that defensive driving techniques are hammered home in driver's education classes. You may have witnessed drivers running a red light while talking on a cell phone, swerving all over the road while drinking a cup of coffee, or being distracted by kids misbehaving in the back of the car. As part of defensive driving, we quickly learn to anticipate dangerous situations, as well as do our best to avoid them. As part of safe driving preventive measures, we make sure that we wear seatbelts and drive cars equipped with airbags and other safety devices.

Defensive driving is a useful analogy for "defensive development." While most of us have been taught defensive driving techniques (and for good reason!), not all developers have been taught defensive development techniques. Sure, cars are equipped with seatbelts and airbags, but these are fallback measures in case the avoidance techniques don't work. Similarly, in software development we use testing teams, but they shouldn't be our first resort for preventing defects from reaching the end user. Rather, the testing team should be seen as the fallback measure if defects make it out of the development teams.

The testing team is a fallback measure if defects escape development teams.


The remainder of this practice addresses both defensive coding techniques and good development test practices that will help you produce higher-quality code in less time and with fewer defects.

Applying the Practice

First, a "no-brainer" question: Which takes less time?

  1. Having a test team find a defect and write a defect report, then having a developer fix the defect, and then having the test team confirm that the defect is fixed through subsequent testing.

  2. Having a developer find defects on his or her own and fix them before the test team finds them.

  3. Preventing the defect from occurring in the first place.

While the theoretical answer to the question is obviously number 3, in actual practice we too often choose answer 1 because of intense schedule pressures placed on us during the initial phases of product development. Such pressures must be resisted: defect prevention should be a top priority for developers and must be consciously and actively pursued along with a strong emphasis on early defect detection.

Following is a discussion of several helpful defect prevention techniques to consider, broken down into three stages.

  1. Prior to writing code: design considerations, installation and usage considerations, operating system and database considerations, testability considerations.

  2. While writing code: defensive coding techniques, customer awareness, pair programming, code inspections.

  3. When testing code: defensive testing techniques, scaffolding code, test-driven development, source-level debuggers.

Prior to Writing Code

Before you even start writing the code, you should consider the following items in detail: design, installation and usage, operating systems and databases, and testability.

Think First: Design Considerations

Steve McConnell uses the example of moving enormous blocks for building the pyramids as an analogy for building complex software: there are many ways to move such blocks, and although it may take time to plan such movement up front, you will make far greater progress if you use adequate forethought.[1] Don't be content just to push hard! Ensure that the design is appropriate for the complexity of the method, function, or application to be developed. In some instances this will be no more difficult than forming a clear design in your head or on the whiteboard, by identifying a set of test cases to be proved, or, when appropriate, through more formal means.

Ensure that the design is appropriate for the complexity of the application.


When looking at the use cases for the product, do any areas appear ambiguous? Is the requirements wording vague, or even contradictory? Is anything unclear or confusing? Make sure to resolve these issues, especially if it appears that one developer could interpret an issue one way and a peer could interpret the same item in a different way, thus leading to confusion, incompatibilities, and ultimately delays and dissatisfied users. A great way to obtain an independent assessment of the requirements is to review them with an end user or an end user's representative. Without such input, you might not fully understand the requirements.[2]

[2] Note that in some agile projects only high-level requirements are written down.

One additional point is worth emphasizing: the beginning of a project always has the highest degree of uncertainty. A common mistake is to spend a disproportionate amount of time up front getting the requirements "just right" instead of, say, spending two days describing them, two days designing, four days implementing, and then a few more days updating the requirements, design, and implementation to make sure all elements are in sync. Writing code is a great way to validate requirements and designs, as long as you are consciously doing so with a plan in mind, but it should not be the only means by which the designs are created. Conversely, a tight schedule is no justification for just "getting started"; on the contrary, appropriate planning and design increase overall project speed.

Writing code is a great way to validate requirements and designs.


Installation and Usage Considerations

First impressions matter. When users have to install, upgrade, or fine-tune software that is important to their organization, they are often "under the gun." If these activities are awkward or difficult, their perception of the software itself is often permanently skewed. Many defects can be avoided, and many users pleased, by considering installation, migration, and usage up front; however, these are too often tasks that are left to the end of a software project.

First impressions matter.


Has functionality been changed from an earlier release? If so, how will upgrading from an earlier release to this newer level be affected? Have parameters or other options been changed that may cause these upgrades to fail? Has an application programming interface (API) changed? Do other products integrate with this code? Will the latest changes impact that integration?

Your software is often part of a mixed environment, and the ease with which it is deployed contributes significantly to the value users will derive from it and (for those customers who purchase your product) to the positive return on their investment. Understanding what changes have been made, and writing or updating code accordingly, can prevent these high-impact defects from occurring.

As a side note, test teams should be thinking about installation issues early on as part of their test strategy. Scott Ambler has been hammering home for years the necessity of having testers perform comprehensive installation testing. See his discussion of this issue in The Object Primer: Agile Model Driven Development with UML 2.0.[3]

[3] Ambler 2004.

Operating System and Database Considerations

Teams sometimes assume that the code they are developing on the operating system or database in their office or cube will easily port to other similar operating systems or databases. The testing of other such important platforms is often either left to others or never adequately completed, thus frequently creating inconsistent levels of reliability across such platforms. Are there new releases of operating systems or databases that your code is expected to run on? Do you know what changes have been incorporated into these newer versions and whether they will affect your code?

Have developers routinely use different combinations of software as their primary development platform.


One simple approach to consider wherever possible is having individual developers or independent testers routinely use different combinations of software as their primary development platform or test environment. This approach minimizes the cost and risk of platform testing but, surprisingly, is rarely considered. Regardless, if you are selling software that purports to run on key operating systems or databases, you must give adequate consideration to the necessity of ensuring success in these environments; you cannot treat this as an afterthought if the software is to achieve true success.

Testability Considerations

The increasing complexity of software, the rise of componentization and service-oriented architectures, and the need for software to be integrated into sophisticated environments make figuring out how to debug errors up front a necessity, not a luxury. The harsh alternative is a lifetime spent maintaining code with weak testability. Do you plan to incorporate testability into your code? If so, do you understand what this entails?

Debugging errors up front is a necessity, not a luxury.


What types of measures should you be considering in terms of testability? Here are a few to point you in the right direction:

  • Run-time access to variables. Allow access to "status" variables (for example, current state) by external tools, specifically to those variables needed by testers to verify that the code is working correctly or to help debug a problem.

  • Tracing and logging. Is there significant emphasis on tracing and logging? The easier you can make it for others to help with defect analysis, the easier it will be for you to fix any defects they find.

  • Automation hooks. Depending on the automation test tools you have chosen to use, specific "hooks" may be required for the tools to work properly.

No doubt you can come up with other items to add to this list, especially those that may be specific to your product or your organization.[4]

[4] See Feathers 2005 for a good discussion of testability techniques.
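As a sketch of what such testability hooks can look like, the following example (with an entirely hypothetical OrderProcessor component; the names are not from any real product) exposes a run-time status variable and emits trace records through the standard logging module so that external tools and testers can observe what the code is doing:

```python
import logging

logger = logging.getLogger("order_processor")  # hypothetical component name

class OrderProcessor:
    """Illustrative component with testability built in."""

    def __init__(self):
        self.state = "idle"  # status variable, readable by external test tools

    def process(self, order_id):
        self.state = "processing"
        logger.debug("processing order %s", order_id)  # tracing/logging hook
        result = "processed:%s" % order_id             # stand-in for real work
        logger.debug("finished order %s", order_id)
        self.state = "idle"
        return result
```

A tester (or an automated tool) can poll `state` between operations to verify the component is where it claims to be, and can raise the logging level during defect analysis without touching the code.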

While Writing Code

There are several techniques to consider using while actually writing code: defensive coding techniques, customer awareness, pair programming, and code inspections.

Defensive Coding Techniques

Let's face it: defects will make their way into your software. Substantial problems arise from errors in design, architecture, requirements, documentation, complex or erroneous environment setup, and a myriad of other sources. This section focuses primarily on coding defects. A little forethought while writing code can go a long way toward minimizing the number of defects injected, and that is what this particular practice is all about. While some defects are inevitable, many can be avoided by the application of defensive coding techniques that require little investment.

Forethought while writing code minimizes the number of defects injected.


  • Make the compiler your ally. One of the best ways to prevent bugs from living very long is to use the compiler. Using the lowest level of warning output for compilations is frightening. Instead, crank up the compiler warnings full blast: consider it a challenge to write code that, when compiled, produces absolutely no warnings even when the compiler is configured to complain about e-v-e-r-y l-i-t-t-l-e t-h-i-n-g.

  • Defensive coding techniques. Define a set of appropriate coding techniques that you and your team adhere to. Here are several sample coding techniques for your consideration:

    1. Initialize all variables before use. Also, consider listing all the local variables you'll be using at the very outset of your routines and then specifically initializing them at that point as well. You will then leave no doubt as to what you intended when you wrote the code, and most optimizing compilers won't generate any extra runtime code for doing this.

    2. Ensure return-code consistency. One area that creates difficulties when debugging problems is when a return code signifying an error is masked (overwritten) by the calling routine. Ensure that all return codes returned to your code are handled appropriately.

    3. Use a "single point of exit" for each routine. Simply put, do not use multiple "returns" within a routine, ever. This is quite possibly one of the most overlooked and yet most beneficial practices you can include in your day-to-day coding activities. If a routine returns from only one place, you have a very easy way of making sure that all necessary "cleanup" is accomplished prior to returning; debugging problems also then becomes much easier.

    4. Use meaningful "assertions" in your code.[5] An assertion is nothing more than a programmatic sanity check: is what I expect to be true in fact true?

      [5] See the following two references in IBM's developerWorks that discuss assertions in Java: Zurkowski 2002 and Allen 2002.

    5. Write readable code. Can you recall what you were thinking when coding that method six months ago? What do you think someone who has to modify your code a year from now is going to think? Use names that clearly explain what the named element is for. Factor the code into a series of self-explanatory steps. Follow consistent coding conventions. Use comments sparingly to explain only what is not obvious from the code. If you can change the code so that a comment is no longer needed, do so.[6]

      [6] For the reader interested in exploring further the idea of writing readable code, see the following works on literate programming: Knuth 1992 and Knuth 1993.
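A minimal sketch of techniques 1 through 4 applied in a single routine; the function, its return codes, and its rules are invented purely for illustration:

```python
# Hypothetical return codes, defined once so callers and callees agree.
OK, ERR_EMPTY, ERR_NEGATIVE = 0, 1, 2

def average_positive(values):
    """Return (return_code, average) for a list of non-negative numbers."""
    rc = OK            # 1. list and initialize all locals at the outset
    result = 0.0
    total = 0.0

    assert isinstance(values, list), "caller must pass a list"  # 4. assertion

    if not values:
        rc = ERR_EMPTY                     # 2. consistent, unmasked return codes
    elif any(v < 0 for v in values):
        rc = ERR_NEGATIVE
    else:
        for v in values:
            total += v
        result = total / len(values)

    return rc, result                      # 3. single point of exit
```

Because every path funnels through the one `return`, adding cleanup or tracing later means touching exactly one place.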

Customer Awareness

Remember, your customers think about your application differently than you do. They will install it in an environment that your team has likely never considered, or at least has not been able to test against. They will use it in ways you've never envisioned. They will configure it in ways you would never dream of. So your job is to try to make sure they don't get burned. Here are a few items to consider:

  • Check and verify the integrity of all incoming parameters. Consider what happens if you're expecting an array, and you get passed a NULL somehow, and you don't check for that possibility prior to indexing the array. Diligence here is especially important if your application permits APIs allowing the customer to write code that provides input into your product. It is so often true that customers do things we do not expect them to do; consequently, APIs represent an often undertested aspect of our software.

    Customers will use your application in ways you've never envisioned.


  • Consider all possible error cases. Add code to handle each one you can think of (you want your code to handle error conditions gracefully instead of just choking and then dying). Although the likelihood of errors is small, if one does occur, you can at least attempt to handle it, as opposed to just accepting the inevitable system crash that your customers will see. And for those error conditions you can't think of, include a generic "catchall" error handler (while still ensuring return-code consistency as mentioned above). How many times have you seen an error message that says, "This should never happen. . ."? If your experience is anything like ours, you've seen it many, many times.

    In addition to trying to test for each error condition you can think of, an even better approach would be to write an automated test case for each of these conditions, allowing others to reap the benefits in addition to increasing the pool of automated regression tests that can be run.[7]

    [7] This is a logical area to apply test automation. See also Practice 6: Leverage Test Automation Appropriately, later in this chapter, for additional information.

  • Globalization considerations. If your product is to be translated for use in other countries, make sure your code is "enabled." Even if there is just a small possibility that your product will be translated, consider that attempting to retrofit code to make it enabled is a real defect generator. Here are just a couple of enablement considerations to whet your appetite for further research:

    - Do you have any hard-coded strings?

    - Do you handle different date/time formats correctly?

    - What about the different currency representations?[8]

    [8] For more details, see Esselink 2000.

    - What about the icons you will use? Often, an icon you are comfortable with will mean something entirely different in another country.

  • Active stakeholder participation. Active stakeholder participation is an expansion of the XP practice of on-site customer participation. This is perhaps the most robust way of obtaining customer awareness, since customers are available to the development team.

  • Delivery of working code on a regular basis. Whether your team is able to have customer representatives on site, or you have an established practice of beta releases, one of the best ways to gain customer insight and feedback is to provide prerelease versions of working software to your intended customer set on a regular basis. Customers are not shy about letting you know what works, what doesn't, and what they expect to be done about the feedback they provide.
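The parameter-checking and "catchall" advice above can be sketched as follows; the function and its fallback behavior are invented for illustration (real code would log the unexpected condition rather than silently returning a default):

```python
def third_item(items):
    """Return the third element of a sequence, degrading gracefully."""
    if items is None:                 # verify incoming parameters: a NULL
        return None                   # where an array was expected
    try:
        return items[2]
    except (IndexError, TypeError):   # error cases we anticipated
        return None
    except Exception:                 # generic catchall, so an unforeseen
        return None                   # case degrades instead of crashing
```

Callers always get either a value or None; the code handles bad input gracefully instead of "choking and then dying" in front of a customer.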

Pair Programming

Two heads are better than one, and one implementation of this age-old axiom is pair programming. Pair programming is nothing more than having two programmers sit at the same computer and jointly write code: while one programmer is typing, the other is looking over his shoulder and providing both input and immediate feedback. Although pair programming has been around for years, it has only recently been "mainstreamed" by XP practitioners.

You might think that having two programmers working on the same code would cut productivity in half. However, as many advocates of pair programming have discovered, the "lost" productivity is countered by increases in code quality. Think of pair programming as simultaneous code writing and code reviewing.

Pair programming "lost" productivity is countered by increases in code quality.


Code Inspections

Wait a minute: this section of the book was supposed to address easy-to-implement techniques, wasn't it? And isn't the practice of code inspections an enormously time-consuming activity?

Code inspections range from a buddy looking at your code to formal inspections.


Here's the truth: there's nothing glamorous about doing code inspections, and they do take time, especially the highly structured, formal code inspections originally articulated by Michael Fagan of IBM nearly thirty years ago.[9] However, they are also one of the best ways to discover defects. Aside from encouraging you, in the strongest terms possible, to perform code inspections, we'll leave the details concerning various aspects of inspections to those who have written extensively on them. For example, see Karl Wiegers's Peer Reviews in Software[10] for details on the types of existing code inspections (ranging from simply asking a buddy to look at your code to semiformal walk-throughs and desk checks, up to and including formal inspections); ways to implement inspections in your organization; and, especially, the tremendous benefits they provide.

[9] Fagan 1976.

[10] Wiegers 2002.

To date, we are not aware of any head-to-head comparisons between teams using a highly structured, formal code inspection process versus teams using a less structured approach like pair programming. Studies have shown that the returns realized from pair programming are worth the investment,[11] and there is an extensive history showing the benefits of formal inspections,[12] so a word of warning is appropriate here: in the same way that an organization can become buried in documentation while detailing requirements or designs before ever writing code, so also reviews and inspections can degenerate into an end in themselves. So don't fall in love with the process, but with the goal: driving out defects. Let common sense prevail and choose the technique that best suits you, your teammates, your project, and your organizational culture.

[11] See Cockburn 2000 and Williams 1998. Williams's work refers to an even earlier work by Larry Constantine: "In 1995, Larry Constantine reported observing pairs of programmers producing code faster and freer of bugs than ever before" (p. 20). Additionally, see Padburg 2003.

[12] See Wiegers 2002.

When Testing Code

When the code is written and you are testing it, consider the following techniques: defensive testing techniques, scaffolding code, test-driven development, and source-level debuggers.

Defensive Testing Techniques

Defensive testing is done by a developer before the code is handed off to a test team.


In this book defensive testing is considered to be any testing and analysis that a developer performs on his or her code, after the code compiles correctly and before it is handed off to a test team. It is important for developers to pursue such testing actively and to have a tester's mindset when doing so.[13] Here is what "developer testing" should include:

[13] For an excellent perspective of a tester's mindset, see Whittaker 2003.

  • Static code analysis tools. The first and easiest way to analyze your code is to have someone else do it for you, or in this case, to have something else do it for you. There are numerous static code analysis tools available, from comprehensive tools that some development organizations actually incorporate into their "build" environments to others that can run on a developer's own machine.[14]

    [14] For an example of a very capable, open source static code analysis tool for Java, see the "FindBugs" tool available at http://findbugs.sourceforge.net.

    Static code analysis tools can help you find errors such as boundary exceptions, null pointer dereferences, memory leaks, and resource leaks (for example, file handles), and they are actually fairly easy to use. What they can't find are things like missing functionality, which is why code inspections, as well as specific testing against the specified requirements, are needed. However, your code should be run through static analysis tools prior to performing an inspection; let the tools find what they can and highlight potential areas of concern, allowing subsequent code inspections to focus on the "bigger" items.

  • Runtime and structural analysis. Static analysis tools are quite good at finding certain types of defects, but they certainly don't find all of them. The only way to find absolutely all memory leaks is to run the code and monitor the memory usage. Many tools exist to help automate what would otherwise be a very intensive manual task. Depending on how your organization is structured, how large your project is, the complexity of the code, and the number of developers working on the project, it may be appropriate to perform runtime analysis. As the overall complexity of a software development project increases, so does the return on investment from runtime and structural analysis.[15]

    [15] For example, IBM's Rational Purify Plus has extensive memory leak detection capabilities. See http://www-128.ibm.com/developerworks/rational/library/811.html and http://www-128.ibm.com/developerworks/rational/library/957.html.

  • Performance testing. The mere fact that a section of code compiles and seems free of significant errors doesn't mean that your work as a developer is done. Performance bottlenecks need to be identified, as these problems can be magnified when the code is used with the rest of the application, or used in conjunction with other software, or load-tested with a large group of simultaneous users. While comprehensive performance testing needs to be done at a system level, individual developers can execute limited performance testing along the way. By doing so, they can often discover design defects that lead to performance bottlenecks much earlier.

  • Ways to find bugs. Remember, these are either bugs you created or bugs arising from code you omitted. Here are some helpful ways to "think maliciously" (and a malicious attitude is what you need to cultivate when looking for defects):

    - Attempt to force all error conditions you can think of, and attempt to see all error messages that can occur.

    - Exercise code paths that interact with other components or programs. If those other programs or components don't yet exist, write some scaffolding code yourself so that you can exercise the APIs, or populate shared memory or shared queues, and so on.

    - For every input field in a GUI, try various unacceptable inputs: too many characters, too few characters, numbers that are too large or too small, and many other such items. The goal is to try to single out the errors in this way and then, once the simple test cases pass, try combinations of unacceptable inputs.

    Try the following also: negative numbers (especially if you are expecting only positive numbers); cutting and pasting data and text into an input field (especially if you have written code to limit what the user can type into the field); combinations of text and numbers; uppercase-only text and lowercase-only text; repeating the same steps over and over and over and over and over and . . .; for arrays and buffers, adding n data items to your array (or buffer) and then attempting to remove n + 1 items. There are obviously many more; these are offered only to whet your appetite for thinking maliciously.

This list is the result of a combination of years of experience in development and testing, extensive reading on the subject of testing (especially works by James Whittaker[16] and Boris Beizer[17]), and countless discussions with other developers and testers. It is by no means a comprehensive list; modify it as necessary according to your own imagination, creativity, and understanding of your own strengths and weaknesses.

[16] Whittaker 2003.

[17] Beizer 1995.
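As an illustration of this malicious mindset applied to a single input field, the following sketch throws a battery of hostile inputs at a hypothetical validator; the acceptance rules (an integer quantity from 1 to 999) are invented for the example:

```python
def valid_quantity(text):
    """Accept only strings representing integers 1..999 (illustrative rules)."""
    if not isinstance(text, str) or not text.isdigit():
        return False                  # rejects None, "", "-5", " 7", "12a"
    return 1 <= int(text) <= 999      # rejects "0" and "1000"

# Hostile inputs: wrong type, too short, negative, out of range, mixed text.
hostile = [None, "", "-5", "0", "1000", "abc", "12a", " 7", "007"]
verdicts = [valid_quantity(h) for h in hostile]
```

Once each single hostile input is handled, the next step (as the text suggests) is combinations of them, ideally captured as automated test cases so the whole battery runs on every build.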

Scaffolding Code

Scaffolding code is the "throwaway" code you write to mimic or simulate other parts of the code that have not yet been completed (sometimes also referred to as "stubbing" your code). If you need to create it for your own use, don't throw it away; make sure you pass it on to the test team. It may be that the scaffolding code you provide will allow them to get an early jump on testing your code, or at least give them a better idea of what to expect when the other components are ready. It can also provide a solid basis for their test automation efforts.

Scaffolding code simulates other parts of the code that have not yet been completed.


If your product has protective security features, test those features carefully. Providing scaffolding code that can create the situation you are trying to prevent becomes very important: you must be able to create the very situations against which the system is attempting to protect itself.

Another simple example of "scaffolding code" is providing code to manipulate queues. If your product makes use of queues, imagine how easy it would be to have a tool that would allow you to add and delete items on the fly from the queue, corrupt the data within the queue (to ensure proper error handling), and so on.
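The queue-manipulation example above might be sketched like this: a throwaway stub (all names hypothetical) that mimics a not-yet-written component's API and can corrupt its own data on demand, so error handling can be exercised before the real component exists:

```python
class StubQueueService:
    """Throwaway stand-in for a queueing component that isn't written yet."""

    def __init__(self, corrupt=False):
        self.corrupt = corrupt        # flip on to inject bad data on the fly
        self._queue = []

    def enqueue(self, item):
        # Deliberately corrupt the payload when asked, to exercise the
        # consumer's error handling.
        self._queue.append("###garbage###" if self.corrupt else item)

    def dequeue(self):
        return self._queue.pop(0) if self._queue else None
```

Handed to the test team, a stub like this lets them add and delete items on the fly and verify that consumers of the real queue will cope with corrupted entries.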

Test-Driven Development

When a developer has finished writing the code, it has already been tested.


Arising from within the agile/XP development community is an important technique known as test-driven development (TDD). While it still falls within the realm of defensive testing techniques, TDD provides a solution from almost the opposite direction: the developer first writes a test case and then writes code for that test case to run against. If the test case fails, the code is changed until the test case passes. And not until after the test case passes is new code written (other than the code necessary to make the next test case pass). The ideal of this methodology is that when a developer has finished writing his or her code, the code has already been tested, and a full suite of automated test cases exists that can be run by test teams, change teams, and even customers should the team so choose.

Kent Beck, the "Father of Extreme Programming," has written about TDD in Test Driven Development: By Example,[18] which provides an excellent introduction to TDD. A newer work by Dave Astels covers TDD more thoroughly and has received much acclaim.[19] Consider using TDD if you and your team have already implemented many of the techniques and practices discussed above and you are ready to take your improvement and development efforts to the next level.

[18] Beck 2003. Additionally, for a brief overview of TDD, see: http://www.agiledata.org/essays/tdd.html.

[19] Astels 2003.
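A minimal sketch of the TDD rhythm, using the standard unittest module; the function under test and its behavior are invented for the example. In practice the test class is written first, fails, and only then is `slug` written to make it pass:

```python
import unittest

def slug(title):
    """Written after, and only because of, the test below."""
    return title.strip().lower().replace(" ", "-")

class SlugTest(unittest.TestCase):
    # This test existed (and failed) before slug() did.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slug("  Test Your Own Code "), "test-your-own-code")
```

Run with `python -m unittest` to execute the suite; every passing cycle adds one more automated regression test to the pool.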

Source-Level Debuggers

The benefits of source-level debuggers far outweigh any learning curve.


The use of source-level debuggers is one key way in which thorough individual testing can be performed. For developers, being able to use debuggers is vital; the benefits of source-level debuggers far outweigh any learning curve, and we certainly encourage readers to make the effort to learn a debugger, and to learn it thoroughly. Here are just a few ways you can use source-level debuggers to test your code:

  • Set breakpoints. This allows you to stop execution of the code at a specified location and then "single step" through the code so that you can watch what each line of code does.

  • Manipulate data on the fly. You can set a breakpoint just as your code is entered and then reset the value of a parameter that is passed in, to see if your code handles the (now) invalid parameter in the way it should. Using the debugger in this way saves the time and effort of trying to get that actual error condition to occur.

  • Set "watches" on variables. Putting a "watch" on a variable sets a conditional breakpoint that will be hit only if the value of the specified variable changes.

  • View the call stack. This allows you to see which routines called your code, which is a tremendous aid in debugging defects.

  • "Trap" errors when they occur. If you don't know exactly where a defect occurs, many source-level debuggers will automatically drop you into the debugger, at exactly the right location, when a system-level application error occurs (for example, an attempt to dereference a null pointer). Simply run your application under the debugger, without trying to step through the code and without setting breakpoints.
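The facilities above can be sketched with Python's standard pdb, standing in for whatever source-level debugger your toolchain provides; the function is hypothetical and the debugger commands appear as comments:

```python
import pdb  # standard-library source-level debugger, used only as the example

def accumulate(items):
    total = 0
    for item in items:
        # breakpoint()   # 1. stop here, then "n"/"s" to single-step
        total += item    # 2. at the pdb prompt, "item = None" manipulates
                         #    data on the fly; "display total" watches it
    return total

# "w" (where) at the pdb prompt prints the call stack; running the whole
# script via "python -m pdb script.py" drops you into the debugger at the
# exact line where an uncaught error occurs, with no breakpoints set.
```

The same ideas map directly onto gdb, the Visual Studio debugger, or an IDE's integrated debugger; only the command names change.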

Conclusion

As strongly implied at the outset of this practice, preventing defects from occurring is a significant step toward improving code quality and developer efficiency. Further, finding any defects that do get introduced as early as possible also significantly improves product quality and efficiency. One of the best parts of this practice is that the techniques described work independently of any development methodology (e.g., iterative, waterfall, agile/XP) and generally cost virtually nothing to adopt.

However, sometimes the use of such techniques seems counterintuitive: given the typical schedule and staff pressures cited in the introductory discussion of this practice, it is often tempting to sacrifice solid, strategic goals for tactical necessities. Projects and practitioners are often tempted to hit unrealistic or unreasonable schedules at the expense of using sound development techniques, but the implications are far-reaching and often affect many subsequent releases, not just the current one.

Considering these techniques thoroughly and implementing them intelligently and selectively in your organization will ultimately yield fewer defects in your code, fewer regressions, higher quality, and lower rework costs. But none of these benefits will occur unless the project leadership actively promotes an environment in which the adoption of these techniques is welcomed, and even expected. The importance of establishing a culture that enables and encourages the adoption of techniques to prevent defects, or at least to detect as early as possible those that do make it into the code, cannot be overstated. Without such a culture, much of what is discussed in this practice will likely fall on deaf ears.

Other Methods

It is difficult to say this without appearing cynical or sarcastic, but the following comments are the result of objective observation: employing none of the techniques described in this practice, in other words "doing nothing" (or almost nothing), is the primary alternative method employed when writing code. The time and resource pressures that we have alluded to are not fictional; almost every developer knows what it is like to be constantly under the gun. Generally speaking, developers are a conscientious lot, with an appreciation for concerns such as quality and craftsmanship. But when hounded for a particular deliverable on an unreasonable schedule, it appears necessary to be satisfied with producing code that "works for me" rather than writing code using many of the techniques outlined in this practice. To return to our opening analogy, this is the equivalent of "offensive driving," that is, driving along in the hope that all the lights will be green, that the speed limits are mere guidelines, and that other drivers are stationary. In such circumstances, a collision is inevitable. Similarly, in the end doing nothing almost certainly means longer schedules and lower quality, resulting in ever-increasing schedule pressure; it's a vicious cycle.

XP is very much a code-focused approach, and its techniques (including pair programming and test-driven development) are good approaches to improving code quality. Test-first design in particular ensures higher-quality code, because there is no opportunity for untested code to accumulate. Pair programming is somewhat more controversial, but it adds considerable value by ensuring that all code is looked at by two people, and it enables programmers to share their knowledge. Pair programming and test-first design are techniques that you should consider applying, as are the other techniques listed in this practice. We believe that most development organizations will adopt a mix of techniques that works for them.
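As a minimal sketch of the test-first idea, consider the following hypothetical Python example. The function `parse_version` and its behavior are invented for this illustration and do not come from the book; the point is the order of work, in which the assertions are drafted before the code that satisfies them:

```python
# Hedged sketch of test-first design. parse_version is a hypothetical
# function invented for this illustration.

# Step 1: write the test first. It fails until the code below exists,
# so there is never untested code.
def test_parse_version():
    assert parse_version("3.11") == (3, 11)
    try:
        parse_version("not-a-version")
    except ValueError:
        pass  # expected: invalid input must raise
    else:
        raise AssertionError("expected ValueError")

# Step 2: write just enough code to make the test pass.
def parse_version(text):
    """Parse a 'major.minor' string into a tuple of ints."""
    major, sep, minor = text.partition(".")
    if not sep or not (major.isdigit() and minor.isdigit()):
        raise ValueError(f"bad version string: {text!r}")
    return int(major), int(minor)

test_parse_version()  # passes silently once the implementation exists
print("ok")
```

The discipline matters more than the tooling: whether the tests live in a framework such as JUnit or in plain assertions, writing them first forces you to define correct behavior, including error handling, before any line of production code exists.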

Levels of Adoption

This practice can be adopted at three different levels:

  • Basic. Coding guidelines and standards exist and are followed by developers.

    The creation and use of coding guidelines and standards ensure that the development team begins to think about "defensive coding" ideas while writing code. (Note that many of the techniques listed in the Defensive Coding Techniques section above provide a good foundation for creating coding guidelines and standards.) Informal peer reviews of written code take place. Designs are thoroughly assessed in order to prevent design defects from being introduced. Developers try to understand the environment that the code will run in so that any operating system and middleware dependencies can be addressed early, thus preventing defects arising from API changes, middleware updates, and so on. The goal is defect prevention.

  • Intermediate. Developers actively test their own code. Discovering and fixing defects as early as possible will work only if developers test code thoroughly before a separate test team runs the code in a test lab. Code reviews take place systematically and consistently. A thorough understanding of customer environments and product usage contributes to defect prevention. Static code analysis is being performed, and the introduction of pair programming is showing positive results. Following the defensive testing techniques described above (plus any others that developers may wish to include) will help to shift defect discovery to earlier in the project lifecycle and likely shorten it.

  • Advanced. More formal code inspections are performed and static, structural, and runtime code analysis tools are used extensively. Performance testing is a standard part of the development process. While performing code inspections and setting up an environment where tools can be run against the code certainly takes more effort, doing so will help teams discover more defects earlier than would otherwise be the case. Inspections and static, structural, and runtime analysis should not be viewed as replacements for coding standards or individual testing but as complementary to those efforts. Test-driven development is widespread.
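To make the Basic level concrete, a coding guideline might require fail-fast validation at every public function boundary, so that bad input is rejected where it enters rather than corrupting state downstream. The following Python sketch is purely illustrative; the function and its rules are invented for the example:

```python
# Hypothetical defensive-coding sketch: the kind of input validation a
# team coding standard might require at every public function boundary.
# The transfer function and its rules are invented for illustration.

def transfer(balance, amount):
    """Return the balance after withdrawing `amount`, failing fast on
    invalid input instead of propagating a bad state downstream."""
    if not isinstance(amount, (int, float)):
        raise TypeError(f"amount must be numeric, got {type(amount).__name__}")
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

print(transfer(100, 40))  # 60
```

Checks like these cost a few lines apiece, but they turn silent data corruption into an immediate, diagnosable failure at the point of entry, which is exactly the early defect detection this practice advocates.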

Related Practices

  • Practice 1: Manage Risk discusses the key idea in risk management, which is not to wait passively until a risk materializes and becomes a problem, but rather to seek out and deal with risks.

  • Practice 6: Leverage Test Automation Appropriately describes how automating appropriate amounts of the test effort can also realize improved product quality and shorter product development schedules.

  • Practice 7: Everyone Owns the Product! addresses how to orient the responsibilities and mindset of team members to ensure that everybody takes ownership of the final quality of the product, broadens the scope of team responsibilities, and learns to collaborate more effectively within the team.

Additional Information

Information in the Unified Process

OpenUP/Basic covers basic informal reviews (optionally replaced by pair programming) and developer testing techniques. OpenUP/Basic assumes that programming guidelines exist and requires developers to follow those guidelines. OpenUP/Basic recommends that an architectural skeleton of the system be implemented early in development to address technical risks and identify defects.

RUP adds guidance on static, structural, and runtime code analysis, as well as defensive coding and advanced testing techniques. RUP also provides guidance on how to create project-specific guidelines and apply formal inspections.

Additional Reading

For detailed books on software development, defensive coding and testing, and code inspections, we recommend the following:

David Astels. Test-Driven Development: A Practical Guide. Prentice Hall, 2003.

Kent Beck. Test-Driven Development: By Example. Addison-Wesley, 2003.

Alistair Cockburn and Laurie Williams. "The Costs and Benefits of Pair Programming." http://collaboration.csc.ncsu.edu/laurie/Papers/XPSardinia.PDF.

Steve McConnell. Code Complete: A Practical Handbook of Software Construction, Second Edition. Microsoft Press, 2004.

James Whittaker. How to Break Software: A Practical Guide to Testing. Addison-Wesley, 2003.

Karl Wiegers. Peer Reviews in Software: A Practical Guide. Addison-Wesley, 2002.

For books that provide you with an understanding of how writing code fits with other lifecycle activities, such as design, implementation and testing, see the following:

Boris Beizer. Black-Box Testing: Techniques for Functional Testing of Software and Systems. John Wiley & Sons, 1995.

Michael Feathers. Working Effectively with Legacy Code. Prentice Hall, 2005.



Agility and Discipline Made Easy: Practices from OpenUP and RUP
ISBN: 0321321308
Year: 2006
Pages: 98