Until a few years ago, software engineering suffered from the same flawed notion of quality that manufacturing companies held much earlier: that quality was something to be addressed at the end of the assembly/development process, just before the product was delivered. It was common to see quality-conscious project managers plan for system testing after development (other project managers did not even plan properly for system testing!) but fail to give any importance to quality control tasks during development. The result? System testing frequently revealed many more defects than anticipated. These defects, in turn, required much more effort to repair than planned, finally resulting in buggy software that was delivered late.
As the situation improved, project managers started planning for reviews and unit testing. But they did not know how to judge the effectiveness and implications of these measures. In other words, projects still lacked clear quality goals, convincing plans to achieve those goals, and mechanisms to monitor the effectiveness of quality control tasks such as unit testing.
With proper use of measurements and past data, it is possible to treat quality in the same way you treat the other two key project parameters: effort and schedule. That is, you can set quantitative quality goals, along with subgoals that help track the project's progress toward achieving the overall quality goal.
This chapter discusses how project managers at Infosys set the quality goals for their projects and how they develop a plan to achieve these goals using intermediate quality goals to monitor their progress. Before we describe Infosys's approach, we briefly discuss some general concepts of quality management.