Software Development: The Need for a New Paradigm


Computing has been the fastest-growing technology in human history. The performance of computing hardware has increased by more than a factor of 10^10 (10,000 million times) since the commercial exploitation of the electronic technology developed for the ENIAC 50 years ago, first by the Eckert-Mauchly Computer Corporation, later by IBM, and eventually by many others. In the same period, programming productivity, a highly labor-intensive activity, has increased by only about 500 times. A productivity increase of this magnitude for a labor-intensive activity in only 50 years is truly amazing, but unfortunately it is dwarfed by the productivity gains in hardware. It is further marred by low customer satisfaction resulting from high cost, low reliability, and unacceptable development delays. In addition, the incredible increase in available hardware cycles has driven demand for more and better software.

Much of the increase in programming productivity has, as you might expect, come from increased automation in software production: increased internal use of this enormous hardware largesse to offset shortcomings in software and "manware" has accounted for most of the gain. Programmers are not 500 times more productive today because they can program faster or better, but because they have more sophisticated tools such as compilers, operating systems, program development environments, and integrated development environments. They also employ more sophisticated organizational concepts in the cooperative development of programs, and more sophisticated programming language constructs such as Object-Oriented Programming (OOP), class libraries, and object frameworks. The first automation tools, developed in the 1950s by people such as Betty Holberton[1] at the Harvard Computation Laboratory (the sort-merge generator) and Mandalay Grems[2] at the Boeing Airplane Company (interpretive programming systems), have emerged again. Now they take the form of automatic program generation, round-tripping, and of course the ubiquitous Java Virtual Machine, itself an interpretive programming system.

Over the years, a number of rules of thumb, or best practices, have developed among enterprise software developers, both in-house teams and commercial (third-party) vendors. Enterprise software is the set of programs that a firm, small or large, uses to run its business. It is usually conceded that preparing (or "bulletproofing") an enterprise application for the marketplace costs ten times as much as getting it running in the "lab." From that point it costs another factor of two to carry a software package to the break-even point in the marketplace. The high cost of software development in both time and dollars, not to mention political or career costs (software development is often called an "electropolitical" problem, and a high-risk project a "death march"), has encouraged the rise of the third-party application software industry and its many vendors. Our experience leading both in-house and third-party enterprise software development indicates that the cost of maintaining a software system over its typical five-year life cycle equals its original development cost, as the sketch below illustrates.
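
To see how quickly these multipliers compound, consider a back-of-the-envelope sketch in Python. The multipliers are the rules of thumb just cited; the $500,000 lab cost is a purely hypothetical figure chosen for illustration, not data from any particular project.

    # Rule-of-thumb lifecycle cost model for an enterprise application.
    # The multipliers follow the rules of thumb cited above; the lab
    # cost is a hypothetical figure chosen purely for illustration.

    lab_cost = 500_000             # cost to get the application running "in the lab"
    development = 10 * lab_cost    # ~10x to "bulletproof" it for the marketplace
    break_even = 2 * development   # another factor of 2 to reach break-even
    maintenance = development      # five-year maintenance ~= original development cost

    total = break_even + maintenance
    print(f"Lab version:      ${lab_cost:,}")       # $500,000
    print(f"Market-ready:     ${development:,}")    # $5,000,000
    print(f"At break-even:    ${break_even:,}")     # $10,000,000
    print(f"Five-year total:  ${total:,}")          # $15,000,000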

Each of the steps in the software life cycle, as shown in Figure 1.1, is supported by numerous methods and approaches, all well documented in textbooks and taught in university and industrial courses. The steps are also supported by numerous consulting firms, each with a custom or proprietary methodology and practitioners well trained in it. In spite of all this experience, supported by both computing and organizational technology, the question remains: Why does software have bugs? In the past two decades it has been popular to draw an analogy between hardware design and manufacture and software design and development. Software "engineering" has become a topic of intense interest in an effort to learn from the proven practices of hardware engineering; that is, to learn how we might design and build bug-free software. After all, no reputable hardware manufacturer would ship products known to have flaws, yet software developers do this routinely. Why?

Figure 1.1. Essential Steps in the Traditional Enterprise Software Development Process


One response is that software is intrinsically more complex than hardware because it has more states, or modes of behavior. No machine has 1,000 operating modes, but any integrated enterprise business application system is likely to have 2,500 or more input forms. Software complexity is conventionally described as proportional to some factor, say N, that depends on the type of program, times the number of inputs, I, multiplied by the number of outputs, O, raised to some power, P. Thus

software complexity = N × I × O^P

This complexity can be thought of as increasing linearly with the number of input parameters but growing as the P-th power of the number of output results.
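
A toy evaluation of this heuristic shows the asymmetry. The values of N, I, O, and P below are made up purely for illustration and describe no real system: doubling the inputs doubles the estimate, while doubling the outputs multiplies it by 2^P.

    # Illustrative evaluation of the complexity heuristic above.
    # N, I, O, and P are hypothetical values chosen only to show
    # how the O**P term dominates.

    def software_complexity(n: float, i: int, o: int, p: int) -> float:
        """Heuristic: complexity = N * I * O**P."""
        return n * i * o ** p

    baseline = software_complexity(n=1.0, i=100, o=50, p=2)   # 250,000
    more_in = software_complexity(n=1.0, i=200, o=50, p=2)    # 500,000: doubled
    more_out = software_complexity(n=1.0, i=100, o=100, p=2)  # 1,000,000: quadrupled

    print(baseline, more_in, more_out)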

Computers, controlled by software, naturally have more states; that is, they have larger performance envelopes than do other, essentially mechanical, systems. Thus, they are more complex.

Sidebar 1.1: Computer Complexity

When one of the authors of this book went from being an aircraft designer to a computer architect in 1967, he was confronted by the complexity of the then newly developing multiprocessor computer. At the time, Marshall McLuhan's book Understanding Media was a popular read. In it, this Canadian professor of English literature stated that a supersonic air transport plane is far simpler than a multiprocessor computer system. This was an amazing insight for a professor of English literature, but he was correct.

One of the authors of this book worked on the structural optimization of the Concorde and on a structural aspect of the swing-wing of the Boeing SST. In 1968 he was responsible for making the Univac 1108 function as a three-way multiprocessor. Every night at midnight he reported to the Univac test floor in Roseville, Minnesota, where he was assigned three 1108 mainframe computers. He connected the new multiprocessor CRT console he had designed and loaded a copy of the Exec 8 operating system modified for this new functionality. Ten times in a row the OS crashed at a different step of the bootstrap process. He began to wonder if this machine were a finite automaton after all. Of course it was, and the diverse halting points were a consequence of interrupt races, but he took much comfort from reading Marshall McLuhan. Today, highly parallel machines are commonplace in business, industry, and the scientific laboratory, and they are indeed far more complex than supersonic transport aircraft (none of which are still flying now that the Concorde has been taken out of service).
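
Those varying halt points are the signature of a race. A minimal sketch (hypothetical; Python threads stand in for the 1108's interrupt hardware) shows how unsynchronized read-modify-write operations on shared state produce a different result on every run:

    # Minimal race-condition demonstration. Three threads (echoing the
    # three-way 1108 configuration) increment a shared counter without
    # a lock, so updates are lost nondeterministically from run to run.

    import threading

    counter = 0

    def worker() -> None:
        global counter
        for _ in range(100_000):
            tmp = counter       # read shared state
            counter = tmp + 1   # write it back; another thread may have run in between

    threads = [threading.Thread(target=worker) for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # rarely the full 300,000, and different on each run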


Although software engineering has become a popular subject of many books and is taught in many university computing curricula, we find the engineering/manufacturing metaphor to be a bit weak for software development. Most of a hardware product's potential problems become apparent in testing. Almost all of them can be corrected by tuning the hardware manufacturing process to reduce product and/or process variability. Software is different. Few potential problems can be detected in testing due to the complexity difference between software and hardware. None of them can be corrected by tuning the manufacturing process, because software has no manufacturing process! Making copies of eight CD-ROMs for shipment to the next customer along with a box of installation and user manuals offers little chance for fine-tuning and in any case introduces no variability. It is more like book publishing, in which you can at most slip an errata sheet into the misprinted book before shipping, or, in the case of software, an upgrade or fix-disk.

So, what is the solution? Development processes are often described as having upstream activities, such as design, and downstream activities, such as testing. Because errors in software are almost all created well upstream, and because software is all design and development, with no true manufacturing component, everything that can be done to create bug-free software must be done as far upstream in the design process as possible. Hence our advocacy of Taguchi Methods (see Chapters 2, 15, and 17) for robust software architecture. Software development is an immensely more taxing process than hardware development, and there is no silver bullet; nevertheless, we contend that the Taguchi Methods described in the next chapter can be deployed as a key instrument for addressing software product quality upstream, at the design stage. The RSDM presented in this book provides a powerful framework for developing trustworthy software in a time- and cost-effective manner.

This introductory chapter is an overview of the software development situation today in the view of one of the authors. Although he has been developing both systems and applications software since 1957, no single individual's career can encompass the entire spectrum of software design and development possibilities. We have tried in this chapter to indicate when we are speaking from personal experience and sharing our personal opinions, and when we are referring to the experience of others.



