Controlling Complexity in Software Development


As we stated, complexity is the mother of all nonconformities. It must be addressed before poka yoke deployment. Chapters 1 and 3 discussed complexity as a challenge to developing trustworthy software from the software developer's perspective. From the user's perspective, the question "Why does software have bugs?" will not go away. The response is that software is intrinsically more complex than hardware because it has more states, or modes of behavior. An integrated enterprise business application system, for example, is likely to have 2,500 or more input forms. No machine has such a large number of operating modes. Computers, controlled by software, have more states (that is, larger performance envelopes) than other, essentially mechanical, systems. Thus, they are intrinsically more complex.

Two vital questions from the developer's perspective are "Do we understand the nature of complexities in software?" and "Can we measure them?" The fact is that we do not, and we are not likely to fully understand or measure complexities anytime soon. A number of approaches have been taken to calculate or at least estimate the degree of complexity, but the simplest is lines of code (LOC), a count of the executable statements in a computer program. Although this metric began in the days of assembly-language programming, it is still used today for programs written in high-level languages. Procedural third-generation memory-to-memory languages such as FORTRAN, COBOL, and ALGOL typically produce about six executable machine-language (ML) statements per source statement, whereas register-to-register languages such as C, C++, and Java produce about three. Recent studies show a curvilinear relationship between defect rate and LOC: defect density appears to decrease with program size and then increase again as program modules become very large (see Chapter 3). Curiously, this result suggests that there may be an optimum program size leading to a lowest defect rate, depending, of course, on programming language, project, product, and environment.
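To make the LOC surrogate concrete, here is a minimal sketch (our illustration, not from the book) that counts non-blank, non-comment lines of Python source as a crude approximation of executable statements. Real LOC tools, and the studies cited here, apply more careful counting rules.

    # Crude LOC counter: counts non-blank lines that are not pure comments.
    # This is only an approximation of "executable statements."
    def count_loc(source: str, comment_prefix: str = "#") -> int:
        count = 0
        for line in source.splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                count += 1
        return count

    sample = '''
    # compute a sum
    total = 0
    for i in range(10):
        total += i  # accumulate
    print(total)
    '''
    print(count_loc(sample))  # -> 4 executable lines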

McCabe proposed cyclomatic complexity, a topological or graph-theory measure of how many linearly independent paths make up a computer program (see Chapter 3). Kan has reported that, other than module length, the most important predictors of defect rates are the number of design changes and the complexity level.[14] We may never fully comprehend and measure complexity as such, but we now have a number of identifiable surrogates (a sketch of the cyclomatic measure follows the list):

  • Number of inputs

  • Number of outputs

  • Number of states

  • Number of executable statements in a program (LOC)

  • Module length

  • Number of design changes

  • Complexity level

  • Number of features

  • Defect rates

  • Number of components

  • Lack of discipline on the part of software developers

  • Duration of development process
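As an illustration of the cyclomatic surrogate above, the following sketch (our example, not McCabe's original tooling) computes V(G) = E - N + 2P for a control-flow graph given as an adjacency list; the node names are hypothetical.

    # Cyclomatic complexity: V(G) = E - N + 2P, where E = edges, N = nodes,
    # and P = number of connected components (1 for a single routine).
    def cyclomatic_complexity(cfg: dict[str, list[str]], components: int = 1) -> int:
        nodes = set(cfg) | {dst for dsts in cfg.values() for dst in dsts}
        edges = sum(len(dsts) for dsts in cfg.values())
        return edges - len(nodes) + 2 * components

    # Control-flow graph of a routine with a single if/else decision:
    cfg = {
        "entry": ["test"],
        "test": ["then", "else"],
        "then": ["exit"],
        "else": ["exit"],
    }
    print(cyclomatic_complexity(cfg))  # -> 2 linearly independent paths

A straight-line routine scores 1; each binary decision adds one more linearly independent path, which is why deeply branched modules score high on this surrogate.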

All these are related to complexity, so where do we start? Hinckley reports a link between assembly time and defect rate. Furthermore, he proposes assembly time as a measure of complexity:[15]

Every change that reduces product or process complexity also reduces the total time required to procure materials, fabricate parts, or assemble products. Thus, time is a remarkably useful standard of complexity because it is fungible; in other words, it is an interchangeable standard useful in measuring and comparing the complexity of different product attributes. As a result, we can use time to compare the difficulty of installing a bolt with that of an alternative snap-fit assembly method. In addition, time has a common international value that is universally recognized and easily understood. Reduced time, in turn, results in fewer mistakes. Product design for reduced complexity therefore involves addressing time.

He cautions that care must be taken when using time as a measure of complexity, because worker skill, training, and the workplace strongly influence how long it takes to perform similar tasks.[16]

Task complexity can be reduced by product designs that take less time to complete. This involves addressing two fundamental issues:

  • Software product concept: The TRIZ and Pugh concept selection methodologies presented in Chapter 12 provide opportunities to arrive at the right product concept for reducing complexity, with development time as its measure. The concept itself must be grounded in customer requirements, as identified by QFD and the Kano model, presented in Chapter 11.

  • Software product optimization: This involves software design optimization with development time as a requirement using Taguchi Methods, presented in Chapters 16 and 17.

Reducing process complexity is as important as reducing product complexity. But unlike manufacturing, software product design is intricately linked to its development process. Reducing process complexity involves asking three basic questions: Is a robust software process in place to attain the complexity reduction objective? Is the process adequately supported by value analysis, standardization for best practices, and necessary documentation? Are the process and its supportive elements being used and observed in practice? This broad framework consists of the following:

  • Robust Development Model: This ensures that a defined process is in place that lets the software team identify, develop, and deliver software that meets customer requirements with focus and discipline. The DFTS model (see, for example, Figure 2.6 in Chapter 2) provides such a framework.

  • Value analysis: Value analysis keeps the development process focused on the product value as defined by the customer. It identifies the product and process activities that add value as perceived by the customer, those that do not but are required by the process, and those that neither create any value nor are required by the process. The last is called muda (waste) in Japanese and must be eliminated.

  • Standardization: This ensures that standardized procedures support the development process. These should be monitored and improved. They should be based on the best practices available, including those from the internal development team as well as from supporting personnel.

  • Documentation: Adequate documentation of the process, as well as of its changes and improvements, is critical to meeting software development objectives.

  • 5S system: The 5S system (discussed in Chapter 10) is a key methodology for workplace organization and visual controls developed by Hiroyuki Hirano.[17] The five S's refer to five Japanese words: seiri (sort or clean up), seiton (straighten), seiso (shine), seiketsu (standardize), and shitsuke (sustain). Sort means to separate needed and unneeded materials and remove the latter. Straighten means to neatly arrange and identify needed materials for ease of use. Shine means to conduct a cleanup campaign. Standardize means to sort, straighten, and shine at frequent intervals and to standardize your 5S procedures. Sustain means to form the habit of always following the first four S's. The 5S system helps keep the software development process as a whole disciplined and productive.

  • Decomposition: Simplicity is attained when tasks can be comprehended; one of the major causes of complexity is cognitive. The value of decomposing tasks into smaller, manageable segments cannot be overemphasized (see the sketch following this list).

  • Smarter domain-specific languages and formally based tools: We expect deployment of smarter domain-specific languages such as Lawson Software's Landmark™ (see Chapter 16). We also expect formally based tools that can automatically expose certain kinds of software errors and produce evidence that a software system satisfies its requirements. Such tools can then allow practitioners to focus on development tasks best performed by people, such as obtaining and validating requirements and constructing high-quality requirements specifications.[18]
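To illustrate the decomposition point above, here is a minimal sketch (our example; all names are hypothetical) that splits what could have been one monolithic parse-validate-summarize loop into small steps, each simple enough to comprehend and test on its own.

    # Each helper does one comprehensible thing; their composition replaces
    # a single deeply nested loop that would be harder to reason about.
    def parse_record(line: str) -> tuple[str, float]:
        name, raw_value = line.split(",")
        return name.strip(), float(raw_value)

    def validate(value: float) -> bool:
        return 0.0 <= value <= 100.0

    def summarize(lines: list[str]) -> dict[str, float]:
        return {name: value
                for name, value in map(parse_record, lines)
                if validate(value)}

    print(summarize(["a, 10", "b, 250", "c, 42.5"]))  # -> {'a': 10.0, 'c': 42.5}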

Managing complexity is one of the most critical software quality assurance tasks and one of the major challenges in software development. Complexity is a root cause of both variation-based and mistake-based nonconformities. Addressing it must precede any poka yoke deployment, because complexity reduction, done correctly, substantially reduces mistake-based nonconformities.

Recognizing design flaws that result in complexity, and detecting mistakes, are best done early in the design phases and should be planned accordingly. Complexity, in particular, can be corrected only upstream, in the concept development and design stages. For mistake detection, the payoff from 100% inspection upstream at the source is substantially higher than from inspections downstream. This sets the stage for mistake reduction using poka yoke.



