Verification


This section defines verification in a Taguchi context as the process of ensuring that programs meet their design specifications. Verification is carried out primarily through unit testing, as opposed to system or subsystem testing. With Taguchi Methods, the first stage of verification is actually validating the parameter optimization; the next step is verifying the design. The most important part of conventional software development processes is verifying the code itself as a unit, component, module, or subsystem. Much has been written about program verification, much of it summarized by Humphrey.[6]

Current research on formal methods descends from early efforts to prove programs correct (to verify them) as you would prove a mathematical theorem. This can be done for small programs of about 150 lines of code, but it is as difficult and painstaking as proving a mathematical theorem, and it usually requires the same degree of training and talent.

So-called "straight-line code" is usually transparent to its creator. If it is well documented, it is equally transparent to any trained reader of program code. Most low-level programming errors are introduced when a loop, branch, or function call is introduced. Structured methods have been developed to verify for loops, while loops, and repeat-until loops, but they are rather clunky and are seldom used by programmers, who usually just execute the loop in their heads as an intuitive verification. Using these structured methods is like solving an algebra problem by following every step explicitly and quoting the rules. Such analysis is not very interesting, but it reliably exposes errors (just as it does in algebra).
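The structured loop-verification idea can be made concrete: state a precondition, a loop invariant, and a postcondition, and check each one explicitly rather than executing the loop in your head. A minimal sketch in Python (the summation example is illustrative, not from the text):

```python
# Structured loop verification in miniature: the assertions play the role
# of the explicit, algebra-like verification steps described above.
def sum_first_n(n: int) -> int:
    assert n >= 0                         # precondition
    total, i = 0, 0
    while i < n:
        # loop invariant: total == 0 + 1 + ... + (i - 1) == i*(i-1)//2
        assert total == i * (i - 1) // 2
        total += i
        i += 1
    assert total == n * (n - 1) // 2      # postcondition (invariant at i == n)
    return total
```

If the invariant or postcondition cannot be stated, that is itself a signal that the loop's intended behavior is not fully understood.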

Clearly, such methods verify code rather than design, but the promise of Taguchi Methods is that developers may verify the design directly. Case Study 18.1 illustrates an early application of this kind. Only about half a dozen Taguchi software design case studies have been published, most of which are quoted as examples or referred to in this book.

Case Study 18.1: Taguchi Methods for RTOS Design Verification[7]

This case study illustrates the use of a series of L8 parameter design experiments as a strategy for selecting a set of real-time operating system features that provide optimum performance against the stated goals. The parameters were chosen for a radio set controller architecture and were tried in a test-bed environment. The use of Taguchi Methods ensured that the product would behave in a predictable and optimal fashion. The developer of this application and author of the case study presents it as a template to help any software engineer enhance real-time operating system (RTOS) performance for other applications.

The application is computer scheduling and operation of a tactical VHF (very high frequency) radio set. For some years, advanced military radios have been more like computers that send and receive both digital and analog signals than like machines that solve equations and present their solutions. Early systems had the digital control logic distributed through the radio circuitry, but the emergence of inexpensive, powerful microprocessors led to their use as stored-program controllers for such complex multifunction radio systems. As these programs grew in size and complexity, they began to require operating systems, and because radio communication takes place in real time, the operating system must be an RTOS. It is a little-known fact that most of the microprocessors sold each year do not appear in desktop computers or enterprise servers; instead, they go into dedicated or embedded applications like this one. Some automobiles today have as many as nine. Just as an engineer can purchase a general-purpose microprocessor off the shelf and customize it for an application, he can buy an RTOS for that same chip and customize or parameterize it for a particular application's functionality.

The VHF radio in this case study carries out a set of independent concurrent processes under preemptive priority scheduling. Because radios today are basically digital computers, they generate interrupts, which must be handled by the control microprocessor and its RTOS's interrupt service routines (ISRs). The control software running in the controller depends on the RTOS to provide communication between software tasks and hardware interrupts (as software ISRs). The three communication requirements are ISR to task, task to task, and task to hardware (such as turning a virtual knob or making a menu selection). Each of these three cases offers two different operating system "mechanisms" for interprocess communication: for ISR-to-task or task-to-task communication, you can specify either a mailbox or a queue, but for task-to-hardware, the choices are a semaphore or a resource lock.
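These mechanism pairs can be illustrated with a desktop analogy. The case study does not name the RTOS's actual APIs, so the following sketch uses Python's standard library as a stand-in: a single-slot queue models a mailbox, an unbounded queue models a message queue, and a lock or a binary semaphore guards a shared hardware resource.

```python
# Desktop-Python analogy (assumption: the real RTOS APIs are not named in
# the case study). A mailbox holds one message at a time; a queue buffers
# many. A resource lock or a binary semaphore prevents simultaneous access
# to hardware by competing tasks.
import queue
import threading

mailbox = queue.Queue(maxsize=1)   # mailbox: single-slot message passing
msg_queue = queue.Queue()          # queue: buffered message passing

def producer_task():
    msg_queue.put("interrupt: channel change")   # ISR-to-task style message
    mailbox.put("task A done")                   # task-to-task notification

hw_lock = threading.Lock()         # resource lock: one owner at a time
hw_sem = threading.Semaphore(1)    # binary semaphore: equivalent guard here

def set_virtual_knob(value):
    with hw_lock:                  # prevent multitask contention on hardware
        pass                       # ... write value to the hardware register

t = threading.Thread(target=producer_task)
t.start()
t.join()
set_virtual_knob(7)                # guarded hardware access, no contention
```

The trade-offs between each pair (buffering, blocking behavior, overhead) are exactly what the parameter design experiment below evaluates.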

During design, control and data flows were constructed for radio and traffic operations and their primary communication mechanisms: messaging, signaling, and resource locking. The goal was to optimize these communication flows without any possibility of multitask contention, which could lock up the radio in the middle of a critical message. An experiment was designed to determine the correct parameters for optimal RTOS performance in this embedded application. The following control factors, over which the designer has some influence, were chosen:

  • Processor speed: Choice of 12MHz or 30MHz clock speeds

  • Messaging: Choice of mailboxes or circular queues

  • Signaling: Choice of events or mailboxes for signaling between tasks

  • Resource locking: Use of either resource locks or semaphores to prevent simultaneous access to hardware resources by competing tasks

The exogenous noise factors over which the designer has no control, but that result from the radio's operation in its tactical environment, were as follows:

  • Interrupt mix: The mix of various interrupts generated by the system

  • Interrupt rate: The rate at which interrupts are received by the RTOS

For the experimental design, the control factors were as follows:

Factor                  Level 1         Level 2
Processor speed (MHz)   12              30
Messaging               Mailbox         Queue
Signaling               Event           Mailbox
Resource locking        Resource lock   Semaphore


The input signal is a series of events entered as interrupts at time t_ISR. The output is measured as the times at which an event is queued, begins execution, and finishes (t_end). Ideally, the system should process messages as fast as possible, so a smaller-the-better signal-to-noise (SN) ratio is used:

SN = -10 log { Σ (t_end - t_ISR)² / N }

The average case was an interrupt rate of 7 per second, the worst case was 30 per second, and failure was defined as an interrupt rate of N per second.
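This is the standard smaller-the-better SN computation, sketched below under the assumption that latencies (t_end − t_ISR) are expressed in seconds; with that assumption, a constant latency of 40.6 ms (the baseline latency reported later in the study) lands near the high-27 dB values seen in the results.

```python
# Smaller-the-better SN ratio:
#   SN = -10 * log10( sum((t_end - t_ISR)**2) / N )
# Assumption: latencies are given in seconds (the case study does not state
# its units explicitly).
import math

def sn_smaller_better(latencies_s):
    n = len(latencies_s)
    msd = sum(y * y for y in latencies_s) / n   # mean squared deviation
    return -10.0 * math.log10(msd)

print(round(sn_smaller_better([0.0406] * 8), 1))   # -> 27.8
```

Larger SN is better: shorter, more consistent latencies raise the ratio.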

The L8 matrix for this Taguchi parameter design was as follows:

Case   Speed (MHz)   Messaging   Signaling   Resource Locking
1      12            Mailbox     Event       Resource lock
2      12            Mailbox     Mailbox     Semaphore
3      12            Queue       Event       Semaphore
4      12            Queue       Mailbox     Resource lock
5      30            Mailbox     Event       Semaphore
6      30            Mailbox     Mailbox     Resource lock
7      30            Queue       Event       Resource lock
8      30            Queue       Mailbox     Semaphore
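The eight cases form a balanced (orthogonal) design: for every pair of factor columns, each combination of levels appears equally often, which is what lets each factor's effect be estimated independently. Writing each factor level as 1 or 2, a quick check:

```python
# Verify the balance property of the L8 parameter-design matrix: for every
# pair of factor columns, each of the four level combinations (1,1), (1,2),
# (2,1), (2,2) appears in exactly two of the eight cases.
from collections import Counter
from itertools import combinations

# Columns: speed, messaging, signaling, resource locking (levels 1 and 2),
# transcribed from the matrix above.
L8 = [
    (1, 1, 1, 1),
    (1, 1, 2, 2),
    (1, 2, 1, 2),
    (1, 2, 2, 1),
    (2, 1, 1, 2),
    (2, 1, 2, 1),
    (2, 2, 1, 1),
    (2, 2, 2, 2),
]

for i, j in combinations(range(4), 2):
    counts = Counter((row[i], row[j]) for row in L8)
    assert sorted(counts.values()) == [2, 2, 2, 2]   # balanced in every pair
```

Eight runs thus cover four two-level factors without confounding any main effect with another.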


The SN ratios for data rates of 7 and 30 messages per second were as follows:

Case   SN at 7/s   SN at 30/s
1      27.7        27.7
2      27.7        27.7
3      32.0        32.3
4      32.0        32.6
5      21.7        21.8
6      21.8        21.8
7      28.7        28.4
8      28.5        28.3


A study of these results indicates that the dominant factors are processor speed and messaging, which were comparable in amplitude. Thus, the optimal choice is a 30MHz clock speed and the use of mailboxes for messaging. The author of this case study goes on to validate his choice of 30MHz and mailboxes by comparing it against 12MHz and queues. The performance difference was 10.3 dB at 7 events per second and 10.7 dB at 30 events per second; 10 dB amounts to a significant performance difference. The other factors, signaling and resource locking, did not contribute in any significant way to the optimal result, so they can be chosen at the designer's convenience.
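The "dominant factor" reading can be reproduced with a standard main-effects calculation: for each factor, average the SN over the four cases at each level and rank the factors by the magnitude of the level-mean difference. A sketch, using per-case SN values averaged over the two interrupt rates:

```python
# Main-effects analysis for the L8 results. Per-case SN values below are
# the averages of the 7/s and 30/s columns in the results table.
sn = [27.7, 27.7, 32.15, 32.3, 21.75, 21.8, 28.55, 28.4]

# Level (1 or 2) of each factor in each of the eight cases, per the matrix.
levels = {
    "speed":     [1, 1, 1, 1, 2, 2, 2, 2],
    "messaging": [1, 1, 2, 2, 1, 1, 2, 2],
    "signaling": [1, 2, 1, 2, 1, 2, 1, 2],
    "resource":  [1, 2, 2, 1, 2, 1, 1, 2],
}

def effect(factor):
    m1 = sum(s for s, l in zip(sn, levels[factor]) if l == 1) / 4
    m2 = sum(s for s, l in zip(sn, levels[factor]) if l == 2) / 4
    return abs(m1 - m2)

ranked = sorted(levels, key=effect, reverse=True)
print(ranked)   # speed and messaging dominate; the other two are negligible
```

On these numbers, the speed and messaging effects are each several dB while signaling and resource locking are under 0.1 dB, consistent with the case study's conclusion that the latter two may be chosen at the designer's convenience.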

The 10 dB performance enhancement gained by choosing a processor with 2.5 times the clock speed is a noteworthy optimization, primarily because of the RTOS's ability to run its timer loop, the main overhead in the system, much faster. In fact, it reduced message latency from 40.6 ms to 12.1 ms. But at what cost? The author computed the Taguchi quality loss associated with using the more expensive part. He found that the quality loss for the current radio system, using the slower, less expensive chip, is $78, and that for future systems it would be $215. The cost of upgrading from the 12MHz chip to the 30MHz chip is minor; compared to the quality loss, it is a very desirable design option.
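The quality loss computation follows Taguchi's quadratic loss function L(y) = k·y², where y is the deviation from ideal (here, message latency) and k is a cost coefficient. The case study does not publish its k, so the coefficient below is purely hypothetical; only the loss ratio, which is independent of k, is meaningful in this sketch.

```python
# Taguchi quadratic quality loss for a smaller-the-better characteristic:
#   L(y) = k * y**2
# k is a hypothetical cost coefficient (not given in the case study).
def quality_loss(latency_ms, k=1.0):
    return k * latency_ms ** 2

# Ratio of losses at the two measured latencies; k cancels out.
ratio = quality_loss(40.6) / quality_loss(12.1)
print(round(ratio, 1))   # -> 11.3
```

Because loss grows with the square of the deviation, cutting latency to roughly a third cuts the quality loss by about an order of magnitude, which is why the modest chip upgrade pays for itself.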


Verification should be done as early in the design process as possible, before any code is written and available for testing or validation. To do this, the design logic must be represented precisely. In the early days of computing, flowcharts were drawn and verified before coding began; today, pseudocode often serves the same function. Automatically produced flowcharts are not only used by reverse-engineering tools such as Xcellerator and Texas Instruments TEI but are also the mainstay of large-scale design support systems such as Rational Rose. The design of large, complex software systems is hardly possible without some sort of pseudocode or graphical design tool employing UML to allow design verification at the pre-code logic stage. The ability to model logic for the various loop constructs is critical, because that is where most errors are introduced. As long as the intended behaviors of all the program's high-level architectural constructs are known, design verification can take place.[8] A precise specification of the design's intended function helps you define the preconditions and thus ensures that the logic handles every case properly; it is desirable to consider all possible cases at the logic level and be sure each is handled. In large development projects, design reviews are generally used as an intuitive verification process as design and development proceed. After 50 years of program design and development, no single method for program verification has become commonplace. Until formal methods are more widely available, the developer's best option is to use conventional means supplemented by Taguchi Methods.
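For small decision structures, considering all possible cases at the logic level can be mechanized: enumerate every branch-relevant input and check the logic against its specification. A toy sketch (the median-of-three routine is illustrative, not from the text):

```python
# Exhaustive case-level verification of a small piece of decision logic:
# a median-of-three routine checked against its specification for every
# ordering of distinct inputs, covering all the branch cases that an
# intuitive review might miss.
from itertools import permutations

def median3(a, b, c):
    if a > b:
        a, b = b, a        # now a <= b
    if b > c:
        b, c = c, b        # now c holds the largest of the original b, c
    if a > b:
        a, b = b, a        # restore a <= b
    return b               # b is the median

for case in permutations([1, 2, 3]):   # every branch-relevant ordering
    assert median3(*case) == 2         # spec: the middle value is always 2
```

The same enumerate-and-check discipline scales to pseudocode walkthroughs in design reviews, where each branch of the precoded logic is confirmed against the specification.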


Design for Trustworthy Software: Tools, Techniques, and Methodology of Developing Robust Software
ISBN: 0131872508
Year: 2006
Pages: 394