3.10 Problems encountered in model development and use

Developing a performance assessment project for a specified system is not without its pitfalls. We must start by developing a concept of what we are evaluating and why. That is, is the goal of our performance study to measure the existing performance of a system or its future possibilities? Are we measuring the cost of the system now or in the future? Are we measuring the correctness of the system or its adequacy? How do we define these terms? What dictates correctness or adequacy? Why do we need to perform this study?

The typical analyst begins with the primary concern: that the system performs its intended design function correctly. For example, if a computer system is to be able to perform concurrent operations, then a primary measure is that it can do just that. A secondary concern of the modeler is that the system has adequate performance and delivers it at a reasonable cost. This implies that we need some way to measure and predict what constitutes adequate performance and reasonable cost.

To understand these terms we first need to put them in the context of an environment where the system is to be operational. Even before this, though, we must start by determining what is meant by the system. For example, if it is a PC, we need to know what this term entails. Do we wish to include the motherboard, processor type, memory volume and type, I/O boards, graphics cards, disk drives, and maybe network interconnects? Or do we simply mean the black box, without concern for what is inside? Once the system of interest has been defined, the modeler must define what components make up this system and what their importance is in the context of the entire system.

Given the system definition and the component definitions, we next must define the environment in which the system will operate. The environment description should include only the important factors, not everything. For example, if we are studying a PC architecture, we may wish to know whether it will be exposed to the elements, extreme temperatures, humidity, and so on.

Once the environment is defined, we must determine what parameters are of interest to us as analysts. These may include the parameters by which the system is configured, used, or measured, such as the PC's processor speed, the size of primary memory, and so forth.

The common response from PC users, and computer systems users in general, is that they cannot easily define the above terms. They typically see computer systems performance evaluation as answering only one question: If my computer is not working up to snuff, can't I just add more of "whatever" to make it work better? The problem lies in knowing what "whatever" is needed, how much of it is needed, and whether adding a certain quantity of it will actually provide the intended result. More importantly, without performance evaluation, how do we know when we are done?

The problem the performance evaluator faces is determining what to measure and how to measure it. There are two main classes of techniques for computer systems performance assessment. The first is to take an existing system, design experiments involving hardware, software, or both, and then measure the results to determine what is needed. The second class of techniques uses more abstract means: either analytical modeling or simulation. Analytical modeling typically uses queueing theory or Petri net theory and can provide coarse analysis of the systems under study. Simulation can provide more fidelity, but at an added cost in design time and analysis. Simulations can be designed as discrete event-based models, continuous-based models, or combined models.
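
To make the two abstract approaches concrete, here is a minimal sketch (in Python, with illustrative arrival and service rates not taken from the text) that estimates the same quantity, mean response time, two ways: analytically, assuming a single-server M/M/1 queue, and with a small discrete event-based simulation of the same queue.

import random

# Analytical M/M/1 model: utilization rho = lam / mu,
# mean response time R = 1 / (mu - lam) for a stable queue (lam < mu).
def mm1_response_time(lam, mu):
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# Simple discrete-event simulation of the same single-server FIFO queue.
def simulate_fifo_queue(lam, mu, n_jobs, seed=1):
    random.seed(seed)
    clock = 0.0            # arrival clock
    server_free_at = 0.0   # time the server next becomes idle
    total_response = 0.0
    for _ in range(n_jobs):
        clock += random.expovariate(lam)          # next arrival
        start = max(clock, server_free_at)        # wait if the server is busy
        service = random.expovariate(mu)
        server_free_at = start + service
        total_response += server_free_at - clock  # waiting time + service time
    return total_response / n_jobs

if __name__ == "__main__":
    lam, mu = 8.0, 10.0    # illustrative rates: 8 arrivals/s, 10 services/s
    print("analytical mean response time:", mm1_response_time(lam, mu))
    print("simulated mean response time: ", simulate_fifo_queue(lam, mu, n_jobs=200_000))

The closed-form model answers instantly but only at a coarse level; the simulation costs more to build and run, yet it can be extended with details (multiple servers, priorities, non-exponential service times) that the simple formula cannot capture.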

Performance measures used by the analyst in making a determination of performance include responsiveness, use level, missionability, dependability, productivity, and predictability. Responsiveness indicates the system's ability to accept commands and deliver answers within a reasonable time. Use level indicates the degree to which the system is loaded: for example, is the system 50 percent loaded or 100 percent saturated? Missionability refers to the system's ability to perform as intended for the duration demanded; a spaceship, for example, must be highly missionable. Dependability is related to the last measure but indicates the system's ability to resist failure and stay operational. Productivity is a measure of the throughput of the given system. Predictability is a measure of the system's ability to operate as required under all or most conditions.
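
As a hedged illustration of how some of these measures might be computed, the sketch below derives responsiveness (mean and approximate 95th-percentile response time), use level (utilization), and productivity (throughput) from a small, invented log of job arrival, start, and completion times; the numbers and record layout are hypothetical.

# Each record: (arrival_time, start_time, completion_time) in seconds,
# taken from a hypothetical measurement log.
jobs = [
    (0.0, 0.0, 1.2),
    (0.5, 1.2, 2.0),
    (1.0, 2.0, 3.1),
    (2.5, 3.1, 3.9),
    (4.0, 4.0, 5.0),
]

observation_period = 5.0  # seconds of measurement

response_times = sorted(c - a for a, _, c in jobs)
busy_time = sum(c - s for _, s, c in jobs)

mean_response = sum(response_times) / len(response_times)             # responsiveness
p95_response = response_times[int(0.95 * (len(response_times) - 1))]  # simple nearest-rank tail estimate
utilization = busy_time / observation_period                          # use level
throughput = len(jobs) / observation_period                           # productivity

print(f"mean response time: {mean_response:.2f} s")
print(f"95th-percentile response time: {p95_response:.2f} s")
print(f"utilization: {utilization:.0%}")
print(f"throughput: {throughput:.2f} jobs/s")

Dependability, missionability, and predictability generally require failure data and much longer observation windows, so a single short log of this kind cannot capture them.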

All of these measures have a place, given specific classes of systems. For example, a general-purpose computing facility must be responsive, achieve good use levels, and be productive. High-availability systems, such as transaction processing or database systems, must not only be responsive but must possess a higher degree of dependability than the general-purpose computing environment. Real-time control systems require high responsiveness, dependability, and predictability. Mission-oriented systems, such as avionic control systems, require extremely high reliability over short durations and must be responsive. Long-life applications, such as spacecraft and autonomous underwater vehicles, must be highly dependable, missionable, and responsive.

There are common mistakes that computer systems performance analysts must avoid when performing their tasks. The first and most common is having no goals, or ill-defined goals, for the performance study. The goals should include a specification for a model of the system or component under study and a definition of the techniques, metrics, and workload to be used in the evaluation. The second major problem is setting biased goals. This is a very common mistake: the goal becomes to prove that "my system is superior to someone else's system." This makes the analyst the jury, which will lead to bad judgments.

If the analyst uses an unsystematic approach to developing the model or jumps into analysis before fully understanding the problem under study, the results will be flawed. Choosing incorrect or misleading performance metrics will produce erroneous results and conclusions. Choosing an unrepresentative or nonstressful workload will lead to misinterpretation of the system's performance boundaries. Choosing the wrong evaluation technique (for example, analytical modeling when a testbed is the right choice) will lead to overly simplistic or overly complex analysis. Overlooking important system parameters, or not examining the interactions among system parameters, may lead to erroneous conclusions about sensitivities and dependencies among system elements. Inappropriate experimental design or a bad choice of the level of detail can cause misleading conclusions. Erroneous analysis, no sensitivity analysis, or even no analysis leads to failure. Ignoring input, internal, or output errors, or their variability, can cause misleading interpretations of results. Not performing outlier analysis or ignoring future change can also cause problems in interpreting or trusting the results. Performing too complex an analysis, presenting or interpreting results improperly, or omitting assumptions and limitations will yield a failed analysis.
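
One inexpensive guard against several of these mistakes is a basic sensitivity analysis: perturb each input parameter by a small amount and observe how strongly the output metric reacts. The sketch below reuses the illustrative M/M/1 response-time formula from the earlier example; the operating point and the plus or minus 10 percent perturbation are arbitrary choices for demonstration.

def mm1_response_time(lam, mu):
    # Mean response time of a stable M/M/1 queue (lam < mu).
    return 1.0 / (mu - lam)

baseline = {"lam": 8.0, "mu": 10.0}   # illustrative operating point
base_r = mm1_response_time(**baseline)

# Perturb each parameter by +/-10% and report the relative change in the metric.
for name in baseline:
    for factor in (0.9, 1.1):
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        r = mm1_response_time(**perturbed)
        print(f"{name} x {factor:.1f}: response time changes by {(r - base_r) / base_r:+.0%}")

Near saturation the metric swings far more than the inputs do, which is exactly the situation in which skipping sensitivity analysis is most likely to produce untrustworthy conclusions.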

To alleviate these problems, the analyst should ask the following questions before, during, and after an analysis:

  1. Is the system correctly defined and the goals of the analysis clearly stated?

  2. Are the goals stated in an unbiased manner?

  3. Have all the steps of the analysis been followed systematically?

  4. Is the problem clearly understood before analysis is begun?

  5. Are the performance metrics relevant for this problem?

  6. Is the workload correct for this problem?

  7. Is the evaluation technique appropriate?

  8. Is the list of parameters that affect performance complete?

  9. Have all parameters that affect performance been chosen as factors to be used in experimental design?

  10. Is the experimental design efficient in terms of time and results expected?

  11. Is the model's level of detail sufficient?

  12. Are the measured data presented with analysis and interpretation?

  13. Is the analysis statistically correct?

  14. Has a sensitivity analysis been done?

  15. Would errors in the input cause an insignificant change in the results?

  16. Have the outliers in the input or outputs been treated properly?

  17. Have future changes in the system and workload been modeled?

  18. Has the variance of input been taken into account?

  19. Has the variance in results been analyzed?

  20. Is the analysis easy and unambiguous to explain?

  21. Is the presentation style suitable for its intended audience?

  22. Have the results been presented graphically as much as possible?

  23. Are the assumptions and limitations of the analysis clearly documented and accounted for?

When developing a performance study, the sage performance analyst follows a systematic approach that has the following points as its components:

  1. State goals and define the system to be studied.

  2. List services and outcomes clearly and completely.

  3. Select the performance metrics.

  4. List all system parameters of interest.

  5. Select the factors for the study.

  6. Select the evaluation technique to apply.

  7. Select the workload.

  8. Design the experiments.

  9. Analyze and interpret results.

  10. Present results clearly and unambiguously.

  11. Repeat if needed.


