11.2 Validation of results


The tool selected must produce results that are correct and consistent and, therefore, convincing to our client. If the results, or the assumptions used to reach them, are too far from the expected system behavior, the analysis will be suspect and will not be used. Analytical results readily fall into this category, since most people are skeptical of the assumptions and simplifications required to make these models tractable. Simulations suffer from this as well at times, owing to the way simulation models are constructed: the modeler must typically make tradeoffs over which details to represent, and some of those tradeoffs can make the model's results seem less realistic to the client. Many simulation developers also share one major flaw: they often do not fully validate the correctness of their models before applying them to the problem under study.

Once we have determined which modeling tool to use and have constructed our model, we still cannot simply begin running experiments. The selected tool and model must first be validated so that we can believe the results they produce. Validating one tool begins by selecting one or more other tools, running them on the same configuration, and comparing the results each produces. The results collected from all the tools should lead the modeler to the same conclusions. There are no hard and fast rules for how closely a validated tool's results must match, point for point, those of the tools used for validation. Many simulation studies have adopted the rule of thumb that aggregate results should not differ by more than about 5 percent.
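
This 5 percent rule of thumb is easy to mechanize. The following is a minimal sketch, not from the book, assuming a hypothetical helper within_tolerance and that each tool reduces to a single aggregate figure of merit, such as mean response time:

```python
def within_tolerance(result_a, result_b, tol=0.05):
    """Return True if two aggregate results (e.g., mean response
    times reported by two tools) differ by no more than tol,
    relative to the larger of the two."""
    baseline = max(abs(result_a), abs(result_b))
    if baseline == 0.0:
        return True          # both results are zero; trivially equal
    return abs(result_a - result_b) / baseline <= tol

# e.g., mean response time from a simulation vs. an analytical model
print(within_tolerance(4.87, 5.02))   # True: the results differ by ~3%
```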

Validation requires the modeler to examine several components of the model. First, does the model correspond to the real system under study; that is, is it a faithful representation of that system? For example, if the model has two processors and the real system has one, it is not a faithful representation. Second, are the assumptions made by the modeler realistic in terms of the real-world system being modeled? Third, do the model's input parameters and distributions track those of the real system, if such values are available? If they are not available, do they track those of some other, previously validated model built for a similar project? Finally, do the results and conclusions from the model map favorably to those of the measured system or of other tools, and do they follow the real system's behavior consistently and correctly?
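
The third question, whether the model's input distributions track the measured system, can be checked with a standard goodness-of-fit test. Below is a minimal sketch, with hypothetical measured interarrival times and a model assumed to use exponential interarrivals; it applies SciPy's Kolmogorov-Smirnov test:

```python
from scipy import stats

# hypothetical interarrival times measured on the real system
measured = [0.8, 1.3, 0.2, 2.1, 0.5, 1.7, 0.9, 0.4, 1.1, 0.6]

# the model assumes exponential interarrivals; estimate the scale
# (mean interarrival time) from the measurements
scale = sum(measured) / len(measured)

# Kolmogorov-Smirnov test of the measurements against the assumed
# exponential input distribution (loc=0, scale=mean)
stat, p_value = stats.kstest(measured, "expon", args=(0, scale))
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
# a very small p-value would suggest the model's assumed input
# distribution does not track the real system
```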

Each of these questions can be answered in a variety of ways: through expert intuition, by measuring the system and comparing the results, or through analytical results. Expert intuition comes from an individual who has performed many such studies in the past. Drawing on this wealth of experience, the expert examines the model and its results and judges whether they appear, in his or her opinion, to be a faithful and correct rendition of the system under study. These experts are drawn from the designers, architects, implementers, analysts, maintainers, operators, and even users of the systems being studied. What we do not want is for the validation expert to come from the team that designed the model being validated.

Real-system measurement is the most reliable means of model validation, but it can also be the hardest to come by: the real system may not exist yet, or the needed measurements may never have been collected. Even when measurements of an existing system are available, they may not cover the full spectrum of information needed to corroborate the model's data. The last method for obtaining validation information is to use analytical results. As long as the model we are trying to validate is not itself an analytical model, this is an available and acceptable means of validation. By setting a simulation's parameters to those of an analytical model, we should in theory be able to reproduce the results generated by the analytical model.
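
As one concrete illustration of this last approach, consider validating a queueing simulation against the analytical M/M/1 result, where the mean time in system is 1/(mu - lam). The following is a minimal sketch, not from the book, using a simple Lindley-recursion simulation:

```python
import random

def mm1_sim(lam, mu, n=200_000, seed=1):
    """Simulate n customers of an M/M/1 queue via the Lindley
    recursion and return the observed mean time in system."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        service = rng.expovariate(mu)
        total += wait + service               # this customer's time in system
        interarrival = rng.expovariate(lam)
        wait = max(0.0, wait + service - interarrival)   # Lindley recursion
    return total / n

lam, mu = 0.8, 1.0                  # arrival and service rates (rho = 0.8)
analytic = 1.0 / (mu - lam)         # M/M/1 mean time in system
simulated = mm1_sim(lam, mu)
rel_err = abs(simulated - analytic) / analytic
print(f"analytic = {analytic:.3f}, simulated = {simulated:.3f}, "
      f"relative error = {rel_err:.1%}")
```

If the relative error stays within the tolerance discussed earlier, the comparison supports, though it cannot prove, the correctness of the simulation.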


