8.1 Simulation process


The use of a digital computer to perform modeling and run experiments has been a popular technique for quite some time. In this environment, simulation makes possible systematic studies of problems that cannot be studied by other techniques. The simulation model describes the system in terms of the elements of interest and their interrelationships. Once completed, it provides a laboratory in which to carry out many experiments on these elements and interactions.

Simulation programs, like modeling in general, require a sequence of discrete phases to be carried out in order to realize their full potential. They are as follows:

  1. Determine that the problem requires simulation.

  2. Formulate a model to solve the problem.

  3. Formulate a simulation model of the problem.

  4. Implement the model in a suitable language.

  5. Design simulation experiments.

  6. Validate the model.

  7. Perform experiments.

The typical simulation model project spends most of its time in phases 2, 3, and 4, because of the complexities of formulating the model, converting it to a simulation format, and implementing it in a language. Model formulation deals with the definition of the critical elements of the real-world system and their interactions. Once these critical elements have been identified and defined (mathematically, behaviorally, functionally) and their interactions (cause and effect, predecessor and successor, dependencies and nondependencies, data flow, and control flow) are defined in their essential terms, simulation model development proceeds hand in hand with system model definition. That is, as we develop a system model we can often directly define the simulation model structure.

An important aspect of this model development is the selection of a proper level of simulation detail, which depends directly on the intended purpose of the performance evaluation, the degree of understanding of the system and its environment, and the output statistics required. At one extreme, for example, we could model our bank teller system down to the level of every individual action a teller performs. At the other, we could model teller service simply as a gross estimate of the time to perform a service, regardless of the type of service. The appropriate level depends on what is to be examined. In the first case, we may wish to isolate the most time-consuming aspects of the tellers' functions so that we can develop ways to improve them. In the second case, all we may wish to determine, based on the customer load and the average teller service time, is the optimal number of tellers to have on duty and when.

The intent of the performance measure drives us directly to a simulation level of detail, which typically falls somewhere between the two extremes, since too little or too much detail makes the model less useful. In most cases, however, we as modelers cannot always foresee how the level of detail of each component will influence the model's ultimate usefulness. A common way to cope with such uncertainty is to construct the model in a modular fashion, allowing each component to migrate to the level of detail consistent with its intent and its overall impact on the simulation and the system. This typically leads to top-down model development, with each layer refined as necessary, as the sketch below illustrates for the teller example.
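The following is a minimal sketch, in Python, of how modular construction supports refinement of detail. All names, distributions, and timing parameters here are assumptions for illustration, not taken from the text: a coarse service-time component can later be swapped for a detailed one without changing the rest of the simulation.

```python
import random

# Hypothetical sketch: two interchangeable models of teller service time.

def coarse_service_time(rng):
    # Gross estimate: one exponential delay regardless of transaction type.
    return rng.expovariate(1.0 / 4.0)          # assumed mean of 4 minutes

def detailed_service_time(rng):
    # Refined level: break the service into individual teller actions.
    greet   = rng.uniform(0.2, 0.5)            # assumed greeting time
    process = rng.expovariate(1.0 / 3.0)       # assumed transaction time
    close   = rng.uniform(0.3, 0.8)            # assumed wrap-up time
    return greet + process + close

def mean_service(sampler, n=10_000, seed=1):
    # Estimate the mean service time produced by either component.
    rng = random.Random(seed)
    return sum(sampler(rng) for _ in range(n)) / n

if __name__ == "__main__":
    print("coarse model  :", round(mean_service(coarse_service_time), 2))
    print("detailed model:", round(mean_service(detailed_service_time), 2))
```

Because the rest of the model only calls a "service time" component, either level of detail can be plugged in as the study's needs become clearer.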

Simulations, beyond their structure (elements and interactions), require data input and data extraction to make them useful. Most simulations are either self-driven or trace-driven. In a self-driven simulation the model itself (i.e., the program) has drivers embedded in it that provide the data needed to stimulate the simulation. These data are typically drawn from analytical distributions linked with a random number generator. The bank teller example we have been using is a self-driven simulation: we may use a Poisson arrival distribution to describe the random nature of customers arriving at the system. In such a case the model's inputs are an artificially generated stream, as sketched below.
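A minimal sketch of such a self-driven input stream follows; the arrival rate, horizon, and function names are assumptions for illustration. Because the arrivals form a Poisson process, the interarrival times are exponential and are generated inside the model by a random number generator.

```python
import random

def self_driven_arrivals(rate_per_hour, horizon_hours, seed=42):
    """Generate customer arrival times from an assumed Poisson process."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_hour)    # next exponential interarrival gap
        if t > horizon_hours:
            break
        arrivals.append(t)
    return arrivals

# Example: an assumed load of 20 customers/hour over an 8-hour banking day.
arrivals = self_driven_arrivals(rate_per_hour=20, horizon_hours=8)
print(f"{len(arrivals)} synthetic customers generated for the simulated day")
```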

In the other case, trace-driven simulation, the model is driven by outside stimuli. Typically these are data extracted, reduced, and correlated from an actual running system. For example, in our bank teller case we may wish to have a more realistic load from which to compute the optimal number of tellers and their hours. In that case we would measure, over some period of time, the dynamics of customers arriving at the bank for service. The collected information would then be used to build a stored input sequence that drives the simulation with these realistic data, as sketched below. This type of modeling is closer to the real-world system, but it has the disadvantage of requiring up-front data collection and analysis to make such data available for use.
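The trace-driven variant can be sketched as follows; the file name, file format, and the run_simulation routine are hypothetical. Instead of sampling a distribution, the model replays arrival times that were measured at the real bank and stored, here assumed to be one timestamp per row of a CSV file.

```python
import csv

def trace_driven_arrivals(path="teller_trace.csv"):
    """Load measured arrival times (one timestamp per row) as the input stream."""
    with open(path, newline="") as f:
        return [float(row[0]) for row in csv.reader(f) if row]

# The same (hypothetical) simulation engine can be fed either stream:
#   run_simulation(self_driven_arrivals(rate_per_hour=20, horizon_hours=8))  # synthetic load
#   run_simulation(trace_driven_arrivals())                                  # measured load
```

Keeping the input stream separate from the simulation engine is what lets a single model switch between self-driven and trace-driven operation.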


