Simulation Model Examples


Modeling solutions have evolved beyond early generations. The successful modeling products on the market today have all discovered one fundamental truth about the marketplace: customers are more interested in solutions than in modeling technology. Nevertheless, it's important to examine the underlying technology of any simulation modeling product; some modeling products are easier to use and more accurate than others.

The following sections look at two typical simulation modeling tools as examples of what is available for modeling a complex Web system: OPNET Modeler and HyPerformix Integrated Performance Suite.

Model Construction

Building a model has usually been a tedious and error-prone process. Describing the elements and their relationships grows more difficult as the environment grows more complex. The possibility of errors being introduced into the model also grows, necessitating more laborious checking.

Modeling tools use automatic discovery as much as they can to simplify model building. This approach works reasonably well at the topology level where the elements and their connections can be determined by most discovery tools. The task gets more difficult when the applications and dependent services are included.

As applications are distributed across many servers and data centers, understanding the relationships among their components can be quite challenging. Most services are dependent upon other applications and services, and the dependencies are usually incorporated into the model manually.

Models are usually constructed by combining the interconnections of the system with the characteristics of the individual system nodes. The interconnections, or topology, can be discovered automatically from an existing system, or the topology can be constructed using design tools. Even when the topology is discovered automatically, or imported from another system that has discovered it, some manual intervention may be needed to ensure that it's accurate.
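As a purely illustrative sketch (the node names, link parameters, and structure are assumptions, not any vendor's import format), a discovered or hand-built topology can be held as a simple graph of nodes and links that is later merged with per-node characteristics:

# Hypothetical topology description: node and link names are placeholders.
topology = {
    "nodes": ["edge_router", "web_server_1", "app_server_1", "db_server_1"],
    "links": [
        # (endpoint_a, endpoint_b, bandwidth_mbps, latency_ms)
        ("edge_router", "web_server_1", 1000, 0.5),
        ("web_server_1", "app_server_1", 1000, 0.3),
        ("app_server_1", "db_server_1", 1000, 0.3),
    ],
}

def validate_topology(topo):
    """Manual-check helper: every link endpoint must be a known node."""
    nodes = set(topo["nodes"])
    for a, b, _, _ in topo["links"]:
        if a not in nodes or b not in nodes:
            raise ValueError(f"link references unknown node: {a}, {b}")

validate_topology(topology)

A check like validate_topology stands in for the manual review step mentioned above: even an automatically discovered topology benefits from a consistency pass before it drives a simulation.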

The individual system nodes are usually based on prepackaged object libraries, or templates, that are ready for out-of-the-box model building. In these libraries, behavioral descriptions are built for each type of object. For example, network object libraries have descriptions for each device, detailing the maximum number of interfaces and maximum link speeds, packet forwarding rates, Quality of Service (QoS) capabilities, and other factors. A server object would describe CPU power, memory capacity, disk I/O rates, and similar server behaviors.
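A library entry amounts to a parameter template for each device class. The sketch below is a generic illustration in Python; the attribute names and figures are assumptions, not a vendor library schema:

from dataclasses import dataclass

@dataclass
class RouterTemplate:
    """Behavioral description of a network device class (illustrative fields)."""
    model: str
    max_interfaces: int
    max_link_speed_mbps: int
    forwarding_rate_pps: int   # packet forwarding rate
    qos_capable: bool

@dataclass
class ServerTemplate:
    """Behavioral description of a server class (illustrative fields)."""
    model: str
    cpu_ghz: float
    cores: int
    memory_gb: int
    disk_io_mbps: int

# Example library entries; the numbers are placeholders, not vendor data.
library = {
    "access_router": RouterTemplate("generic-access", 48, 1000, 6_000_000, True),
    "web_server": ServerTemplate("generic-1u", 2.4, 8, 32, 400),
}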

Application objects are complex and usually require some manual characterization of the application process flow. Sophisticated simulation modeling packages include programming languages that can be used to construct those flows.
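Commercial packages express these flows in their own modeling languages; the following sketch only illustrates the idea, with hypothetical step names and placeholder resource demands:

# A hypothetical transaction flow: each step names the tier doing the work
# and the resources it consumes. Real tools capture this in their own
# flow-description languages; the numbers here are placeholders.
checkout_flow = [
    {"tier": "web_server_1", "cpu_ms": 5,  "bytes_sent": 2_000},
    {"tier": "app_server_1", "cpu_ms": 30, "bytes_sent": 8_000},
    {"tier": "db_server_1",  "cpu_ms": 12, "disk_reads": 3},
    {"tier": "app_server_1", "cpu_ms": 10, "bytes_sent": 20_000},
    {"tier": "web_server_1", "cpu_ms": 4,  "bytes_sent": 45_000},
]

def total_cpu_ms(flow):
    """Sum the CPU demand of one transaction across all tiers."""
    return sum(step["cpu_ms"] for step in flow)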

Planners use the library to build a model quickly and explore its behavior. The predefined templates save time and reduce errors; they are complemented with tools for constructing new objects and incorporating them into the library.

Models are then driven by a variety of inputs for thorough coverage of the system's performance envelope. Using actual inputs is always the best alternative. Actual network traffic can be captured with a variety of collectors: remote monitoring (RMON) agents, protocol analyzers, or other point products. Transactions can be captured with transaction recorders and from server logs. These sources give the most accurate input to the model. Models can also be driven from scripts, files, or other sources. These inputs can be tuned to stress different parts of the model and are also used as a repetitive, consistent baseline to track changes in results.
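The difference between captured and scripted inputs can be illustrated as follows; the file format and function names are assumptions made for the sketch, not features of any particular product:

import csv
import random

def load_recorded_arrivals(path):
    """Read transaction arrival times (in seconds) captured from server logs
    or a collector export; assumes one timestamp per CSV row."""
    with open(path) as f:
        return [float(row[0]) for row in csv.reader(f)]

def synthetic_arrivals(rate_per_sec, duration_sec, seed=1):
    """Generate a repeatable Poisson-style arrival stream for baseline runs."""
    random.seed(seed)
    t, arrivals = 0.0, []
    while t < duration_sec:
        t += random.expovariate(rate_per_sec)
        arrivals.append(t)
    return arrivals

Recorded arrivals reflect real user behavior; the synthetic stream, seeded for repeatability, serves as the consistent baseline mentioned above.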

The OPNET Network Editor is used to build and display topology information. Network topology information can be imported or constructed graphically with the Network Editor. Users have a palette of node and link objects to choose from while they build a topological description of their environment. OPNET has an extensive object library, including objects for an aggregated "cloud node" that can be configured with the latencies and packet-loss ratios that have been measured from a real network. Customers can also create their own objects for new devices. Simple dialog boxes for each object instance allow it to be configured with the appropriate parameters, although reasonable defaults are supplied.

OPNET Flow Analysis can then be used to model the detailed characteristics of networks, and OPNET Application Characterization Environment (ACE) can be used to model the details of application transactions. ACE can use input from measurement collectors; it discovers transactions and their detailed performance characteristics for input into the model.

Similarly, the HyPerformix solutions include the HyPerformix Infrastructure Optimizer and Performance Profiler; these jointly create or import topology information, model the system, and use input from measurement collectors to discover transaction performance characteristics for use in the models.

Model Validation

Determining the point at which a model is good enough requires validating its results; only then do you actually know how good it is and how good you need it to be. One approach uses the test bed, if one exists, and compares actual results produced by the test bed to the model results. When the discrepancy between them is acceptable, the model is good enough.

HyPerformix suggests driving the model to the point where the most heavily used server is at 50, 70, and 90 percent of maximum capacity. They recommend as a validation guideline that modeled server utilization should be within 10 percent of measured utilization, modeled response time should be within 10 to 20 percent of measured response time, and modeled throughput should be within 10 to 15 percent of measured throughput. Of course, acceptable accuracy is also determined by your time and resource commitments, tolerance for risk, and your staffing skill levels. At some point, the marginal value produced by more refinements is not worth the time and expense to achieve them.
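That guideline can be applied mechanically. The sketch below uses the thresholds from the text as the loose ends of those ranges; the metric names and structure are assumptions:

# Validation tolerances drawn from the guideline above: utilization within
# 10 percent, response time within 10 to 20 percent, throughput within
# 10 to 15 percent of measured values (loose bounds used here).
THRESHOLDS = {"utilization": 0.10, "response_time": 0.20, "throughput": 0.15}

def within_tolerance(modeled, measured, metric):
    """Return True if the relative error for this metric is acceptable."""
    rel_error = abs(modeled - measured) / measured
    return rel_error <= THRESHOLDS[metric]

# Example: a modeled 420 ms response time against a measured 380 ms.
print(within_tolerance(0.420, 0.380, "response_time"))  # True (about 10.5% off)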

Comparing the model results with the test bed results can also identify areas where the model's results can be adjusted with real-world input from the test bed. Rather than making extensive modifications to the model, a simple adjustment of the results can sometimes suffice. Data from the actual production environment can also be used to calibrate the model results. Good instrumentation captures the loading characteristics and the responses. The actual loads are used to drive the model, and its results are compared with those from the actual production environment.
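A simple adjustment can be as modest as a scaling factor derived from the comparison. This is a minimal sketch of that idea, not a feature of either tool; the function names are hypothetical:

def calibration_factor(measured, modeled):
    """Derive a multiplicative correction from paired observations, e.g. response
    times measured in the test bed versus those the model produced."""
    if len(measured) != len(modeled) or not measured:
        raise ValueError("need paired, non-empty samples")
    return sum(measured) / sum(modeled)

def adjust(modeled_results, factor):
    """Apply the correction to new model output instead of reworking the model."""
    return [r * factor for r in modeled_results]

# Example: the model consistently under-predicts response time by roughly 8 percent.
factor = calibration_factor([0.52, 0.61, 0.70], [0.48, 0.57, 0.65])
adjusted = adjust([0.55, 0.66], factor)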

The model becomes even more valuable after it has been calibrated because its results can be adjusted to achieve more accuracy. Combining modeling with load testing and other capabilities builds a stronger overall long-term management capability.

Reporting

Presenting the modeling results in an easy-to-understand form is another key requirement. Models, like the environments they simulate, generate large amounts of data, and their value lies in converting that data into usable information, particularly through visual representation. A variety of formats, as well as the ability to interact with the data, are key to effective analysis. Interactive use of the model clarifies sensitivities to certain operating conditions, showing the changes in model outputs that result from changes in model inputs.
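In practice, a sensitivity analysis amounts to sweeping an input and tabulating the output. The sketch below assumes a hypothetical run_model() stand-in; a real tool would execute the full simulation, while a toy M/M/1-style formula keeps the example runnable:

def run_model(arrival_rate_per_sec):
    """Hypothetical stand-in for a simulation run, returning mean response time
    in seconds; the 50 ms service time is an assumed value."""
    service_time = 0.050
    utilization = arrival_rate_per_sec * service_time
    if utilization >= 1.0:
        return float("inf")   # saturated
    return service_time / (1.0 - utilization)

# Sweep the offered load and report how response time reacts.
for rate in (5, 10, 15, 18, 19):
    print(f"{rate:>3} tps -> {run_model(rate) * 1000:6.1f} ms mean response time")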



