3.11 A case study


If we wished to study the issue of remote pipes versus remote procedure calls, we could go through the following modeling effort. The first step is to define the system we wish to study. This entails developing a model that contains all of the major components of interest. In Figure 3.6 we postulate such a definition.

Figure 3.6: System definition.

The services we wish to focus on are small and large data transfers. We will not be concerned with other details of the services.

Along with the metrics, we make some simplifying assumptions: there are no errors and no failures in the system. The metrics of interest are the rate of access, the time to perform each service, and the resources consumed per service. The resources we will focus on are the client, the server, and the network elements.

These metrics and assumptions lead us to the measurements to be collected, such as elapsed time per call, maximum call rate per unit of time, time required to complete a block of N successive calls, local CPU time per call, remote CPU time per call, number of bytes sent over the link per call, and so on.
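As a minimal sketch of how such measurements might be gathered, assuming a hypothetical remote_call stub standing in for either channel type (the function name and payload sizes are illustrative, not from the study), elapsed time, local CPU time, and call rate per block of N successive calls could be recorded as follows:

```python
import time

def remote_call(payload: bytes) -> bytes:
    """Hypothetical stub for an RPC or remote-pipe call; replace with the real channel."""
    return payload  # placeholder: simply echoes the payload back

def measure_block(n_calls: int, payload: bytes) -> dict:
    """Measure elapsed time and local CPU time for a block of N successive calls."""
    wall_start = time.perf_counter()   # elapsed (wall-clock) time
    cpu_start = time.process_time()    # local CPU time on the client
    for _ in range(n_calls):
        remote_call(payload)
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return {
        "elapsed_per_call": wall / n_calls,
        "local_cpu_per_call": cpu / n_calls,
        "max_call_rate": n_calls / wall if wall > 0 else float("inf"),
    }

# Example: 100 successive small transfers of 512 bytes
print(measure_block(100, b"x" * 512))
```

Remote CPU time per call and bytes sent over the link per call would require corresponding instrumentation on the server and the network interface, which is omitted here.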

These in turn require us to define the system's parameters: for example, the speed of the local and remote CPUs, the speed of the network, the operating system overhead for interfacing with the channels, the operating system overhead for interfacing with the network, the reliability of the network, and so forth.

The workload parameters used to define the presented workload may include the time between successive calls, the number and size of the call parameters, the number and size of the results, the type of channel used, and the background loads on the local site, the remote site, and the network.
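One way to make these workload parameters concrete, as a sketch only (the record and field names are illustrative, not taken from the study), is to collect them in a single structure that a synthetic workload generator could consume:

```python
from dataclasses import dataclass

@dataclass
class WorkloadParameters:
    """Illustrative record of the presented workload; all names are hypothetical."""
    interarrival_time_s: float      # time between successive calls
    num_call_parameters: int        # number of call parameters
    call_parameter_bytes: int       # total size of call parameters
    num_results: int                # number of results returned
    result_bytes: int               # total size of results
    channel_type: str               # "rpc" or "pipe"
    client_background_load: float   # other load on the local site (0.0 to 1.0)
    server_background_load: float   # other load on the remote site (0.0 to 1.0)
    network_background_load: float  # other load on the network (0.0 to 1.0)

# Example: a small transfer over RPC with no background load
small_transfer = WorkloadParameters(0.01, 2, 64, 1, 128, "rpc", 0.0, 0.0, 0.0)
```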

Factors we may wish to study include the type of channel (RPC or remote pipe), the size of the network (long distance or local area network), the size of the calls (small or large), and the number of successive calls (varying from one, five, and ten up to some saturation load).

The assumptions made may include fixing the type of CPU and operating system, ignoring retransmissions due to network errors, and taking measurements with no other load on the hosts or the network.

The evaluation technique may be a prototype, along with analytical models to validate or bound the expected results. The workload is constructed synthetically. The experimental design varies all factors over their entire range of postulated values, resulting in a full factorial design of 88 experiments. The data analysis involves determining the variance of the results and attributing it to each factor, followed by plotting all results in graphical form to better show the performance variations.
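As an illustration of how the experiment count arises, assuming two levels each for channel type, network size, and call size and eleven assumed values for the number of successive calls (the exact levels are an assumption here, not given in the text), a full factorial design simply enumerates every combination:

```python
from itertools import product

# Assumed factor levels; the specific values are illustrative.
channel_types = ["rpc", "pipe"]                    # 2 levels
network_sizes = ["lan", "long_distance"]           # 2 levels
call_sizes = ["small", "large"]                    # 2 levels
successive_calls = [1, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000]  # 11 levels

experiments = list(product(channel_types, network_sizes, call_sizes, successive_calls))
print(len(experiments))  # 2 * 2 * 2 * 11 = 88 experiments in the full factorial design
```

Each tuple in the resulting list identifies one experiment to run and one row in the subsequent variance analysis.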


