14.3 Single Class Models with LD Devices


Chapter 12 presents the three main relationships (repeated below) needed to solve a queuing network using MVA. They assume that service times are load-independent (LI).

Equation 14.3.2

R'_i(n) = D_i \left[ 1 + \bar{n}_i(n-1) \right]


Equation 14.3.3

X_0(n) = \frac{n}{Z + \sum_{i=1}^{K} R'_i(n)}

where Z is the total think time (Z = 0 if there are no delay resources) and K is the number of devices.


Equation 14.3.4

\bar{n}_i(n) = X_0(n) \, R'_i(n)


For load-dependent (LD) devices, the service rate, and consequently the response time, is a function of the distribution of customers at the device. Therefore, Eqs. (14.3.2) and (14.3.4) need to be adjusted. Instead of simply the mean queue length, the complete queue length distribution at these devices is required. Let

  • Pi(j | n) = probability that device i has j customers, given that there are n customers in the QN.

  • μi(j) = service rate of device i when there are j customers at the device.

An arriving customer who finds j - 1 customers at device i will have a response time equal to j/μi(j). The probability that an arrival finds j - 1 customers at device i, given that there are n customers in the queuing network, is Pi(j - 1 | n - 1), due to the Arrival Theorem [19]. The average residence time is computed as the product of the average number of visits to the device and the average response time per visit. That is,

Equation 14.3.5

R'_i(n) = V_i \sum_{j=1}^{n} \frac{j}{\mu_i(j)} \, P_i(j-1 \mid n-1)


The mean queue length at node i is given by

Equation 14.3.6

\bar{n}_i(n) = \sum_{j=1}^{n} j \, P_i(j \mid n)


What remains is the computation of Pi(j | n). By definition, Pi(0 | 0) = 1. If one applies the principle of flow equilibrium to the queuing network states [9], the probability of having j customers at device i for a queuing network with n customers can be expressed in terms of the probability of having j - 1 customers at device i when there is one less customer in the queuing network. Hence,

Equation 14.3.7

P_i(j \mid n) = \frac{V_i \, X_0(n)}{\mu_i(1)\, \alpha_i(j)} \, P_i(j-1 \mid n-1), \quad j = 1, 2, \ldots, n

P_i(0 \mid n) = 1 - \sum_{j=1}^{n} P_i(j \mid n)


where αi(j) is a service-rate multiplier [12] defined as μi(j)/μi(1). From Eq. (14.3.5) and the definition of the service-rate multipliers it follows that

Equation 14.3.8

R'_i(n) = \frac{V_i}{\mu_i(1)} \sum_{j=1}^{n} \frac{j}{\alpha_i(j)} \, P_i(j-1 \mid n-1)


The service time Si when there is no congestion at device i is equal to 1/μi(1). Since Di = Vi Si, it follows that

Equation 14.3.9

R'_i(n) = D_i \sum_{j=1}^{n} \frac{j}{\alpha_i(j)} \, P_i(j-1 \mid n-1)


The MVA algorithm for load-dependent devices is given in Figure 14.5.
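Because Figure 14.5 is reproduced only as an image, the following Python sketch shows one way the recursion of Eqs. (14.3.2)-(14.3.9) can be implemented; it is a sketch under stated assumptions, not the book's figure. The names ld_mva, demands, alphas, and think_time are illustrative, and the probability update uses Pi(j | n) = [X0(n) Di / αi(j)] Pi(j - 1 | n - 1), which follows from Eq. (14.3.7) since Di = Vi/μi(1).

```python
# Minimal sketch of exact single-class MVA with load-dependent (LD) devices.
# Equation numbers refer to Section 14.3; names are illustrative, not from the book.

def ld_mva(demands, alphas=None, n_customers=1, think_time=0.0):
    """demands     -- service demands D_i = V_i * S_i, in seconds
       alphas      -- {device index: alpha(j)} service-rate multipliers of the
                      LD devices; devices not listed are load-independent (LI)
       n_customers -- customer population N
       think_time  -- think time Z (0 if there are no delay resources)
       Returns (X0, R0): throughput and response time at population N."""
    alphas = alphas or {}
    K = len(demands)
    nbar = [0.0] * K                     # mean queue lengths of the LI devices
    p = {i: [1.0] for i in alphas}       # P_i(j | n); P_i(0 | 0) = 1 by definition

    for n in range(1, n_customers + 1):
        R = [0.0] * K
        for i, D in enumerate(demands):
            if i in alphas:
                # Eq. (14.3.9): R'_i(n) = D_i * sum_j (j / alpha_i(j)) * P_i(j-1 | n-1)
                R[i] = D * sum(j / alphas[i](j) * p[i][j - 1] for j in range(1, n + 1))
            else:
                # LI device, Eq. (14.3.2): R'_i(n) = D_i * (1 + nbar_i(n-1))
                R[i] = D * (1.0 + nbar[i])

        X0 = n / (think_time + sum(R))   # Eq. (14.3.3): system throughput
        R0 = sum(R)                      # system response time (sum of residence times)

        for i, D in enumerate(demands):
            if i in alphas:
                # Eq. (14.3.7): queue-length distribution of the LD device
                new_p = [0.0] * (n + 1)
                for j in range(1, n + 1):
                    new_p[j] = X0 * D / alphas[i](j) * p[i][j - 1]
                new_p[0] = 1.0 - sum(new_p[1:])
                p[i] = new_p
            else:
                nbar[i] = X0 * R[i]      # Eq. (14.3.4): mean queue length, LI device
    return X0, R0
```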

Example 14.1.

A Web server has two processors and one disk. Benchmarks indicate that a two-processor server is 1.8 times faster than the single-processor model for this type of workload. Thus,

\alpha(j) = \frac{\mu(j)}{\mu(1)} = \begin{cases} 1, & j = 1 \\ 1.8, & j \geq 2 \end{cases}


Let the service demands of an HTTP request at the disk and processor be 0.06 sec and 0.1 sec, respectively. For simplicity, let the maximum number of simultaneous connections be 3. Using the algorithm of Figure 14.5 yields the results shown in Table 14.2. The table also shows the conditional queue-length probabilities at the load-dependent device that models the two-processor system. The average response time is 0.24 sec and the average throughput is 12.44 requests/sec.

Table 14.2. Detailed Results of the Load Dependent MVA

graphics/390fig01.gif
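As a plausibility check on the numbers quoted above, the ld_mva sketch given after the description of Figure 14.5 can be fed this example's parameters (function and parameter names are those of that sketch, not the book's):

```python
# Example 14.1 in the ld_mva sketch: LD device 0 is the 2-processor CPU
# (alpha(1) = 1, alpha(j >= 2) = 1.8), device 1 is the LI disk.
def alpha_cpu(j):
    return 1.0 if j == 1 else 1.8

X0, R0 = ld_mva(demands=[0.1, 0.06], alphas={0: alpha_cpu}, n_customers=3)
print(X0, R0)   # about 12.44 requests/sec and 0.24 sec, matching the text
```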

Example 14.2.

Consider the financial application described in the motivating example. We want to calculate the database server throughput and response time. Suppose that the C/S system has 100 workstations. However, the analysts expect that, during any given period of time, only 30% of the workstations are active on average. Thus, the number of active client workstations is 30. First, an expression is required for the service rate of the network as a function of the number of client workstations using it.

Let μnet(m) denote the LAN service rate, measured in SQL requests per second, as a function of the number of client workstations m.

Equation 14.3.10

graphics/14equ310.gif


where μp(n) is the network throughput, in packets/sec, for a network with n stations. Note that when m = 1, even though there are two stations in the network (the server and the only client), the server transmits only when requested by the client. No collisions occur. For that reason, μp(1) is used in the expression for μnet(1). An expression for μp(n) for an Ethernet network is derived in [12] and is given by

Equation 14.3.11

graphics/14equ311.gif


where C is the average number of collisions, given by C = (1 - A)/A, and A is the probability of a successful transmission, given by A = (1 - 1/n)^{n-1}.
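The collision factor defined above is straightforward to compute; a small sketch (the function name is illustrative):

```python
# Average number of collisions per successful transmission on an Ethernet
# segment with n contending stations, as defined in the text above.
def collision_factor(n):
    A = (1.0 - 1.0 / n) ** (n - 1)   # probability of a successful transmission
    return (1.0 - A) / A             # C = (1 - A) / A; C = 0 when n = 1
```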

Using the parameters in Table 14.1, one can obtain the values of the throughput μnet(m) when there are m client workstations. Using these values of μnet(m), for m = 1, 2, ..., 30, the MVA model with LD devices given in Figure 14.5 yields a throughput of 82.17 SQL requests/sec and a response time of 0.265 sec. The network time is 0.00095 sec, which represents only 0.36% of the total response time. In this case, the LAN can effectively be ignored, since at a network speed of 100 Mbps the collision and packet transmission times become negligible.

Figure 14.5. Exact single-class MVA algorithm with LD devices.

graphics/14fig05.gif

Example 14.3.

Consider a database server with eight processors and two disk subsystems. The eight processors allow the server to run concurrent server processes. The use of an OLTP benchmark provides the following scaling factor function over a one-processor configuration:

graphics/391fig01.gif


The system is used for processor-intensive transactions, which explains the need for an 8-processor server. Let the service demands of a transaction at the two disk subsystems be 0.008 and 0.011 sec, respectively. The processor service demand is 0.033 sec. What is the impact of increasing the number of concurrent processes at the multiprocessor database server?

The database server is represented by a simplified model composed of three devices: one load-independent (LI) device represents disk subsystem 1, another LI device represents disk subsystem 2, and a load-dependent (LD) device models the eight-processor server, as shown in Figure 14.6. Consider two scenarios: in the first, the system executes 20 processes simultaneously; in the second, the database runs 30 processes concurrently. The results obtained with the algorithm of Figure 14.5 are shown in Table 14.3.
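Because the benchmark scaling-factor function appears above only as an image, the multiplier values below are hypothetical placeholders; the snippet merely illustrates how the Figure 14.6 model maps onto the ld_mva sketch given earlier.

```python
# Figure 14.6 model in the ld_mva sketch: one LD device (the 8-processor complex)
# and two LI disks. The `scaling` values are HYPOTHETICAL placeholders; the real
# multipliers come from the OLTP benchmark figure, which is not reproduced here.
scaling = {1: 1.0, 2: 1.9, 3: 2.8, 4: 3.6, 5: 4.3, 6: 4.9, 7: 5.4, 8: 5.8}

def alpha_cpu(j):
    return scaling[min(j, 8)]      # the rate stops growing beyond 8 processes in service

demands = [0.033, 0.008, 0.011]    # processor, disk 1, disk 2 (sec), from the text
for n in (20, 30):                 # the two concurrency scenarios of this example
    X0, R0 = ld_mva(demands, alphas={0: alpha_cpu}, n_customers=n)
```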

Table 14.3. Results for Example 14.3

 n  | R'processor (sec) | R'disk1 (sec) | R'disk2 (sec) | R0 (sec) | X0 (tps) | Uprocessor (%) | Udisk1 (%) | Udisk2 (%)
----+-------------------+---------------+---------------+----------+----------+----------------+------------+-----------
 20 |       0.067       |     0.024     |     0.135     |  0.227   |  87.96   |     36.28      |   70.36    |   96.75
 30 |       0.069       |     0.026     |     0.239     |  0.334   |  89.73   |     37.01      |   71.78    |   98.70

The model's results indicate that the eight processors are capable of handling the processor-intensive transactions (i.e., Uprocessor < 40%). However, disk 2 is the bottleneck of the database server (i.e., Udisk2 > 95%) and limits the system throughput.
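The utilizations in Table 14.3 can be cross-checked with the Utilization Law, Ui = X0 Di, using the throughputs from the table and the demands from the text; the processor column appears to report per-processor utilization (the total processor demand spread over the eight processors), which is an interpretation rather than something stated explicitly above.

```python
# Utilization Law cross-check of Table 14.3.
for X0 in (87.96, 89.73):                # throughputs for 20 and 30 processes
    u_proc  = 100 * X0 * 0.033 / 8       # per-processor utilization -> about 36.3% and 37.0%
    u_disk1 = 100 * X0 * 0.008           # -> about 70.4% and 71.8%
    u_disk2 = 100 * X0 * 0.011           # -> about 96.8% and 98.7% (the bottleneck)
```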

Figure 14.6. Multiprocessor database server model.

graphics/14fig06.gif


