Chapter 12 presents the three main relationships (repeated below) needed to solve a queuing network using MVA. They assume that service times are load-independent (LI):

R'_i(n) = D_i × [1 + n̄_i(n − 1)]    (14.3.2)

X_0(n) = n / Σ_{i=1}^{K} R'_i(n)    (14.3.3)

n̄_i(n) = X_0(n) × R'_i(n)    (14.3.4)

where R'_i(n) is the residence time at device i, X_0(n) is the system throughput, n̄_i(n) is the mean queue length at device i, and D_i is the service demand at device i, for a queuing network with n customers and K devices.
For load-dependent (LD) devices, the service rate, and consequently the response time, is a function of the distribution of customers at the device. Therefore, Eqs. (14.3.2) and (14.3.4) need to be adjusted. Instead of simply the mean queue length, the complete queue-length distribution at these devices is required. Let P_i(j | n) denote the probability that there are j customers at device i given that there are n customers in the queuing network, and let μ_i(j) denote the service rate of device i when it has j customers.
An arriving customer who finds j − 1 customers at device i will have a response time equal to j/μ_i(j). The probability that an arrival finds j − 1 customers at device i, given that there are n customers in the queuing network, is P_i(j − 1 | n − 1), due to the Arrival Theorem [19]. The average residence time is computed as the product of the average number of visits to the device and the average response time per visit. That is,

R'_i(n) = V_i × Σ_{j=1}^{n} [j / μ_i(j)] × P_i(j − 1 | n − 1)    (14.3.5)
The mean queue length at node i is given by

n̄_i(n) = Σ_{j=1}^{n} j × P_i(j | n)    (14.3.6)
What remains is the computation of P_i(j | n). By definition, P_i(0 | 0) = 1. If one applies the principle of flow equilibrium to the queuing network states [9], the probability of having j customers at device i for a queuing network with n customers can be expressed in terms of the probability of having j − 1 customers at device i when there is one less customer in the queuing network. Hence,

P_i(j | n) = [V_i × X_0(n) / (μ_i(1) × α_i(j))] × P_i(j − 1 | n − 1),  j = 1, ..., n    (14.3.7)

P_i(0 | n) = 1 − Σ_{j=1}^{n} P_i(j | n)

where α_i(j) is a service-rate multiplier [12] defined as μ_i(j)/μ_i(1). From Eq. (14.3.5) and the definition of the service-rate multipliers it follows that

R'_i(n) = [V_i / μ_i(1)] × Σ_{j=1}^{n} [j / α_i(j)] × P_i(j − 1 | n − 1)    (14.3.8)
The service time, S_i, when there is no congestion at device i is equal to 1/μ_i(1). Since D_i = V_i × S_i, it follows that

R'_i(n) = D_i × Σ_{j=1}^{n} [j / α_i(j)] × P_i(j − 1 | n − 1)    (14.3.9)
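Equations (14.3.3), (14.3.6), (14.3.7), and (14.3.9) define the recursion that the exact LD algorithm implements. Below is a minimal sketch, assuming a single LD device (device 0) plus LI devices; the function name mva_ld is ours, and this is an illustration of the recursion rather than the book's printed algorithm.

```python
def mva_ld(demands, alpha, N):
    """Exact single-class MVA with one load-dependent device.

    demands : service demands D_i in seconds; demands[0] is the LD device.
    alpha   : alpha[j-1] = α(j) = μ(j)/μ(1), the service-rate multipliers
              of the LD device, for j = 1, ..., N.
    Returns (X0, R): system throughput and response time for N customers.
    """
    K = len(demands)
    nbar = [0.0] * K      # mean queue lengths at the LI devices
    P = [1.0]             # queue-length distribution at the LD device: P(0|0) = 1
    for n in range(1, N + 1):
        R = [0.0] * K
        # Eq. (14.3.9): LD residence time from the queue-length distribution
        R[0] = demands[0] * sum(j / alpha[j - 1] * P[j - 1]
                                for j in range(1, n + 1))
        # Eq. (14.3.2): LI residence times
        for i in range(1, K):
            R[i] = demands[i] * (1.0 + nbar[i])
        X0 = n / sum(R)                      # Eq. (14.3.3)
        # Eq. (14.3.7): update P(j | n) at the LD device (note V*X0/mu(1) = X0*D)
        P = [0.0] + [X0 * demands[0] / alpha[j - 1] * P[j - 1]
                     for j in range(1, n + 1)]
        P[0] = 1.0 - sum(P[1:])
        for i in range(1, K):                # Eq. (14.3.4)
            nbar[i] = X0 * R[i]
    return X0, N / X0
```

A useful sanity check: with all multipliers equal to one (α(j) = 1 for every j), the LD recursion collapses to the load-independent MVA of Chapter 12, since Σ_{j} j × P(j − 1 | n − 1) = 1 + n̄(n − 1).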
The MVA algorithm for load-dependent devices is given in Figure 14.5.

Example 14.1. A Web server has two processors and one disk. Benchmarks indicate that a two-processor server is 1.8 times faster than the single-processor model for this type of workload. Thus, the service-rate multipliers of the LD device that represents the two processors are α(1) = 1 and α(j) = 1.8 for j ≥ 2.
Let the service demands of an HTTP request at the disk and processor be 0.06 sec and 0.1 sec, respectively. For simplicity, let the maximum number of simultaneous connections be 3. Using the algorithm of Figure 14.5 yields the results shown in Table 14.2. The table also shows the conditional probabilities at the load-dependent device that models the two-processor system. The average response time is 0.24 sec and the average throughput is 12.44 requests/sec.
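These numbers can be reproduced with a short sketch of the LD recursion, assuming the two-processor device has multipliers α(1) = 1 and α(j) = 1.8 for j ≥ 2, as implied by the 1.8× benchmark speedup (the function name mva_ld is ours):

```python
def mva_ld(demands, alpha, N):
    """Single-class MVA; demands[0] is the LD device, alpha[j-1] = mu(j)/mu(1)."""
    K = len(demands)
    nbar = [0.0] * K      # LI mean queue lengths
    P = [1.0]             # P(0|0) = 1 at the LD device
    for n in range(1, N + 1):
        R = [0.0] * K
        R[0] = demands[0] * sum(j / alpha[j - 1] * P[j - 1]
                                for j in range(1, n + 1))    # Eq. (14.3.9)
        for i in range(1, K):
            R[i] = demands[i] * (1.0 + nbar[i])              # Eq. (14.3.2)
        X0 = n / sum(R)                                      # Eq. (14.3.3)
        P = [0.0] + [X0 * demands[0] / alpha[j - 1] * P[j - 1]
                     for j in range(1, n + 1)]               # Eq. (14.3.7)
        P[0] = 1.0 - sum(P[1:])
        for i in range(1, K):
            nbar[i] = X0 * R[i]                              # Eq. (14.3.4)
    return X0, N / X0

# Example 14.1: LD processor (D = 0.1 s), LI disk (D = 0.06 s), 3 connections
X0, R = mva_ld([0.1, 0.06], [1.0, 1.8, 1.8], 3)
print(round(X0, 2), round(R, 2))   # → 12.44 0.24
```

The output matches the 12.44 requests/sec and 0.24 sec reported in Table 14.2.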
Example 14.2. Consider the financial application described in the motivating example. We want to calculate the database server throughput and response time. Suppose that the client/server system has 100 workstations. However, the analysts expect that, during any given period of time, only an average of 30% of the workstations are active. Thus, the number of active client workstations is 30. First, an expression is required for the service rate of the network as a function of the number of client workstations using it. Let μ_net(m) denote the LAN service rate, measured in SQL requests per second, as a function of the number m of active client workstations.
Assuming that each SQL request generates an average of N_p packets,

μ_net(m) = μ_p(1) / N_p  if m = 1,  and  μ_net(m) = μ_p(m + 1) / N_p  if m > 1    (14.3.10)
where μ_p(n) is the network throughput, in packets/sec, for a network with n stations. Note that when m = 1, even though there are two stations in the network (the server and the only client), the server transmits only when requested by the client; no collisions occur. For that reason, μ_p(1) is used in the expression for μ_net(1). An expression for μ_p(n) for an Ethernet network is derived in [12] and is given by
μ_p(n) = 1 / (L̄/B + C × S)    (14.3.11)

with L̄ the average packet length in bits, B the network bandwidth in bits per second, and S the slot duration (twice the end-to-end propagation delay),
where C is the average number of collisions per packet, given by C = (1 − A)/A. The parameter A is the probability of a successful transmission and is given by A = (1 − 1/n)^(n − 1). Using the parameters in Table 14.1, one can obtain the values of the service rate μ_net(m) when there are m client workstations. Using these values of μ_net(m), for m = 1, 2, ..., 30, the MVA model with LD devices given in Figure 14.5 yields a throughput of 82.17 SQL requests/sec and a response time equal to 0.265 sec. The network time is equal to 0.00095 sec, which represents only 0.36% of the total response time. In this case, the LAN can effectively be ignored, since at a network speed of 100 Mbps collisions and packet transmission times become negligible.

Figure 14.5. Exact single-class MVA algorithm with LD devices.
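Equation (14.3.11) and the definitions of A and C are straightforward to evaluate numerically. In the sketch below, the packet length, bandwidth, and slot time are illustrative stand-ins for the Table 14.1 parameters, which are not reproduced here.

```python
def ethernet_packet_rate(n, pkt_bits, bandwidth_bps, slot_sec):
    """Ethernet throughput mu_p(n) in packets/sec for n stations, Eq. (14.3.11)."""
    if n < 2:
        c = 0.0                          # a single sender never collides
    else:
        a = (1.0 - 1.0 / n) ** (n - 1)   # A: probability of a successful transmission
        c = (1.0 - a) / a                # C: average number of collisions per packet
    # each packet costs its transmission time L/B plus C collision slots
    return 1.0 / (pkt_bits / bandwidth_bps + c * slot_sec)

# Hypothetical parameters: 1500-byte packets, 100 Mbps, 51.2-microsecond slots
for n in (2, 10, 30):
    print(n, round(ethernet_packet_rate(n, 1500 * 8, 100e6, 51.2e-6)))
```

As the number of stations n grows, A falls toward e^(−1), C grows, and μ_p(n) decreases, which is the load-dependent behavior the LAN device models.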
Example 14.3. Consider a database server with eight processors and two disk subsystems. The eight processors allow the server to run concurrent server processes. An OLTP benchmark provides the following scaling-factor function relative to a one-processor configuration:
The system is used for processor-intensive transactions, which explains the need for an eight-processor server. Let the service demands of a transaction at the two disk subsystems be 0.008 and 0.011 sec, respectively. The processor service demand is 0.033 sec. What is the impact of increasing the number of concurrent processes at the multiprocessor database server? The database server is represented by a simplified model composed of three devices: one load-independent (LI) device represents disk subsystem 1, another LI device represents disk subsystem 2, and a load-dependent (LD) device models the eight-processor server, as shown in Figure 14.6. Consider two scenarios: in the first, the system executes 20 processes simultaneously; in the second, the database runs 30 processes concurrently. The results obtained with the algorithm of Figure 14.5 are shown in Table 14.3.
The model's results indicate that eight processors are capable of handling the processor-intensive transactions (i.e., U_processor < 40%). However, disk 2 is the bottleneck (i.e., U_disk2 > 95%) of the database server and limits the system throughput.

Figure 14.6. Multiprocessor database server model.
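The benchmark scaling factors for the eight-processor server are given in the original table and are not reproduced above, so the exact numbers of Table 14.3 cannot be regenerated here. Purely to illustrate how the two scenarios are compared, the sketch below plugs a hypothetical linear scaling function α(j) = min(j, 8) into the same LD-MVA recursion (the function name and the α values are ours, not the benchmark's):

```python
def mva_ld(demands, alpha, N):
    """Single-class MVA; demands[0] is the LD device, alpha[j-1] = mu(j)/mu(1)."""
    K = len(demands)
    nbar = [0.0] * K
    P = [1.0]                                                # P(0|0) = 1
    for n in range(1, N + 1):
        R = [0.0] * K
        R[0] = demands[0] * sum(j / alpha[j - 1] * P[j - 1]
                                for j in range(1, n + 1))    # Eq. (14.3.9)
        for i in range(1, K):
            R[i] = demands[i] * (1.0 + nbar[i])              # Eq. (14.3.2)
        X0 = n / sum(R)                                      # Eq. (14.3.3)
        P = [0.0] + [X0 * demands[0] / alpha[j - 1] * P[j - 1]
                     for j in range(1, n + 1)]               # Eq. (14.3.7)
        P[0] = 1.0 - sum(P[1:])
        for i in range(1, K):
            nbar[i] = X0 * R[i]                              # Eq. (14.3.4)
    return X0, N / X0

# Compare the two scenarios: 20 vs. 30 concurrent processes
for n_procs in (20, 30):
    alpha = [min(j, 8) for j in range(1, n_procs + 1)]       # hypothetical scaling
    X0, R = mva_ld([0.033, 0.008, 0.011], alpha, n_procs)
    print(n_procs, round(X0, 1), round(X0 * 0.011, 2))       # throughput, U_disk2
```

Whatever the real scaling function, the qualitative conclusion is the same: once the processors scale, disk 2 has the largest effective demand, so the throughput is capped at no more than 1/0.011 ≈ 91 transactions/sec and disk 2 saturates first.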
