Chapter 12 presents the three main relationships (repeated below) needed to solve a queuing network using MVA. They assume that service times are load-independent (LI).
For load-dependent (LD) devices, the service rate, and consequently the response time, is a function of the distribution of customers at the device. Therefore, Eqs. (14.3.2) and (14.3.4) need to be adjusted: instead of just the mean queue length, the complete queue length distribution at these devices is required. Let Pi(j | n) denote the probability that there are j customers at device i when the total number of customers in the queuing network is n.
An arriving customer who finds j − 1 customers at device i will have a response time equal to j/mi(j). By the Arrival Theorem, the probability that an arrival finds j − 1 customers at device i, given that there are n customers in the queuing network, is Pi(j − 1 | n − 1). The average residence time is computed as the product of the average number of visits to the device and the average response time per visit. That is,

R'i(n) = Vi Σ_{j=1}^{n} [j/mi(j)] Pi(j − 1 | n − 1)    (14.3.5)
The mean queue length at node i is given by

n̄i(n) = Σ_{j=1}^{n} j Pi(j | n)
What remains is the computation of Pi(j | n). By definition, Pi(0 | 0) = 1. Applying the principle of flow equilibrium to the queuing network states, the probability of having j customers at device i in a network with n customers can be expressed in terms of the probability of having j − 1 customers at device i when there is one less customer in the network. Hence, letting X0(n) denote the network throughput with n customers,

Pi(j | n) = [X0(n) Vi/mi(j)] Pi(j − 1 | n − 1) = [X0(n) Vi/(ai(j) mi(1))] Pi(j − 1 | n − 1),  j = 1, ..., n,

with Pi(0 | n) = 1 − Σ_{j=1}^{n} Pi(j | n),
where ai(j) is a service-rate multiplier, defined as ai(j) = mi(j)/mi(1). From Eq. (14.3.5) and the definition of the service-rate multipliers it follows that

R'i(n) = [Vi/mi(1)] Σ_{j=1}^{n} [j/ai(j)] Pi(j − 1 | n − 1)
The service time Si when there is no congestion at device i is equal to 1/mi(1). Since Di = Vi Si, it follows that

R'i(n) = Di Σ_{j=1}^{n} [j/ai(j)] Pi(j − 1 | n − 1)
The MVA algorithm for load-dependent devices is given in Figure 14.5.
A Web server has two processors and one disk. Benchmarks indicate that a two-processor server is 1.8 times faster than the single-processor model for this type of workload. Thus, the service-rate multipliers of the LD device that represents the two processors are a(1) = 1 and a(j) = 1.8 for j ≥ 2.
Let the service demand of an HTTP request at the disk and processor be 0.06 sec and 0.1 sec, respectively. For simplicity, let the maximum number of simultaneous connections be 3. Using the algorithm of Figure 14.5 yields the results shown in Table 14.2. The table also shows conditional probabilities at the load-dependent device that models the 2-processor system. The average response time is 0.24 sec and the average throughput is 12.44 requests/sec.
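The recurrences above can be sketched in Python. The code below is a minimal single-class implementation in the spirit of Figure 14.5 (function and variable names are ours, and a population of at least one customer is assumed): LI devices use the usual MVA residence-time equation, while LD devices carry the queue length distribution recursion.

```python
# Single-class exact MVA with load-dependent (LD) devices.
# LI devices are described by their service demands D_i; LD devices also need
# their service-rate multipliers a_i(j) = m_i(j)/m_i(1).

def mva_ld(li_demands, ld_demands, ld_alphas, n_max):
    """li_demands: demands of load-independent devices.
    ld_demands: demands D_i = V_i / m_i(1) of load-dependent devices.
    ld_alphas: one function a_i(j) per LD device.
    Returns (throughput, response time) for population n_max (n_max >= 1)."""
    nbar = [0.0] * len(li_demands)        # mean queue lengths at LI devices
    P = [[1.0] for _ in ld_demands]       # P[i][j] = Pi(j | n); Pi(0 | 0) = 1
    for n in range(1, n_max + 1):
        # Residence times: R'_i(n) = D_i (1 + nbar_i(n-1)) at LI devices,
        # R'_i(n) = D_i * sum_j (j / a_i(j)) Pi(j-1 | n-1) at LD devices.
        r_li = [d * (1.0 + q) for d, q in zip(li_demands, nbar)]
        r_ld = [d * sum((j / a(j)) * P[i][j - 1] for j in range(1, n + 1))
                for i, (d, a) in enumerate(zip(ld_demands, ld_alphas))]
        x0 = n / (sum(r_li) + sum(r_ld))  # throughput X0(n)
        # Update the queue length distribution at each LD device.
        for i, (d, a) in enumerate(zip(ld_demands, ld_alphas)):
            newP = [0.0] * (n + 1)
            for j in range(1, n + 1):
                newP[j] = (d / a(j)) * x0 * P[i][j - 1]
            newP[0] = 1.0 - sum(newP[1:])
            P[i] = newP
        nbar = [x0 * r for r in r_li]     # LI mean queue lengths
    return x0, n_max / x0

# Web server of the example: one LI disk (D = 0.06 sec) and a two-processor
# LD device (D = 0.1 sec, a(1) = 1, a(j) = 1.8 for j >= 2), 3 connections:
x, r = mva_ld([0.06], [0.1], [lambda j: 1.0 if j == 1 else 1.8], 3)
# x ~ 12.44 requests/sec and r ~ 0.24 sec, matching Table 14.2
```

The response time follows from Little's Law, R(n) = n/X0(n), since the model has no think time.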
Consider the financial application described in the motivating example. We want to calculate the database server throughput and response time. Suppose that the CS system has 100 workstations. However, the analysts expect that, during any given period of time, only 30% of the workstations are active on average. Thus, the average number of active client workstations is 0.30 × 100 = 30. First, an expression is required for the service rate of the network as a function of the number of client workstations using the network.
Let mnet(m) denote the LAN service rate, measured in SQL requests per second, as a function of the number of client workstations m,
where mp(n) is the network throughput, in packets/sec, for a network with n stations. Note that when m = 1, even though there are two stations in the network (the server and the single client), the server transmits only when requested by the client, so no collisions occur. For that reason, mp(1) is used in the expression for mnet(1). An expression for mp(n) for an Ethernet network is given by
where C is the average number of collisions per transmission, given by C = (1 − A)/A, and A is the probability of a successful transmission, given by A = (1 − 1/n)^(n−1).
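These two quantities are straightforward to compute. A minimal sketch (function names are ours; the full mp(n) expression additionally needs the network parameters of Table 14.1, which are not repeated here):

```python
# Probability of a successful transmission and the resulting mean number of
# collisions for an Ethernet-like network with n stations.

def success_probability(n):
    # A = (1 - 1/n)^(n-1); a lone station (n = 1) never collides, so A = 1
    return (1.0 - 1.0 / n) ** (n - 1)

def mean_collisions(n):
    # C = (1 - A) / A
    a = success_probability(n)
    return (1.0 - a) / a

# With n = 2 stations: A = (1/2)^1 = 0.5, so C = 1 collision on average.
```

As n grows, A approaches 1/e, so C approaches e − 1, roughly 1.72 collisions per successful transmission.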
Using the parameters in Table 14.1, one can obtain the values of the throughput mnet(m) when there are m client workstations. Using these values of mnet(m) for m = 1, 2, ..., 30, the MVA model with LD devices of Figure 14.5 yields a throughput of 82.17 SQL requests/sec and a response time of 0.265 sec. The network time is 0.00095 sec, which represents only 0.36% of the total response time. In this case the LAN can be effectively ignored: at a network speed of 100 Mbps, collisions and packet transmission times become negligible.
Figure 14.5. Exact single-class MVA algorithm with LD devices.
Consider a database server with eight processors and two disk subsystems. The eight processors allow the server to run concurrent server processes. The use of an OLTP benchmark provides the following scaling factor function over a one-processor configuration:
The system is used for processor-intensive transactions, which explains the need for an 8-processor server. Let the service demands of a transaction at the two disk subsystems be 0.008 sec and 0.011 sec, respectively; the processor service demand is 0.033 sec. What is the impact of increasing the number of concurrent processes at the multiprocessor database server?
The database server is represented by a simplified model composed of three devices: one load-independent (LI) device for disk subsystem 1, another LI device for disk subsystem 2, and one load-dependent (LD) device for the eight-processor server, as shown in Figure 14.6. Consider two scenarios: in the first, the system executes 20 processes simultaneously; in the second, the database runs 30 processes concurrently. The results obtained with the algorithm of Figure 14.5 are shown in Table 14.3.
The model's results indicate that the eight processors are capable of handling the processor-intensive transactions (i.e., Uprocessor < 40%). However, disk subsystem 2 is the bottleneck of the database server (i.e., Udisk2 > 95%) and limits the system throughput.
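A quick way to confirm the disk bottleneck is the throughput bound imposed by the load-independent devices, X0(n) ≤ 1/max(Di). A minimal sketch (the LD processor device is excluded here because its limiting rate depends on the benchmark scaling-factor function, which is not reproduced):

```python
# Throughput bound from the load-independent disks: X0(n) <= 1 / max(D_disk).
# The LD processor device is left out: its asymptotic limit depends on the
# OLTP benchmark's scaling-factor function.
disk_demands = {"disk1": 0.008, "disk2": 0.011}  # sec per transaction
bottleneck = max(disk_demands, key=disk_demands.get)
x_max = 1.0 / disk_demands[bottleneck]
print(bottleneck, round(x_max, 1))  # disk2 90.9
```

No matter how many concurrent processes the multiprocessor runs, throughput cannot exceed about 90.9 transactions/sec, which is why disk 2 saturates first.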
Figure 14.6. Multiprocessor database server model.