Test Environment


Many things, of course, happened to move the test forward to this point. Let's review some of the activity at TriMont since the initial case study.

Hardware Configuration

The TriMont team reviewed your initial network analysis and agreed with the assumptions you made. The web site manager became somewhat alarmed at the traffic volumes expected at peak loading (potentially 1,400 requests/second in the peak hour). The TriMont team decided to use a caching proxy server for their static content.

TriMont representatives also spoke with their web application server vendor and came up with some rough sizing estimates for their web site. The site receives, at peak, 87.5 dynamic requests/second. The TriMont team wants to give the web site a margin of safety for unexpected traffic (for more details on headroom estimates, see Chapter 6), so they actually use 120 dynamic requests/second as their planning number. This provides a little over 25% headroom for the web site, which they consider sufficient.

 87.5 requests/sec / 120 requests/sec = 73% utilization 

The application server vendor publishes capacity planning data for its customers. TriMont estimates that an eight-CPU UNIX server should support 40 requests per second, given their particular application. Using this estimate, the web site requires three eight-CPU UNIX servers to support 120 requests per second. To maintain site capacity if one server fails, the team plans for four eight-CPU servers.
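These two checks (peak utilization against the planning number, and server count including the failover spare) are easy to verify mechanically. Here is a minimal Java sketch of that arithmetic; the class and variable names are our own, and the values come straight from the case study:

```java
public class SizingCheck {
    public static void main(String[] args) {
        double peakDynamicRate = 87.5;  // dynamic requests/sec at peak
        double planningRate = 120.0;    // planning number with headroom
        double perServerRate = 40.0;    // vendor estimate per eight-CPU server

        // Peak load as a fraction of the planning number (~73%)
        double utilization = peakDynamicRate / planningRate;
        System.out.printf("Utilization: %.0f%%%n", utilization * 100);

        // Servers needed for the planning rate, plus one spare for failover
        int servers = (int) Math.ceil(planningRate / perServerRate) + 1;
        System.out.println("Servers (including failover spare): " + servers);
    }
}
```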

Input Data

Tables 14.1 and 14.2 show the calculations using the Hardware Sizing worksheet from Appendix A.

Table 14.1. Hardware Sizing Worksheet, Lines 1–6
Input Data | Source | Your Data
1. Concurrent users | Appendix A: Capacity Sizing worksheet | 7,350 users
2. Throughput: page rate | Appendix A: Capacity Sizing worksheet | 87.5 pages/sec
3. Throughput: requests | Appendix A: Capacity Sizing worksheet | 1,400 requests/sec (87.5 dynamic)
4. Response time | Marketing | 5 sec
5. Headroom factor | Marketing/planning | 25%
6. Estimated throughput supported on your selected hardware | Hardware/application server vendor | 40 requests/sec (dynamic)

Calculating Hardware Requirement Estimate (Pre-Test)

Table 14.2. Hardware Sizing Worksheet, Lines 7–10
Calculated Data | Equation | Total
7. Concurrent users (with headroom) | Concurrent users + Concurrent users / (100 / Headroom - 1) = Line 1 + Line 1 / (100 / Line 5 - 1) | 7,350 + 7,350/3 = 9,800 users; round to 10,000 users
8. Throughput requirement (with headroom): page rate | Throughput + Throughput / (100 / Headroom - 1) = Line 2 + Line 2 / (100 / Line 5 - 1) | 87.5 + 87.5/3 ≈ 117 pages/sec; round to 120 pages/sec
9. Throughput requirement (with headroom): requests | Throughput + Throughput / (100 / Headroom - 1) = Line 3 + Line 3 / (100 / Line 5 - 1) | 1,400 + 1,400/3 ≈ 1,867 requests/sec
10. Application servers required | Throughput requirement / Estimated throughput for your hardware + 1 (failover) = Line 8 / Line 6 + 1 | 120 / 40 + 1 = 4 servers
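The headroom adjustment in lines 7 through 9 reduces to a single formula. A minimal Java sketch (the method name is our own) reproduces the totals above:

```java
public class HeadroomWorksheet {
    // Worksheet lines 7-9: value + value / (100 / headroomPercent - 1)
    static double withHeadroom(double value, double headroomPercent) {
        return value + value / (100.0 / headroomPercent - 1.0);
    }

    public static void main(String[] args) {
        double headroom = 25.0; // line 5: headroom factor (percent)

        System.out.println(withHeadroom(7_350, headroom)); // line 7: 9,800 users
        System.out.println(withHeadroom(87.5, headroom));  // line 8: ~117 pages/sec
        System.out.println(withHeadroom(1_400, headroom)); // line 9: ~1,867 requests/sec
    }
}
```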

TriMont wants a basic DMZ configuration, as discussed in Chapter 3. They put together the list of equipment shown in Table 14.3.

Table 14.3. Production Equipment List
Equipment | Quantity
Router | 1
Caching proxy server | 2
HTTP server machines | 4
Java Web Application Server machines | 4
Firewall servers | 2
Persistent HTTP session database servers | 1
Boat Selector DB server | 1

At your urging, TriMont agrees to performance test at least half of the production configuration. The actual performance test environment uses two Java web application servers, two HTTP servers, and one caching proxy server (the other hardware remains the same). Figure 14.1 shows the resulting test environment configuration.

Figure 14.1. TriMont performance test configuration


Note the following things in this configuration:

  • In the test-planning phase, you approached TriMont about their Orders database. Since the performance tests actually place orders, you wondered if TriMont might use special account numbers to test their systems.

  • As it turns out, TriMont doesn't use special account numbers to test their systems. Instead, they build a special Orders test database for testing purposes. This database lives outside their business infrastructure, so they can run tests without impacting their inventory, shipping, or accounting systems.

  • This system should work just fine for the performance test. The test scripts simply require an order to check, which this database provides. The URL link to the shipping company's web site is not part of the TriMont performance test, so integration of order numbers between TriMont and the shipper is not important for performance testing.

  • As discussed in Chapter 11, TriMont does not plan to throw the switch on all this equipment on the first day of performance testing. Instead, they will use just an HTTP server/application server pair (and the backend databases) for the first week or so. All of the intervening equipment (firewalls, routers, caching proxy servers) remains inactive during this initial testing.

  • Next, they plan to double the load on the web site and add the second HTTP server and application server to the mix, along with the router and perhaps the caching proxy server. Finally, in the last days of the test, they plan to turn on the firewalls to ensure they do not significantly impact the performance of the web site.

We know more about the TriMont test than we did in the initial assessment. For example,

  • The site must support 120 pages per second.

  • Three servers should handle all of this traffic.

120 pages per second translates to the following number of concurrent users:

 120 pages/sec / 5 pages/user = 24 users/sec 
 24 users/sec * 7 min/visit * 60 sec/min = 10,080 concurrent users 

Since we are estimating anyway, let's round to 10,000 concurrent users for simplicity's sake. (We'll do this throughout this section.) Note that this number corresponds to line 7 of the Hardware Sizing Worksheet (Table 14.2), give or take a little for rounding.
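This is the familiar arrival-rate-times-residence-time calculation (essentially Little's Law). A minimal Java sketch of the same arithmetic, with variable names of our choosing:

```java
public class ConcurrentUsers {
    public static void main(String[] args) {
        double pagesPerSec = 120.0;  // planned peak page rate (worksheet line 8)
        double pagesPerVisit = 5.0;  // pages in an average user visit
        double visitMinutes = 7.0;   // average visit duration

        double arrivalsPerSec = pagesPerSec / pagesPerVisit;       // 24 users/sec
        double concurrent = arrivalsPerSec * visitMinutes * 60.0;  // 10,080 users
        System.out.printf("%.0f arrivals/sec -> %,.0f concurrent users%n",
                arrivalsPerSec, concurrent);
    }
}
```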

The test environment uses only two servers, whereas we expect to fully engage three servers in the production environment (the fourth production server is spare capacity for failover). Since we're using two-thirds of our production capacity in the test environment, we need to generate two-thirds of the production users:

 10,000 concurrent users * 2/3 = 6,700 concurrent users 

As we discussed in Chapter 8, most major commercial performance test tool vendors charge on a "per virtual user" basis. A 7,000 virtual user license represents a significant investment with most major test tool vendors. Not surprisingly, TriMont does not want to spend potentially tens of thousands of dollars on virtual user licensing.

Instead, TriMont decides to try a different testing technique. They want to reduce the duration of each virtual user's visit by reducing each virtual user's think time. For example, if the average user visit becomes 45 seconds instead of seven minutes, the number of virtual users required by the test drops dramatically:

 120 pages/sec / 5 pages/user = 24 users/sec 
 24 users/sec * 45 sec = 1,080 simultaneous users 

The test environment requires two-thirds of these users to drive the two boxes:

 1,080 users * 2/3 = 720 simultaneous users 

Of course, we need to be sure 45 seconds is a reasonable amount of time for a user visit to complete:

 45 sec/visit / 5 pages/visit = 9 sec/page 

TriMont wanted a 5-second response time or better, even at peak loading. Nine seconds per page allows us to meet this response time with a small margin for think time. Therefore, 45 seconds should work for the test.
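The same arrival-rate arithmetic with the shortened visit can be captured in a short Java sketch; again, the names are our own:

```java
public class ReducedThinkTime {
    public static void main(String[] args) {
        double pagesPerSec = 120.0;  // planned peak page rate
        double pagesPerVisit = 5.0;  // pages in an average user visit
        double visitSeconds = 45.0;  // shortened visit duration

        double arrivalsPerSec = pagesPerSec / pagesPerVisit;  // 24 users/sec
        double virtualUsers = arrivalsPerSec * visitSeconds;  // 1,080 users
        double testUsers = virtualUsers * 2.0 / 3.0;          // 720 users for two servers
        double secPerPage = visitSeconds / pagesPerVisit;     // 9 sec/page budget
        System.out.printf("%,.0f virtual users (%,.0f in test), %.0f sec/page%n",
                virtualUsers, testUsers, secPerPage);
    }
}
```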

HTTP Session Pressure

So, TriMont uses fewer virtual users but requires them to interact with the site faster. This gives the same traffic rates as a larger number of users with longer think times. The technique works well in many situations, but it may lead to problems if the web site software caches data or uses HTTP sessions (which TriMont's does). For example, a TriMont user stays on the site an average of 7 minutes in production usage. The HTTP session stays in memory until the timeout period for the HTTP session completes (TriMont's is currently set at 30 minutes). This means each HTTP session stays active for an average of 37 minutes. If the user visit lasts only 45 seconds, the HTTP session active time becomes approximately 31 minutes. This reduces the number of HTTP sessions in memory and gives an overly optimistic view of memory requirements.

Let's do some calculations to work out the impact of fewer users on the TriMont performance test. Remember, we calculated 10,000 concurrent users for the entire web site, given our new requirements for additional headroom. Since we're only testing two-thirds of the web site's capacity, we estimate 6,700 concurrent users for our two test servers (10,000 concurrent users * 2/3). In a full test, 6,700 concurrent users generate the following number of HTTP sessions within 37 minutes (the average HTTP session lifetime: a 7-minute visit plus the 30-minute timeout):

 37 min / 7 min/visit * 6,700 users ≈ 35,500 simultaneous HTTP sessions (for two-thirds of the web site traffic) 

Note this only represents two-thirds of the simultaneous HTTP sessions for the full web site. For the full web site, the simultaneous HTTP sessions would be higher:

 37 min / 7 min/visit * 10,000 users ≈ 53,000 simultaneous HTTP sessions (for the full web site traffic) 

Also, notice the slightly different way we calculated the maximum HTTP sessions in this exercise. We're trying to measure how many visits or iterations each virtual user makes during the test. At the end of each visit, the virtual user starts a new visit as a new customer. As a new customer, the virtual user triggers a new HTTP session at the web site. So, if the virtual user requires 7 minutes to complete a visit under one user identity, the virtual user assumes only about 5 different customer identities during a 37-minute test (37 minutes / 7 minutes per visit ≈ 5 customer visits). We multiply this by the total virtual users involved in the test to determine how many HTTP sessions we generate during the test period.
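This session-counting rule is just visits per user times users. A minimal Java sketch (method and class names ours) reproduces the two numbers above; note the exact products fall slightly below the text's rounded figures:

```java
public class SessionPressure {
    // HTTP sessions created in a window: visits completed per virtual user,
    // times the number of virtual users (each visit starts a fresh session)
    static double sessions(double windowMinutes, double visitMinutes, double users) {
        return windowMinutes / visitMinutes * users;
    }

    public static void main(String[] args) {
        // 37-min session lifetime (7-min visit + 30-min timeout), 7-min visits
        System.out.printf("Two-thirds of site: %,.0f sessions%n",
                sessions(37, 7, 6_700));   // ~35,400 (the text rounds to 35,500)
        System.out.printf("Full site:          %,.0f sessions%n",
                sessions(37, 7, 10_000));  // ~52,900 (the text rounds to 53,000)
    }
}
```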

Continuing our example, we only use 720 users in the reduced virtual user test, and we reduce their visit time to 45 seconds. This results in the following HTTP session pressure within 31 minutes (the reduced average HTTP session lifetime):

 31 min / 45 sec/visit * 720 users = 29,760 simultaneous HTTP sessions 

Of course, as discussed in Chapter 11, iterations of the initial performance tests only last a few minutes (at most 15 or 20 minutes in the beginning). This further reduces the difference in the HTTP session memory footprint. For example, both styles of testing (full virtual users versus recycled virtual users) generate the following number of HTTP sessions during a 20-minute test:

 20 min / 7 min/visit * 6,700 users ≈ 19,143 HTTP sessions 
 20 min / 45 sec/visit * 720 users = 19,200 HTTP sessions 

Therefore, the shorter test actually generates

 19,200 HTTP sessions - 19,143 HTTP sessions = 57 more HTTP sessions 

Compare this to the difference in HTTP sessions in a longer test spanning a full timeout range:

 35,500 HTTP sessions - 29,760 HTTP sessions = 5,740 fewer HTTP sessions 

TriMont decides to continue with a reduced number of virtual users for their initial performance testing. They plan to keep their 500 virtual user license until they reach capacity on the first machine. Depending on the results of that test, they will upgrade the license when they add another server to the mix. However, they want to exert more pressure in their post-performance test work. They may buy the much larger 7,000 virtual user license for a limited time (maybe two or three weeks). Using this license, they plan some long-run stress testing before they enter production. This approach not only reduces their virtual user licensing expenses but also gives them a realistic long-run test of the environment before entering production.


