Development Servers

Many studies have shown that the five-year total cost of ownership of a desktop system can easily exceed $10,000 per year in 1998 dollars. Over five years that comes to more than $50,000, or over ten times the acquisition cost of a $5,000 high-end desktop. A large part of the total cost of ownership is system administration. Thus the easiest way to save on desktop costs is not to buy cheaper desktops, which would decrease productivity, but to put the right server infrastructure in place so that desktop management can be centralized. There are at least three common functions that should be centralized onto servers: file, compile, and database services.

File Server Benchmarks

A properly configured file server on a modern network (such as Fast Ethernet) can provide faster file services to a development desktop than a local disk drive. The file server can also provide much better reliability and availability than individual disk drives. Finally, centralizing file storage greatly simplifies backup and other administration tasks.

The industry-standard benchmark for file server performance is the SPEC SFS97 benchmark (formerly called LADDIS), which can be found at http://www.specbench.org.

There are two primary performance figures quoted for this benchmark: throughput (in operations per second) and response time (in milliseconds). It is important to consider both numbers, as response time generally rises as throughput increases. Besides looking at the ramp-up in response time, there are other reasons to look closely at the test details and consider the appropriateness of these numbers. Tables 16-2 through 16-6 show some of the detailed results of the SPEC SFS97 benchmark for a Sun Enterprise 6002. They illustrate the kind of detail reported and the items you should consider when reviewing a benchmark such as this.

One important item in the test details is the number of network interfaces used. In this benchmark, nine network interfaces were used, which works out to an average of approximately 3,000 operations per second per network interface. If your development network consists of a single 100 Mbit Fast Ethernet segment, you would be wasting about 88 percent of the Enterprise 6002 server if you used it only for file services. In real-life development environments, it would be very uncommon to see loads of more than 2,000 operations/second on any 100 Mbit network segment. The moral is, before considering any benchmark number, make sure that your environment would be capable of making use of the benchmarked system.
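
To make the arithmetic explicit, here is a small sketch (in Python, using the peak throughput from Table 16-2) that computes the per-interface load and the fraction of the server's benchmarked capacity a single 100 Mbit segment could realistically use. The 3,000 operations/second assumed for a single segment is an estimate, not part of the published result.

# Rough capacity check against the SFS97 figures for this server.
peak_throughput = 25639          # ops/sec at peak (Table 16-2)
network_interfaces = 9           # 100 Mbit interfaces used in the test

per_interface = peak_throughput / network_interfaces
print(f"Load per interface: {per_interface:.0f} ops/sec")     # ~2849 ops/sec

# Assume a single 100 Mbit development segment can carry roughly the
# same load as one benchmark interface (an optimistic 3,000 ops/sec).
single_segment_load = 3000
unused = 1 - single_segment_load / peak_throughput
print(f"Unused server capacity: {unused:.0%}")                # ~88%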

Table 16-2. SPEC SFS97 Results for Sun Enterprise 6002
Throughput (ops/sec) Response (msec)
5036 5.3
7563 5.9
10034 7.0
12634 10.0
15168 9.9
17689 12.7
20258 13.9
22788 15.3
25329 19.7
25465 19.1
25639 28.8

If you were to look only at a summary listing of the SPEC SFS97 results, you would see this benchmark listed as 25,639 operations/second with an overall response time of 9.94 msec. As with all benchmarks, you need to be very careful when looking only at summary numbers. The overall response time in the summary is an average response time, while the operations/second figure is the maximum throughput. Careful examination of the detailed results shows that at maximum throughput, the response time is nearly three times the average. Different benchmark results will have different throughput/response curves, so it is important to compare not just maximum throughput but also throughput versus response.
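
To see how the two summary figures relate to the detailed results, the following sketch (in Python, using the data points from Table 16-2) computes a throughput-weighted average response time as the area under the throughput/response curve divided by peak throughput. This is an approximation of how the overall figure is derived, not SPEC's official formula, but it comes out very close to the reported 9.94 msec, while the response time at peak load is nearly three times higher.

# Approximate the SFS97 overall response time from Table 16-2:
# area under the response-vs-throughput curve divided by peak throughput.
points = [(5036, 5.3), (7563, 5.9), (10034, 7.0), (12634, 10.0),
          (15168, 9.9), (17689, 12.7), (20258, 13.9), (22788, 15.3),
          (25329, 19.7), (25465, 19.1), (25639, 28.8)]

area = points[0][0] * points[0][1]         # flat segment from 0 to first point
for (x0, y0), (x1, y1) in zip(points, points[1:]):
    area += (x1 - x0) * (y0 + y1) / 2      # trapezoid rule between points

peak_throughput = points[-1][0]
print(f"Overall response time: ~{area / peak_throughput:.1f} msec")  # ~9.9 msec
print(f"Response time at peak: {points[-1][1]} msec")                # 28.8 msec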

Now let's take a closer look at the server configuration and availability data for this benchmark.

Table 16-3. SPEC SFS97 Server Configuration and Availability
Attribute Value
Vendor Sun Microsystems Inc.
Hardware Availability March 1998
Software Availability May 1996
Date Tested February 1998
Model Name Sun Enterprise 6002
Processors 18 336 MHz UltraSPARC II
Main Memory 8 GB
Operating System Solaris 2.5.1

Several important things to note in the server configuration include the number of processors, the amount of main memory, and the hardware/software availability dates. Vendors may publish SPEC benchmarks as long as the hardware and software used will be generally available within six months. You should not compare one vendor's currently shipping systems with those that another vendor will ship in six months.

The next section of the SPEC SFS97 report specifies the server tuning used. These parameters are important if you are trying to re-create results similar to those published in the benchmark. You should also note how many specific parameters needed to be tuned. If dozens of parameters must be tuned to obtain the benchmark results, it may be very difficult to tune the server to your specific workload, which may differ greatly from the benchmark workload.

Table 16-4. Server Tuning Parameters
Parameter Value
Buffer Cache Size Dynamic
Number of NFS Processes 839
Fileset size 246 GB

The next section of the benchmark report describes the network subsystem used. To achieve the published SPEC SFS97 results, vendors typically use very large network subsystems. If your network does not have the same or greater capacity, you will not be able to achieve the same results.

Table 16-5. Network Subsystem
Parameter Value
Network Type 100 Mbit Fast Ethernet
Number of Networks 9
Number of Network Controllers 9
Protocol Type UDP
Switch Type 9 SunSwitch

Large benchmarks also tend to stress the disk subsystem of the host computer. Here is the disk subsystem used in the test.

Table 16-6. Disk Subsystem
Parameter Value
Number of Disk Controllers 13
Number of Disks 364
Disk Type 2 GB, 7200 RPM
Number of Filesystems 360
File System Configuration Default

Compile Server Benchmarks

Software developers have a unique ability to ferret out the fastest, most underutilized system on their network and then use that system to run compile jobs. Your local policy or security may, of course, limit such activity. Nevertheless, providing some sort of compile server for your development environment may make sense. Often, a file server can double as a compile server. A number of different factors affect compile speed, including the complexity of the application and the compiler optimization or debug level. A simple compile with no debug or optimization flags usually compiles fastest. Adding debug information typically slows down a compilation, as does performing additional compile-time optimizations. Compiling tends to stress the integer performance of a CPU; in fact, one of the standard components of the SPEC integer benchmark suite is a compile benchmark called 126.gcc. Below is a summary of the SPECint_rate95 benchmark for a Sun Ultra Enterprise 6000 server.

Table 16-7. SPECint_rate95 Benchmark Summary
Parameter Value
Model Sun Enterprise 6000
CPU Thirty 336 MHz UltraSPARC II
Memory 8 GB
Disk 4x4.2 GB
Hardware Availability March 1998
Software Availability August 1998
OS Solaris 2.6
Compiler Sun C 5.0

Here are the specific results for each of the subtests within the SPECint_rate95 benchmark.

Table 16-8. SPECint_rate95 Subtest Results
Benchmark Base Copies Base Run Time Base SPEC Ratio Peak Copies Peak Run Time Peak SPEC Ratio
099.go 30 398 3122 30 384 3236
124.m88ksim 30 193 2663 30 131 3910
126.gcc 30 227 2019 30 227 2022
129.compress 30 141 3454 30 120 4055
130.li 30 198 2596 30 132 3887
132.ijpeg 30 207 3133 30 199 3264
134.perl 30 208 2469 30 120 4276
147.vortex 30 244 2987 30 191 3818
SPECint_rate_base95 2771
SPECint_rate95 3480
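
The two composite figures at the bottom of Table 16-8 are geometric means of the eight per-benchmark SPEC ratios. The short sketch below (in Python) reproduces them from the subtest results, which is a useful sanity check when a report quotes only the composite numbers.

# Reproduce the composite SPECint_rate95 figures from Table 16-8:
# each composite is the geometric mean of the eight subtest SPEC ratios.
from math import prod

base_ratios = [3122, 2663, 2019, 3454, 2596, 3133, 2469, 2987]
peak_ratios = [3236, 3910, 2022, 4055, 3887, 3264, 4276, 3818]

def geometric_mean(values):
    return prod(values) ** (1 / len(values))

print(f"SPECint_rate_base95: {geometric_mean(base_ratios):.0f}")  # ~2771
print(f"SPECint_rate95:      {geometric_mean(peak_ratios):.0f}")  # ~3480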

Database Server Benchmarks

One of the most common sets of database server benchmarks is the TPC suite. As with any benchmark, your application's performance may be vastly different from the benchmark's. You should, therefore, use benchmarks only as a starting point in comparing hardware platforms. There are two major benchmarks in the TPC suite. The first is the TPC-C benchmark, which measures server performance on a transaction processing workload characterized by a large number of small transactions. The second is the TPC-D benchmark, which measures server performance on online analytical processing, or data warehouse, workloads.

Here is a sample report showing the TPC-C benchmark for a Sun E6000 server. All published TPC results are available on the TPC web page at http://www.tpc.org. While the total TPC-C throughput attained by many vendors is quite impressive, few applications require a nearly $7 million database server capable of handling forty-four thousand users. As with all benchmarks, you should consider not only the absolute numbers published but also the details of the benchmark report to determine whether the benchmark is relevant to your application.

Table 16-9. E6000 TPC-C Benchmark Summary
Parameter Value
Platform Sun Enterprise 6000
TPC-C Throughput 51,871.62 tpmC
Price/Performance $134.46 per tpmC
Availability Date February 23, 1998
Operating System Solaris 2.6
Database Oracle8 Enterprise Edition 8.0.3
Processors Two nodes, each with twenty-two 250 MHz CPUs
Number of Users 44,000
Total System Cost $6,974,524
Main Memory 5.5 GB (node 1), 6 GB (node 2)
Disk Controllers 40 Fiber Channel
Disk Drives 1188 4.2 GB SCSI
Front-end Systems 26 UltraServer 1 Model 170
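
The price/performance figure in Table 16-9 is simply the total system cost divided by the TPC-C throughput, as the following sketch (in Python) illustrates using the numbers from the table.

# TPC-C price/performance: total system cost divided by throughput.
total_system_cost = 6974524        # dollars (Table 16-9)
throughput_tpmc = 51871.62         # tpmC (Table 16-9)

price_performance = total_system_cost / throughput_tpmc
print(f"Price/performance: ${price_performance:.2f} per tpmC")   # ~$134.46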

Web Server Benchmarks

One of the most common web server benchmarks in use today is the SPECweb96 benchmark. As with the other benchmarks, it is important to consider the details of the benchmark result along with the summary figures. Let's start examining SPECweb96 by using one of the published results for Sun's Ultra Enterprise 450. The benchmark results are broken down as detailed in the following tables.

From the summary table (Table 16-10), you should first notice the availability dates for the hardware and software. As in the other benchmarks, be sure you are comparing currently available software and hardware configurations as the SPEC benchmarks allow data to be released up to six months before scheduled availability of the platform.

Table 16-10. SPECweb96 Summary
Parameter Value
Platform Sun Ultra Enterprise 450
Software Solaris 2.6/Sun WebServer 1.0
Hardware Availability August 1997
OS Availability August 1997
HTTP Software Availability August 1997
Processors Four 296 MHz UltraSPARC II
Memory 2048 MB
Disk Controllers Internal
Disk Subsystem Two 4.2 GB disks

Table 16-11. Response Time
Throughput (ops/sec) Response (msec)
300 3.8
600 4.1
900 4.4
1201 4.9
1500 5.3
1800 5.8
2099 6.6
2401 7.8
2700 9.8
2905 21.8

Table 16-11 shows the response time at various throughput levels. Note that in this particular benchmark there was a steep rise in response time to achieve the last few hundred operations/second. In web server performance, it is especially important to monitor both throughput and response time.
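
One way to make that knee concrete is to compare the marginal gain in throughput against the jump in response time between the last few measured points, as in this rough sketch (in Python, using the data from Table 16-11).

# Quantify the knee in the SPECweb96 curve: the final step adds about
# 8% more throughput but more than doubles the response time.
points = [(2401, 7.8), (2700, 9.8), (2905, 21.8)]

for (x0, y0), (x1, y1) in zip(points, points[1:]):
    gain = (x1 - x0) / x0          # relative throughput increase
    slowdown = (y1 - y0) / y0      # relative response time increase
    print(f"{x0} -> {x1} ops/sec: +{gain:.0%} throughput, "
          f"+{slowdown:.0%} response time")
# 2401 -> 2700 ops/sec: +12% throughput, +26% response time
# 2700 -> 2905 ops/sec: +8% throughput, +122% response time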

In most cases, your web server performance is likely to be bottlenecked by network throughput. Table 16-12 shows the network subsystem used to achieve the throughput on this particular test. Since most web sites do not currently have dual 622 Mb/sec ATM links to the Internet, these results may be meaningless if you are trying to determine the performance you will actually receive using a different speed network link.

Table 16-12. Network Subsystem
Parameter Value
Number of Controllers 2
Number of Networks 2
Type of Network 622 Mb/sec ATM

Finally, let's take a look at the operating system tuning that was required to achieve these results. If the server requires extensive tuning, it may be very difficult to tune the system to your local site's requirements. If only a few tunable parameters are modified, it is much more likely that you will be able to use the system in your environment with little modification.

Table 16-13. OS Tuning Parameters
Parameter Value
tcp:tcp_conn_hash_size 262144
ba:atm_pfifos
tcp_close_wait_interval 60
web server: cache_small_file_cache_size 24
web server: cache_large_file_cache_size 1240

