Because SQL Server runs on any hardware that runs Windows NT, you can choose from among thousands of hardware options. SQL Server Desktop also runs on any hardware that runs Windows 95 or Windows 98; however, because your production applications will be deployed on Windows NT, you should choose the best hardware for the job. Our discussion of hardware therefore focuses on the needs of SQL Server running on Windows NT. The following sections offer some guidelines on choosing hardware.
If you cobble together a system from spare parts, Windows NT will run just fine, but your system might be unreliable. Unless you're a hobbyist and you like to tinker, this method isn't suitable for a production system. Support is another issue. Many people buy motherboards, chassis, processors, hard drives, memory, video cards, and assorted other peripherals separately and put together terrific systems, but the support options are limited.
Keep in mind that even name-brand systems occasionally fail. If you're using hardware that's on the Windows Hardware Compatibility List (HCL) and you get a "blue screen" (Windows NT system crash), you're more likely to be able to identify the problem as a hardware failure or find out, for example, that some known device driver problem occurs with that platform (and that you can get an update for the driver). With a homegrown system, such a problem can be nearly impossible to isolate. If you plan to use SQL Server, data integrity is probably a key concern for you. Most cases of corrupt data in SQL Server can be traced to hardware or device driver failure. (These device drivers are often supplied by the hardware vendor.)
NOTE
Over the life of a system, hardware costs are a relatively small portion of the overall cost. Using no-name or homegrown hardware is being penny-wise and pound-foolish. You can definitely get a good and cost-effective HCL-approved system without cutting corners.
SQL Server 7 runs on only two of the four processor architectures supported by Windows NT: Intel x86 and DEC Alpha-AXP. It will not install on the MIPS R4000 or the Motorola PowerPC.
Most SQL Server users have machines with the Intel architecture. Although the SQL Server product versions for both processor architectures are on the same CD, many vendor software components for SQL Server might ship first, or only, for the Intel platform. For example, you might use an accounting software package that is first released or is available only for Intel-based machines. You can run the accounting application on an Intel-based server and use it to communicate via the network with an instance of SQL Server running on a DEC Alpha-AXP-based system as long as the application isn't required to run on the same machine as SQL Server (and you have multiple machines in your environment). In addition, if you ever need to dual-boot your system to MS-DOS, Windows 95, or Windows 98, you should go with Intel. An Intel-based solution is likely to provide all the horsepower your application needs. But the processor is only part of the equation, as we'll see later.
If you're part of a large organization, you probably have a set of standard hardware requirements. If your standards call for a RISC platform with Windows NT, SQL Server 7 runs well on it. SQL Server provides a small amount of processor-specific assembly language for each environment, but otherwise the code is all common. The SQL Server development group at Microsoft has no "porting" team; versions for both supported processor architectures are built and tested at the same time, and, as mentioned earlier, they are on the same CD. If this is the first time you've used a RISC platform with Windows NT (that is, Alpha-AXP), you'll be surprised at how "normal" it seems compared to the Intel-based systems you probably already use. For example, if you have access to both an Intel and an Alpha machine running Windows NT and SQL Server and they are configured with a switch box that allows them to use the same keyboard, monitor, and mouse, you probably can't tell the systems apart unless you make an effort to find out which one is currently running.
System throughput is only as fast as the slowest component: a bottleneck in one area reduces the rest of the system to the speed of the slowest part. The performance of your hardware is a function of the processing power available, the amount of physical memory (RAM) in the system, and the number of I/Os per second that the system can support.
Of course, the most important aspect of performance is the application's design and implementation. You should choose your hardware carefully, but there is no substitute for efficient applications. Although SQL Server can be a brilliantly fast system, you can easily write an application that performs poorly, one that's impossible to "fix" simply by "tuning" the server or by "killing it with hardware." You might double performance by upgrading your hardware or fine-tuning your system; however, application changes can often yield hundredfold increases.
Unfortunately, there's no simple formula for coming up with an appropriately sized system. Many people ask for this, and it is common for minicomputer and mainframe vendors to provide "configuration programs" that claim to do just this. But the goal of those programs often seems to be to sell more hardware.
Once again, your application is the key. In reality, your hardware needs are dependent on your application. How CPU-intensive is it? How much data will you store? How random are the requests for data? Can you expect to get numerous "cache hits," with the most frequently requested data often being found in memory, without having to perform physical I/Os? How many users will simultaneously and actively use the system?
If you are planning a large system, it's worthwhile to invest up front in some benchmarking. SQL Server has numerous published benchmarks, most of them Transaction Processing Performance Council (TPC) benchmarks. Although such benchmarks are useful for comparing systems and hardware, they offer only broad guidelines, and the benchmark workload is unlikely to compare directly to yours. It's probably best to do some custom benchmarking for your own system.
Benchmarking can be a difficult, never-ending job, so keep the process in perspective. Remember that what counts is how fast your application runs, not how fast your benchmark performs. And sometimes significant benchmarking efforts are unnecessary: the system might clearly perform within parameters, or it might be similar enough to other systems you know. Experience is the best guide. But if you are testing a new application that will be widely deployed, the up-front cost of a benchmark can pay big dividends in terms of a successful system rollout. Benchmarking is the developer's equivalent of the carpenter's adage "Measure twice, cut once."
SEE ALSO
For information on TPC benchmarks and a summary of results, see the TPC home page at http://www.tpc.org.
Nothing beats a real-world system test, but that is not always practical. You might need to make hardware decisions before or while the application is being developed. In that case, you should develop a proxy test. Much of the work in benchmarking comes from developing the appropriate test harness: the framework used to simultaneously dispatch multiple clients running the test program, to run for exactly an allotted time, and to gather the results.
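To make the idea of a test harness concrete, here is a minimal sketch in Python. This is an illustration, not the benchmark kit discussed below; the `run_transaction` function is a hypothetical placeholder that you would replace with your own database call (for example, via ODBC).

```python
import threading
import time

def run_transaction():
    # Hypothetical placeholder: substitute your real transaction here,
    # e.g. an ODBC call against SQL Server. We simulate a short delay.
    time.sleep(0.001)

def virtual_client(start_event, stop_flag, counts, idx):
    # Each simulated client waits for a synchronized start, then runs
    # transactions in a loop until the measurement interval ends.
    start_event.wait()
    done = 0
    while not stop_flag.is_set():
        run_transaction()
        done += 1
    counts[idx] = done

def run_benchmark(num_clients=10, duration_secs=5):
    start_event = threading.Event()
    stop_flag = threading.Event()
    counts = [0] * num_clients
    threads = [threading.Thread(target=virtual_client,
                                args=(start_event, stop_flag, counts, i))
               for i in range(num_clients)]
    for t in threads:
        t.start()
    start_event.set()            # synchronized start for all clients
    time.sleep(duration_secs)    # fixed measurement interval
    stop_flag.set()              # synchronized stop
    for t in threads:
        t.join()
    total = sum(counts)
    return total, total / duration_secs   # transactions and throughput (tps)

if __name__ == "__main__":
    total, tps = run_benchmark()
    print(f"{total} transactions in total, {tps:.0f} tps")
```

The three jobs of a harness are all visible here: dispatching multiple clients at once, bounding the run to an exact interval, and gathering per-client results into a single throughput figure. A production harness would typically run the clients on separate machines to avoid having the driver itself become the bottleneck.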
The companion CD includes a Microsoft benchmark kit that lets you "add water and stir" to produce a benchmark environment within a few hours and simply substitute your own transactions for the kit's transactions. The kit includes the source code and executables used in previous TPC-B benchmarks. Although TPC-B might not be directly relevant to your system, the real value is in the kit's test harness: the framework that drives "virtual clients," synchronizes start and stop times, records results, and so on. Before you use the kit, you should first try to run the TPC-B test without modification and try to achieve results roughly comparable to those listed in the kit. (Allow for hardware differences; you're looking only for a "reasonableness quotient.") Then you can easily modify the tests with your own custom transaction to better simulate your actual system. Other products, such as Dynameasure by Bluecurve, Inc. (http://www.bluecurve.com), aid in benchmarking for SQL Server performance.