8.1 Aspects of Exchange performance

The earliest Exchange servers were easy to configure. You bought the fastest processor, equipped the server with as much direct-connected storage as it supported, and bought what seemed to be a huge amount of memory (such as 128 MB). System administration was easier, too, since servers did not support large numbers of mailboxes. Table 8.1 charts the evolution of typical "large" Exchange server configurations since 1996 and illustrates, in particular, the growth in the amount of data managed on a server. Today, we see a trend toward server consolidation, as companies seek to drive down cost by reducing the number of servers that they operate. The net result of server consolidation is an increase in the average number of mailboxes supported by the typical Exchange server, an increasing desire to use network-based storage instead of direct-connected storage, and growing management complexity, with an attendant need for better operational procedures.

Table 8.1: The Evolution of Exchange Server Configurations

Version          CPU                            Disk       Memory
Exchange 4.0     Single 100-MHz/256-KB cache    4 GB       128 MB
Exchange 5.5     Single 233-MHz/512-KB cache    20 GB      256 MB
Exchange 2000    Dual 733-MHz/1-MB cache        >100 GB    512 MB
Exchange 2003    Quad 2-GHz/2-MB cache          SAN        4 GB

The growing maturity of Windows and the hardware now available to us make server consolidation easier, but the new servers that we deploy still have to be balanced systems, suited to the application, in order to maximize results. A balanced system is one that has the right proportions of CPU power, storage, and memory. After all, there is no point in having the fastest multi-CPU server in the world if you cannot provide it with data to process. Storage and good I/O management are key points in building Exchange servers to support large user communities.

Exchange performance experts often aim to move processing to the CPU and keep it busy, on the basis that a server humming along at 10 percent CPU load is not necessarily lightly loaded: its CPU may simply be idle while it waits for I/Os to complete. This illustrates the point that a system is composed of multiple elements that you have to balance to achieve maximum performance.

8.1.1 Storage

Storage becomes cheaper all the time, as disk capacity increases and prices drop. However, configuring storage is not simply a matter of quantity. Instead, for Exchange servers, you need to pay attention to:

  • Quantity:   You have to install enough raw capacity to accommodate the space you expect the O/S, Exchange, and other applications to occupy for their binaries, other support files, and user data. You also need to have sufficient capacity to perform maintenance operations and to ensure that the server will not run out of space on important volumes if users generate more data than you expect.

  • Resilience:   You have to isolate the important parts of the overall system so that a failure on one volume does not lead to irreversible damage. The basic isolation scheme is to use separate physical volumes to host the following files:

    • Windows O/S

    • Exchange binaries

    • Exchange databases

    • Exchange transaction logs

  • Recoverability:   Tools such as hot snapshots need a substantial amount of additional space to work.

  • I/O:   The sheer capacity does not matter if the storage subsystem (controller and disks) cannot handle the I/O load generated by Exchange.

  • Manageability:   You need to have the tools to manage the storage, including backups. Newer techniques such as storage virtualization may be of interest if you run high-end servers.

You can build these qualities into storage subsystems ranging from direct-connected storage to the largest SAN. The general rule is that the larger the server, the more likely it is to connect to a SAN in order to use features such as replication, virtualization, and business continuity volumes. Indeed, you can argue that it is better to concentrate on storage first and build servers around storage, rather than vice versa, because it is easier to replace servers if they use shared storage.
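
If you want a rough check on whether a storage subsystem is keeping up with the I/O load that Exchange generates, the standard Windows Performance Monitor disk counters are a reasonable starting point. The counter names below are the standard PhysicalDisk counters exposed by Windows 2000/2003; the thresholds you judge them against depend on your hardware, so treat any target values as your own assumption:

PhysicalDisk(instance)\Disk Transfers/sec      - read and write requests serviced per second
PhysicalDisk(instance)\Avg. Disk sec/Read      - average read latency
PhysicalDisk(instance)\Avg. Disk sec/Write     - average write latency
PhysicalDisk(instance)\Avg. Disk Queue Length  - requests waiting for the volume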

Best practice for Exchange storage includes:

  • Always keep the transaction logs and the Store databases isolated from each other on different physical volumes.

  • Place the transaction logs on the drives with the optimal write performance so that the Store can write transaction information to the logs as quickly as possible.

  • Protect the transaction logs with RAID 1. Never attempt to run an Exchange server in a configuration where the transaction logs are unprotected.

  • Protect the Store databases with RAID 5 (minimum) or RAID 0+1. RAID 0+1 is preferred, because this configuration delivers faster write performance (roughly twice that of RAID 5) with good protection.

  • Multispindle volumes help the system service the multiple concurrent read and write requests typical of Exchange. However, do not attempt to add too many spindles (no more than 12) to a RAID 5 volume. Deciding on the precise number of spindles in a volume is a balancing act between storage capacity, I/O capabilities, and the background work required to maintain the RAID 5 set.

  • Use write cache on the storage controller for best performance for transaction log and database writes, but ensure that the controller protects the write cache against failure and data loss with features such as mirroring and battery backup. You also need to be able to transfer the cache between controllers if the controller fails and you need to replace it.
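
As an illustration of these recommendations, a small mailbox server might separate its storage along the following lines. The drive letters, RAID levels, and volume assignments here are assumptions for the sake of the example, not a prescription for your hardware:

C:  RAID 1     Windows O/S and page file
D:  RAID 1     Exchange binaries
E:  RAID 1     Exchange transaction logs
F:  RAID 0+1   Exchange Store databases (.edb and .stm files)

The essential point is the isolation: if the database volume fails, the transaction logs remain intact on a separate volume, so you can restore the databases from backup and replay the logs to recover recent transactions.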

Storage technology evolves at a startling rate, and the price per GB has dropped dramatically since Exchange 4.0 appeared. New technologies are likely to appear, and you will have to decide whether to use them with your Exchange deployment. Vendors sometimes leave it unclear whether Microsoft fully supports a technology, and this is especially so for database-centric applications such as Exchange and SQL. For example, Network Attached Storage (NAS) devices seem attractive because they are cheap and allow you to expand storage easily. However, at the time of writing, Microsoft does not support block-mode NAS devices with Exchange and does not support any file-mode NAS devices. There are a number of reasons for this position, including network latency for write operations and the redirectors introduced between the Store APIs and the Windows I/O Manager (see Microsoft Knowledge Base articles 314916 and 317173 for more information, including Microsoft's support policy for NAS devices). The Hardware Compatibility List (available from Microsoft's Web site) is the best place to check whether Microsoft supports a specific device, and it is also a good idea to ask vendors whether they guarantee that their device supports Exchange. Another good question is to ask the vendor to describe the steps required to recover mailbox data in the event of a hardware failure. However, technology changes, and newer devices may appear that eliminate the problems that prevent Microsoft from supporting NAS and other storage technologies. For this reason, you should consult a storage specialist before you attempt to build a storage configuration for any Exchange server.

8.1.2 Multiple CPUs

Given the choice, it is better to equip Exchange servers with multiple CPUs. Since Exchange 5.0, the server has made good use of multiple CPUs. Best practice is to use multi-CPU systems instead of single-CPU systems, with the only question being how many CPUs to use. Here is the logic:

  • It does not cost much to specify additional CPUs when you buy a server, and adding a CPU is a cheap way to extend the lifetime of the server.

  • The extra CPU power ensures that servers can handle times of peak demand better.

  • Add-on software products such as antivirus scanners consume CPU resources. The extra CPUs offload this processing and ensure that Exchange continues to service clients well.

  • New versions of Windows and Exchange usually include additional features that consume system resources. If the new features support SMP, the extra CPUs may allow you to upgrade software without upgrading hardware.

  • Adding extra CPUs after you build a server can force you to reinstall the operating system or applications.

  • The performance of any computer degrades over time unless you perform system maintenance, and, even then, factors such as disk fragmentation conspire to degrade performance. Some extra CPU power offsets the effect of system aging.

Note that secondary cache is important for symmetric multiprocessing: it is a high-performance area of memory that helps prevent saturation of the front-side bus. Large multi-CPU servers are inevitably equipped with a generous secondary cache, and the general rule is the more, the merrier.

With these points in mind, it is a good idea to equip small servers (under 1,000 mailboxes) with dual CPUs and large mailbox servers with four CPUs. Going beyond this limit enters the domain of high-end systems and is probably not necessary for the vast majority of Exchange servers. Few people find that something like a 32-way server is necessary to support Exchange; most find it easier and cheaper to deploy servers with fewer CPUs. If you are tempted to purchase a system with more than eight CPUs, make sure that you know how you will configure the system, the workload it will handle, and the additional benefit you expect to achieve.

8.1.3 Memory

The various components of Exchange, such as the Store, DSAccess, Routing Engine, and IIS, make good use of memory to cache data and avoid expensive disk I/Os, so it is common to equip Exchange servers with large amounts of memory, especially since the price of memory has come down. It is always better to overspecify memory than install too little, since server performance is dramatically affected by any shortage of memory.

The Store is a multithreaded process implemented as a single executable (STORE.EXE), which runs as a Windows service and manages all the databases and storage groups on a server. As more users connect to mailboxes and public folders, the number of threads grows and memory demands increase. The Store has a reputation as a particular "memory hog," because it uses as much memory as Windows can provide. However, this behavior is by design and is due to a technique called Dynamic Buffer Allocation (DBA), which Microsoft introduced in Exchange 5.5. Before Exchange 5.5, administrators tuned a server with the Exchange Performance Wizard, which analyzed the load on a running server and adjusted system parameters. Specifically, the wizard tuned the number of buffers allocated to the Store to accommodate an expected number of connections. However, the wizard applied no great scientific method, and much of the tuning was guesswork and estimation. If the actual load on a server differed from the expected load, the tuning was inaccurate.

Microsoft implemented DBA to provide a self-tuning capability for the Store and ensure that the Store uses an appropriate amount of memory at all times, taking the demands of other active processes into account. DBA is an algorithm to control the amount of memory used by the Store and is analogous to the way that Windows controls the amount of memory used by the file cache and the working set for each process. To see the analogy, think of I/O to the Store databases as equivalent to paging to the system page file.

DBA works by constantly measuring demand on the server. If DBA determines that memory is available, the Store asks Windows for more memory so that it can cache more of its data structures. If you monitor the memory used by the Store process, you will see it gradually expand to the point where the Store seems to use an excessive amount of memory, a fact that can alarm administrators who have not seen this behavior before. For example, in Figure 8.1 you can see that the Store process occupies a large amount of memory even though the system is not currently under much load. This is the expected situation; if another process that requires a lot of memory becomes active and Windows cannot provide that memory from elsewhere, the Store process shrinks to release memory back to Windows.

Figure 8.1: Memory used by the Store.

There is no point in having memory sitting idle, so it is good that the Store uses available memory as long as it does not affect other processes. DBA monitors system demand and releases memory back to Windows when required to allow other processes to have the resources they need to work; it then requests the memory back when the other processes finish or release the memory back to Windows. On servers equipped with relatively small amounts of memory, you can sometimes see a side effect of DBA when you log on at the server console and Windows pauses momentarily before it logs you on and paints the screen. The pause is due to DBA releasing resources to allow Windows to paint the screen. DBA is not a fix for servers that are underconfigured with memory, but it does help to maximize the benefit that Exchange gains from available memory.
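
If you want to watch DBA at work, Performance Monitor is the usual tool. The counters below are standard Windows counters; the instance name used for STORE.EXE (typically store) is worth confirming on your own server:

Process(store)\Virtual Bytes   - virtual memory the Store has allocated
Process(store)\Working Set     - physical memory currently assigned to the Store
Memory\Available MBytes        - memory left over for Windows and other processes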

8.1.4 Using more than 1 GB of memory

Exchange servers running Windows 2000 Advanced Server (or any version of Windows 2003) that are equipped with 1 GB or more of physical memory require changes to the default virtual memory allocation scheme to take advantage of the available memory. By default, Windows divides the standard 4-GB virtual address space evenly between user mode and kernel mode. You can set the /3GB switch to tell Windows that you want to allocate 3 GB of the address space to user-mode processing, which allows Exchange to use the additional memory, especially within the single Store process that probably controls multiple Store instances (one for each storage group) on large servers. Although this switch allows you to provide more memory to Exchange and therefore scale systems to support heavier workloads, Windows may come under pressure as you reduce kernel-mode memory to 1 GB, which can cause Windows to exhaust page table entries and, in turn, lead to unpredictable system behavior.

To make the necessary change, add the /3GB switch to the operating system section of boot.ini. For example:

[Operating Systems]
multi(0)disk(0)rdisk(0)partition(2)\WINNT="Microsoft Windows 2000 Server" /fastdetect /3GB

Windows 2003 provides an additional boot.ini switch (/USERVA). Used in conjunction with the /3GB switch, /USERVA lets you achieve a better balance between the allocation of kernel-mode and user-mode memory. Microsoft recommends a setting of /USERVA=3030 (the value is in megabytes) for Exchange 2003 servers. This value may change as experience grows with Exchange 2003 in different production configurations, so check with Microsoft to determine the correct value for your server configuration. The net effect of /USERVA=3030 is to return an extra 40 MB of address space to the Windows kernel for page table entries, in turn allowing Exchange to scale and support additional users without running out of system resources.
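
For example, a Windows 2003 boot.ini entry that applies both switches might look like the following sketch. The partition path and description shown here are illustrative, so take them from your own boot.ini rather than copying this line:

[Operating Systems]
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Windows Server 2003" /fastdetect /3GB /USERVA=3030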

Based on experience gained with the way the Store uses memory, Exchange 2003 attempts to use available memory more intelligently than Exchange 2000 when you set the /3GB switch, and you should set the switch on any server that has more than 1 GB of physical memory. If you do not, Exchange reports a nonoptimal configuration as event 9665 in the event log (Figure 8.2) when the Information Store service starts. This event is simply a reminder to set the /3GB switch.

Figure 8.2: Exchange reports nonoptimal memory.

In Exchange 2000, the Store allocates a large amount of virtual memory (858 MB) for the ESE buffer. The Store always allocates the same amount of virtual memory, regardless of the system or memory configuration. This one-size-fits-all approach is convenient, but it can lead to situations where smaller systems exhaust virtual memory. In Exchange 2003, the Store looks for the /3GB switch and uses it as the basis for memory allocation. If the switch exists, the Store assumes that plenty of physical memory is available, so it allocates 896 MB for its buffer. If not, the Store tunes its virtual memory demand back to 576 MB.

Finally, even though the Datacenter Edition of Windows 2003 supports up to 512 GB of memory, there is no point in equipping an Exchange server with more than 4 GB, since the current 32-bit version of Exchange cannot use the extra memory. This situation may change over time, so it is a good idea to track developments as Microsoft improves its 64-bit story.

8.1.5 Advanced performance

People want the best possible performance for their servers, so each new advance in server technology is eagerly examined to see whether it increases the capacity of a server to support more work. In the case of Exchange, this means more mailboxes. As we have discussed, other factors such as extended backup times or not wanting to put all your eggs in one basket (or all mailboxes on one server) can influence your comfort level for the maximum number of mailboxes on a server, but it is still true that extra performance always helps. Extra CPU speed can balance the inevitable demand for system resources imposed by new features, any lack of rigor in system management and operations, and the drain from third-party products such as antivirus scanners. Apart from speedier CPUs, the two most recent developments are hyperthreading and 64-bit Windows.

Hyperthreading (or simultaneous multithreading) is a technique that allows CPUs such as recent Intel Xeon processors to handle instructions more efficiently by providing code with multiple execution paths. In effect, to a program, a server seems to have more CPUs than it physically possesses. Not every program is able to take advantage of hyperthreading, just as not every program can take advantage of a system equipped with multiple CPUs, and not every program can exploit a grid computer. As it happens, Exchange has steadily improved its ability to use advanced hardware features such as multithreading since Exchange 5.0, and Exchange 2003 is able to use hyperthreaded systems. Indeed, experience shows that enabling hyperthreading on systems with the 400-MHz front-side bus found in high-end servers creates some useful extra CPU "headroom," which may allow you to support additional mailboxes on a server. Therefore, if you have the option, it is best to deploy a hyperthreaded system whenever possible.

With the arrival of the first native 64-bit Windows operating system,[1] people often ask how Exchange will take advantage of the extended memory space and other advantages offered by a 64-bit operating system. The answer is that Exchange runs on the IA64 platform, but only as a 32-bit application running in emulation mode, in the same manner that first-generation Exchange supported the Alpha platform. Porting a large application such as Exchange to become a native 64-bit application requires an enormous amount of work, and given that the third generation of Exchange uses a new database engine, it was always very unlikely that Microsoft would do the work in Exchange 2003. Thus, the Kodiak release will be the first true 64-bit version of Exchange. In the meantime, you can certainly deploy Exchange 2003 on IA64 systems with an eye on the future.

Waiting for Kodiak does not mean that Microsoft will stop fixing problems in the current Store, nor will it stop adding features. Instead, it means that Microsoft is now dedicated to building Kodiak on top of the Yukon database engine rather than enhancing the current ESE-based Store. In other words, we will only be able to run a 64-bit Exchange on top of a new database engine designed to take advantage of IA64 after we move to the next generation of Exchange.

[1] Windows NT ran on the 64-bit Alpha chip from versions 3.1 to 4.0, but Windows 2000 was never ported to Alpha for production purposes. Microsoft used 64-bit versions of Windows 2000 on Alpha for development purposes only.


