Optimizing the Performance of Network Storage



The remaining sections in this chapter discuss various configuration and implementation issues to consider when determining how to best optimize the performance of your network storage. Specifically, these sections discuss the following:

  • Storing data remotely or locally

  • Using SAN versus the Network File System

  • Choosing the system protocol

  • Choosing the client and server to implement

  • Optimally tuning the client for the workload

  • Optimally tuning the server for the workload

  • Validating the performance and identifying bottlenecks

  • Measuring load and improving capacity planning

  • Print server performance

Determining What Data to Store Remotely

The performance of local file I/O is usually better than when the same files are accessed through a network file system, and local I/O does not load network resources such as routers and Ethernet segments. However, when the server has enough RAM to cache the commonly accessed files, there are cases in which network file system access can be faster than local noncached file access. Storage is also easier to manage centrally: keeping a few copies of large, infrequently accessed files on a server avoids duplicating the same files on every enterprise desktop and filling those desktops to overflowing.

SAN Versus Network File Systems/NAS

As a general rule, storage area networks can transfer data faster than network file systems over networks of similar speed, because less processing is involved in parsing the network requests at the block (rather than the file) level. In addition, storage area networks often use specialized high-speed Fibre Channel switches and network hardware. Network file systems, however, provide additional security as well as the capability to back up and manage files more intuitively, advantages that often outweigh the performance advantages of SAN. Additionally, SAN cabling restrictions (and concerns about security when accessing data over corporate networks) can limit the appeal of SANs to large server rooms.

The Network File System Protocol

Network file system protocols come in all shapes and sizes. Some require complex clients to manage complex state information (such as AFS), whereas others, such as NFS version 3, are largely stateless and built from idempotent operations. They vary from OS/2-centric (such as SMB) to Windows-centric (such as CIFS) and UNIX-centric (such as NFS). They vary in their security models, performance, and, of course, complexity. SANs move data as if it were blocks on disk, whereas most network file systems move data based on the filename or file identifier.

Making sense of this maze of protocols requires looking back and categorizing the protocols into families. The following list groups related network file system protocols into families to make it easier to understand their characteristics:

Major Network File System Families

  • HTTP, WebDAV

  • SMB/CIFS

  • NFSv2, NFSv3, WebNFS, NFSv4, DAFS

  • DCE/DFS, AFS, Coda, Intermezzo

  • GFS, GPFS, and hybrid network file systems

A more detailed description of the more popular network file systems follows. The file systems are listed in order of the approximate size of their installed base:

  • SMB/CIFS. This network file system was invented by Dr. Barry Feigenbaum of IBM in the early 1980s, was extended heavily by Microsoft, and then was renamed CIFS in the late 1990s by Microsoft. This protocol is the default network file system on most versions of Windows (and even OS/2 and DOS), and most modern operating systems support it. The 2.6 Linux kernel includes two client implementations: the legacy smbfs and the newer CIFS VFS client, which will eventually replace smbfs. There are also user-space SMB file system tools commonly used on Linux (although not part of the kernel itself), including the popular smbclient utility program, which includes an FTP-like option for retrieving files via the SMB protocol.

    The Linux 2.6 kernel also has been helpful for SMB servers. The popular open-source Samba SMB server performs up to 50% faster on the Linux 2.6 kernel than on Linux 2.4. Samba also has been ported to many other UNIX and UNIX-like operating systems. The CIFS protocol, which the Samba server and the CIFS client implement, can be used reasonably securely, and it is very rich in function due to many optional extensions. However, CIFS itself is not considered a formal standard in the sense that NFS version 4 (IETF RFC 3010) and HTTP (IETF RFC 2616) are. The Storage Network Industry Association (SNIA) does document the core CIFS protocol, and Linux/UNIX extensions to the CIFS network protocol are also standardized by SNIA. Standards proposals on additional "CIFS POSIX extensions" are in progress.

    Network protocols are often described in terms of an abstract layered model called the ISO/OSI model. This model, when applied to CIFS, can be used to represent the implementation in the layers, as shown in Figure 16-1.

    Figure 16-1. The ISO/OSI model applied to CIFS and NFS.


  • NFS. Since 2003, NFS version 4 (NFSv4) has become the official Internet Engineering Task Force (IETF) network file system standard. However, more widely deployed are systems based on the much more primitive NFS versions 2 and 3, which are commonly used for UNIX-to-UNIX network file system access. NFS version 3, although it performs reasonably well in Linux-to-Linux (and UNIX-to-UNIX) environments, does not handle distributed caching as safely as some other protocols. It is also prone to security problems because of its primitive design. NFSv4, which includes many CIFS-like features, addresses many of the functional and security deficiencies of version 3, but it often shows worse performance than CIFS.

    The ISO/OSI model, when applied to NFS, divides the implementation into the layers shown in Figure 16-1.

  • WebDAV (HTTP extensions). The WebDAV extensions to HTTP can be used to mount (NET USE) from newer Windows clients to many popular web servers when DAV extensions are enabled on the server. WebDAV, sometimes described as Web Folders, also has been included in applications such as web browsers. The Internet Explorer (IE) browser has included WebDAV support since IE version 5.5. WebDAV is not optimized for performance, and Linux does not include a client file system for it in the kernel at this time (although some distributions include a user-space implementation of a Linux WebDAV client). Various popular Linux applications are WebDAV-enabled.

  • AFS/DFS. During the mid-1990s, these were the most sophisticated of the commonly deployed network file systems and commonly were used on mid- to high-end UNIX systems. AFS (and the loosely related DCE/DFS) is gradually fading in importance as IBM and others have announced plans to eventually discontinue support for them. An open-source, full-function implementation of the AFS client and server, OpenAFS, exists for Linux, and a second, more primitive AFS client has been accepted into the 2.6 kernel.

  • AFP. Older Apple Macintosh operating systems (prior to OS X) used a filing protocol called AFP by default, instead of CIFS or NFS, although third-party CIFS clients for these systems were available from companies such as Thursby. Some Linux distributions include the open-source Netatalk server for handling requests from AFP clients, and at least one commercial AFP server is available (from Helios). AFP is quickly fading in importance as these clients become less common. No AFP file system client is included in the Linux kernel.

  • Specialized network file system protocols, such as Coda and Intermezzo, and also various hybrid (some proprietary) SAN/NAS protocols, are less widely deployed. The most popular protocol from the early days of network file systems, Novell's NCP, is fading in popularity. NCP's current installed base is hard to measure, and NCP servers are rarely deployed on Linux.

  • Cluster file systems. A battle is looming over the best Linux cluster file system. Lustre and GFS (from Red Hat/Sistina) are the early leaders in this race to get accepted into the 2.6 kernel. Cluster file systems can be thought of as network file systems that support strict POSIX file system semantics (just like local file systems) designed for use by groups of homogeneous systems coupled by high-speed networks. In addition, some of the SAN file systems, such as IBM's StorageTank and GPFS, have similar characteristics and goals. None of these have been accepted into the Linux kernel as yet, but pressure is mounting to include at least one, and they are likely to become important for supercomputer clusters in particular.

We have discussed multiple popular network file systems (including NFSv3, NFSv4, CIFS, HTTP/WebDAV, and AFS). Network file systems have been around for almost 20 years, as the history mentioned here shows, so why haven't we converged on one dominant network file system (as we have converged on TCP/IP for the lower layers)? Because network and SAN file systems present unique design problems that no single network file system addresses perfectly. Why are network file systems so hard to design? The following are some reasons:

  • Network file systems, unlike local file systems, have to manage two views of the same data/metadata (local and server).

  • They should implement distributed cache coherence over multiple machines in order to guarantee POSIX file API semantics and preserve the integrity of the following:

    File data

    File metadata (date/timestamps, ACLs)

    Directory data

  • Network file systems have to work around the potential for "redundant" locks being enforced in conflicting ways on client and server.

  • Network file systems have to adjust to transient identifiers for files and differing inode numbering on client and server.

  • Network file systems cannot assume that the file namespace is the same on server and client, or even that the code page (the character set used for converting language characters and displaying filenames) is the same on client and server.

  • Network file systems must compensate for holes, omissions, and errors in the network protocol design and/or the server operating system. An NFS implementation running on Windows, for example, has to translate Windows file requests into UNIX-like NFS protocol frames (which were designed to map to Solaris, Linux, and similar operating system functions). This is difficult because the Windows file API is much bigger than the POSIX file API and includes flags that have no POSIX counterpart.

  • Most network file systems implement various network security features, such as those in the following list, to provide safer transfer of data across potentially hostile routers:

    Distributed authorization

    Access control

    PDU integrity

  • Network file systems deal with much more exotic, multiple-machine, network-failure scenarios, which are complex to analyze. Network file systems also have a harder time than local file systems in implementing file and server "migration" (movement/replication of data from one volume to another) and transparent recovery.

NFS version 4 and CIFS/Samba (CIFS kernel client and Samba server) on Linux address most of the issues listed here and are commonly used. But is one clearly better? Is NFS better than CIFS? Not always. The trade-offs to consider are as follows:

  • NFS version 3, although quite fast, is less secure than SMB/CIFS.

  • NFS maps slightly better than SMB/CIFS to the internal Linux VFS file system kernel API, although the CIFS VFS provides excellent POSIX semantics when mounted to a Samba server that has enabled the optional CIFS UNIX Extensions (available in Samba version 3 or later).

  • The Linux NFS client lacks support for Linux xattrs (which the CIFS client does provide) but the NFS client can handle direct I/O (which the Linux CIFS implementation currently does not).

  • The CIFS mount protocol (SMB SessionSetupAndX and SMB TreeConnectAndX) does not require use of a UNIX UID, which is helpful in heterogeneous networks, where UID mapping across Windows and Linux servers can be difficult. NFS versions 2 and 3, by contrast, work best when UID mapping is consistent among servers and clients.

Note that NFS is not popular on Windows, because most Windows versions do not include an NFS server or client in the operating system itself, and the NFS protocol maps better to the simpler UNIX VFS interface than to the complex, functionally rich Windows IFS. Microsoft does offer a simple NFS (version 2 and 3) server as a free download as part of its Services for UNIX.

NFS version 4, due to new security and caching features, will be appealing in the future, especially for Linux-to-Linux network file system access, but because it is not well supported on most Windows clients and servers, its adoption has been slow. Its Linux implementation in the 2.6 kernel is as yet unproven and is missing some optional features. NFSv3 performance from Linux clients over Gigabit Ethernet can be spectacular, especially to NFS servers based on the 2.6 kernel. NFS version 3 (over UDP, at least) receives the most testing and is most likely the most stable choice for network file systems when mounting from Linux clients to Linux servers.
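
For example, a Linux-to-Linux NFS version 3 mount can be requested explicitly over UDP or TCP; the server name, export path, and mount point below are placeholders.

    # NFS version 3 over UDP (the most heavily tested combination on local networks)
    mount -t nfs -o vers=3,udp server1:/export/data /mnt/data

    # The same mount over TCP, which tolerates congested or lossy networks better
    mount -t nfs -o vers=3,tcp server1:/export/data /mnt/data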

Client and Server Implementation Choices

For NFS, the implementation choice is simple: The most popular client and server implementation is the one available in the kernel itself. However, there are choices for the RPC (SunRPC) daemon. The choices available when building the Linux 2.6 kernel are whether to enable support for the following (a sample configuration fragment follows the list):

  • NFS version 3 (highly recommended) client or server

  • NFS over TCP on the server (recommended); it is always on in the client

  • NFS version 4 (experimental)

  • NFS direct I/O (experimental)
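
As a rough illustration, these choices correspond to kernel configuration options similar to the following fragment; the option names are taken from a typical 2.6 configuration and can vary slightly between kernel versions.

    # Client side
    CONFIG_NFS_FS=y
    CONFIG_NFS_V3=y          # NFS version 3 client (highly recommended)
    CONFIG_NFS_V4=y          # NFS version 4 client (experimental)
    CONFIG_NFS_DIRECTIO=y    # NFS direct I/O (experimental)

    # Server side
    CONFIG_NFSD=y
    CONFIG_NFSD_V3=y         # NFS version 3 server (highly recommended)
    CONFIG_NFSD_TCP=y        # NFS over TCP on the server (recommended)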

For SMB/CIFS, there is a choice of two clients: the legacy smbfs and the newer CIFS VFS. For the server, by far the most popular choice is the Samba network file server, which provides not just SMB/CIFS network file serving but also the following (a minimal configuration sketch follows the list):

  • A logon server for Windows and winbind Linux clients

  • A network print server

  • Administration tools and DCE/RPC-based management services

  • An RFC 1001/1002 NetBIOS name server
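
A minimal smb.conf sketch showing how these roles are typically enabled follows; the workgroup, paths, and share names are placeholders, and a production configuration needs many additional settings (accounts, browsing options, and so on).

    [global]
        workgroup = EXAMPLE
        security = user
        domain logons = yes        # logon server for Windows and winbind clients
        wins support = yes         # RFC 1001/1002 NetBIOS (WINS) name server
        load printers = yes        # export printers to act as a network print server
        printing = cups

    [data]
        path = /srv/samba/data     # an ordinary SMB/CIFS file share
        writable = yes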

Tuning the Linux Client: Some Key Concepts

This section covers some of the key concepts you need to consider when tuning the Linux client. These concepts include the following:

  • Protocol layering

  • Opportunistic locking

  • Metadata

  • File change notification

  • Read-only volumes and read-only files

Protocol Layering

The following trace, taken with the Ethereal network analyzer, shows the typical 20 network frames (requests and responses) that occur at mount time (see Figure 16-2). The trace was taken from the mount command using CIFS VFS version 1.0.3 running on a Linux kernel 2.4.23-based client.

Figure 16-2. The typical 20 network frames that occur at mount time.


Figure 16-3 shows a more detailed view of a particular frame. In this case, the Tree Connect request clearly shows the layering of an SMB request (the mount data, with the SMB header) inside an RFC 1001 (NetBIOS session service) frame, inside a TCP/IP frame, inside an Ethernet frame.

Figure 16-3. A detailed view of the Tree Connect request.


In this example, the SMB request is 86 bytes (including 32 bytes of SMB header), preceded by a 4-byte RFC 1001 header (length), preceded by 32 bytes of TCP header, 20 bytes of IP header, and 14 bytes of Ethernet header. This layering is much like nesting envelopes; that is, enclosing an envelope with a letter for a child inside a larger envelope addressed to the child's parent.
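
To collect a similar trace yourself, capture the mount-time traffic with a packet capture tool and open the resulting file in Ethereal; the interface name below is an assumption for a typical client.

    # Capture full frames on the client while the CIFS mount is performed
    tcpdump -i eth0 -s 0 -w cifs-mount.cap 'port 445 or port 139'

    # Open cifs-mount.cap in Ethereal to expand the Ethernet, IP, TCP,
    # RFC 1001, and SMB layers of each frame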

Opportunistic Locking

Figure 16-4 shows the protocol flow involved in opportunistic locking (oplock) handling for CIFS. CIFS has two types of opportunistic locks that are used to control distributed access to files. By contrast, AFS and DFS have much more complex, heavyweight token-management mechanisms. NFS versions 2 and 3 have no equivalent locking mechanism and therefore relax UNIX file semantics, at some risk to data integrity.

Figure 16-4. Protocol flow in opportunistic locking. (Source: www.microsoft.com/Mind/1196/CIFS.htm)


The first type of opportunistic lock is the whole file lock (exclusive oplock), which allows the client to do aggressive write-behind and read-ahead caching on the file, often greatly improving performance.

The second type of oplock is a read oplock. A read oplock allows multiple clients to read, but not update, a file. An attempt by one client to update such a file causes all clients with that file open to lose their caching privileges for that file. Distributed caching encounters a problem with Linux because files that are closed cannot be safely cached with the oplock mechanism alone. (Although not particularly common, standard POSIX file semantics allow a memory-mapped file to have no open file instances while the data associated with the inode can still be read.)

A third type of oplock, the batch oplock, is more rare. It is used to address certain performance problems associated with the line-by-line interpretation of DOS batch files (scripts) by allowing limited read-ahead caching of batch files by the client. These distributed caching mechanisms in the Linux client are in addition to and unrelated to caching being done in the server file system's page manager, and again in the server disk controllers.

Metadata

File and directory timestamps reflect the time of the last update to a file or directory. The CIFS network file system client uses timestamps to determine whether data in its page cache needs to be discarded when reopening a file that has been closed, but this caching is transparent to the user. AFS and DFS clients have a much more complex token-management approach to achieving cache consistency, which can safely cache network files on disk on the client for long periods. NFS versions 2 and 3 generally have looser data consistency and cache read-ahead and write-behind data based on timers.

File Change Notification

File change notification is available in NFS version 4 and CIFS as a way of allowing a client application to be notified of changes to certain files and directories. File change notification can be used to augment the client file system's distributed caching facility, although this has not been proven to be efficient and is not currently implemented for this purpose in the Linux clients.

Read-Only Volumes and Read-Only Files

Clients can cache aggressively if the mounted volume is known to contain read-only data (such as a mount to a server that is exporting a read-only CD-ROM or DVD). However, marking a volume read-only provides little benefit over oplock, which in effect allows the same caching.

The legacy smbfs client does not do safe distributed caching (oplock). Instead, it relies on timers to determine the invalidation frequency for client-cached network file data and has limited performance adjustments. A significant improvement in smbfs performance was obtained midway through the 2.4 kernel development when the smbfs network request buffer size was increased from 512 bytes to approximately one page (4096 bytes).

The Linux CIFS client implements oplock, which is enabled by default (oplock can be disabled by setting /proc/fs/cifs/OplockEnabled to 0 on the client). The Linux CIFS client attempts to negotiate a buffer size of about 16K. If the server supports this buffer size, more efficient read and write transfers with fewer network round trips can be achieved. As with the NFS client, the default read and write size can be reduced (by specifying rsize and wsize as options on mount) in an attempt to minimize TCP packet fragmentation, but this usually slows performance. The lookup caching mechanism in the CIFS VFS, as in smbfs, is based on a timer rather than on the CIFS FindNotify API. This caching of inode metadata, even for short periods, improves performance over the alternative (revalidating inode metadata on every lookup), but with the risk that the client's view of stat information (such as file size and timestamps) on a file will be out of date more often. CIFS lookup caching can be disabled by setting /proc/fs/cifs/LookupCacheEnabled to 0. A limited set of statistics is kept by the CIFS client in /proc/fs/cifs/Stats. Enabling debugging or tracing (setting /proc/fs/cifs/cifsFYI to 1, or setting /proc/fs/cifs/traceSMB to 1) can slow client performance slightly.
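
As an illustration, the tuning knobs just described can be exercised from the shell as follows; the server name, share, mount point, and credentials are placeholders.

    # Mount with explicit read and write sizes (reducing them usually hurts performance)
    mount -t cifs //server1/data /mnt/data -o user=guest,rsize=16384,wsize=16384

    # Disable oplock-based caching and timer-based lookup caching (both on by default)
    echo 0 > /proc/fs/cifs/OplockEnabled
    echo 0 > /proc/fs/cifs/LookupCacheEnabled

    # Inspect the statistics kept by the CIFS client
    cat /proc/fs/cifs/Stats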

Unlike the current 2.6 smbfs and cifs client file system modules, the NFS client does a good job of dispatching multiple read-ahead requests in parallel to a single server from a single client. This parallelism helps keep the server disk busy. The difference is even more significant when writing large files from a single process on the client to the server. A Linux client can copy files using NFS version 3 to a lightly loaded Linux server on Gigabit Ethernet much faster than it can copy them using CIFS to a Linux/Samba server. This is due to the efficient implementation of multipage asynchronous write-behind in the NFS client. The differences in read performance between NFS and CIFS are not as dramatic because both implement multipage read-ahead (readpages). The CIFS client is in turn faster than the smbfs client for file copies from the same Samba server because the CIFS VFS can use a larger read size (16K versus 4K) and can read more than one page at a time (via the new readpages function, an optional feature that 2.6 kernel file systems may implement). Future versions of the CIFS VFS client should be able to narrow the performance gap against NFS. However, exceeding NFS performance for large file copies will require a redesign of the SMB dispatching mechanism of the Samba server, as is being done for Samba version 4. When multiple processes copy different files to the same server, CIFS benefits from the capability to queue as many as 50 simultaneous read or write requests to the server.

Linux File Server Tuning

Network file system tuning is complex. Bottlenecks can be caused by high CPU usage, disk usage, or network usage, and network latency can dramatically influence throughput. The following list summarizes some general principles to use when evaluating performance improvements:

  • Maximize parallelism from each client

  • Maximize server parallelism in adapter interrupt dispatching

  • Maximize server CPU parallelism

  • Maximize client caching opportunities

  • Minimize data sent

  • Minimize round trips from client to server

    Maximize command chaining

    Piggybacked ACKs

  • Minimize protocol overhead (frame headers)

  • Limit latency when lightly loaded (such as timers and TCP ACK settings)

  • Examine session establishment and authentication overhead

A common approach for evaluating potential performance improvements due to altered configuration settings is the following:

  1. Select a file copy workload or benchmark test.

  2. Perform the tests more than once to warm the cache and smooth out variations; perform the test on lightly loaded networks.

  3. Evaluate interim results against a baseline (such as performing the same test locally or over another network file system or FTP) to sanity-check your goals.

  4. Steadily increase the benchmark load until maximum overall throughput is achieved.

  5. Alter server or client configuration settings and retest, measuring changes to performance and to network traffic (such as number of frames and average response time).

  6. Analyze the side effects of the configuration changes by careful examination of network trace data (such as by using the Ethereal network analyzer).

  7. Divide and conquer. Simulate the test using two smaller tests (one of the local file calls and one of the network socket calls) to focus on performance optimization of a narrower subsystem, as is done by running dbench and tbench locally on the server instead of running the larger NetBench simulator across the network. A sketch of this approach follows.
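
For instance, the file-call and socket-call halves of a NetBench-style load can be generated locally on the server with dbench and tbench; the client count below is an arbitrary assumption.

    # Simulate the local file system activity of a 32-client NetBench-style run
    dbench 32

    # Simulate only the corresponding network socket traffic
    tbench_srv &               # start the tbench server process locally
    tbench 32 localhost        # drive 32 simulated clients against it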

NFS

For the NFS client, the default rsize and wsize can be specified on mount. Typical values are 4K to 32K; the maximum is constrained by the server, and the Linux NFS server was changed, with the implementation of NFS over TCP, to support 32K. The Linux client supports up to a 32K rsize/wsize. Setting an rsize larger than the MTU (typically 1500 bytes) results in fragmentation and reassembly of the higher-level SunRPC frames across multiple network frames, which in some cases slows performance. You can experiment with IOzone and Bonnie to determine optimal values. The netstat and nfsstat tools can be used to get useful TCP and NFS statistics, respectively, to correlate with benchmark throughput and timing results, and tracepath can be used to determine network frame sizes (which can be changed via the ifconfig MTU option). The 2.6 kernel adds the capability to configure NFS/SunRPC over TCP, which is somewhat slower than the default (NFS/SunRPC over UDP). NFS over UDP is often used on local area networks, but if timeouts reported by the nfsstat command are excessive, consider increasing the values of the NFS mount options retrans and timeo. The number of server instances of nfsd can greatly affect performance and can be adjusted in the Linux server system's startup script. Linux NFS servers can export data using the "sync" or "async" flag, with the latter yielding better performance due to write-behind at the server, at the risk of data integrity problems if the server fails.
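
A few of these tuning points, expressed as commands, might look like the following sketch; the server name, export path, thread count, and option values are illustrative assumptions rather than recommendations.

    # Client: request larger transfer sizes and more tolerant retry behavior
    mount -t nfs -o vers=3,rsize=32768,wsize=32768,timeo=14,retrans=5 \
          server1:/export/data /mnt/data

    # Client: check for excessive retransmissions and timeouts
    nfsstat -c
    netstat -s

    # Server: run more nfsd threads (often set in the distribution startup script)
    rpc.nfsd 16

    # Server: an /etc/exports entry using async for better write performance
    #   /export/data  *(rw,async)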

Samba

Three configuration settings significantly impact Samba performance and should be examined in the server's smb.conf file:

  • Sendfile. Sendfile is a mechanism that reduces the number of copy operations performed on outgoing file data. Samba can be configured with the with-sendfile option, which enables it to send data directly from the file system on the server to the network adapter, improving throughput and reducing server utilization.

  • Case-sensitive file matching for partial wildcard searches. Caching directory entries is complicated in Samba by the differences between CIFS (Windows/DOS/OS2), which typically requests case-insensitive file matching, and the POSIX file API on the Linux server, which provides only case-sensitive file searches. As an example, a CIFS search for file \\server\share\file*.exe to a case-insensitive file system would result in a relatively small number of server operations to perform. However, Samba on Linux requires checking for matches with all forms of "file" and "exe," including "File1.Exe," "FILE2.EXE," "file3.EXE," and "FILE4.exe." The Samba version 3 server matches paths against a buffer containing a list of the equivalent lowercase filenames (the "stat cache") to improve the performance of these operations, but this approach does not work well for large directories (although it has been much improved in Samba version 3.0.12 and later). By using Linux xattrs to store the case-preserved filenames, Samba 4 can handle search operations on large directories much more efficiently. Even on Samba version 3, performance can be greatly improved on partial wildcard searches (giving more than a 5% gain on some benchmarks) by enabling case-sensitive searches in Samba's configuration file, smb.conf. Setting case sensitive = yes in smb.conf can affect the behavior of Windows applications (because they expect case-insensitive file matching behavior) unless the exported share is on a partition that has been formatted as case-insensitive (such as VFAT). Case-insensitive JFS partitions can be created by passing the -O format option to the mkfs.jfs tool.

  • SMB logging. SMB logging can be a significant performance drain at higher log levels. Setting the log level in smb.conf down to 0 or 1 can result in a measurable performance improvement.

In addition to the preceding, Samba performance can be reduced by enabling kernel oplocks (rather than letting the Samba server manage oplocks internally) and by enabling ACLs and ACL inheritance in the file system (which puts additional load on the server's local file system to retrieve xattrs). Samba performance is also sensitive to changes in TCP socket options (such as TCP_NODELAY, which can be specified in the smb.conf parameter socket options).
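
Pulling these settings together, a tuning-oriented fragment of smb.conf might look like the following sketch; whether each setting helps depends on the workload, and sendfile also requires a Samba build that includes sendfile support.

    [global]
        log level = 1             # keep logging overhead low
        case sensitive = yes      # avoid expensive case-insensitive wildcard matching
        use sendfile = yes        # send file data directly to the network adapter
        kernel oplocks = no       # let Samba manage oplocks internally
        socket options = TCP_NODELAY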

Performance Measurement

Many tools for performance measurement exist and can be used in conjunction with server utilization information; some statistics are now conveniently viewable as text via pseudo files in the /proc directory, which helps you evaluate performance trade-offs. The most commonly used tools to measure file I/O performance are as follows:

  • iozone. A great benchmark for measuring network file I/O performance in various categories such as reads versus writes, different sizes, and random versus sequential.

  • Bonnie

  • NetBench. The classic CIFS benchmark developed by Ziff Davis labs is showing its age. It requires a prohibitively large number of clients to adequately load a modern Linux Samba server.

  • dbench, tbench, smbtorture. These offer more efficient simulation of the file activity generated by a NetBench run, the corresponding network socket activity, and the SMB activity, respectively.

  • spec cifs. Work is under way with SNIA and SPEC to develop a next-generation CIFS benchmark.

  • specsfs. The most common benchmark used to report NFS version 3 performance.

  • The Connectathon NFS suite. When run with runtests -a -t, it generates useful and granular timing information on common operations.

These tools are discussed in detail in Chapter 6, "Benchmarks as an Aid to Understanding Workload Performance."
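
As a quick illustration, two of these tools are typically pointed at a mounted network file system like this; the mount point and sizes are assumptions, and bonnie++ is the commonly packaged descendant of Bonnie.

    # IOzone: automatic mode over a range of record and file sizes,
    # writing its test file on the network mount
    iozone -a -g 512m -f /mnt/data/iozone.tmp

    # Bonnie++: exercise a 1GB test file in the mounted directory
    bonnie++ -d /mnt/data -s 1024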

Load Measurement for Improved Capacity Planning

The CIFS client can measure the number of common requests by enabling CIFS statistics in the kernel configuration and by examining /proc/fs/cifs/Stats. This can be useful for determining when a server is responding slowly. An example of the statistics follows:

    Resources in use
    CIFS Session: 2
    Share (unique mount targets): 2
    SMB Request/Response Buffer: 2
    Operations (MIDs): 0
    0 session 0 share reconnects
    Total vfs operations: 550378 maximum at one time: 6

    1) \\localhost\stevef
    SMBs: 11956 Oplock Breaks: 0
    Reads: 89 Bytes 1145705
    Writes: 3962 Bytes: 1888452
    Opens: 868 Deletes: 934
    Mkdirs: 118 Rmdirs: 118
    Renames: 263 T2 Renames 0
    2) \\192.168.0.4\c$
    SMBs: 365570 Oplock Breaks: 0
    Reads: 124712 Bytes 456637519
    Writes: 152198 Bytes: 613673810
    Opens: 3 Deletes: 0
    Mkdirs: 0 Rmdirs: 0
    Renames: 0 T2 Renames 0
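
For capacity planning, it can help to sample these counters periodically and keep the results alongside other load data; a trivial sampling loop (the interval and log file are arbitrary) might look like this:

    # Append a timestamped snapshot of the CIFS statistics every five minutes
    while true; do
        date
        cat /proc/fs/cifs/Stats
        sleep 300
    done >> /var/log/cifs-stats.log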

Print Server Performance

Print server performance is affected by four major factors:

  • Where the job is rendered

  • How large the job is when sent across the network and stored on the server disk

  • How many layers of software the print job passes through on the server

  • How efficient the server driver for the print device is

In the case of Windows systems printing to Linux servers, because of the breadth of Windows print driver support, it is common for the print job to be processed mostly on the client (rather than partially on the client and partially on the server, as is often the case when Windows clients print to Windows servers). When a print job is rendered on the client and sent as a raw print file to the server, the network traffic required to print the job is much larger and the file may use significant disk space on the server, but less server CPU is required to print it. When Samba in particular is used as a print server, print jobs usually pass through multiple additional layers on the server (the CUPS subsystem, then Ghostscript, and then a print driver), which can slow performance. Print drivers on Linux vary widely in quality of implementation, but the OMNI and CUPS projects are bringing more consistency to the Linux print architecture, which should be reflected in improved Linux print driver performance over time.
