3.4 Network File System

Whereas CIFS is the dominant network file system for Windows clients, the Network File System (NFS) is the dominant network file system for UNIX clients. NFS was also the first widely deployed network file system, dating back to the early to mid-1980s. Although the overall objective of NFS and CIFS is the same, namely to provide network file system functionality that allows clients to access resources on servers, the two have remarkably different philosophies. With the introduction of NFS version 4, these differences are being somewhat reconciled.

Whereas CIFS is a state-based protocol, in which the server retains state associated with each client, NFS through version 3 is a stateless protocol: the server does not maintain any state associated with any client. As will be discussed later, NFS version 4 introduces state.

An NFS client does not negotiate a session with an NFS server. Security can be implemented per session or on every exchange between a client and a server, but the latter approach is too costly. NFS therefore takes an easy way out by insisting that all security be implemented at the client. The server assumes that a given user ID on the client represents the same user as that user ID on the server (and that the client checked credentials before allowing somebody to log on at the client with that user ID). NFS does implement some security by controlling which file systems a client can mount. Whereas a CIFS client opens a file, receives a handle (yet another piece of state that the server must maintain), and uses that handle for read and write operations, an NFS client issues a lookup call to the server, which returns a file handle. In NFSv2 and NFSv3 this file handle was opaque to the client, which could cache it and expect that it would always refer to the same file.

For readers who are familiar with UNIX, the file handle typically consists of the inode number, the inode generation count, and a file system identifier associated with the disk partition. For readers who are unfamiliar with UNIX, suffice it to say that an inode is an extremely important data structure used in UNIX file system implementations. Enough information is kept to invalidate handles cached by clients if the underlying file is changed to refer to a different file. For example, if a file were deleted and another file copied over it with the same name, the inode generation count would change and the file handle cached by the client would become invalid. NFSv4 introduces variations here that are explained in Section 3.4.2.
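The invalidation scheme just described can be sketched as follows. This is an illustrative model, not actual NFS server code; all class and method names are hypothetical, but the role of the generation count matches the description above.

```python
# Hypothetical sketch of how an NFSv3-style server might build and validate
# file handles from (fsid, inode, generation); names are illustrative.
from collections import namedtuple

FileHandle = namedtuple("FileHandle", ["fsid", "inode", "generation"])

class Server:
    def __init__(self):
        # inode -> current generation count, as a UNIX file system tracks it
        self.generations = {}

    def lookup(self, fsid, inode):
        gen = self.generations.setdefault(inode, 1)
        return FileHandle(fsid, inode, gen)

    def delete_and_recreate(self, inode):
        # Reusing the inode for a new file bumps the generation count,
        # which invalidates any handles that clients have cached.
        self.generations[inode] += 1

    def is_stale(self, handle):
        return self.generations.get(handle.inode) != handle.generation

server = Server()
fh = server.lookup(fsid=7, inode=42)      # client caches this handle
assert not server.is_stale(fh)
server.delete_and_recreate(inode=42)      # same name, new file, same inode
assert server.is_stale(fh)                # cached handle is now invalid
```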

Some NFS clients implement client-side caching on disk, similar to the CIFS client-side caching. Some NFS clients also dynamically change timeout values by measuring server response times and adjusting the timeout value to be higher for slower servers and lower for faster servers.

NFS is designed to be transport independent and originally used UDP (User Datagram Protocol) as the transport protocol. Variations of NFS implementations that use TCP and other transport protocols have also appeared.

Sections 3.4.1 and 3.4.2 provide some noteworthy highlights of NFSv3 and NFSv4. This discussion is not intended as a substitute for the excellent references listed at the end of the book.

3.4.1 NFSv3

NFSv3 improves performance, especially for large files, by having the client and server dynamically negotiate the maximum amount of data that can be sent in a read or write protocol unit. NFSv2 had a limit of 8K per protocol unit; that is, the client could send a maximum of 8K in a write request and the server could send a maximum of 8K data in a read response. NFSv3 also redefines the file offsets and data sizes to be 64 bits wide instead of the 32 bits that NFSv2 specified.
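The effect of this negotiation on a large transfer can be sketched as follows. This is a minimal illustration under assumed names (`negotiate_size`, `split_into_requests`); a real NFSv3 client learns the server's limits through the protocol's file system information call rather than a direct function call.

```python
# Illustrative sketch of NFSv3-style transfer-size negotiation: the server
# advertises its maximum read/write transfer size and the client settles on
# the smaller of its own preference and the server's limit.
def negotiate_size(client_preferred, server_max):
    return min(client_preferred, server_max)

def split_into_requests(length, unit):
    # Break one logical write into protocol units of at most `unit` bytes.
    return [(off, min(unit, length - off)) for off in range(0, length, unit)]

V2_LIMIT = 8 * 1024                       # NFSv2's fixed 8K ceiling
wsize = negotiate_size(client_preferred=64 * 1024, server_max=32 * 1024)
assert wsize == 32 * 1024

# A 100K write needs 13 requests at the v2 limit but only 4 at 32K.
assert len(split_into_requests(100 * 1024, V2_LIMIT)) == 13
assert len(split_into_requests(100 * 1024, wsize)) == 4
```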

Here are some of the highlights of NFSv3:

  • NFSv3 file handles are variable in length and can be up to 64 bytes wide.

  • NFSv3 allows the client and server to negotiate the maximum sizes of file names and pathnames.

  • NFSv3 specifies the errors that a server can return to a client and requires the server to return either one of these specified errors or none at all.

  • NFSv3 allows a server to cache data sent by a client and respond to a client write request before writing the data to disk. NFSv3 adds a COMMIT request that the client can send to the server to ensure that all of the data it has sent has been committed to disk. This feature balances performance against data integrity.

  • NFSv3 reduces the number of round trips between the client and the server by returning file attribute data along with the reply to a directory read. NFSv2 required the client to get the file names and then, for each file, to acquire a handle and then acquire the file attributes.
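The write-caching and COMMIT behavior described in the list above can be sketched as follows. This is a toy model under names of my own invention; a real server also tracks byte ranges and a write verifier, which are omitted here.

```python
# Hedged sketch of NFSv3's unstable write plus COMMIT: the server may reply
# to a WRITE before the data reaches disk, and COMMIT forces everything
# buffered so far to stable storage.
class NfsV3Server:
    def __init__(self):
        self.cache = []    # data acknowledged but not yet on disk
        self.disk = []     # data committed to stable storage

    def write(self, data, stable=False):
        if stable:
            self.disk.append(data)     # synchronous write straight to disk
        else:
            self.cache.append(data)    # unstable write: reply before disk I/O
        return "OK"

    def commit(self):
        # Flush everything buffered; only now is durability guaranteed.
        self.disk.extend(self.cache)
        self.cache.clear()
        return "OK"

srv = NfsV3Server()
srv.write(b"chunk1")
srv.write(b"chunk2")
assert srv.disk == []                      # fast replies, nothing durable yet
srv.commit()
assert srv.disk == [b"chunk1", b"chunk2"]  # COMMIT made the data durable
```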

3.4.2 NFSv4

NFSv4 appears to have totally changed the NFS philosophy and adopted a lot of features from CIFS, rather to the dismay of some NFS purists. Perhaps a better way to look at the situation is to recall that NFS was widely deployed first and SMB benefited from observing its strengths and weaknesses; now that CIFS/SMB enjoys wider adoption, at least in the client space, NFS is simply returning the favor by learning from the strengths and weaknesses of CIFS/SMB. In general, NFSv4 adds features for performance, security, and CIFS interoperability, as described here:

  • NFSv4 introduces COMPOUND requests, whereby multiple requests can be packed into a single request and the responses are likewise packed into a single response. The idea is to improve performance by minimizing network protocol overhead and eliminating round-trip delays on the network. If this sounds very similar to the CIFS AndX SMB feature (see Section 3.3.5.1), it is probably not a coincidence.

  • NFSv4 borrows some features from Sun's WebNFS. In particular, NFSv4 subsumes some secondary protocols into the core NFS specification in order to make NFS firewall friendly. NFSv3 and earlier clients had to use a mount protocol to mount a share exported by the server onto their local file system. Because the mount protocol daemon did not have a well-known TCP or UDP port assigned to it, the client first had to ask a port mapper daemon on the server for the port on which the mount daemon was listening. So besides NFS, a mount protocol and a port mapper protocol are involved. Further, because the mount daemon could be using any arbitrary port, it is hard to configure firewalls to allow this access. NFSv4 does away with the mount protocol and the need to invoke the services of the port mapper. Locking has also been fully integrated into the core NFS protocol, making the Network Lock Manager protocol used by prior NFS versions obsolete.

  • NFSv4 mandates the use of a transport protocol that provides congestion control. This implies that NFS clients or servers will gradually migrate to using TCP instead of UDP, which is typically used with NFSv3.

  • NFSv2 and NFSv3 allowed the use of either the U.S. ASCII character set or the ISO Latin-1 character set. This led to problems when a client using one locale (character set) created a file and a client using a different locale accessed that same file. NFSv4 uses UTF-8, which can compactly encode 16-bit and 32-bit characters for transmission. UTF-8 also preserves enough information that the proper action can be taken when a file is created under one locale and accessed under another.

  • NFSv4 requires the client to treat file handles differently. With NFSv3, a client could simply treat the handle as an opaque entity that it could cache, relying on the server to ensure that the handle always referred to the same file. NFSv4 defines two kinds of file handles. One is the persistent file handle, which has the same functionality as an NFSv3 handle. The other is the volatile file handle, which may be invalidated after a certain time period or event. Volatile handles are meant for servers whose file systems (e.g., NTFS) cannot easily provide a persistent handle-to-file mapping.

  • NFSv4 adds OPEN and CLOSE operations with semantics that allow interoperability with CIFS clients. The OPEN operation creates state at the server.

  • NFSv4 extends the protocol to include an OPEN request that a client can use to obtain semantics similar to a file open request made by a Windows application. The semantics include a way for the client to specify whether it is willing to share the file with other clients or whether it wants exclusive access to the file.
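The COMPOUND mechanism described in the first bullet above can be sketched as follows. This is a toy evaluator, not the real NFSv4 wire format; only the pattern is intended to be accurate: several operations travel in one request, a current file handle is threaded between them, and processing stops at the first failure.

```python
# Illustrative sketch of an NFSv4-style COMPOUND request. Operation names
# mirror the protocol; the dispatch and the single known file are mocked.
def compound(ops):
    results = []
    fh = None                          # "current filehandle" shared by ops
    for op, arg in ops:
        if op == "PUTROOTFH":
            fh = "/"
        elif op == "LOOKUP":
            if arg != "readme.txt":    # pretend only this file exists
                results.append((op, "NFS4ERR_NOENT"))
                break                  # processing stops at the first error
            fh = "/" + arg
        elif op == "READ":
            results.append((op, f"data-from-{fh}"))
            continue
        results.append((op, "NFS4_OK"))
    return results

# One round trip replaces three: set the root handle, look up a name, read.
ok = compound([("PUTROOTFH", None), ("LOOKUP", "readme.txt"), ("READ", None)])
assert ok[-1] == ("READ", "data-from-/readme.txt")

bad = compound([("PUTROOTFH", None), ("LOOKUP", "missing"), ("READ", None)])
assert bad[-1] == ("LOOKUP", "NFS4ERR_NOENT")   # the READ never ran
```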
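The distinction between persistent and volatile file handles can also be sketched in code. The classes and the timer-based expiry are illustrative assumptions; a real server may tie volatility to server events rather than a simple wall-clock lifetime.

```python
# Hypothetical sketch contrasting NFSv4 persistent and volatile file handles:
# a volatile handle carries an expiry, after which the client must look the
# file up again to obtain a fresh handle.
import time

class VolatileHandle:
    def __init__(self, path, lifetime_s):
        self.path = path
        self.expires = time.monotonic() + lifetime_s

    def valid(self):
        return time.monotonic() < self.expires

class PersistentHandle:
    def __init__(self, path):
        self.path = path

    def valid(self):
        return True    # good for as long as the file itself exists

vh = VolatileHandle("/export/a.txt", lifetime_s=0.05)
ph = PersistentHandle("/export/a.txt")
assert vh.valid() and ph.valid()
time.sleep(0.1)
assert not vh.valid()     # expired: client must issue a fresh LOOKUP
assert ph.valid()
```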
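The Windows-style share semantics carried by NFSv4's OPEN, described in the last bullet above, can be sketched as follows. The bitmask check is a simplified model of share-reservation conflict detection, and all names are illustrative.

```python
# Hedged sketch of share reservations: an open states the access it wants
# and the access it denies to other openers; a new open conflicts if it
# wants something denied, or denies something already held.
READ, WRITE = 1, 2

class ShareTable:
    def __init__(self):
        self.opens = []    # list of (access_bits, deny_bits)

    def open(self, access, deny):
        for held_access, held_deny in self.opens:
            if (access & held_deny) or (deny & held_access):
                return "NFS4ERR_SHARE_DENIED"
        self.opens.append((access, deny))
        return "NFS4_OK"

f = ShareTable()
assert f.open(access=READ, deny=0) == "NFS4_OK"      # plain shared read
assert f.open(access=READ, deny=0) == "NFS4_OK"      # a second reader: fine
# A writer who denies readers conflicts with the readers already present.
assert f.open(access=WRITE, deny=READ) == "NFS4ERR_SHARE_DENIED"

g = ShareTable()
assert g.open(access=WRITE, deny=READ | WRITE) == "NFS4_OK"   # exclusive open
assert g.open(access=READ, deny=0) == "NFS4ERR_SHARE_DENIED"
```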

3.4.2.1 NFSv4 Security

NFSv4 has introduced features to enhance security. In particular, NFSv4 adds support for more types of attributes on files. One of these attributes is Windows NT-style ACLs. This allows for better interoperability as well as better security.

Whereas NFSv2 and NFSv3 simply recommended security, NFSv4 mandates it. NFSv4 requires that security be implemented via the RPCSEC_GSS framework (which builds on the Generic Security Services API) in general, using Kerberos 5 or LIPKEY in particular. Note that RPCSEC_GSS simply acts as an API and a transport for the security tokens and information. NFSv4 allows for the possibility of using multiple authentication and security schemes, as well as a way to negotiate which scheme will be used.

Taking some time to understand LIPKEY is worthwhile. LIPKEY uses a combination of symmetric and asymmetric cryptography. The client encrypts the user identity and password with a randomly generated 128-bit key. This encryption is done via a symmetric encryption algorithm, meaning that exactly the same key is needed for decryption. Because the server needs this key to decrypt the message, the randomly generated key must be sent to the server. The client encrypts this key using the server's public key; the server decrypts the data with its private key, extracts the symmetric key, and then decrypts the user identity and password.
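The shape of this exchange can be sketched as follows. To keep the sketch self-contained, the symmetric cipher is a toy XOR keystream and the server's key pair is mocked with a sealed-box object; neither is real cryptography. Only the hybrid structure, symmetric encryption of the credentials plus asymmetric wrapping of the session key, follows the description above.

```python
# Toy sketch of the LIPKEY flow: real implementations use a proper symmetric
# cipher and the server's actual public/private key pair.
import hashlib, os

def keystream_xor(key, data):
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream. The same
    # key encrypts and decrypts, as with any symmetric algorithm.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class MockKeyPair:
    """Stand-in for the server's asymmetric key pair (not real crypto)."""
    def public_encrypt(self, data):
        return ("sealed", self, data)      # only this pair can unseal it
    def private_decrypt(self, box):
        tag, owner, data = box
        assert tag == "sealed" and owner is self
        return data

server_keys = MockKeyPair()

# Client side: random 128-bit session key, credentials under the symmetric
# cipher, session key wrapped with the server's public key.
session_key = os.urandom(16)
ciphertext = keystream_xor(session_key, b"alice:hunter2")
wrapped_key = server_keys.public_encrypt(session_key)

# Server side: unwrap the session key, then decrypt the credentials.
recovered_key = server_keys.private_decrypt(wrapped_key)
assert keystream_xor(recovered_key, ciphertext) == b"alice:hunter2"
```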

Clients can authenticate servers by using the server certificate and a certificate authority's services to validate that certificate. One popular method that hackers use is to capture some packets and replay them later. When Kerberos is used, NFS puts a timestamp on each packet. The server "remembers" timestamps it has received in the immediate past and compares them with the timestamps on new RPC packets it receives. If a timestamp is an older one that the server has already seen, the server can detect a replay attack and ignore those packets.
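The replay check just described can be sketched as follows. The window size and class names are illustrative assumptions; the point is simply that a repeated or stale timestamp is rejected.

```python
# Sketch of timestamp-based replay detection: the server remembers
# timestamps seen within a recent window and rejects any packet whose
# timestamp repeats one already seen or falls outside the window.
class ReplayGuard:
    def __init__(self, window_s=300):
        self.window_s = window_s
        self.seen = set()

    def accept(self, timestamp, now):
        if now - timestamp > self.window_s:
            return False           # too old: outside the acceptance window
        if timestamp in self.seen:
            return False           # already seen: likely a replayed packet
        self.seen.add(timestamp)
        return True

guard = ReplayGuard()
assert guard.accept(timestamp=1000.0, now=1001.0)       # fresh packet
assert not guard.accept(timestamp=1000.0, now=1002.0)   # replayed copy
assert not guard.accept(timestamp=100.0, now=1001.0)    # stale timestamp
```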


   


Inside Windows Storage: Server Storage Technologies for Windows 2000, Windows Server 2003 and Beyond
ISBN: 032112698X
Year: 2003
Pages: 111
Authors: Dilip C. Naik