13.2 Discussion


There are two types of locking that an SMB server must perform. The first is record locking, which allows a client to lock a range of bytes in an open file. The second is the deny modes that are specified when a file is opened.

Record locking semantics under UNIX are very different from record locking under Windows. Versions of Samba before 2.2 tried to use the native fcntl() UNIX system call to implement proper record locking between different Samba clients. This cannot be fully correct for several reasons. The simplest is that a Windows client may lock a byte range up to 2^32 or 2^64, depending on the client OS, while UNIX locking supports byte ranges only up to 2^31. A lock request above 2^31 therefore cannot be satisfied correctly. There are many more differences, too many to be listed here.

Samba 2.2 and above implements record locking completely independently of the underlying UNIX system. If a byte-range lock requested by the client happens to fall within the range 0 to 2^31, Samba hands the request down to the UNIX system. All other locks are invisible to UNIX anyway.

Strictly speaking, an SMB server should check for locks before every read and write call on a file. Unfortunately, with the way fcntl() works, this can be slow and may overstress rpc.lockd. It is also almost always unnecessary, because clients are supposed to make their own locking calls before reads and writes if locking matters to them. By default, Samba makes locking calls only when explicitly asked to by a client, but if you set strict locking = yes, it will make lock-checking calls on every read and write.

You can also disable byte-range locking completely by setting locking = no. This is useful for shares that do not support locking or do not need it (such as CD-ROMs). In this case, Samba fakes the return codes of locking calls to tell clients that everything is okay.
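As an illustration, the two behaviors described above map onto per-share smb.conf settings like this (the share and path names here are hypothetical):

```
[data]
    path = /export/data
    # Check byte-range locks on every read and write (slower, stricter).
    strict locking = yes

[cdrom]
    path = /export/cdrom
    read only = yes
    # No locking needed on read-only media; Samba fakes successful lock calls.
    locking = no
```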

The second class of locking is the deny modes. These are set by an application when it opens a file, to determine what types of access should be allowed simultaneously with its open. A client may ask for DENY_NONE, DENY_READ, DENY_WRITE, or DENY_ALL. There are also special compatibility modes called DENY_FCB and DENY_DOS.

13.2.1 Opportunistic Locking Overview

Opportunistic locking (oplocks) is invoked by the Windows file system (as opposed to by an API) via registry entries on the server and the client, for the purpose of enhancing network performance when accessing a file residing on a server. Performance is enhanced by caching the file locally on the client, which allows:

  • Read-ahead: The client reads the local copy of the file, eliminating network latency.

  • Write caching: The client writes to the local copy of the file, eliminating network latency.

  • Lock caching: The client caches application locks locally, eliminating network latency.

The performance enhancement of oplocks comes from the opportunity for exclusive access to the file, even if it is opened with deny-none, because Windows monitors the file's status for concurrent access from other processes.

Windows defines four kinds of oplocks:

Level1 Oplock: The redirector sees that the file was opened with deny none (allowing concurrent access), verifies that no other process is accessing the file, checks that oplocks are enabled, and then grants deny-all/read-write/exclusive access to the file. The client then performs operations on the cached local file.

If a second process attempts to open the file, the open is deferred while the redirector "breaks" the original oplock. The oplock break signals the caching client to write the local file back to the server, flush the local locks, and discard read-ahead data. The break is then complete, the deferred open is granted, and the multiple processes can enjoy concurrent file access as dictated by mandatory or byte-range locking options. However, if the original opening process opened the file with a share mode other than deny-none, the second process is granted limited or no access despite the oplock break.

Level2 Oplock: Performs like a Level1 oplock, except that caching is operative only for reads. All other operations are performed on the server's disk copy of the file.

Filter Oplock: Does not allow write or delete file access.

Batch Oplock: Manipulates file openings and closings and allows caching of file attributes.

An important detail is that oplocks are invoked by the file system, not by an application API. Therefore, an application can close an oplocked file, but the file system does not relinquish the oplock. When the oplock break is issued, the file system simply closes the file in preparation for the subsequent open by the second process.

Opportunistic locking is actually a misleading name for this feature. Its true benefit is client-side data caching; the oplock is merely a notification mechanism for writing data back to the networked storage disk. The limitation of opportunistic locking is the reliability of the mechanism that processes an oplock break (notification) between the server and the caching client. If this exchange is faulty (usually due to a timeout, for any number of reasons), then the client-side caching benefit is negated.

The real decision a user or administrator faces is whether it is sensible to share, among multiple users, data that will be cached locally on a client. In many cases the answer is no. Deciding when to cache or not to cache data is the real question, and "opportunistic locking" should therefore be treated as a toggle for client-side caching. Turn it "on" when client-side caching is desirable and reliable. Turn it "off" when client-side caching is redundant, unreliable, or counterproductive.
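Treated as a caching toggle, this comes down to a single per-share smb.conf setting; a minimal sketch (share and path names are hypothetical):

```
[reliable-lan-share]
    path = /export/projects
    # Client-side caching is desirable and reliable here:
    # leave oplocks on (the Samba default).
    oplocks = yes

[contended-share]
    path = /export/shared
    # Client-side caching is unreliable or counterproductive here:
    # turn it off.
    oplocks = no
```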

Opportunistic locking is set to "on" by default by Samba on all configured shares, so careful attention should be given to each case to determine whether the potential benefit is worth the potential for delays. The following recommendations help to characterize the environments where opportunistic locking may be effectively configured.

Windows opportunistic locking is a lightweight performance-enhancing feature. It is not a robust and reliable protocol. Every implementation of opportunistic locking should be evaluated as a tradeoff between perceived performance and reliability. Reliability decreases as each successive guideline below is ignored. Consider a share with oplocks enabled, over a wide area network, to a client on a South Pacific atoll, on a high-availability server, serving a mission-critical multi-user corporate database, during a tropical storm. This configuration will likely encounter problems with oplocks.

Oplocks can be beneficial to perceived client performance when treated as a configuration toggle for client-side data caching. If the data caching is likely to be interrupted, then oplock usage should be reviewed. Samba enables opportunistic locking by default on all shares. Careful attention should be given to the client usage of shared data on the server, the reliability of the server network, and the opportunistic locking configuration of each share.

13.2.1.1 Exclusively Accessed Shares

Opportunistic locking is most effective when it is confined to shares that are exclusively accessed by a single user, or by only one user at a time. Because the true value of opportunistic locking is the local client caching of data, any operation that interrupts the caching mechanism will cause a delay.

Home directories are the most obvious examples of where the performance benefit of opportunistic locking can be safely realized.
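A [homes] share is therefore a safe place to leave the default enabled; a minimal sketch:

```
[homes]
    # Each user gets a private share; single-user access makes
    # client-side caching via oplocks safe and beneficial.
    browseable = no
    writable = yes
    oplocks = yes
```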

13.2.1.2 Multiple-Accessed Shares or Files

As each additional user accesses a file in a share with opportunistic locking enabled, the potential for delays and the resulting perception of poor performance increase. When multiple users access a file on a share that has oplocks enabled, the management overhead of sending and receiving oplock breaks, and the resulting latency while other clients wait for the caching client to flush its data, offset the performance gains of the caching user.

As each additional client attempts to access a file with oplocks set, the potential performance improvement is negated and eventually results in a performance bottleneck.

13.2.1.3 UNIX or NFS Client-Accessed Files

Local UNIX and NFS clients access files without a mandatory file-locking mechanism. Thus, these client platforms are incapable of initiating an oplock break request from the server to a Windows client that has a file cached. Local UNIX or NFS file access can therefore write to a file that has been cached by a Windows client, which exposes the file to likely data corruption.

If files are shared between Windows clients and either local UNIX or NFS users, turn opportunistic locking off.
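For such mixed-access shares, oplocks can be disabled per share, as sketched below (share and path names are illustrative). On platforms whose kernels support it (such as Linux and IRIX), the global kernel oplocks parameter can instead let local UNIX file access break a Windows client's oplock; check the smb.conf man page for your platform before relying on it.

```
[engineering]
    path = /export/engineering
    # Files here are also touched by local UNIX processes and NFS
    # clients: do not let Windows clients cache them.
    oplocks = no
    level2 oplocks = no
```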

13.2.1.4 Slow and/or Unreliable Networks

The biggest potential performance improvement for opportunistic locking occurs when the client-side caching of reads and writes delivers the most differential over sending those reads and writes over the wire. This is most likely to occur when the network is extremely slow, congested, or distributed (as in a WAN). However, network latency also has a high impact on the reliability of the oplock break mechanism, and thus increases the likelihood of encountering oplock problems that more than offset the potential perceived performance gain. Of course, if an oplock break never has to be sent, then this is the most advantageous scenario in which to utilize opportunistic locking.

If the network is slow, unreliable, or a WAN, then do not configure opportunistic locking if there is any chance of multiple users regularly opening the same file.

13.2.1.5 Multi-User Databases

Multi-user databases clearly pose a risk due to their very nature: they are typically heavily accessed by numerous users at random intervals. Placing a multi-user database on a share with opportunistic locking enabled will likely result in a locking-management bottleneck on the Samba server. Whether the database application is developed in-house or is a commercially available product, ensure that the share has opportunistic locking disabled.
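If the database files can be identified by name, Samba's veto oplock files parameter offers a narrower alternative to disabling oplocks on the whole share; the share name and file patterns below are examples only:

```
[dbdata]
    path = /export/dbdata
    # Never grant oplocks on these database files, even though
    # oplocks remain enabled for the rest of the share.
    veto oplock files = /*.mdb/*.dbf/
```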

13.2.1.6 PDM Data Shares

Process Data Management (PDM) applications such as IMAN, Enovia, and ClearCase are increasing in usage on Windows client platforms, and therefore on SMB datastores. PDM applications manage multi-user environments for critical data security and access. The typical PDM environment is usually associated with sophisticated client design applications that load data locally as demanded. In addition, the PDM application will usually monitor the data state of each client. In this case, client-side data caching is best left to the local application and PDM server to negotiate and maintain. It is appropriate to remove the client OS from any caching tasks, and the server from any oplock management, by disabling opportunistic locking on the share.

13.2.1.7 Beware of Force User

Samba includes an smb.conf parameter called force user that changes the user accessing a share from the incoming user to whatever user is defined in smb.conf. If opportunistic locking is enabled on a share, the change in user access causes an oplock break to be sent to the client, even if the user has not explicitly loaded a file. In cases where the network is slow or unreliable, an oplock break can be lost without the user even accessing a file. This can cause apparent performance degradation as the client continually reconnects to overcome the lost oplock break.

Avoid the combination of the following:

  • force user in the smb.conf share configuration

  • Slow or unreliable networks

  • Opportunistic locking enabled
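If force user is genuinely needed on a share reached over a slow link, disabling oplocks on that share removes the oplock-break traffic from the equation. A sketch (the share name, path, and user are hypothetical):

```
[dropbox]
    path = /export/dropbox
    force user = archive
    # With force user in play over an unreliable network,
    # avoid oplock breaks entirely.
    oplocks = no
```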

13.2.1.8 Advanced Samba Opportunistic Locking Parameters

Samba provides opportunistic locking parameters that allow the administrator to adjust various properties of the oplock mechanism to account for timing and usage levels. These parameters provide good versatility for implementing oplocks in environments where they would likely cause problems. The parameters are oplock break wait time and oplock contention limit.
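For reference, both are tuning parameters whose values below are the documented Samba-3 defaults; verify them against the smb.conf man page for your version before relying on them:

```
[data]
    path = /export/data
    # Milliseconds the server waits before responding to an oplock
    # break request; 0 means no added delay.
    oplock break wait time = 0
    # Number of simultaneous contending opens on a file beyond which
    # Samba stops granting oplocks for it.
    oplock contention limit = 2
```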

For most users, administrators, and environments, if these parameters seem required, the better option is simply to turn oplocks off. The Samba SWAT help text for both parameters reads: "Do not change this parameter unless you have read and understood the Samba oplock code." This is good advice.

13.2.1.9 Mission-Critical High-Availability

In mission-critical high-availability environments, data integrity is often a priority. Complex and expensive configurations are implemented to ensure that if a client loses connectivity with a file server, a failover replacement will be available immediately to provide continuous data availability.

Windows client failover behavior is more at risk of application interruption than that of other platforms because it is dependent upon an established TCP transport connection. If the connection is interrupted, as in a file server failover, a new session must be established. It is rare for Windows client applications to be coded to recover correctly from a transport connection loss; therefore, most applications will experience some sort of interruption, at worst aborting and requiring a restart.

If a client session has been caching writes and reads locally due to opportunistic locking, it is likely that the data will be lost when the application restarts or recovers from the TCP interruption. When the TCP connection drops, the client state is lost. When the file server recovers, an oplock break is not sent to the client, so the work from the prior session is lost. By contrast, with oplocks disabled and the client writing data to the file server in real time, the failover will provide the data on disk as it existed at the time of the disconnect.

In mission-critical high-availability environments, careful attention should be given to opportunistic locking. Ideally, comprehensive testing should be done with all affected applications with oplocks enabled and disabled.



The Official Samba-3 HOWTO and Reference Guide, 2nd Edition
ISBN: 0131882228
Year: 2005
Pages: 297