Replication Concepts

Before we dive into designing our replication system, we should spend some time understanding basic directory replication concepts. These concepts are as follows:

  • Suppliers, consumers, and replication agreements

  • The unit of replication

  • Consistency and convergence

  • Incremental and total updates

  • Initial population of a replica

  • Replication strategies

    - Single-master replication

    - Multimaster replication

  • Replication protocols

Each of these concepts is discussed in the following sections.

Suppliers, Consumers, and Replication Agreements

In replication systems, we use the terms supplier and consumer to identify the source and destination of replication updates, respectively. A supplier server sends updates to another server; a consumer server accepts those changes. These roles are not mutually exclusive: A server that is a consumer may also be a supplier.

The configuration information that tells a supplier server about a consumer server (and vice versa) is termed a replication agreement. This configuration information typically includes the unit of replication (discussed next), the host name and port of the remote server, and other information about the replication to be performed, such as scheduling information. In other words, the replication agreement describes which consumer should receive the updates, what part of the directory is to be replicated, how the supplier connects to the consumer, and how the supplier authenticates to the consumer. Most directory server software stores replication agreements as entries in the directory, which means that you can often use utility software like the ldapsearch command-line utility to examine replication agreements.
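For example, you might list the replication agreements on a server with a short script. The following is a minimal sketch using the Python ldap3 library; the cn=config base DN, the nsDS5ReplicationAgreement object class, and the attribute names are assumptions modeled on Netscape-derived servers and will differ in other products.

    # Sketch: listing replication agreements stored as directory entries.
    # Base DN, object class, and attributes are assumptions; consult your
    # server's documentation for the actual names.
    from ldap3 import Server, Connection, SUBTREE

    server = Server('ldap://ds.example.com:389')
    conn = Connection(server, 'cn=Directory Manager', 'secret', auto_bind=True)

    conn.search('cn=mapping tree,cn=config',
                '(objectClass=nsDS5ReplicationAgreement)',
                search_scope=SUBTREE,
                attributes=['nsDS5ReplicaHost', 'nsDS5ReplicaPort'])
    for entry in conn.entries:
        print(entry.entry_dn)    # one entry per configured agreement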

The Unit of Replication

When we talk about replication, we need some common language to describe what is to be replicated. With most directory server software, the unit of directory partitioning is also the unit of replication. When you decide on a partitioning scheme, you are also deciding on your units of replication.

For example, with Microsoft Active Directory, the domain is the unit of replication. You cannot replicate part of a domain; all entries contained in the domain are replicated to all Active Directory replicas. Similarly, the Netscape Directory Server 6 unit of replication is the directory database. All entries contained in the database are replicated to all replicas of that database. If you need to replicate a particular subtree, that subtree must be contained in its own domain (Active Directory) or directory database (Netscape Directory Server 6).

Some directory server software allows you to create replicas that contain only a subset of a partition's entries or attributes. We'll discuss these capabilities later in this chapter, in the section titled Advanced Replication Features.

Consistency and Convergence

Degree of consistency describes how closely the contents of replicated servers match each other at a given point in time. A strongly consistent replica is one that provides the same information as its supplier at all times; that is, the effect of an update is not visible to any client until all of the replicated servers have received and acknowledged the update. From a client's point of view, the update occurs simultaneously on all the replicated servers.

On the other hand, a weakly consistent replica is permitted to diverge from its supplier for some period. For example, Figure 11.4 shows that there is a period of time after a supplier has been updated but before the update has been propagated to a replica; during that time the replica contains stale data with respect to the current data on the supplier. We say that a supplier and a replica have converged when they contain the same data. It is important that replication systems eventually converge over time so that all clients see a consistent view of the directory.

Figure 11.4. Weakly Consistent Replicas

In a directory system that uses weakly consistent replication, directory clients should not expect their updates to be reflected immediately at each replica. For example, a directory application should not expect that it can update an entry and then immediately read it back to obtain the updated values. If the client is connected to a read-only replica, its update will have been redirected to an updatable replica, and the change may take some time to propagate back to the replica the client is connected to.
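An application that has just made an update and needs to read it back from a replica can poll until the replica has converged. The following is only a sketch; read_from_replica is a hypothetical helper standing in for an LDAP search against the replica.

    import time

    def wait_for_convergence(read_from_replica, dn, attr, expected,
                             timeout=30.0, interval=0.5):
        """Poll a weakly consistent replica until an update is visible."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if read_from_replica(dn, attr) == expected:
                return True              # the update has propagated
            time.sleep(interval)         # replica still holds stale data
        return False                     # did not converge within the timeout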

It may come as a surprise that all practical directory systems use weakly consistent replicas. Why? The answer has to do with performance. Imagine that a single supplier feeds three replicas, and that each of the replicas handles a large client load of search requests. If the supplier maintains strong consistency with its replicas, it must send a change to each one and receive a positive acknowledgment before returning a result to the client that sent the change. Because each replica is heavily loaded, it may be slow to acknowledge the update. The supplier can therefore return a result to the client no faster than the slowest replica acknowledges the update. This limitation can reduce performance unacceptably.

In addition, implementing strong consistency among replicas requires that replicas support a two-phase commit protocol. Such support is necessary so that the supplier server can back out (undo) an update if any of the consumers fails to acknowledge the change. The supplier would then return an error code to the client, and the client would presumably retry the operation later. This means that if any consumer server is offline or unreachable, the supplier server cannot accept any changes.

In addition to its lower performance, strong consistency is incompatible with scheduled replication, an advanced feature we'll discuss later in this chapter. Briefly, scheduled replication permits updates to be deferred to a particular window in time, perhaps to take advantage of reduced bandwidth costs or lower utilization of slow WAN links. Because a strongly consistent system requires that updates be propagated immediately, it is essentially at odds with scheduled replication.

Given all these challenges, weakly consistent replication systems are much easier to implement and provide better performance at the expense of temporary inconsistencies between supplier and replica servers. For virtually all directory applications, this trade-off is perfectly acceptable and represents a well-informed compromise on the part of directory designers.

Incremental and Total Updates

To make two servers consistent, we might choose either to completely replace the contents of the consumer server or to transmit only the minimum information necessary to bring the servers into synchronization. The former approach, termed a total update or a replica refresh, is useful when you're initially creating a replica (you'll learn more about this creation operation later in this chapter). But always using a total update strategy when updating consumer servers is inefficient because all entries are transmitted even if they have not been modified.

In an incremental update, only the changes made to the supplier's directory are sent to the consumer server. For example, if a directory client modifies an entry by replacing its description attribute, it is necessary to perform only that same change on all replicas to bring them into synchronization. It is not necessary to send the entire entry, and it is certainly not necessary to transmit the entire contents of the database to all replicas. Incremental updates are much more efficient, and all widely used LDAP directory server software supports them.

Note

If a replica's directory tree is in an unknown state (perhaps it has been damaged or reloaded from an extremely out-of-date backup), it may then be desirable to wipe out any existing contents and perform a total update. This is also what is done when a replica is initially populated with data.


To understand better how the incremental update process works, let's look at the process in general, and then we'll examine how real-world directory services perform incremental updates. Following is an outline of the incremental update process:

Step 1. The supplier server connects to the consumer server and authenticates.

Step 2. The supplier determines which updates need to be applied.

Step 3. The supplier sends the updates to the consumer.

Step 4. The consumer applies those updates to its copy of the directory data.

Step 5. The supplier and/or the consumer save state information that records the last update applied. This information is used in Step 2 of subsequent incremental updates.
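A toy model of this five-step cycle, with the protocol details stripped away, might look like the following sketch. The changelog format and the Consumer class are inventions for illustration.

    class Consumer:
        """Stands in for a consumer server reachable over the network."""
        def __init__(self):
            self.applied = []

        def apply(self, change):         # Step 4: apply to the local copy
            self.applied.append(change)

    def incremental_update(changelog, consumer, last_applied):
        # Step 2: find the updates the consumer has not yet seen.
        pending = [(csn, chg) for csn, chg in changelog if csn > last_applied]
        for csn, change in sorted(pending):
            consumer.apply(change)       # Step 3: send each update
            last_applied = csn
        return last_applied              # Step 5: state saved for next session

    consumer = Consumer()
    log = [(1, 'add cn=a'), (2, 'modify cn=b'), (3, 'delete cn=c')]
    state = incremental_update(log, consumer, last_applied=0)
    state = incremental_update(log, consumer, state)   # sends nothing new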

In this way, a supplier transmits only the minimum number of updates necessary to make the consumer server consistent with the supplier. For some more concrete examples, let's examine how a popular directory service, Netscape Directory Server 6, incrementally updates a consumer.

The Netscape Directory Server 6 Update Process

Netscape Directory Server 6 updates consumers by replaying changes it receives. For example, if a client connects to a Netscape Directory Server 6 server and adds a new entry, the supplier connects to all its consumers and adds the same entry. Each change, when received by the supplier, is assigned a unique change sequence number (CSN); the change, along with the CSN, is then logged to a changelog, a database that records all changes made to the server. The supplier keeps track of the changes it has replayed to the consumer by storing in the consumer's directory tree the CSN of the last change applied.

Netscape Directory Server 6 performs the following steps when incrementally updating a replica, as illustrated in Figure 11.5:

Step 1. The supplier server connects to the consumer server and authenticates.

Step 2. The supplier reads the nsds50ruv attribute in the entry at the top of the replicated subtree. The nsds50ruv attribute contains the CSN of the last change replayed to this consumer from every known updatable replica.

Step 3. The supplier server replays to the consumer any changes that it has not yet received. It continues to do this until it runs out of changes to replay.

Step 4. The supplier server stores on the consumer an updated nsds50ruv attribute. The consumer is now consistent with the supplier.

Figure 11.5. The Netscape Directory Server 6 Update Process
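To make Step 2 concrete, you can inspect this state information yourself. The sketch below reads nsds50ruv with ldap3; because nsds50ruv is an operational attribute it must be requested by name, and the suffix shown is an assumption.

    from ldap3 import Server, Connection, BASE

    conn = Connection(Server('ldap://consumer.example.com:389'),
                      'cn=Directory Manager', 'secret', auto_bind=True)
    # Read the entry at the top of the replicated subtree.
    conn.search('dc=example,dc=com', '(objectClass=*)',
                search_scope=BASE, attributes=['nsds50ruv'])
    print(conn.entries[0])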

The preceding sequence of steps is simplified; the process is actually more complicated. We'll discuss the update process later in this chapter when we cover multimaster replication in detail.

Initial Population of a Replica

When a consumer server is initially configured, it contains no data. The replica must somehow be populated with a snapshot of the supplier's data so that it can subsequently be made consistent. Or in the event that a consumer server has become damaged, the consumer must be made consistent, usually by removal of the damaged data and creation of a fresh copy of the directory data from the supplier.

Tip

A replica that is being initialized cannot service requests until initialization is complete. If it began servicing requests before being completely populated, it might give erroneous results. For example, it might claim that a given entry does not exist when in fact it simply has not yet received the entry from the supplier. Virtually all directory server software automatically takes care of arranging for a replica to be offline during replica initialization. The replica typically issues a referral to the master server or chains the operation to the master. Clients that connect to a replica that is being initialized may therefore experience degraded performance until the initialization is complete.


How is a replica initialized? Directory vendors accomplish this task by various methods, although all are similar. For example, Netscape Directory Server 6 goes through the following process to perform a total update:

Step 1. The supplier server sends a special LDAPv3 extended operation to the consumer that signals the beginning of a total update.

Step 2. The consumer server takes the appropriate directory database (partition) offline and configures itself so that any client operations (searches or updates) are referred to the supplier during the update. The consumer then signals to the supplier that it is ready to accept the total update.

Step 3. The supplier server sends a series of LDAPv3 extended operations that contain the directory data, one entry per extended operation. The consumer server buffers this data and feeds it to a bulk update process that efficiently creates the directory database and all needed indexes.

Step 4. When all entries have been sent, the supplier sends a special LDAPv3 operation to the consumer that signals the end of the total update. It also replaces the consumer's nsds50ruv attribute.

Step 5. The consumer server places its database back in the online state and resumes servicing client requests.

Replication Strategies

The term replication strategy refers to the way updates flow from server to server and the way servers interact when propagating updates. There are two main approaches: single-master replication and multimaster replication.

Single-Master Replication

In single-master replication, only one server contains a writable copy of a given directory entry. All replicas contain read-only copies of the entry. Whereas the master server is the only one that can perform write operations, any server may perform a search, compare, or bind operation (see Figure 11.6).

Figure 11.6. Single-Master Replication

Because a typical directory-enabled application performs many more search operations than modify operations, it's beneficial to use read-only replicas. The read-only replica server can handle search operations just as well as the writable master server can.

If the client attempts to perform a write operation on the read-only server (for example, adding, deleting, modifying, or renaming an entry), we need some way to arrange for the operation to be submitted to the read/write server. There are two possibilities: The first is to submit the operation via a referral, which is simply a way for a server to say to a client, "I cannot handle this request, but here is the location of a server that should be able to." Figure 11.7 shows the steps involved when a directory client submits a change to a read-only replica.

Figure 11.7. Directing an Update to a Master Server by Using Referrals

Note

Some directory clients do not automatically follow referrals or do not authenticate properly to the referred-to directory. For example, the ldapmodify command shipped with the Solaris operating system will follow a referral generated by a read-only consumer but will not authenticate with the user's credentials when connecting to the supplier server. Instead, the client will bind anonymously and will not have sufficient privileges to perform the update.
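The sketch below shows the behavior a well-written client should implement: when a modify operation returns a referral, reconnect to the referred-to server and rebind with the same credentials before retrying. It uses the Python ldap3 library; the URL parsing is deliberately crude, and exact result handling varies by library, so treat this as illustrative only.

    from ldap3 import Server, Connection, MODIFY_REPLACE

    REFERRAL = 10    # the LDAP resultCode for "referral"

    def modify_following_referral(host, user, password, dn, changes):
        conn = Connection(Server(host), user, password, auto_bind=True)
        conn.modify(dn, changes)
        if conn.result['result'] == REFERRAL:
            url = conn.result['referrals'][0]   # e.g. ldap://master.example.com/...
            master = url.split('/')[2]          # crude parsing: keep host:port only
            # Rebind to the master with the same credentials, not anonymously.
            conn = Connection(Server(master), user, password, auto_bind=True)
            conn.modify(dn, changes)
        return conn.result

    modify_following_referral(
        'replica.example.com', 'uid=jsmith,ou=people,dc=example,dc=com', 'secret',
        'cn=John Doe,dc=example,dc=com',
        {'telephoneNumber': [(MODIFY_REPLACE, ['+1 408 555 1212'])]})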


The other way to submit a write operation to the read/write copy is by chaining the request. That is, the server resubmits the request, on behalf of the client, to the read/write copy; then it obtains the result and forwards it to the client (see Figure 11.8). A more thorough discussion of referrals and chaining may be found in Chapter 10, Topology Design.

Figure 11.8. Directing an Update to a Master Server by Chaining

Typically, all these multistep interactions between clients and servers are handled automatically by the application software. Directory client users are unlikely to witness all this; instead they simply see the modify operation complete, and the change is eventually available on the replica. (Note that for a period of time the read/write copy of the server contains newer data than the read-only copy, as mentioned in the earlier discussion on consistency and convergence.)

Astute readers will notice that in a single-master replication system there is a single point of failure: the read/write server. Only one server can process write operations for a given entry; if it goes down, no client can modify that portion of the directory (although search and read operations can continue at read-only replicas). Depending on the type of directory client software and directory-enabled application in use, this may or may not be acceptable. If it is not acceptable, another replication strategy, discussed next, can remove the single point of failure for write operations.

Multimaster Replication

In a multimaster replication system, more than one server holds a read/write copy of a given entry. Clients may submit an update operation to any of the read/write replicas. It then becomes the responsibility of the set of cooperating servers to ensure that changes are eventually propagated to all servers and that consistency is maintained. Figure 11.9 shows two replicated read/write servers capable of handling client write requests.

Figure 11.9. Multimaster Replication

Multimaster replication eliminates the single point of failure for updates and thus offers greater reliability for directory clients. However, allowing more than one server to accept write operations brings additional complexity, most notably the need for an update conflict resolution policy. This policy is used to resolve an update conflict, which can occur when an attribute of an entry is modified at approximately the same time on two different master servers.

Conflict Resolution

In multimaster replication systems, more than one directory server may accept modifications for a given entry. Sometimes the result is a situation in which two directory clients modify the same entry on two different servers at the same time. But what happens when the clients modify the entry in such a way that the changes are in conflict?

In Figure 11.10, Client 1 modifies the entry cn=John Doe,dc=example,dc=com and replaces the telephoneNumber attribute with the single value +1 408 555 1212, submitting the change to Server A. At the same time, Client 2 modifies the entry cn=John Doe,dc=example,dc=com and replaces the telephoneNumber attribute with a different value, +1 650 555 1212, submitting the change to Server B. After these operations complete on each server, the entries are in conflict: It's impossible for both changes to be retained, so one must be discarded.

Figure 11.10. Setting the Stage for an Update Conflict

Because the cooperating servers are required to converge eventually, we need to invent some way of resolving this conflict. There is really no correct way to resolve the conflict; each client's change is as good as the other's. Of course, each user thinks that his change will be made on all replicas, and he may be somewhat surprised to discover otherwise.

All currently available multimaster directory replication systems use a "last writer wins" policy to resolve such conflicts. Every attribute of every entry in the directory is marked with a unique sequence number that allows a server to determine which update should be used and which update should be discarded. In the next section we'll discuss, in abstract terms, the multimaster update process. After we've introduced some important common concepts, we'll examine how multimaster replication works in Netscape Directory Server 6.
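In miniature, and assuming sequence numbers that compare as tuples (a simplification of real CSNs), "last writer wins" resolution is just a comparison:

    def resolve(update_a, update_b):
        """Each update is (csn, value); the larger CSN wins."""
        return max(update_a, update_b)

    # The telephoneNumber conflict from Figure 11.10:
    from_a = ((1030000000, 0, 1), '+1 408 555 1212')   # csn = (time, count, replica id)
    from_b = ((1030000000, 1, 2), '+1 650 555 1212')
    print(resolve(from_a, from_b)[1])                   # keeps '+1 650 555 1212'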

Sequence Numbers

All multimaster replication systems assign a unique sequence number to each update operation received from an LDAP client. Sequence numbers have the following properties:

  • They are unique.

  • They always increase with time. Every sequence number generated by a server is larger than all previously generated sequence numbers.

These properties ensure that it's always possible to determine the ordering of two updates. To the greatest extent possible, multimaster replication systems try to make this ordering reflect the real-world ordering of events (sometimes referred to as wall-clock time). Most systems base their sequence numbers directly on the system clock.

All multimaster systems also provide a way to avoid a situation in which two updates on two different servers are assigned exactly the same sequence number. In some systems, such as Netscape Directory Server 6, each replica is assigned a unique, small integer identifier, and this identifier is inserted into the sequence number. In other cases, including Microsoft Active Directory, a more complicated system is used in which each attribute value is marked with versioning information and timestamps. Whatever the method used, the outcome is that no two updates are ever assigned the same sequence number. Thus, each server in a multimaster environment can make consistent update conflict resolution decisions.

All multimaster replication systems that we are aware of derive their sequence numbers from the server's clock in some fashion. When two servers detect that their clocks are out of sync, they take one of two approaches, depending on the type of directory server software in use. Netscape Directory Server 6 and Novell eDirectory use a special clock for the timestamps that can move relative to the system clock. If a server detects that its clock is behind, it advances its timestamp clock to the largest value seen from other servers. This time is sometimes referred to as synthetic time because it does not accurately reflect wall-clock time. Other systems, such as Microsoft Active Directory, have a maximum allowable clock skew. In Active Directory, if two servers detect that their clocks are out of sync by five minutes or more, they will not replicate data, and an administrator must remedy the problem.
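A sketch of such a generator, combining the two properties above with the synthetic-time adjustment used by Netscape-style servers, follows; the (time, count, replica id) field layout is an assumption for illustration, not the actual CSN format.

    import time

    class CsnGenerator:
        def __init__(self, replica_id):
            self.replica_id = replica_id   # unique small integer per replica
            self.last_time = 0
            self.count = 0                 # orders changes within one tick

        def new_csn(self):
            now = int(time.time())
            if now <= self.last_time:      # clock stalled or moved backward:
                now = self.last_time       # keep issuing synthetic time
                self.count += 1
            else:
                self.last_time, self.count = now, 0
            return (now, self.count, self.replica_id)

        def observe_peer(self, peer_time):
            # Advance synthetic time to the largest value seen from peers.
            self.last_time = max(self.last_time, peer_time)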

In practice, clock synchronization problems can be avoided through use of a clock synchronization utility. Novell eDirectory and Microsoft Active Directory include software that automatically keeps server clocks in sync. For systems running Netscape Directory Server 6, it is highly recommended that you install Network Time Protocol (NTP) software on the server. NTP can synchronize server clocks to highly accurate external time sources that are freely available on the Internet.

Granularity

Another important concept in a multimaster system is the granularity of sequence number assignment. Whether the system assigns sequence numbers to attributes or to the individual attribute values has an effect on the behavior of a replicated system.

If sequence numbers are assigned to attributes instead of individual attribute values, when resolving conflicts a server must choose between the values of an attribute on one server and the values of the attribute on the other server. For example, if one user adds a member to a group on Server A, and another user adds a different member to a group on Server B, when replicating, the resolution policy can choose only one list of members or the other. It's not possible to merge the lists of members because they don't have any individual sequence numbers. This means that one of the changes will disappear without a trace. The version of Microsoft Active Directory that ships with Windows 2000 suffers from this design problem. Microsoft recommends that administrators alter group membership on a single domain controller to prevent this situation. A future release of Active Directory will resolve this issue.

An alternative approach is to assign sequence numbers to individual attribute values. In this case, finer-grained update resolution is possible. In the previous example, instead of group membership updates being discarded, the member lists of the group would be merged during the conflict resolution process. In general, assigning sequence numbers to individual attribute values results in fewer unpleasant surprises for directory users, at the expense of larger database sizes. Netscape Directory Server 6 uses this approach. We will discuss the specific procedures used by Netscape Directory Server 6 in the section Update Resolution Policies later in this chapter.
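The group-membership example can be made concrete. In the sketch below, each member value carries the CSN that wrote it; with value-level CSNs the two servers' additions merge, while with a single attribute-level CSN one addition is silently lost.

    # Value-level sequence numbers: each member value has its own CSN.
    members_a = {'uid=alice': (100, 1), 'uid=bob':   (105, 1)}   # add on Server A
    members_b = {'uid=alice': (100, 1), 'uid=carol': (106, 2)}   # add on Server B

    merged = dict(members_a)
    for member, csn in members_b.items():
        if member not in merged or csn > merged[member]:
            merged[member] = csn
    print(sorted(merged))    # ['uid=alice', 'uid=bob', 'uid=carol'] -- both adds kept

    # Attribute-level sequence number: whole value lists compete.
    attr_a = ((105, 1), ['uid=alice', 'uid=bob'])
    attr_b = ((106, 2), ['uid=alice', 'uid=carol'])
    print(max(attr_a, attr_b)[1])   # ['uid=alice', 'uid=carol'] -- bob's add is lost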

Unique Identifiers

Multimaster systems assign a unique, unchangeable identifier to every entry at creation time. The entry's distinguished name (DN) is not acceptable for this purpose because it can be changed with the rename operation. Instead, an identifier is assigned that is guaranteed to be unique for all space and time. There are several variants on this theme, but all are similar to the global unique identifier (GUID) found in the Open Group's Distributed Computing Environment. Although it's technically incorrect to say that GUIDs are guaranteed to be unique for all space and time, the probability that the same GUID will be assigned to two different directory entries is extremely small.

All replication operations that flow between servers in a multimaster environment use unique identifiers when naming entries. For all practical purposes, the DN of the entry is just another attribute of the entry that can change, although there are some special considerations that we will cover in the section titled Update Resolution Policies later in this chapter.
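A sketch of identifier assignment at creation time, using Python's uuid module in the role of a GUID generator; the nsUniqueId attribute name follows the Netscape convention but is an assumption here.

    import uuid

    def create_entry(dn, attributes):
        entry = dict(attributes)
        entry['dn'] = dn                         # renameable; just another attribute
        entry['nsUniqueId'] = str(uuid.uuid4())  # immutable for the entry's lifetime
        return entry

    entry = create_entry('cn=John Doe,dc=example,dc=com', {'cn': ['John Doe']})
    print(entry['nsUniqueId'])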

Client Updates versus Replica Updates

In all multimaster systems, updates received from clients are treated differently from updates received from replicas. When a server processes an update from a client, it assigns the update a sequence number before committing the change to its database. It also advances the global sequence number counter so that the next change processed will be assigned a larger sequence number. Microsoft terms these types of updates originating writes.

When an update is received from another replica, by contrast, the sequence number arrives with the update (because it was assigned by the replica that originally received the update from a client). A new sequence number is not assigned. This means that sequence numbers on directory data reflect the order in which the changes were originally received from clients, at whichever replicas happened to accept the updates.
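The distinction can be sketched as two code paths on a replica; the CSN here is a simplified (sequence, replica id) pair invented for illustration.

    class Replica:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counter = 0            # global sequence number counter
            self.store = {}             # dn -> (csn, value)

        def client_update(self, dn, value):
            # Originating write: assign a new CSN and advance the counter.
            self.counter += 1
            csn = (self.counter, self.replica_id)
            self.store[dn] = (csn, value)
            return csn

        def replica_update(self, dn, csn, value):
            # Replicated write: keep the CSN assigned by the originating server.
            self.store[dn] = (csn, value)
            self.counter = max(self.counter, csn[0])   # never fall behind peers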

Replica Update Vectors

A replica update vector (RUV) is a collection of information stored on each replica that describes how up-to-date that replica is with respect to every other replica of that partition. An RUV consists of a series of sequence numbers, one for each updatable replica, each describing the most recent update received from that replica. When one replica sends replication updates to another server, it first retrieves the destination server's RUV and sends only updates with sequence numbers larger than the sequence numbers in the RUV. This means that the minimal number of updates is sent to bring the replica up-to-date.
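A sketch of the RUV check a supplier performs before sending updates; CSNs are simplified to (sequence, origin replica) pairs, and the RUV maps each origin replica to the largest sequence number already seen.

    def updates_to_send(supplier_log, consumer_ruv):
        """Send only updates newer than what the consumer's RUV records."""
        return [(csn, change) for csn, change in supplier_log
                if csn[0] > consumer_ruv.get(csn[1], 0)]

    log_a = [((1, 'A'), 'add cn=x'), ((2, 'A'), 'modify cn=y')]

    print(updates_to_send(log_a, {'A': 0}))   # consumer has nothing: send both
    print(updates_to_send(log_a, {'A': 2}))   # consumer is current: send nothing

The second call is exactly what happens in Step 4 of the scenario that follows: the update has already arrived via another path, so nothing is sent.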

Another, more subtle benefit is that RUVs allow the construction of more complicated replication topologies. For example, consider the replication topology shown in Figure 11.11. Notice that the three replicas are fully connected; each replica has a replication agreement with the other two. An originating write made at Replica A can arrive at Replica B via one of two paths. Either it can be sent directly to B, or it can be sent to C, which forwards it to B. Depending on the scheduling of replication sessions, either case might happen.

Figure 11.11. Three Servers (A, B, and C) Fully Connected

Because each server has an RUV that records the largest sequence number seen from each of the other replicas, updates are never sent unnecessarily. As an illustration, suppose that the following sequence occurs:

Step 1. An originating write (W) is received at Replica A. It is assigned a sequence number (S).

Step 2. Replica A begins a replication session with Replica B. It sends W to B. The RUV on B now records the fact that it has seen all updates with sequence numbers up to S from Replica A.

Step 3. Replica B begins a replication session with Replica C. It sends W to C. The RUV on C now records the fact that it has seen all updates with sequence numbers up to S from Replica A.

Step 4. Replica A begins a replication session with Replica C. When A examines the RUV on C, it discovers that C has already received updates up to sequence number S from it. Therefore, it avoids sending update W.

Microsoft calls this property replication dampening because it suppresses (dampens) the unnecessary transmission of replication updates.

Another benefit of allowing redundant replication agreements is that the system is more resistant to network failures. For example, even if Replica A loses contact with Replica C, if it can still contact Replica B, and Replica B can contact Replica C, updates can still get from A to C via B.

Note

Although Netscape Directory Server 6 uses a robust multimaster replication algorithm, it is currently certified for use only where there is a maximum of two updatable replicas. The intent is to allow the updatable replicas to be configured as a highly available pair, such that the failure of one server will not prevent clients from making updates. It is not recommended that you configure more than two updatable servers in your replication configuration, although you are not prevented from doing so. Future versions of Netscape Directory Server may remove this restriction.


Update Resolution Policies

As mentioned previously, all multimaster replication systems in use today attempt to provide "last writer wins" semantics when resolving update conflicts. This policy makes sense because it most closely matches the behavior of a single-master system. If you are using a single server, and you update an entry's e-mail address at 1 P.M. and then you observe someone else updating the entry's e-mail address 30 seconds later, you will expect your change to be overwritten.

The algorithms for resolving conflicting updates to an existing entry are relatively straightforward. Generally, the state of the attribute observed by the last writer is propagated to all servers. However, other conflict scenarios are more difficult to resolve. Let's examine several of these.

Entry Naming Conflicts

Suppose that two people create the entry uid=jsmith,ou=people,dc=example,dc=com on two different servers simultaneously. When the servers replicate the changes to one another, the entries are in conflict. The two entries may or may not refer to the same person. Perhaps one person is Jane Smith, and the other is John Smith. It's impossible for the replication system to know. Because each entry in an LDAP directory must have a unique DN, it's impossible to keep both entries with their original names.

The update resolution policy used in this case is to rename one of the entries so that its DN is different. But which entry should be renamed? If the same scenario occurred with a single server, the attempt to add the second entry would have failed with an "entry already exists" error. Therefore, the most natural policy is to rename the entry with the larger sequence number, which would be the entry added later. Netscape Directory Server 6 renames the entry by prepending its unique ID to the RDN of the entry. For example, if the second entry added has been assigned the unique identifier abef601f-1dd111b2-8084bc31-2dc0fde3, then the renamed entry will have the DN nsuniqueid=abef601f-1dd111b2-8084bc31-2dc0fde3+uid=jsmith,ou=people,dc=example,dc=com. The renamed entry will also be marked with the operational attribute nsds5ReplConflict. You can locate all entries that were altered because of a replication conflict by using the search filter nsds5ReplConflict=*.
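That search can be run with ldapsearch or any LDAP API. A sketch with the Python ldap3 library follows; the base DN and credentials are placeholders.

    from ldap3 import Server, Connection, SUBTREE

    conn = Connection(Server('ldap://ds.example.com:389'),
                      'cn=Directory Manager', 'secret', auto_bind=True)
    conn.search('dc=example,dc=com', '(nsds5ReplConflict=*)',
                search_scope=SUBTREE, attributes=['nsds5ReplConflict'])
    for entry in conn.entries:
        print(entry.entry_dn)   # renamed entries, e.g. nsuniqueid=...+uid=jsmith,...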

It's also possible for an entry naming conflict to arise as a consequence of simultaneous add and rename operations, or simultaneous rename operations. If the new names of the entries are the same, a naming conflict has occurred. For example, if one user adds a new entry with the name uid=jsmith,ou=people,dc=example,dc=com on one server, and another user renames the entry uid=jjones,ou=people,dc=example,dc=com to uid=jsmith,ou=people,dc=example,dc=com on another server, the result is two entries with the same name. The same policy of renaming the entry with the larger sequence number is also used in this case.

Conflicts Involving Deleted Entries

Some types of multimaster update conflicts arise because an entry has been deleted on one server, and before that update can propagate to other servers, the entry is deleted or has subordinate entries added beneath it. Consider the following example:

Step 1. The entry ou=Engineering,dc=example,dc=com is added by a directory administrator. This update is submitted to Server A, and multimaster replication propagates this update to all other servers.

Step 2. A departmental administrator, seeing that the Engineering department's entry is now in the directory, begins adding employees to that department. This administrator submits the updates to Server B.

Step 3. The directory administrator realizes that the department name should have been "Research Engineering." She deletes the Engineering entry and creates a new entry named "Research Engineering," performing the operation on Server A.

Step 4. Servers A and B now replicate with one another. The conflict is that the entries added by the departmental administrator have been "orphaned" because their parent entry was deleted. Because every entry in the directory (except the root entry) must have a parent, we need to apply a policy to ensure that the orphaned entries again have a parent.

There are several possible approaches to solving this problem. One is to resurrect the deleted parent and mark it in such a way that an administrator can discover its existence. Another is to move the set of orphaned entries into a special lost-and-found directory container that holds orphaned entries. Netscape Directory Server 6 resurrects the deleted entry in such situations.

Clearly, to allow resurrection of a deleted entry, a server must keep track of entries that have been deleted. Many vendor implementations use a special type of entry named a tombstone entry for this purpose. When an entry is deleted, it is not physically removed from the database. Instead, it is converted to a tombstone entry. Such entries are not visible to LDAP clients and are used only by update resolution procedures.

To prevent the database from becoming unnecessarily large, tombstone entries are typically purged on some sort of a schedule. After a tombstone entry has been purged, all records of the entry's existence are gone. Problems can arise if an updatable replica is disconnected from its replication partners for a period longer than the tombstone purge interval. In that case the disconnected replica may contain entries that have been deleted, tombstoned, and purged from the other replicas. Deleted entries can be reintroduced in this case. If you know that a replica has been disconnected for a long time, it's often a good idea to reinitialize the replica before reconnecting it.
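A sketch of the tombstone life cycle described above; the purge interval and the dictionary-based entry representation are illustrative only.

    import time

    PURGE_INTERVAL = 7 * 24 * 3600        # e.g., keep tombstones for one week

    def delete_entry(store, dn):
        entry = store[dn]
        entry['tombstone'] = True         # hidden from LDAP clients, kept only
        entry['deleted_at'] = time.time() # for update resolution procedures

    def purge_tombstones(store, now=None):
        now = now or time.time()
        for dn in [d for d, e in store.items()
                   if e.get('tombstone') and now - e['deleted_at'] > PURGE_INTERVAL]:
            del store[dn]                 # all record of the entry is gone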

Conflicts Involving Single-Value Constraints

When an attribute is marked as single-valued in the schema, a directory server will reject any attempt to add more than one value to the attribute. However, consider what happens when two clients simultaneously add a value to an initially empty attribute and submit those operations to two different servers. Each server individually allows the update because the single-valued nature of the attribute is not violated. When the server replicates the update, however, it's impossible to accommodate both client updates because the resulting state of the attribute would contain two values. To resolve this conflict, Netscape Directory Server 6 discards the value with the smaller CSN. The value that will appear in the entry is the one added later.

Replication Protocols

A replication protocol is the flow of information over the network that directory servers use to send replication updates. At the time of this writing, efforts are under way in the Internet Engineering Task Force (IETF) to develop a standard, vendor-independent replication protocol. However, there is currently no common replication standard that allows you to directly replicate from one vendor's directory server to a different vendor's. If you do need to synchronize directories from different vendors, you will need to develop your own synchronization tools or use one of the commercially available metadirectory solutions. For more information, see Chapter 23, Directory Coexistence.

Because there is no replication protocol standard, each vendor implements its protocol slightly differently. Netscape Directory Server 6 uses a replication protocol based on LDAPv3. The basic LDAP protocol has been augmented with controls and extended operations that carry the extended information required for multimaster replication to operate.

Microsoft Active Directory supports two different replication protocols. The more common protocol is a proprietary remote procedure call (RPC) protocol that is very efficient, especially when Active Directory replicas have good network connectivity. Active Directory also supports a replication protocol that sends replication updates as Simple Mail Transfer Protocol (SMTP) messages. Replication updates transmitted via SMTP can pass through store-and-forward message networks and can be used when replicas are less well connected.

Novell eDirectory uses a proprietary replication protocol, and X.500 supports a single-master replication protocol called Directory Information Shadowing Protocol (DISP). X.500 does not currently support multimaster replication.