Partition and Replica Types

Now that you have an understanding of what makes up the eDirectory database, let's look at how the database is distributed.

The eDirectory database is a loosely consistent, partitioned, hierarchical database. This means that the data can be divided into many different logical pieces, called partitions, and you can put a copy, or replica, of any user-created partition on a number of servers. The DS module will keep the information in different replicas synchronized, but bear in mind that at any given point in time, the information in one replica may not fully match the information in another replica. The DS module handles the discrepancy between copies by maintaining information about which copy has the most current changes and propagating, or replicating, that information to the servers that have older information. It is important to note that the eDirectory database is continually converging to a consistent state. When it completes synchronization on a partition, the partition is consistent until the next time data is changed.
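
The "most current change wins" idea behind this convergence can be sketched with a toy model. All names and structures here are hypothetical; real eDirectory uses per-attribute modification timestamps and a far more involved synchronization protocol (see Chapter 6):

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    # attribute name -> (value, timestamp); the newest timestamp wins on merge
    data: dict = field(default_factory=dict)

    def write(self, attr, value, ts):
        self.data[attr] = (value, ts)

    def sync_from(self, other):
        # Accept any attribute whose remote change is newer than ours.
        for attr, (value, ts) in other.data.items():
            if attr not in self.data or ts > self.data[attr][1]:
                self.data[attr] = (value, ts)

# Two replicas diverge, then converge after syncing in both directions.
a, b = Replica(), Replica()
a.write("Telephone Number", "555-1111", ts=10)
b.write("Telephone Number", "555-2222", ts=20)   # more recent change
a.sync_from(b)
b.sync_from(a)
assert a.data == b.data                          # replicas have converged
assert a.data["Telephone Number"][0] == "555-2222"
```

In the window between the two `sync_from` calls, the replicas disagree — exactly the loose consistency described above.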

When you first installed your server, the system created a number of special partitions in the eDirectory database:

  • The System partition

  • The Schema partition

  • The External Reference partition

  • The Bindery partition

The partitions discussed here are logical partitions that exist within the eDirectory database (see Figure 2.14). They are not physical partitions on a hard disk.

Figure 2.14. eDirectory partitions.
graphics/02fig14.gif

In addition, a replica of a user-defined partition may have been automatically added if the server is one of the first three servers installed into that partition.

The System Partition

The System partition keeps track of information specific to the local server. This information is not synchronized with other servers in the tree. This is the partition that the Limber process operates on. See Chapter 6, "Understanding Common eDirectory Processes," for more information on the Limber process.

Information contained in the System partition includes the following:

  • Information on where the server is located in the eDirectory tree, including its typeful FDN (for example, CN=RH9-VM.O=Testing ).

  • eDirectory indexes defined on the server. (See Chapter 16, "Tuning eDirectory," for information about the use of indexes.)

  • The state of background processes (including errors), if the server is running NDS 5.95 or later (for NetWare 4.11 and NetWare 4.2) or any version of NDS/eDirectory on NetWare 5.x and higher.

The partition ID of the System partition is always 0.

The Schema Partition

The Schema partition keeps track of all the object class and attribute definition information for the server. This information is synchronized between servers using a process called Schema Skulk (or Schema Sync), so each server has a complete copy of the schema. The Schema Sync process starts with the server that contains the Master replica of the [Root] partition and propagates to the other servers holding a copy of [Root]. It then continues with the servers in the child partitions until all servers have received a copy of the schema.
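
The propagation order can be sketched as a breadth-first walk down the partition hierarchy, starting from the servers holding [Root]. The partition layout and server names below are hypothetical; the real Schema Sync process is covered in Chapter 6:

```python
from collections import deque

def schema_sync_order(children, partition_servers):
    """Breadth-first propagation: first the servers holding [Root],
    then the servers in each child partition, level by level."""
    order, seen = [], set()
    queue = deque(["[Root]"])
    while queue:
        partition = queue.popleft()
        for server in partition_servers[partition]:
            if server not in seen:            # each server syncs only once
                seen.add(server)
                order.append(server)
        queue.extend(children.get(partition, []))
    return order

children = {"[Root]": ["A", "B"]}             # child partitions of each partition
partition_servers = {"[Root]": ["FS1"],       # servers holding each partition
                     "A": ["FS1", "FS2"],
                     "B": ["FS3"]}
assert schema_sync_order(children, partition_servers) == ["FS1", "FS2", "FS3"]
```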

The partition ID of the Schema partition is always 1.

The External Reference Partition

The External Reference partition (commonly referred to as the ExRef partition) contains information about objects that don't exist in a partition on the local server. For instance, when a user logs in to the network, NDS looks up the user information by performing a name resolution process (discussed in the "NDS Name Resolution and Tree Walking" section in Chapter 6). The client software navigates the different partitions in the NDS tree until it finds the User object. If this process had to be repeated every time the user logged in or the object was referenced, efficiency would suffer. To avoid repeating the process, NDS builds an external reference (essentially a pointer) to that object and stores it in the ExRef partition on the server that handled the request. The next time this user authenticates to the network, the external reference (exref) is used to quickly locate the object within NDS. Such exrefs are deleted if they are not used for an extended period of time.
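
This caching behavior can be sketched as a simple pointer cache with idle-time expiry. The structure is hypothetical (the real ExRef partition stores exref entries in the DIB, not an in-memory dictionary), but it shows why the second login is cheap and why stale exrefs disappear:

```python
class ExRefCache:
    """Toy external-reference cache: maps an object's distinguished name
    to its location, and expires entries unused for max_idle time units."""

    def __init__(self, max_idle):
        self.max_idle = max_idle
        self.entries = {}                        # dn -> (location, last_used)

    def resolve(self, dn, now, walk_tree):
        entry = self.entries.get(dn)
        if entry is not None:
            location, _ = entry                  # cheap: reuse the exref
        else:
            location = walk_tree(dn)             # expensive name resolution
        self.entries[dn] = (location, now)       # refresh last-used time
        return location

    def purge(self, now):
        # Drop exrefs that have not been used for an extended period.
        self.entries = {dn: e for dn, e in self.entries.items()
                        if now - e[1] <= self.max_idle}

calls = []
def walk_tree(dn):
    calls.append(dn)                             # count expensive lookups
    return "partition-on-FS2"

cache = ExRefCache(max_idle=30)
cache.resolve("CN=Alice.O=Testing", now=0, walk_tree=walk_tree)
cache.resolve("CN=Alice.O=Testing", now=10, walk_tree=walk_tree)
assert calls == ["CN=Alice.O=Testing"]           # second login reused the exref
cache.purge(now=100)
assert cache.entries == {}                       # unused exref expired
```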

NOTE

In addition to providing tree connectivity (that is, tree-walking) and speeding up object authentication, exrefs are also used to keep track of nonlocal objects when an object is added as a file system or local object trustee, or when an object is added as a member of a group.


TIP

When NDS creates an exref, a Back Link attribute is added to the referenced object to keep track of the server on which the exref was created. This becomes inefficient as the number of exrefs increases: back links require a server to communicate with every server that contains a Read/Write replica of the partition the back link resides on. eDirectory 8.7 and higher uses the Distributed Reference Link (DRL) attribute instead of the Back Link attribute. Distributed reference links have the advantage of referencing a partition rather than a specific server. When information is needed about a DRL, any server with a replica of the partition can supply the information.
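
The scaling difference can be illustrated with a toy count of the servers that must be contacted when the referenced object needs verification. The server and partition names are hypothetical; the actual back-link and DRL processes are described in Chapter 6:

```python
# Back links name individual servers, so every server holding an exref
# must be contacted. A DRL names a partition, so any one server holding
# a replica of that partition can answer for all exrefs living in it.

back_links = ["FS1", "FS2", "FS3", "FS4"]        # one back link per exref server

replica_ring = {"Partition-A": ["FS1", "FS2"],   # servers holding each partition
                "Partition-B": ["FS3", "FS4"]}
drls = ["Partition-A", "Partition-B"]            # one DRL per partition

servers_via_backlinks = back_links                        # all four servers
servers_via_drls = [replica_ring[p][0] for p in drls]     # any one per partition

assert len(servers_via_backlinks) == 4
assert servers_via_drls == ["FS1", "FS3"]
```

As exrefs multiply, the back-link list grows with the number of servers, while the DRL list grows only with the number of partitions referenced.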


Although the ExRef partition exists on every server, only servers with exrefs populate it. Like the System partition, the ExRef partition is not synchronized with other servers. The partition ID of the ExRef partition is always 2.

NOTE

Objects stored in the ExRef partition are called, appropriately, external reference objects. These objects are placeholders (not pointers, like Alias objects) that represent real objects existing in the tree. An exref object is not a copy of the object because it does not contain any of the attributes that the real object has.


The Bindery Partition

All servers that have IPX enabled keep track of services learned from SAP traffic (SLP services, in contrast, are stored in the DS database). Each of these services is stored as a bindery SAP object, and these services are classified as dynamic bindery objects because they are automatically deleted when the server is shut down or when the offered service is no longer available. To provide backward compatibility with NetWare 2.x and NetWare 3.x and with bindery-based applications, every server has a SUPERVISOR (pseudo) bindery user and maintains a Type 4 SAP object for the bindery NetWare Core Protocol (NCP) file server. These two bindery objects are static in nature and cannot be removed. All this information is maintained in the server's Bindery partition.

Like the System and ExRef partitions, the Bindery partition is not replicated to all servers. Rather, it is kept specific to the server in question. The partition ID of the Bindery partition is always 3.

User-Defined Partitions

The last type of partition is the user-defined (or user-created) partition. This is the most common type of partition, and it is likely the type you are already familiar with. Any DS server may hold a replica of a user-defined partition. Changes to a user-defined partition must be distributed to the other servers that hold a replica of the same partition. When these changes occur, they are replicated under the control of the Synchronization process.

A replica is a copy of a user-defined partition that is placed on an NDS server. There is no fixed limit to the number of replicas a server can hold, other than available disk space, but only one replica of a given user-defined partition can exist on a server. There are six replica types:

  • Master

  • Read/Write

  • Read-Only

  • Subordinate Reference

  • Filtered Read/Write (eDirectory 8.5 and higher)

  • Filtered Read-Only (eDirectory 8.5 and higher)

Table 2.4 shows a summary of the capabilities of the various replica types. Each replica type is discussed in detail in the following sections.

Table 2.4. Replica Types and Their Capabilities

CHARACTERISTIC                                    M    R/W  R/O  SR   FR/W FR/O
Maintains a list of all other replicas            x    x    x    x    x    x
Contains a complete copy of all object
  information of the partition                    x    x    x
Controls partition boundary changes (merging,
  splitting, moving, creating, deleting,
  and repairing)                                  x
Controls object changes (creating, moving,
  deleting, and modifying objects and object
  property values)                                x    x              x
Supports authentication                           x    x              x
Supports viewing of objects and their
  information                                     x    x    x         x    x
Can have multiple replicas per partition               x    x    x    x    x
Can be changed into a Master replica                   x    x    *
Can be changed into a Read/Write replica          x         x              x
Can be used on a server where bindery
  services are required                           x    x              x
Only contains the partition root object                          x
Is automatically removed if you add a replica
  of that child partition to the server                          x
Can be created by the network administrator       x    x    x         x    x
Cannot be created by the network administrator
  (created automatically by the system)                          x
Controls background processes                     x

M = Master; R/W = Read/Write; R/O = Read-Only; SR = Subordinate Reference;
FR/W = Filtered Read/Write; FR/O = Filtered Read-Only.
* See the "SubRef" section, later in this chapter.

The partition ID of a user-defined partition is always 4 or higher.

NOTE

Typically, the Master replica of [Root] will have a partition ID of 4 because that is the first user-defined partition that gets created. This may not always be the case, however, because another (Read/Write or Read-Only) replica may be designated as the Master replica at a later time and thus will have a different partition ID.


Master Replicas

The Master replica is the first copy of a new partition. When you install a new server into a new tree, that server automatically receives a Master replica of the [Root] partition. If you then create a new partition by using NDS Manager or ConsoleOne, the already-installed server receives the Master copy of that partition as well because it has the Master replica of the parent partition.

As discussed in Chapter 6, the Master replica must be available for certain partition operations, such as a partition join, a partition split, or an object/partition move. The Master replica can also be used to perform NDS operations such as object authentication, object addition, deletion, and modification.

A Master replica may be used to provide bindery emulation services because it is a writable replica type.

Read/Write Replicas

Similar to Master replicas, a Read/Write (R/W) replica is a writable replica type that can be used to effect object changes. Unlike Master replicas, however, Read/Write replicas are not directly involved in replica operations. The NDS server installation process will ensure that there exists one Master replica and two "full" replicas. (Filtered replicas do not count, as discussed later in this chapter.) When you install an NDS server into a partition that has only two replicas (one Master and one Read/Write replica, for instance), a third replica (Read/Write) will automatically be added to the new server. You normally create additional Read/Write replicas of a given partition to provide fault tolerance and to provide faster access to eDirectory data across WAN links.

TIP

If a Master replica is lost or damaged, a Read/Write replica can be promoted to become the new Master replica. See Chapter 11, "Examples from the Real World," for some examples of this.


A Read/Write replica may be used to provide bindery emulation services because it is a writable replica type.

Read-Only Replicas

The Read-Only (R/O) replica type is seldom, if ever, used. It was added because Novell built NDS based on the X.500 directory standard, which specified Read-Only replicas.

Use of R/O replicas is strongly discouraged. They do not provide any advantages with regard to traffic management because they can actually generate more traffic than a Read/Write replica due to referral and redirection.

Any change directed at a server that holds an R/O replica of a partition would end up being redirected by the server to a server with a Read/Write or Master replica. The change would then be synchronized back to the server holding the R/O replica, through the normal synchronization process.
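
The redirection flow can be sketched as follows. Server names and the dispatch logic are hypothetical; the actual referral mechanism is part of the name-resolution process covered in Chapter 6:

```python
REPLICA_TYPES = {"FS1": "master", "FS2": "read-write", "FS3": "read-only"}
WRITABLE = {"master", "read-write"}

def handle_write(server, change, log):
    """A server holding only a Read-Only replica refers the write on to a
    writable replica; the change later syncs back to it normally."""
    if REPLICA_TYPES[server] in WRITABLE:
        log.append(f"{server}: applied {change}")
        return server
    # Redirect to any server holding a writable replica of the partition.
    target = next(s for s, t in REPLICA_TYPES.items() if t in WRITABLE)
    log.append(f"{server}: referred {change} to {target}")
    return handle_write(target, change, log)

log = []
applied_on = handle_write("FS3", "set Title=Manager", log)
assert applied_on == "FS1"              # write landed on a writable replica
assert log[0].startswith("FS3: referred")
```

The extra referral hop is the "more traffic" penalty mentioned above: the R/O server does work, generates a referral, and still receives the change again later via synchronization.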

R/O replicas have been used to provide NDS data lookup, as in the case of an address book application. However, Filtered replicas (discussed later in this chapter) are better suited for these types of applications.

Read-Only replicas cannot be used to provide bindery emulation services because they are not writable.

Subordinate Reference Replicas

The Subordinate Reference (SubRef) replica type is the only replica type that is never placed manually. Rather, its creation and deletion are managed automatically by NDS. SubRef replicas are used primarily to provide tree connectivity. In simplest terms, a subordinate reference (subref) is a (downward) pointer that links a parent partition to a child partition. A SubRef replica contains a complete copy of the partition root object of the child partition, but no other data for that partition. The partition root object's Replica attribute contains the following information:

  • A list of servers where replicas of the child partition are stored

  • The servers' network addresses (both IPX and IP)

  • Replica types stored on these servers

  • Other NDS partition information, such as an ACL summarizing all the effective rights at this point in the tree

In essence, a SubRef replica can be considered the glue that binds parts of the NDS tree together.

SubRef replicas cannot be used to provide bindery emulation services because they are not writable. Also, they cannot be used for fault tolerance purposes because they do not contain all the objects of a partition.

WARNING

It is possible to promote a SubRef replica to a Master replica as a last resort. However, because a SubRef replica does not contain any objects in the partition, you will lose all data in that partition. Refer to the "Server and Data Recovery" section in Chapter 11 for more details.


Filtered Replicas

eDirectory 8.5 introduced two new replica types: Filtered Read/Write and Filtered Read-Only replicas. A Filtered replica is essentially a Read/Write or Read-Only replica that holds only a subset of the objects or attributes found in a normal Read/Write or Read-Only replica. Filtered replicas use a replication filter to specify which schema classes and attributes are allowed to pass during synchronization. A Filtered replica ignores changes made to objects outside the filter.

Filtered replicas can be sparse or fractional replicas. A sparse Filtered replica contains only objects of selected object classes; all other objects are filtered out and not placed in the local database. A fractional Filtered replica contains only attributes of selected attribute types; all other attributes are filtered out and not placed in the local database. A typical Filtered replica is both sparse and fractional because it filters both object classes and attribute types.
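
A sparse-and-fractional filter can be sketched as a pair of allow-lists. The data layout here is hypothetical; real replication filters are configured in ConsoleOne and stored on the hosting server:

```python
def apply_filter(objects, allowed_classes, allowed_attrs):
    """Sparse: drop objects whose class is not in the filter.
    Fractional: within kept objects, drop attributes not in the filter."""
    kept = {}
    for dn, obj in objects.items():
        if obj["class"] not in allowed_classes:
            continue                                   # sparse filtering
        kept[dn] = {"class": obj["class"],
                    "attrs": {a: v for a, v in obj["attrs"].items()
                              if a in allowed_attrs}}  # fractional filtering
    return kept

objects = {
    "CN=Alice.O=Testing": {"class": "User",
                           "attrs": {"Surname": "Smith",
                                     "Login Script": "MAP F:=SYS:"}},
    "CN=LJ5.O=Testing":   {"class": "Printer", "attrs": {"Location": "HQ"}},
}
local = apply_filter(objects, allowed_classes={"User"},
                     allowed_attrs={"Surname"})
assert "CN=LJ5.O=Testing" not in local                       # sparse: class dropped
assert local["CN=Alice.O=Testing"]["attrs"] == {"Surname": "Smith"}  # fractional
```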

NOTE

An eDirectory sparse or fractional Filtered replica is also known as a virtual replica.


Filtered replicas are useful in the following situations:

  • To control the size of the eDirectory Directory Information Base (DIB) on a server

  • To improve search efficiency on specific object classes by storing just those object types and attributes in the replica

  • To reduce eDirectory synchronization traffic directed at specific servers by eliminating unneeded objects and attributes from the synchronization process

To create a Filtered replica, you must first create a replication filter on the server that will host the Filtered replica. This replication filter determines which objects and attributes are allowed in the Filtered replicas that reside on the server. You create the replication filter via ConsoleOne.

NOTE

Each eDirectory server can hold only one replication filter. Consequently, all Filtered replicas stored on a server must use the same replication filter. It is possible to have a mixture of filtered replicas and unfiltered replicas on the same server.


WARNING

After a replication filter is modified, all filtered replicas that use it will be placed in the New replica state until they are refreshed with up-to-date data from the unfiltered replicas. This ensures that the information in the Filtered replicas is consistent and complete with respect to their unfiltered counterparts.


In addition to the desired classes and attributes stored in a Filtered replica, eDirectory must always synchronize attributes that are critical to the operation of eDirectory. The following schema flags cause the affected object classes and attributes to be included in a Filtered replica:

  • SF_SPARSE_REQUIRED: Object classes and attributes designated with this flag will always pass through the replication filter, regardless of the filter settings. The ACL attribute is an example (see Figure 2.15).

    Figure 2.15. The SF_SPARSE_REQUIRED schema flag, which causes the ACL attribute to pass through a Filtered replica filter.
    graphics/02fig15.jpg

  • SF_SPARSE_DESIRED: This flag allows desired classes and their required attributes to pass through the replication filter.

  • SF_SPARSE_OPERATIONAL: This flag identifies classes and attributes that must be cached on Filtered replicas (because they are part of the operational schema). Setting this flag on an object class or attribute type definition guarantees that the class or attribute is created as a reference object if it is not in the replication filter. The DS Revision attribute is an example of this.

eDirectory will also allow any objects referenced by an allowed attribute to pass through the filter in a reference state. For example, say a replication filter allows the object class User and attribute Group Membership but filters the object class Group . The Filtered replica will create the Group object and flag it as a reference object. The Group object is required to ensure database consistency, and the reference tag is used to ensure that the incomplete Group object does not synchronize to other servers.

Container objects in an eDirectory database are also allowed to pass through to any filtered replica. This ensures that all objects beneath the container can be created, if the filter allows them through. Container objects created in this manner are also flagged as reference objects. The Unknown object class is flagged as a container. This causes all objects with an object class type Unknown to also be created as reference objects in a Filtered replica.
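
The reference-object behavior for the User/Group and container cases above can be sketched like this. The dictionary structure is hypothetical; real reference objects are internal DIB entries:

```python
def store_in_filtered_replica(obj, allowed_classes, replica):
    """Objects outside the filter that are still needed for database
    consistency (referenced objects, containers) are created as
    reference placeholders that never synchronize outbound."""
    if obj["class"] in allowed_classes:
        replica[obj["dn"]] = {"class": obj["class"], "reference": False}
    elif obj.get("container", False) or obj.get("referenced", False):
        replica[obj["dn"]] = {"class": obj["class"], "reference": True}
    # otherwise the object is filtered out entirely

replica = {}
store_in_filtered_replica({"dn": "O=Testing", "class": "Organization",
                           "container": True}, {"User"}, replica)
store_in_filtered_replica({"dn": "CN=Alice.O=Testing", "class": "User"},
                          {"User"}, replica)
store_in_filtered_replica({"dn": "CN=Admins.O=Testing", "class": "Group",
                           "referenced": True}, {"User"}, replica)
assert replica["CN=Alice.O=Testing"]["reference"] is False   # passed the filter
assert replica["O=Testing"]["reference"] is True             # container placeholder
assert replica["CN=Admins.O=Testing"]["reference"] is True   # referenced Group
```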

NOTE

The eDirectory replica synchronization process takes advantage of Filtered replica types. If the outbounding server is running eDirectory 8.5 or higher, the destination server's replication filter is read, and only the required data is sent. Network traffic is significantly reduced during these types of replica synchronizations. If the outbounding server is not running eDirectory 8.5 or higher, the standard replica synchronization process takes place (that is, sending all changed data), but the inbounding server accepts only the data that is allowed through the replication filter.


Filtered Read/Write replicas can make modifications to objects and attributes that pass through the replication filter. These changes will be passed to all other servers in the replica ring. However, a Filtered Read/Write replica is not allowed to fully participate in the transitive synchronization process (see the section "The Synchronization Process" in Chapter 6) and will not send changes that did not originate in the local database. This is necessary to ensure database consistency for all changes.

NOTE

Because Filtered Read/Write replicas do not contain complete information about objects, you should avoid using them to provide bindery emulation services. Filtered Read-Only replicas cannot be used to provide bindery emulation services. Furthermore, Filtered replicas do not provide fault tolerance because they do not contain complete information about the objects in a replica.


Like their Read-Only replica cousins, Filtered R/O replicas cannot make modifications to objects and attributes that pass through the replication filter. Filtered Read-Only replicas can be used for data lookup only.

NOTE

Because data in a Filtered replica is incomplete, an LDAP search could produce constrained or incomplete results. Therefore, by default, an LDAP search request does not examine filtered replicas. While you're performing a Filtered replica search, the search may not return the results as per the replica filter due to either chaining or referral (see the "LDAP Name Resolution Models" section, later in this chapter). If you are certain that a Filtered replica holds the data you need, you can configure the LDAP server to search Filtered replicas (through the Filtered Replica Usage tab in the LDAP server object properties).


Parent/Child Relationships

Each server that contains a replica of the parent partition also contains a Subordinate Reference replica of every child partition that is not physically located on that server. Consider the sample NDS tree shown in Figure 2.16. This tree contains four partitions: [Root], A, B, and C. Three file servers, FS1, FS2, and FS3, are installed in this tree, one server in each of the O= containers. Each server holds the only copy (the Master replica) of the partition in which it is contained; the [Root] partition is stored on FS1.

Figure 2.16. A sample NDS tree with four partitions.

graphics/02fig16.gif


Table 2.5 shows the replica structure in this tree.

Table 2.5. Replica Structure of a Sample NDS Tree

SERVER   [ROOT]   PARTITION A   PARTITION B             PARTITION C
FS1      Master   Master        Subordinate Reference   Subordinate Reference
FS2                             Master
FS3                                                     Master

FS1 contains Subordinate Reference replicas because the parent of Partitions B and C, Partition [Root], resides on the server, but Partitions B and C do not. Neither FS2 nor FS3 needs a SubRef replica because neither Partition B nor Partition C has a child partition. If the Master replica of [Root] were placed on FS2 rather than FS1, FS2 would contain Subordinate Reference replicas for Partitions A and C.
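
This placement logic can be written down directly. The sketch uses the Figure 2.16 layout; partition and server names follow that example:

```python
def subref_placement(parents, replicas):
    """A SubRef replica goes everywhere the parent partition is
    but the child partition is not."""
    subrefs = {server: set() for server in replicas}
    for child, parent in parents.items():
        for server, held in replicas.items():
            if parent in held and child not in held:
                subrefs[server].add(child)
    return subrefs

parents = {"A": "[Root]", "B": "[Root]", "C": "[Root]"}   # child -> parent
replicas = {"FS1": {"[Root]", "A"},    # partitions each server holds
            "FS2": {"B"},
            "FS3": {"C"}}
placement = subref_placement(parents, replicas)
assert placement["FS1"] == {"B", "C"}   # matches Table 2.5
assert placement["FS2"] == set()
assert placement["FS3"] == set()
```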

NOTE

As a rule of thumb for determining where a Subordinate Reference replica will be placed, remember that a SubRef replica is placed everywhere the parent partition is but the child partition is not.




Novell's Guide to Troubleshooting eDirectory
ISBN: 0789731460
Year: 2003