Directory Service Maintenance


In this section we describe the various procedures used to maintain HugeCo's extranet directory.

Data Backups and Disaster Recovery

Backups of the extranet directory servers are handled in the same manner as backups of the other directory servers. A digital linear tape (DLT) drive is attached to the master extranet directory server, and backups are performed nightly at 0200 hours (2 A.M.) local time via the Unix cron daemon. Specifically, at 0200 hours the Netscape Directory Server db2bak utility is run to generate a database backup on the local disk. Immediately after the db2bak utility completes, the backup directory is copied to the backup media via the Unix tape archive (tar) command and verified.

After the tar file has been copied to tape, the cron job removes any backup files older than three days. This means that the most recent three days' worth of backups are always on the server's disk and can be quickly restored in an emergency (for example, if an errant update procedure deletes important data from the directory). The tapes are transferred off-site twice a week and stored in a secure facility.
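The retention step of the nightly job can be sketched as follows. This is a hypothetical Python version offered for illustration only (the book's actual job is a Unix cron script); the three-day window is the only detail taken from the text, and the directory layout is assumed:

```python
import os
import shutil
import time

def prune_old_backups(backup_root, max_age_days=3, now=None):
    """Remove backup directories under backup_root older than max_age_days.

    Mirrors the cleanup described above: the most recent three days'
    backups stay on disk for quick restores; older ones are deleted.
    Returns the names of the directories that were removed.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # 86400 seconds per day
    removed = []
    for name in os.listdir(backup_root):
        path = os.path.join(backup_root, name)
        # Only backup directories are candidates; skip stray files.
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
            removed.append(name)
    return removed
```

A real deployment would run this after the tar step, so that a failed tape copy never races with the deletion of the on-disk backup it was copying.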

Disaster recovery services for extranet applications were added to the contract that HugeCo maintains with a disaster recovery vendor. The disaster recovery approach for the extranet application provides for a cold standby site. In the event that a disaster renders HugeCo's corporate data center unusable, the cold site will be brought online, the directory server software installed, and data restored from off-site backups. The use of a cold standby saves a considerable sum of money, but it means that recovery time for the application will be measured in days, not hours.

This recovery time was deemed acceptable, given that HugeCo maintains a hot standby for the main internal order entry system. In the event of a disaster, the main database and order entry application will fail over to the hot site. The extranet order entry site will be unavailable, but retailers will be able to place orders via the HugeCo call center and track those orders via the extranet application once it is brought back online.

Maintaining Data

Maintenance of extranet directory data involves two data sources: the Oracle database that tracks information about the authorized HugeCo retailers, and the managers at the individual retailers. Each of these data sources is considered authoritative for certain directory information.

The Oracle database is considered the authoritative source for information about the individual retail outlets. The retailer name, telephone and fax numbers, mailing address, retailer number, and list of authorized products are all synchronized from the Oracle database into the directory on a regular basis via a set of Perl scripts developed by HugeCo's IS staff. An established procedure is used to add or remove retailers from the Oracle database, and these changes propagate to the directory via the synchronization scripts.
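The heart of such a synchronization script is a mapping from database rows to directory entries. The book's scripts are written in Perl; as a rough sketch of the idea, here is a Python version in which the column names, the object classes, and the base DN are all hypothetical:

```python
def retailer_row_to_entry(row, base_dn="ou=Retailers,dc=hugeco,dc=com"):
    """Map one row from a (hypothetical) retailer table to an LDAP entry.

    The Oracle database is authoritative for these attributes, so a sync
    run would generate this entry for each row and apply any differences
    to the directory. Returns (dn, attributes).
    """
    dn = "retailerNumber=%s,%s" % (row["retailer_number"], base_dn)
    entry = {
        "objectClass": ["top", "organization", "hugecoRetailer"],
        "o": [row["name"]],
        "telephoneNumber": [row["phone"]],
        "facsimileTelephoneNumber": [row["fax"]],
        "postalAddress": [row["mailing_address"]],
        "retailerNumber": [str(row["retailer_number"])],
        "authorizedProduct": list(row["products"]),
    }
    return dn, entry
```

Keeping the mapping in one pure function like this makes the sync idempotent and easy to test: deletions propagate by removing any directory entry whose retailer number no longer appears in the database.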

The entries corresponding to individual employees at the retailers, on the other hand, are owned by the manager at each particular retailer. When a new employee is to be granted access to the extranet applications, the manager uses a special Web-based application to create a new entry in the directory; this application creates the directory entry for the employee and sets an initial password. Similarly, when an employee leaves a retailer, access must be revoked; the manager accomplishes this with the same Web-based application.
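The entry-creation step might look like the following Python sketch. The attribute names follow the standard inetOrgPerson schema, but the DN layout and the helper itself are assumptions, not the book's actual application; the password is stored in the salted-SHA ({SSHA}) scheme commonly supported by LDAP servers:

```python
import base64
import hashlib
import os

def make_employee_entry(uid, cn, retailer_dn, initial_password):
    """Build the entry a Web application could create for a new employee.

    The initial password is hashed with a random 4-byte salt in the
    {SSHA} scheme: base64(sha1(password + salt) + salt).
    Returns (dn, attributes).
    """
    salt = os.urandom(4)
    digest = hashlib.sha1(initial_password.encode() + salt).digest()
    ssha = "{SSHA}" + base64.b64encode(digest + salt).decode()
    dn = "uid=%s,%s" % (uid, retailer_dn)
    entry = {
        "objectClass": ["top", "person", "organizationalPerson",
                        "inetOrgPerson"],
        "uid": [uid],
        "cn": [cn],
        "sn": [cn.split()[-1]],  # crude surname guess for illustration
        "userPassword": [ssha],
    }
    return dn, entry
```

Revoking access is then the mirror image: the application deletes (or disables) the employee's entry, and any session or ACL tied to that DN stops working.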

20/20 Hindsight: Preventing Stale Directory Data from Accumulating

Delegating the creation of new employee entries to the individual retail managers is an effective way to cut down on costs. In fact, it's absolutely necessary because HugeCo's Human Resources division has no record of these employees at all.

However, it's also necessary to take steps to ensure that stale directory entries do not accumulate. When employees are terminated, it's the responsibility of the retailer's manager to remove their records. But what happens if the manager forgets to do this? The initial directory design depended on managers to perform this task, and it did not include any method for alerting the manager to the presence of stale data.

To prevent stale directory entries from accumulating, an automatic expiration system was put in place. Each employee entry in the directory (except for the manager's entry) is created with an expiration date six months in the future. One month before expiration, the manager is alerted to the fact that the entry is about to expire. The manager can easily reinstate the employee for an additional six-month period by clicking a button in the user management application; if the employee is not reinstated, the entry is removed from the directory. Behind the scenes, a Perl script (which uses the PerLDAP module) runs nightly and searches for employee directory entries that are about to expire. For each such entry found, the script arranges for the manager of that entry to be notified. The same script removes entries that have expired.
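The nightly sweep described above boils down to classifying entries by how close they are to their expiration date. The book's script is Perl/PerLDAP; this is a Python sketch of just that classification logic, with the entry representation (plain dicts with an `expires` date) assumed for illustration:

```python
import datetime

WARN_DAYS = 30   # alert the manager one month before expiration
TERM_DAYS = 180  # entries are created to expire six months out

def sweep_expirations(entries, today):
    """Split employee entries into those needing a warning and those to delete.

    Returns (to_warn, to_remove): entries within WARN_DAYS of expiring,
    whose managers should be notified, and entries already past their
    expiration date, which the nightly job removes from the directory.
    """
    to_warn, to_remove = [], []
    for e in entries:
        days_left = (e["expires"] - today).days
        if days_left < 0:
            to_remove.append(e)
        elif days_left <= WARN_DAYS:
            to_warn.append(e)
    return to_warn, to_remove

def reinstate(entry, today):
    """The manager clicked the reinstate button: extend by six months."""
    entry["expires"] = today + datetime.timedelta(days=TERM_DAYS)
    return entry
```

Reinstating an entry simply pushes its expiration date out another six months, so an active employee never falls out of the directory as long as the manager responds to the warnings.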

When a manager leaves his or her position with a retailer, HugeCo administrative staff remove his or her entry and add a new record when a new manager is hired. HugeCo representatives periodically contact the retailers via telephone as part of a regular administrative procedure, and this is frequently the point at which they discover that a manager has left a retailer. If a new manager has been appointed, a new entry is created, and access control lists (ACLs) in the directory are altered to grant appropriate privileges to the new manager.

These steps keep directory data from becoming stale, thereby improving its quality and usefulness.

Monitoring

HugeCo extended its existing monitoring system to monitor the extranet application services in the following ways:

  • eHealth SystemEDGE agents from Concord were installed on each of the three directory servers (master and two replicas). These agents verify that the ns-slapd process is running and generate an alert if it is not.

  • Custom code written in PerLDAP (see Chapter 19, Monitoring, for similar code) probes the directory on the LDAPS (LDAP-over-SSL) port (636) by retrieving an entry. The response time of the server is recorded, and alerts are generated if the probe fails or does not complete in a reasonable amount of time. The monitoring tools perform this type of probe because it closely matches the type of operation that the directory will service in handling client requests.
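The timing-and-alerting pattern behind the second probe can be sketched as follows. The book's probe is written in PerLDAP; this Python version deliberately takes the actual LDAPS search as a callable argument (an assumption made here so the sketch stays independent of any particular LDAP library), and only the 636/LDAPS detail comes from the text:

```python
import time

def probe_directory(search_fn, timeout_seconds=5.0):
    """Run one monitoring probe and decide whether to raise an alert.

    search_fn performs the actual work, e.g. retrieving a known entry
    over LDAPS on port 636. Returns (ok, elapsed_seconds): ok is False
    if the search raised an exception or exceeded timeout_seconds, in
    which case the caller would generate an alert.
    """
    start = time.monotonic()
    try:
        search_fn()
    except Exception:
        # A failed search (connection refused, TLS error, missing
        # entry) is treated the same as a slow one: alert.
        return False, time.monotonic() - start
    elapsed = time.monotonic() - start
    return elapsed <= timeout_seconds, elapsed
```

Recording the elapsed time even on success gives the monitoring system a response-time history, which is often more useful for spotting a degrading server than the pass/fail alerts alone.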

Troubleshooting

Procedures were added to the existing HugeCo directory escalation process to accommodate the Web and directory servers that support the new extranet applications.

Understanding and Deploying LDAP Directory Services (2nd Edition)
ISBN: 0672323168
Year: 2002
Pages: 242