A Piloting Road Map


The steps you will follow in establishing your pilot vary depending on your environment and the design of your service. The steps outlined in the following sections are typical and cover the most common scenarios. Don't worry if you need to deviate from these steps, or if you don't have the time or money to cover everything we suggest. Just make sure you cover all the bases that are important to you in your environment.

Prepilot Testing

Before you go to the trouble of setting up a pilot involving users, you should test some aspects of your service in a lab environment. Testing is different from piloting. Testing is done in a closed environment, usually just by you and your staff; piloting is more open: users are involved, and the scope is expanded. For example, you should do preliminary scale, performance, and functionality testing to select the software on which to run your pilot. These tests are best performed in a laboratory environment, where mistakes are easily corrected and configurations are easily changed.

In Chapter 13, Evaluating Directory Products, we talked about the steps to follow when selecting software for your directory. However, laboratory testing should be used for more than just selecting software. In the laboratory you can find out whether the system works for you. Such testing is a crucial step before you pilot or deploy any significant change to your service. Although successful testing doesn't necessarily mean the change will work for your users in the environment outside the lab, it gives you some confidence that it will.

Laboratory testing is aimed at answering objective questions about the system being tested. Does it do what it's supposed to do? Does it perform within acceptable limits? Can it scale to the required size? Naturally, to answer these questions effectively you need to have appropriate goals in mind for the features you're testing.

Make a list of objective, measurable goals for your system, similar to the one shown in Table 14.1. In this example the component being tested is a new directory server. The objective questions to be answered are whether the new server scales and gives acceptable performance while holding a certain number of entries, serving a predefined number of client connections, and performing a predefined set of queries. The entries, connections, and queries are chosen to reflect the expected typical load that the server will experience in production.

Laboratory testing cannot answer subjective questions about the system being tested. Questions such as "Will users like the system?" can be answered only by asking users during a pilot. Other questions that seem objective at first glance also cannot be answered without piloting in a real environment. For example, the interactions between your service and other services on the network, your network topology, various hard-to-predict failure modes, and real user behavior are difficult to reproduce in a laboratory environment. Piloting helps produce the appropriate conditions to answer these questions.

You should enter the piloting stage only after you have answered some basic questions by testing in the laboratory. Having done so, you can be much more confident that your pilot will succeed.

Table 14.1. Examples of Objective Criteria to Measure in a Laboratory Testing Environment

Description | Goal | Comments
Number of directory entries | 200,000 | 150,000 people entries; 30,000 group entries; 10,000 organizational unit entries; 10,000 miscellaneous entries.
Number of simultaneous connections | 500 | Some of these connections may be idle.
Number of simultaneous active connections | 50 | These connections are all performing the operations listed below.
Response time | Average of less than one second; no client should experience a delay longer than two seconds. | Response time is for clients performing simple equality searches on a single indexed attribute.
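
As a concrete example, the response-time goal in Table 14.1 lends itself to a simple scripted check. The following is a minimal sketch assuming the Python ldap3 library; the server name, base DN, and sample uid values are hypothetical placeholders for your own environment.

    # Check the Table 14.1 response-time goal: average under one second,
    # no single query over two seconds. Hostname, base DN, and uids are
    # hypothetical; substitute values from your own pilot environment.
    import statistics
    import time
    from ldap3 import Server, Connection, SUBTREE

    conn = Connection(Server('pilot-ldap.example.com'), auto_bind=True)

    timings = []
    for uid in ('jsmith', 'bchu', 'mgarcia'):  # typical queries
        start = time.perf_counter()
        # Simple equality search on a single indexed attribute
        conn.search('dc=example,dc=com', f'(uid={uid})',
                    search_scope=SUBTREE, attributes=['cn'])
        timings.append(time.perf_counter() - start)
    conn.unbind()

    average, worst = statistics.mean(timings), max(timings)
    print(f'average {average:.3f}s, worst {worst:.3f}s')
    assert average < 1.0 and worst < 2.0, 'response-time goal not met'

A scripted check like this turns each row of your goals table into a repeatable test you can rerun after every configuration change.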

Defining Your Goals

What do you want to achieve with your pilot? You will have different goals depending on your directory service type and your environment. How you define your goals leads you to focus your pilot on different aspects of the service. For example, consider the following goals:

  • To produce a directory service for direct use by many demanding users. In this case you might focus your pilot on the user experience. This means spending extra time measuring end-user response time, designing user interfaces, involving human factors expertise, piloting with a large and diverse user community, and conducting focus groups. You should measure your success by how much users like your service, how efficiently the service answers their queries, and how completely it serves their needs.

  • To produce a directory service for use by application developers. In this case your emphasis should be on the interfaces by which application developers access the directory. Measure your success by how easy the system is to use, how quickly new applications can be developed, and how much functionality the system provides. More information on LDAP-enabled applications can be found in Chapters 21, Developing New Applications, and 22, Directory-Enabling Existing Applications.

  • To produce a directory service containing sensitive data that serves the authentication and security needs of applications. If this is your goal, you should focus your pilot on security. This means completing a security analysis to ensure that the security measures protecting your directory are adequate and easy to use. You should measure your success by the security (both perceived and actual) that users of the system are afforded, the ease with which applications can use the security services provided by the directory, and the degree to which the security needs of all applications are covered. Chapter 12, Privacy and Security Design, discusses this topic in detail.

Other potential areas of focus exist, but it's important to realize that you cannot fully pilot every aspect of your service. If you could, you wouldn't have a pilot; you'd have a full-fledged directory service! Try to choose representative aspects of the service that validate your design and pilot those. When you finish your pilot, you should have a high degree of confidence in your overall approach, and you should be confident that any loose ends that you didn't address in your pilot are not going to be showstoppers.

Defining Your Scope and Time Line

The goals you want to achieve by piloting will help you define the scope of your pilot. Pilot scope has several possible dimensions, including the following:

  • How much will end users be involved in your pilot? Will your group of users be small and focused or large and diverse?

  • What aspects of your service will you pilot? Will you try to pilot a few aspects thoroughly, or will you cover all aspects in less depth?

  • Will the pilot have the same number of entries as the planned production service?

  • Will the pilot attempt to simulate the client load anticipated for the production service?

  • Will you pilot all of the applications planned for the production service, or a subset of them?

You will determine part of your scope by the time and resources you have to devote to your pilot. External constraints may be placed on you, or you may place constraints on yourself. A successful pilot is focused and has clear goals and objectives. Try to avoid endlessly piloting with no way of knowing when you're done. Your pilot may end because of either success or failure, and it's important to be able to recognize both outcomes. In the case of a successful pilot, your next step may be full deployment of the service. In the case of a failed pilot, your next step is probably to redesign, retest, and repilot.

A good practice is to draw a time line showing the major milestones in your pilot. This time line serves two purposes. First, it helps you map out the stages of your pilot, which helps you decide what needs to happen when. Second, it gives you a good reality check on the pilot itself. If your time line leaves only a week for locating, training, and getting feedback from your pilot users, you know that you haven't budgeted enough time.

The sample time line in Table 14.2 includes some time for testing, locating pilot users, rolling out the pilot, gathering feedback, and even applying that feedback to the pilot itself. This time line covers just over 12 weeks, an aggressive schedule. Unless you have very dedicated and motivated pilot users, don't expect to be able to do things this quickly.

Remember that the purpose of restricting the scope of your directory pilot is to ensure that it happens in a reasonable amount of time and with a reasonable amount of resources. Be as explicit as you can about your scope; avoid extending the pilot into areas that are beyond it.

Developing Documentation and Training Materials

Your pilot may involve users who are not familiar with the service being piloted. Documentation, training materials, and other information can help prepare your pilot users to be effective participants. You might be able to revise these materials and give them to your production users, so it's important to pilot these materials along with the directory service itself.

Table 14.2. A Sample Pilot Time Line

Task | Start Date | Duration
Laboratory testing | +0 weeks | 2 weeks
Locating pilot users | +0 weeks | 2 weeks
Pilot environment setup and rollout | +2 weeks | 1 week
Pilot operation | +3 weeks | 4 weeks
Data gathering | +4 weeks | 3 weeks
Incorporation of pilot feedback | +7 weeks | 2 weeks
Revised pilot environment setup and rollout | +9 weeks | 1 week
Revised pilot operation | +10 weeks | 2 weeks
Data gathering | +10 weeks | 2 weeks
Incorporation of pilot feedback into design | +12 weeks | 1 week

There are at least three broad categories of users you may need to address:

  1. End users. If end users interact directly with your directory service, you should provide them with documentation and training materials. End-user documentation is often tutorial. You cannot assume that your users know much about your service or directories in general. You must educate them if you expect them to use the system and be effective pilot participants.

    On the other hand, if your directory service is not directly accessible to end users (for example, if the directory is used to provide authentication and personalization services for a Web-based portal), then it's likely that any required end-user documentation will be provided by the portal designers.

  2. Administrators. Administrative users typically require a different kind of documentation. There are three types of administrators: directory system administrators, directory content administrators, and directory-enabled application administrators. Document the procedures they follow, provide troubleshooting guides for when things go wrong, and train them in the use of the system. Try not to cut corners on documentation for your administrative users; they are responsible in large part for making the system run smoothly.

  3. Application developers. Application developers are often the most sophisticated users of your directory. They also require the most extensive training and documentation materials. Application developers usually need to know everything users need to know, but they also have to understand how to access the directory from their application. Furthermore, they need to know about your directory's naming conventions, available schemas, how to access the directory through an API, and more. You can usually count on developers to be willing to tolerate rougher edges than users, but do not underestimate the amount of information they need to do their job.

Selecting Your Users

Selecting your users is important, especially if your service is targeted at end users (as opposed to a small set of applications you control). The users of your directory service are the least predictable variable in your directory equation. Technical problems, such as inadequate capacity, can be solved relatively easily; problems involving user perceptions and expectations can be much harder to solve.

If you do a good job of selecting pilot users, you will have a representative sample of your ultimate directory service user community. Making your pilot users happy translates directly into making your production users happy. No system is perfect, of course, but choosing your pilot users wisely goes a long way toward ensuring a successful directory deployment.

If you do a poor job of selecting pilot users, on the other hand, you will not have a representative sample of your ultimate directory service user community. Making your pilot users happy may then have no relation to the happiness of your production users. From a user perspective, you might as well have not piloted your directory service at all.

How do you select a good set of pilot users? There is no foolproof method, but here are some guidelines:

  • Know who the users of your production service are going to be. It's important to know the ultimate audience of your directory service. If you don't know this, there is little chance you will select a representative group of pilot users. Be explicit about this: write down the types of people you expect to use your directory.

  • Pick your users; don't let your users pick you. You may be tempted to ask for volunteers to pilot your directory service. Although this is fine in some environments, volunteers are a self-selecting group that tends to be more outgoing, more comfortable with computers, and usually more experienced than the general user population. As such, they often make poor representatives of your user community. On the other hand, if your pilot goals are focused on testing the system components of your directory more than perfecting the user experience, a self-selecting group of users might work just fine. In fact, the extra sophistication and experience these users bring to the pilot may even be an advantage in giving the system a better workout.

    If your goal is accurate representation, a better approach may be to recruit users from each group in your organization. This way you can be sure to get appropriate representation from all important constituencies. Also be sure to include users with varying degrees of sophistication. This approach may be difficult. Be prepared to offer some kind of incentive to your pilot participants, such as cash, T-shirts, a free lunch, or another perk. If your pilot offers real advantages to users (for example, better performance), be sure to explain that to your potential pilot users.

    Another good approach is to use a combination of volunteers and handpicked users. The volunteers are easy to get, perhaps by advertisement on the Web. Handpicked users ensure that good representation is maintained. You might accomplish a good balance by using a staged approach: Use volunteers first to work out the early bugs, and then use representative users to make sure the system works for your community.

  • Make your expectations clear. It's important to tell your pilot users what you expect from them and what they can expect from you. Making your expectations clear can help weed out inappropriate pilot users who are not prepared to contribute to the pilot. It also helps users prepare themselves for the pilot and budget their time.

    Explaining to your pilot users what they can expect from you also helps avoid mismatched expectations. This applies to any remuneration they will receive for participating in the pilot, the level of support you can provide to them, the quality of service they can expect, and how their feedback will be incorporated.

  • Don't forget administrators. Piloting is not solely about end users; you also have administrative procedures to test. Don't forget this important aspect of your service or the important administrative users who perform these procedures. Administrative procedures that you might want to pilot include data maintenance, exception handling, data backup and restoration, and recovery from disasters. Be sure to factor these procedures and the corresponding users into your plans.

After you've selected an appropriate number of pilot users, there are some steps you should follow before, during, and after the pilot process. The following list is a minimal set of things to accomplish:

  • Prepare your users. Make sure the users you select have the tools and training necessary to be effective pilot users. If the goal of your pilot is to see how users with no training cope with your directory, no training is necessary. On the other hand, if your goal is to exercise the system and ensure that a wide range of experienced users are served, make sure you provide any necessary training.

  • Be responsive. It's important to be responsive to your pilot users. After all, they are going out of their way to help you make your service better. Be as responsive as you can to their needs by answering questions, responding to feedback, providing support, and so on.

  • Provide feedback. Your pilot users should feel a sense of ownership, or at least knowledge, of your directory service. One good way to do this is to provide them with constant feedback. If the pilot encounters problems, explain what went wrong. If you make changes to the service, explain what you did. If you gather statistics on the system during the pilot, share them with your users. All these steps will make your users feel more involved in the pilot and more likely to provide good feedback.

    One way to respond to users is to create a mailing list or discussion group containing everyone involved in the pilot. You can send regular status reports, notification of exceptional conditions, and other information using this list, and you'll know that they will reach all pilot participants.

All of these steps can help you develop a successful relationship with your pilot users, which is important if you expect to conduct pilot activities in the future.

Setting Up Your Environment

At the same time that you select your user population, you should set up the environment for your pilot. You want things ready to go as soon as your users are identified. Remember, you are not piloting just to see whether users like the service; you are also piloting to test all the procedures you've designed for creating and maintaining the service and its content. In addition, you are piloting to see whether the system works efficiently as a whole.

The multiple purposes of piloting make your choice of pilot environment even more important. Procedures that work well in one environment may not work at all in another. For example, suppose you rely on a local disk for your directory database during the pilot, but the production service needs to run over Sun's Network File System (NFS). The product you select may not work over NFS, and even if it does, performance may not be acceptable. Similar concerns can arise with your networking, hardware, and software.

The kind of environment you end up with for your pilot depends on the kind of environment you will have in production and the resources you have to duplicate it. Ideally, you will set up a pilot environment that exactly duplicates your production environment. In this way you minimize problems resulting from environment changes from pilot to production. Practical considerations, however, often will force you to create a pilot environment that doesn't match what will be used in production. The differences may run the gamut from using bits and pieces of leftover equipment to using less expensive versions of all your production machines. If you find yourself having to scrimp in this kind of situation, your pilot can still be effective, but you will need to be prepared to deal with the uncertainty that this situation will bring. For example, you may need to extrapolate to determine the capacity of your production environment on the basis of performance data collected from a much smaller pilot environment.

Tip

When your pilot is concluded, keep the equipment used during the pilot as a test bed. As you make changes to your service, you can pilot them on your test bed hardware. The test bed provides a convenient staging service for improvements to your directory service.


Whatever equipment you have at your disposal, keep the following advice in mind when designing your pilot environment:

  • Software versions. Use the same operating system versions (including patches or service packs) on your pilot machines that you will use on your production machines. The same guideline applies to backup software, third-party software, and the directory server software itself.

  • Hardware configuration. Try to use similar hardware configurations in both the pilot and production environments, including such things as the type of processor, number of processors, type of disk drive, type of backup device, network controller, and other hardware. Some things are more likely to cause problems than others. For example, moving to a multiple-CPU system in production might create problems not encountered on a single-CPU system in your pilot.

  • Network configuration. Try to ensure that the network configuration of your pilot servers and clients is similar to the production network configuration, including things such as the available bandwidth, the topology of the network, the amount of other traffic on the network, and the reliability of the network links. Your pilot system may be snappy, but the production system could seem very slow because of too much traffic on the production network or a different topology creating longer network latency.

It's a good idea to make a map of your pilot system. Identify its major components and the network links between them. Label the map with the hardware, software, and type and speed of network links at each component. Identify links between replicas and the role each replica serves. Compare this to the similar map you have made for your production system. Look for any obvious differences, especially in the trouble areas just mentioned. An example of a pilot environment map is shown in Figure 14.1.

Figure 14.1. A Sample Pilot Environment Map
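
One way to make the comparison systematic is to record each map in machine-readable form and let a script report the differences. The following is a minimal sketch; the hostnames, hardware details, and link speeds are entirely hypothetical.

    # Compare pilot and production environment maps attribute by attribute.
    # All hostnames, hardware details, and link speeds are hypothetical.
    pilot = {
        'ldap-master':  {'cpus': 1, 'ram_gb': 4, 'disk': 'local', 'link': '100Mb'},
        'ldap-replica': {'cpus': 1, 'ram_gb': 4, 'disk': 'local', 'link': '100Mb'},
    }
    production = {
        'ldap-master':  {'cpus': 4, 'ram_gb': 16, 'disk': 'NFS', 'link': '1Gb'},
        'ldap-replica': {'cpus': 2, 'ram_gb': 8,  'disk': 'local', 'link': '1Gb'},
    }

    for host in sorted(set(pilot) | set(production)):
        p, q = pilot.get(host, {}), production.get(host, {})
        for key in sorted(set(p) | set(q)):
            if p.get(key) != q.get(key):
                print(f'{host}.{key}: pilot={p.get(key)!r} production={q.get(key)!r}')

Every mismatch the script reports is a place where pilot results may not predict production behavior and where you may need to extrapolate rather than measure directly.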

After you've designed your pilot environment, you need to build it. Do this in plenty of time to fix problems before the pilot's official start date. As with your production system, some problems will undoubtedly crop up only in the implementation phase. Leave yourself enough time to fix any glitches that arise.

Rolling Out the Pilot

There are several steps to rolling out your pilot. The steps you use may vary somewhat depending on how large your pilot is and the type of users involved. The most common steps are outlined here:

  1. Bring up the servers.

  2. Test the servers.

  3. Put system administrator feedback mechanisms in place.

  4. Roll out documentation, training, and clients to system administrators.

  5. Put end-user feedback mechanisms in place.

  6. Begin to distribute software to end users, if required.

  7. Begin to distribute documentation and perform training.

  8. Get early feedback.

  9. Widen the distribution of the pilot.

We recommend rolling out the pilot to system administrators before end users. System administrators, who are relatively few in number and with whom you probably already have a good working relationship, should go first. If things go smoothly for them, begin rolling out the pilot to a small group of end users. If things go well for that group, expand the scope of the pilot to other end users.

Make sure that your feedback mechanisms are in place before rolling out each stage of the pilot. If you don't, you may lose important feedback and, more importantly, your pilot users' confidence. Also be sure to roll out documentation and training as you roll out the pilot to users. Failure to provide this type of assistance can confuse users, giving them a bad perception of the pilot.

Collecting Feedback

With your pilot up and running, you need to begin collecting feedback. Keep in mind that this is the whole point of your pilot, so this step is important. There are several different kinds of feedback you can collect:

  • User feedback. User feedback from your pilot is your one chance to get an early look at what your users think. Then you can modify your system accordingly.

  • System administrator feedback. System administrator feedback comes from your pilot users administering the directory and its content. Collecting and acting on this feedback will have a profound positive influence on the maintainability and reliability of your directory.

  • System feedback. During your pilot, monitor the performance of your servers, your network, and any other relevant system parameters. Collecting and incorporating this feedback into your production system will make it run more smoothly and perform better.

  • Problem reports. Collect all the failure and problem reports you receive during the pilot. It's a good idea to save these reports and analyze them after the pilot, looking for trends that you might miss if you were analyzing only one problem at a time.

You can use various methods to collect feedback, depending on the type. To collect user feedback, you might use the following methods:

  • Interviews. Interviews conducted one-on-one by someone with experience can yield more effective feedback than practically any other method. The downside of this approach is that it is labor intensive and time-consuming, but it's an excellent idea if you can afford it.

  • Focus groups. In this approach, small groups of users are interviewed about the pilot. Focus groups have many of the same advantages as one-on-one interviews, with less cost. A great deal of expertise is still needed to conduct effective interviews, but the total time is reduced by the size of the focus group. However, finding a convenient time to schedule focus group sessions can be difficult.

  • Online comments. Setting up an online service through which users can respond is another effective feedback mechanism. You need to find a balance between allowing users to comment in a completely free-form manner and restricting feedback with a mechanism such as a multiple-choice comment form. The free-form mechanism allows maximum flexibility, but users are often unclear in their comments if left to their own devices; be prepared to follow up on these unclear comments. The restricted mechanism produces results that are easy to parse and interpret, but you must ask the right questions and provide the correct choices, a difficult thing to get right.

A good approach may be to use a combination of methods. You want to get good feedback but avoid spending too much time developing fancy feedback mechanisms. Most of what you want can usually be achieved through a simple e-mail collection mechanism.

You can use the same techniques for collecting administrator and developer feedback that you use for user feedback. Of these techniques, one-on-one interviews are probably the best choice for administrators and developers; because these groups are relatively small, interviews are more feasible for them.

Tip

After you've solicited feedback from your pilot participants and acted on it, give them a summary of the feedback they provided and the actions you took. Such a summary helps your pilot users know that their participation was worthwhile, and it helps you obtain willing participants for future pilots.


When you're collecting system feedback, the techniques you use are quite different from those used to collect user feedback. Most of the techniques involve collecting data from your automated monitoring sources. Chapter 19, Monitoring, discusses monitoring of your directory in more detail, but some of the more common and useful techniques are listed briefly here:

  • Operating system monitoring. Use whatever tools your operating system provides to measure the general performance of your servers, including things like disk activity, paging activity, memory usage, system calls, and the ratio between user time and system time on the CPU. (The sketch following this list shows one way to automate such sampling.)

    This kind of monitoring is aimed at identifying system bottlenecks. For example, if you notice an inordinate amount of activity on one disk, you might consider switching some files with high write traffic to a second disk on your system. Doing so distributes the write traffic more evenly, reducing the bottleneck. As another example, if you notice a lot of paging activity, you might either buy more memory for your machine or tune parameters on your software to make it use less memory.

  • Directory software monitoring. This technique involves using the directory's own monitoring and auditing capabilities to determine how smoothly the system is operating. Such capabilities include directory error and access log files, monitoring through Simple Network Management Protocol (SNMP) or via LDAP (for example, using Netscape Directory Server's cn=monitor entry), and any other information the software makes available.

  • Directory performance monitoring. This technique involves directly monitoring the performance of the directory system by using scripts or other tools. The objective of collecting this kind of data is to understand the performance of the directory system itself: the performance that end users experience. Look for a low average response time with a small standard deviation. Make sure you measure response time on typical queries so that the data you collect is meaningful; the sketch following this list times one such query.

  • Directory-enabled application monitoring. This technique involves monitoring the performance of directory-enabled applications that depend on the directory for their own performance. Depending on the focus of your service, such feedback can be very important. For example, if your directory serves the needs of a directory-enabled e-mail delivery service, you'll want to know how well that service is functioning and whether the directory service is a bottleneck.
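
To make several of these techniques concrete, the following minimal sketch combines an operating system sample, a read of the server's monitoring entry, and a timed query into one periodic probe. It assumes the Python psutil and ldap3 libraries and is meant to run on the directory server machine; the hostname, base DN, and query are hypothetical, and the cn=monitor read applies only to servers that expose monitoring via LDAP.

    # A periodic probe combining OS statistics, a cn=monitor read, and a
    # timed directory query. Hostname, base DN, and query are hypothetical.
    import time
    import psutil
    from ldap3 import Server, Connection, BASE, SUBTREE

    conn = Connection(Server('pilot-ldap.example.com'), auto_bind=True)

    while True:
        # Operating system monitoring: memory, paging, and disk activity
        mem = psutil.virtual_memory()
        swap = psutil.swap_memory()
        disk = psutil.disk_io_counters()

        # Directory software monitoring: the server's cn=monitor entry
        conn.search('cn=monitor', '(objectClass=*)',
                    search_scope=BASE, attributes=['*'])
        monitor = conn.entries[0] if conn.entries else None

        # Directory performance monitoring: time a typical equality search
        start = time.perf_counter()
        conn.search('dc=example,dc=com', '(uid=jsmith)',
                    search_scope=SUBTREE, attributes=['cn'])
        elapsed = time.perf_counter() - start

        print(f'mem={mem.percent}% swapped_in={swap.sin} '
              f'disk_writes={disk.write_count} search={elapsed:.3f}s')
        if monitor is not None:
            print(monitor)
        time.sleep(60)  # sample once a minute; log to a file in practice

In practice you would write these samples to a log file and graph them over the life of the pilot so that trends, such as steadily increasing response time, stand out.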

When collecting data, be honest with yourself about what you know (because you measured it) and what you don't know. Your hunches about what's good and what's bad about the pilot may be valid, but there's no substitute for hard data. If the conclusion of your pilot is that you need to increase your budget, for example, having objective data to back up the conclusion is especially valuable.

Scaling Up the Pilot

By definition, your pilot is conducted on a smaller scale than your production service is. Of course, not every aspect of your pilot is necessarily scaled down from the production version. For example, your pilot may involve only a few users, but the pilot directory servers might contain as much data as the production servers do. Be careful to keep this in mind as you interpret data from your pilot.

You should also try to find ways to scale up selected portions of your pilot to increase your confidence in how the system will scale in production. You can scale up your pilot in several areas:

  • Number of entries

  • Size of entries

  • Number of connections

  • Number of queries

  • Number of replicas

Some of these dimensions can be tested in the laboratory. For example, you can test the number of entries and connections that a single server can handle. However, it's important to scale up some aspects of the service during the pilot while you have users using the service. Sometimes the interaction among several factors may combine to produce unexpected results. The more you can simulate these real-world interactions, the more realistic your scaling tests will be.

You can use many techniques to conduct realistic laboratory scaling tests. First formulate a model describing the kinds of loads and conditions you want to test, and then develop test clients that simulate these loads, as in the sketch below. Each test client may make many connections to the directory, simulating many real-world clients. Develop test data that increases the size of your directory. You can increase the number of entries along with their size, the number of values in each attribute, and so on (these don't have to be real entries). Think about your future needs in each area, and focus your testing on the areas where you expect growth.
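
As one example of such a test client, the following minimal sketch, which assumes the Python ldap3 library, starts a number of threads, each holding its own connection and issuing repeated equality searches. The hostname, base DN, client count, and filters are hypothetical; scale the numbers to match your model.

    # A load-generating test client: each thread simulates one directory
    # client with its own connection. All names and numbers are hypothetical.
    import threading
    import time
    from ldap3 import Server, Connection, SUBTREE

    N_CLIENTS = 50           # matches the active-connection goal in Table 14.1
    QUERIES_PER_CLIENT = 100

    def client_worker(client_id):
        conn = Connection(Server('pilot-ldap.example.com'), auto_bind=True)
        for i in range(QUERIES_PER_CLIENT):
            # Vary the filter so server-side caching does not mask performance
            conn.search('dc=example,dc=com', f'(uid=user{client_id}-{i})',
                        search_scope=SUBTREE, attributes=['cn'])
        conn.unbind()

    threads = [threading.Thread(target=client_worker, args=(n,))
               for n in range(N_CLIENTS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = N_CLIENTS * QUERIES_PER_CLIENT
    print(f'{total} searches in {time.perf_counter() - start:.1f}s')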

Introduce other factors into the system. For example, load the network links between clients and servers with other traffic. You might do this by writing a special-purpose client or by simply transferring some large files back and forth during your test runs. Most systems have good tools you can use to induce different kinds of loads. For example, spray is a good tool for loading the network, and mkfile is a good tool for loading the file system. Load the directory server machines with other processes to see whether the machine can be shared with other services.

Simulate network and hardware failures during the test by unplugging network cables and power cords. How does the system react? Do clients fail over to directory replicas? Is the directory server able to recover its database? Does replication recover gracefully? These questions are important to answer in any kind of environment, but they become more crucial in a large-scale directory environment. For example, if your directory does not recover gracefully from a power outage, you may have to rebuild the database from scratch. Rebuilding from scratch may be tolerable on a small scale, but for a big directory, it can introduce unacceptable downtime.
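
The client failover question can also be exercised from the client side. As a minimal sketch, the Python ldap3 library (an assumption here, not a requirement) supports server pools that retry against another replica when one becomes unavailable; the replica hostnames below are hypothetical. Run a query loop like this, unplug one replica, and watch whether queries continue against the other.

    # Observe client-side failover using an ldap3 server pool. With
    # active=True the pool skips unreachable replicas. Hostnames are
    # hypothetical.
    from ldap3 import Server, ServerPool, Connection, ROUND_ROBIN, SUBTREE

    pool = ServerPool([Server('replica1.example.com'),
                       Server('replica2.example.com')],
                      ROUND_ROBIN, active=True, exhaust=False)
    conn = Connection(pool, auto_bind=True)
    conn.search('dc=example,dc=com', '(uid=jsmith)',
                search_scope=SUBTREE, attributes=['cn'])
    print(conn.entries)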

Watch out for scaling behavior in which the directory system does not degrade gracefully. This kind of "brick wall," when met in a production environment, invariably comes as an unpleasant surprise. For example, consider a directory that can hold only a fixed number of entries or size of database: When these limits are reached, the directory ceases to function. Look for directory software that degrades gracefully as limits are reached.

Finally, make a note of any performance problems you encounter during your pilot and are unable to explain. For example, if you notice that performance of the directory degrades and then corrects itself, save the server logs and note the date, time, and any exceptional conditions you are aware of. It's a good bet that the problem will recur later (possibly after you've gone into the production phase), and the additional data you collect in the testing phase can be helpful in tracking down the underlying cause.

Applying What You've Learned

Applying what you've learned is the most important aspect of your pilot. After all, the whole point of doing the pilot is to learn how well your design works in practice. Naturally, if you learn of flaws in your design, you should make changes to correct them.

This is especially important during your directory's pilot stage. You will get feedback from your pilot that will change your design. Be prepared to incorporate these changes into the pilot itself, providing a feedback loop to let you know when you get things right.

In many areas you will receive feedback that you should incorporate. Here are some of the more important topics to listen for:

  • User experience. The directory experience of your users is perhaps the most subjective design criterion your directory service has. Therefore, it is the most important to validate through continuous refinement and user feedback.

  • Operating system configuration. If you find a problem with your operating system configuration, do your best to correct it during the pilot. Don't assume that upgrading to a newer version of the operating system will fix the problem you experienced; upgrading may work, but it could also introduce new problems. Be aware of the effect of configuration changes on your directory software.

  • Directory software configuration. You may need to change the configuration of your directory server software. For example, you may need to tune the software's database parameters to provide better performance. There are always trade-offs, however. Adding an attribute index will speed up searches on the indexed attribute, but it will increase the size of the database and reduce the speed of updates.

  • Directory-enabled application configuration. One or more of the directory-enabled application clients may need to be reconfigured or even recoded. For example, the application may be making inefficient use of the directory by opening a new connection each time it wants to do a query instead of reusing an open connection (the sketch following this list illustrates the difference). Or the application may be using several searches when one would suffice. Chapter 21, Developing New Applications, describes various techniques that application developers can use to make their applications play nicely with the directory.

  • Hardware configurations. You may need to upgrade your hardware because of capacity or other problems. Be sure to pilot with the upgraded hardware; it may introduce other problems, or it may not solve the problem you thought it was solving. Use your laboratory and your pilot to experiment with different hardware configurations and combinations of hardware and software.

  • Network configuration. The network topology of your directory servers may be inadequate. For example, you may find when you do scale testing that you need to move your servers to a high-speed network.

  • Server topology. Your server topology may be inadequate. For example, you may decide that you need a replica in each of your branch offices because your WAN links are not reliable enough. Or you may find that the distribution topology you planned leads to poor performance. In this case you may need to redesign your topology so that the data is closer to the clients that need it. Be sure to pilot these changes, and make sure your clients are configured to take advantage of the new topology.
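
To illustrate the connection-reuse point from the list above, here is a minimal sketch assuming the Python ldap3 library; the hostname, base DN, and uid values are hypothetical. The first loop pays the connection setup and bind cost on every query; the second pays it once.

    # Compare connection-per-query against a single reused connection.
    # Hostname, base DN, and uids are hypothetical.
    import time
    from ldap3 import Server, Connection, SUBTREE

    server = Server('pilot-ldap.example.com')
    uids = ['jsmith', 'bchu', 'mgarcia']

    # Inefficient: open, bind, and tear down a connection for every query
    start = time.perf_counter()
    for uid in uids:
        conn = Connection(server, auto_bind=True)
        conn.search('dc=example,dc=com', f'(uid={uid})',
                    search_scope=SUBTREE, attributes=['cn'])
        conn.unbind()
    print(f'connection per query: {time.perf_counter() - start:.3f}s')

    # Better: open one connection and reuse it for all queries
    start = time.perf_counter()
    conn = Connection(server, auto_bind=True)
    for uid in uids:
        conn.search('dc=example,dc=com', f'(uid={uid})',
                    search_scope=SUBTREE, attributes=['cn'])
    conn.unbind()
    print(f'reused connection: {time.perf_counter() - start:.3f}s')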

It's important to incorporate as much feedback as possible and then repilot the service with the design changed accordingly, but you also need to be realistic. You will not be able to incorporate all the feedback you receive. Some feedback is so trivial that there is no need to repilot. Some of it will be bad advice. Some of it will not be practical. Some pieces of feedback may conflict with others. For each piece of feedback you receive, decide if it is important enough to try to incorporate into the pilot, or if it can safely be resolved during the transition to production. At the end of your pilot, each issue should have been addressed, or a plan of action for addressing that issue should exist.
