
A Piloting Road Map

The steps you will follow in establishing your pilot vary depending on your environment and the design of your service. The steps outlined in the following sections are typical and cover the most common scenarios. Don't worry if you find you need to deviate from these steps, or if you don't have the time or money to cover everything we suggest. Just make sure you cover all the bases that are important to you in your environment.

Defining Your Goals

What do you want to get out of your pilot? You will have different goals depending on the type of service you have and the environment you're in. How you define your goals leads you to focus your pilot on different aspects of the service. For example, consider the following goals:

  • To produce a directory service for direct use by many demanding users.   In this case, you might focus your pilot on the user experience. This means spending extra time designing user interfaces, involving human factors expertise, piloting with a large and diverse user community, and conducting focus groups. You should measure your success based on how much users like your service, how efficiently it answers their queries, and how completely it serves their needs.

  • To produce a directory service for use by application developers.   In this case, your emphasis should be on the interfaces by which application developers access the directory. Your pilot users in this case would be developers. You should measure your success based on how easy the system is to use, how quickly new applications can be developed, and how much functionality the system provides. More information on LDAP-enabled applications can be found in Chapter 20, "Developing New Applications," and Chapter 21, "Directory-Enabling Existing Applications."

  • To produce a directory service containing sensitive data that serves the authentication and security needs of applications.   If this is your goal, you should focus your pilot on security. This means going through a security analysis to ensure that the security measures protecting your directory are both adequate and easy to use. You should measure your success based on the security (both perceived and actual) that users of the system are afforded, the ease with which applications can use the security services provided by the directory, and the degree to which the security needs of all applications are covered. Chapter 11, "Privacy and Security Design," discusses this topic in detail.

Other potential areas of focus exist, of course, and in practice the goals in the preceding list often have an even tighter focus. For example, an application directory might be targeted specifically at extranet applications. You probably want to cover all of these areas to some degree, but it's important to realize that you cannot fully pilot every aspect of your service. If you could, you wouldn't have a pilot; you'd have a full-fledged directory service! The same goes for pre-pilot testing: You can't test everything, so focus your tests on the most important aspects of the service.

Defining and prioritizing the goals you have in piloting your directory service helps you focus your efforts and make your pilot more effective.

Defining Your Scope

The goals you want to achieve by piloting naturally define the scope of your pilot. This scope has several dimensions: How much will users be involved in your pilot? Will your group of users be small and focused or large and diverse? What aspects of your service will you pilot? Will you try to pilot a few aspects thoroughly, or will you cover all aspects in less depth?

Part of your scope will be determined by the time and resources you have to devote to your pilot. You may have external constraints placed on you, or you may place constraints on yourself. A successful pilot is focused and has clear goals and objectives. Try to avoid piloting endlessly with no way of knowing when you are done. Your pilot may end in either success or failure, and it's important to be able to recognize both outcomes. In the case of a successful pilot, your next step may be full deployment of the service. In the case of a failed pilot, your next step is probably to redesign, retest, and repilot.

A good practice is to draw a timeline showing the major milestones in your pilot. This timeline serves two purposes. First, it helps you map out the stages of your pilot, which helps you decide what needs to happen when. Second, it gives you a good reality check on the pilot itself. If your timeline leaves only a week for locating, training, and getting feedback from your pilot users, you'll know that you haven't budgeted enough time.

The sample timeline in Table 13.2 includes some time for testing, locating pilot users, rolling out the pilot, gathering feedback, and even applying that feedback to the pilot itself. As you can see, this timeline takes just over 12 weeks, an aggressive schedule. Unless you have very dedicated and motivated pilot users, don't expect to be able to do things this quickly.

Table 13.2. An example of a pilot timeline

Task Description                               Start Date   Duration
Laboratory testing                             +0 weeks     2 weeks
Locating pilot users                           +0 weeks     2 weeks
Pilot environment setup and rollout            +2 weeks     1 week
Pilot operation                                +3 weeks     4 weeks
Data gathering                                 +4 weeks     3 weeks
Incorporating pilot feedback                   +7 weeks     2 weeks
Revised pilot environment setup and rollout    +9 weeks     1 week
Revised pilot operation                        +10 weeks    2 weeks
Data gathering                                 +10 weeks    2 weeks
Incorporating pilot feedback into design       +12 weeks    1 week

Constructing an explicit timeline also helps you work more efficiently. A timeline can help identify opportunities to perform tasks in parallel, decreasing the time necessary for the pilot. For example, you may decide that locating pilot users can be done in parallel with setting up the service, or that data gathering can completely overlap with pilot operation.
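
Seeing the schedule as data makes the overlaps explicit. The following minimal Python sketch is purely our illustration; the task list is simply Table 13.2. It computes when each task finishes and how long the whole pilot takes:

    # Minimal sketch (illustration only): lay out the Table 13.2 schedule
    # and compute the overall pilot length. Tasks that share weeks run in
    # parallel; for example, locating pilot users overlaps laboratory testing.
    tasks = [
        # (task, start week, duration in weeks)
        ("Laboratory testing",                          0, 2),
        ("Locating pilot users",                        0, 2),
        ("Pilot environment setup and rollout",         2, 1),
        ("Pilot operation",                             3, 4),
        ("Data gathering",                              4, 3),
        ("Incorporating pilot feedback",                7, 2),
        ("Revised pilot environment setup and rollout", 9, 1),
        ("Revised pilot operation",                    10, 2),
        ("Data gathering (revised pilot)",             10, 2),
        ("Incorporating pilot feedback into design",   12, 1),
    ]

    for name, start, duration in tasks:
        print(f"{name:<45} weeks {start:>2} to {start + duration:>2}")

    # The pilot ends when the last task finishes: 13 weeks here.
    print("Total length:", max(s + d for _, s, d in tasks), "weeks")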

Remember that the purpose of restricting the scope of your directory pilot is to ensure that it actually happens in a reasonable amount of time with a reasonable amount of resources. Be as explicit as you can about your scope; avoid extending the pilot into areas that are beyond it.

Developing Documentation and Training Materials

Your pilot may involve users who are not familiar with the service being piloted. Therefore, you should develop documentation, training materials, and other information to prepare your pilot users to be effective participants. You might later be able to revise these materials and give them to your production users, so it's important to pilot the materials along with the directory service itself.

There are at least three broad categories of users you should be sure to address:

  • End users

  • Administrators

  • Application developers

End user documentation is often tutorial in nature. You cannot assume that your users know much about your service or directories in general to begin with. You must educate them if you expect them to use the system and be effective pilot participants.

The complexity of your end user documentation and training materials depends on the complexity of your directory service, the tasks it will be used for, and the sophistication of your users. A simple phonebook service, for example, may require only online help. A more complicated directory service may require a printed user manual. An even more complicated system may require users to attend some kind of training session. Be sure to determine the level of documentation and training required by your users.

Administrative users typically require a different kind of documentation. There are three types of administrators: directory system administrators, directory content administrators, and directory-enabled application administrators. You must document the procedures they follow, provide troubleshooting guides for when things go wrong, and perhaps train them in the use of the system. Don't scrimp on documentation for your administrative users; they are responsible in large part for making the system run smoothly.

Application developers are often the most sophisticated users of your directory. They also require the most extensive training and documentation materials. Application developers usually need to know everything users need to know, but they also have to understand how to access the directory from their application. Furthermore, they need to know about your directory's naming conventions, available schema, how to access the directory through an API, and more. You can usually count on developers to be willing to tolerate rougher edges than users, but do not underestimate the amount of information they need to do their job.

Selecting Your Users

Selecting your users is important, especially if your service is targeted at end users (as opposed to a small set of applications you control). The users of your directory service are the least predictable variable in your directory equation. Technical problems, such as inadequate capacity, can be solved relatively easily; problems involving user perceptions and expectations can be much harder to solve. It's important to detect and correct these problems early.

It's also important to make a good first impression with your directory service. This is important in a pilot, but it's absolutely critical in a production deployment. A bad first impression, no matter what the cause, can spell disaster and sometimes demise for an otherwise worthy project. The best way to be confident that you will make good first and subsequent impressions in production is to know that you've already made a favorable impression on a similar audience. This is where your pilot users come in. A good pilot provides a smooth transition to a production service. (Chapter 15, "Going Production," describes in detail the process of moving from pilot to production.)

Tip

When making the transition from pilot to production, make sure you have a backup plan in case things go wrong. Ideally, you should be able to switch back to the old production system quickly and seamlessly in case of trouble.



If you do a good job of selecting pilot users, you will have a representative sample of your ultimate directory service user community. Making your pilot users happy translates directly into making your production users happy. No system is perfect, of course, but choosing your pilot users wisely goes a long way toward ensuring a successful directory deployment.

If you do a poor job of selecting pilot users, you will not have a representative sample of your ultimate directory service user community. Making your pilot users happy may then have no relation to the happiness of your production users. From a user perspective, you might as well have not piloted your directory service at all.

Naturally, this raises a question: How can you select a good set of pilot users? There is no foolproof method, but here are some guidelines that you should follow:

  • Know your users.   It's important to know the ultimate audience of your directory service. If you don't know this, there is little chance you will select a representative group of pilot users. Be explicit about this. Write down the types of people you expect to use your directory.

  • Pick your users; don't let your users pick you.   You may be tempted to ask for volunteers to pilot your directory service. Although this is fine in some environments, you need to realize an important point: Volunteers are a self-selecting group that tends to be more outgoing, more comfortable with computers, and usually more experienced than the general user population. As such, they often make poor representatives of your user community.

    On the other hand, if your pilot goals are focused on testing the system components of your directory more than perfecting the user experience, a self-selecting group of users might work just fine. In fact, the extra sophistication and experience these users bring to the pilot may even be an advantage in giving the system a better workout.

    If your goal is accurate representation, a better approach may be to recruit users from each group in your organization. This way you can be sure to get appropriate representation from all important constituencies. Also, be sure to include users of varying degrees of sophistication. This approach may be difficult. Be prepared to offer some kind of incentive to your pilot participants, such as cash, T-shirts, a free lunch, or some other perk. If your pilot offers real advantages to users (e.g., better performance), be sure to explain that to your potential pilot users.

    Another good approach is to use a combination of volunteers and hand-picked users. The volunteers are easy to get, perhaps by advertising on the Web. The hand-picked users ensure that good representation is maintained. You might accomplish this by using a staged approach: Use volunteers first to work out the early bugs, and then use representative users to make sure the system works for your community.

  • Make your expectations clear.   It's important to tell your pilot users what you expect from them and what they can expect from you. Making your expectations clear can help weed out inappropriate pilot users who are not prepared to contribute to the pilot. It also helps users prepare themselves for the pilot and budget their time.

    Explaining to your pilot users what they can expect from you also helps avoid mismatched expectations. This applies to any remuneration they will receive for participating in the pilot, the level of support you can provide to them, the quality of service they can expect, and how their feedback will be incorporated.

  • Don't forget administrators.   Piloting is not solely about end users; you also have administrative procedures to test. You should not forget this important aspect of your service, nor should you forget the important administrative users who perform these procedures. Administrative procedures you might want to pilot include data maintenance, exception handling, backup and restore, disaster recovery, and more. Be sure to factor these procedures and the corresponding users into your plans.

After you've selected an appropriate number of pilot users, there are a number of steps you should be sure to follow before, during, and after the pilot process. The following list is a minimal set of things to accomplish:

  • Prepare your users.   Make sure the users you select have the tools and training necessary to be effective pilot users. If the goal of your pilot is to see how users with no training cope with your directory, no training is necessary. On the other hand, if your goal is to exercise the system and ensure that a wide range of experienced users are served, make sure you provide any necessary training.

  • Be responsive.   It's important to be responsive to your pilot users; after all, they are going out of their way to help you make your service better (very few of them are in it for their health). You should strive to be as responsive as you can to their needs. This includes answering questions, responding to feedback, providing support, and so on.

  • Provide feedback.   Your pilot will be more successful if your pilot users feel a sense of ownership, or at least knowledge, of your directory service. One good way to do this is to make sure you provide constant feedback to the pilot users. If the pilot encounters problems, explain what went wrong. If you make changes to the service, explain what you did. If you gather statistics on the system during the pilot, share them with your users. All these steps will make your users feel more involved in the pilot and, therefore, more likely to be motivated to provide good feedback.

    One way to respond to users is to create a mailing list or discussion group containing everyone involved in the pilot. You can send regular status reports, notification of exceptional conditions, and other information using this list, and you'll know that they will reach all pilot participants.

All of these steps can help you develop a successful and long-lasting relationship with your pilot users, which is important if you expect to conduct pilot activities in the future. It can be very handy, not to mention less expensive, to have a batch of willing pilot participants on hand.

Setting Up Your Environment

At the same time that you select your user population, you should set up the environment for your pilot. You want things ready to go as soon as your users are identified. It's important to set up the proper environment for your pilot. Remember, you are not piloting just to see whether users like the service; you are also piloting to test all the procedures you've designed for creating and maintaining the service and its content. You are also piloting to see whether the system works efficiently as a whole.

This makes your choice of pilot environment even more important. Procedures that work well in one environment may not work at all in another. For example, suppose you rely on a local disk for your directory database during the pilot, but the production service needs to run over Sun's Network File System (NFS). The product you select may not work over NFS, and even if it does, performance may not be acceptable. Similar concerns can arise with your networking, hardware, software, and so on. Try to duplicate your production environment as closely as you can. If necessary, do so on a smaller scale. For example, use fewer servers, but try to use the same type of servers found in your production environment.

The kind of environment you end up with for your pilot depends on the kind of environment you will have in production and the resources you have to duplicate it. Ideally, you would set up a pilot environment that exactly duplicates your production environment. This way, you could minimize problems resulting from environment changes from pilot to production. The same holds true for any test equipment you use during the pre-pilot testing phase: For the tests to be useful, try to make the test equipment match the production equipment as closely as possible.

Tip

When your pilot is concluded, try to maintain the pilot equipment as a testbed. As you make changes to your service, you can pilot them on your testbed hardware. The testbed provides a convenient staging service for improvements to your directory service.



More often than not, resource and financial restrictions prohibit the development of an exact duplicate of your production system. Instead, you are often forced to make do with what you have. This may run the gamut from bits and pieces of leftover equipment to less expensive versions of all your production machines. If you find yourself having to scrimp in this kind of situation, be aware that your pilots will likely not be as effective.

Whatever equipment you have at your disposal, keep the following advice in mind when designing your pilot environment:

  • Software versions.   Make sure you use the same operating system versions on your pilot machines that you will use on your production machines. This also applies to backup software, third-party software, and, of course, the directory itself (unless the purpose of your pilot is to try out a new version of the directory software).

  • Hardware configuration.   As much as possible, make sure you use similar hardware configurations in both the pilot and production environments. This includes such things as the type of processor, number of processors, type of disk drive, type of backup device, network controller, and other hardware. Some things are more likely to cause problems than others. For example, moving to a multiple-CPU machine in production might create problems not exercised on a single-CPU machine in your pilot.

  • Network configuration.   Try to ensure that the network configuration of your pilot servers and clients is similar to the production network configuration. This includes things such as the available bandwidth, the topology of the network, the level of other traffic on the network, and the reliability of the network links. Network configuration can have an especially significant impact on the response time that users of the system perceive. Your pilot system may be pretty snappy, but the production system could seem very slow because of too much traffic on the production network or a different topology that creates longer network latency.

It's a good idea to make a map of your pilot system. Identify its major components and the network links between them. Label the map with the hardware, software, and type and speed of network links at each component. Identify links between replicas and the role each replica serves. Compare this to the similar map you have made for your production system. Look for any obvious differences, especially in the trouble areas just mentioned. An example of a pilot environment map is shown in Figure 13.1.

Figure 13.1 An example of a pilot environment map.

After you've designed your pilot environment, you need to build it. Make sure you do this in plenty of time to fix problems before the pilot's official start date. As with your production system, some problems will undoubtedly crop up only in the implementation phase. Be sure to leave yourself enough time to fix any glitches that arise.

Rolling Out the Pilot

There are several steps to rolling out your pilot. The actual steps you use may vary somewhat depending on how large your pilot is and the type of users involved. The most common steps are outlined here:

  1. Bring up the servers.

  2. Test the servers.

  3. Put system administrator feedback mechanisms in place.

  4. Roll out documentation, training, and clients to system administrators.

  5. Put end user feedback mechanisms in place.

  6. Begin to distribute clients to users.

  7. Begin to distribute documentation and perform training.

  8. Get early feedback.

  9. Widen the distribution of the pilot.

The basic steps are simple: Bring up the service, distribute clients and documentation, and you are off and running. We've added a couple of steps to show that you should test things at each stage of the way and roll out the pilot incrementally, if possible.

We recommend rolling out the pilot to system administrators before end users. This often provides a good incremental deployment. System administrators, who are relatively few in number and with whom you should already have a good working relationship, should go first. If things go smoothly for them, begin rolling out the pilot to a small group of end users. If things go well for that group, expand the scope of the pilot to other end users.

Always make sure that your feedback mechanisms are in place before rolling out each stage of the pilot. Failure to do this may result in the loss of important feedback and, more importantly, a loss of confidence among your pilot users. Also, be sure to roll out documentation and training as you roll out the pilot to users. Failure to do this can result in confused users, which leads to a bad perception of the pilot.

Collecting Feedback

With your pilot up and running, you need to begin collecting feedback. Keep in mind that this is the whole point of your pilot, so this step is important. There are several different kinds of feedback you can collect:

  • User feedback.   This is among the most important feedback to collect from your pilot. Again, you get only one chance to make that important first impression on your user population. User feedback from your pilot is your one chance to get an early look at what your users think and then modify your system accordingly.

  • System administrator feedback.   System administrator feedback comes from your pilot users administering the directory and its content. Do not underestimate the importance of collecting and incorporating this feedback. It has a direct relationship to the ease with which your directory is maintained. A well-maintained directory is a happy directory.

  • System feedback.   This type of feedback you gather yourself. During your pilot, you should monitor the performance of your servers, your network, and any other relevant system parameters. Collecting and incorporating this feedback makes your production system run more smoothly and perform better.

  • Problem reports.   Make sure you analyze all the failures and problem reports you receive during the pilot. It's a good idea to save these reports and analyze them after the pilot, looking for trends that might be missed if you were to analyze only one problem at a time.

You can use various methods to collect feedback, depending on the type. To collect user feedback, you might use any of the following methods:

  • Interviews.   Interviewing users can provide very effective feedback. An experienced interviewer can obtain more useful feedback than practically any other method. The downside of this approach is that it is labor-intensive. For an interview to be most effective, it needs to be conducted one-on-one. This can be a time-consuming process, but it's a really good idea.

  • Focus groups.   With this mechanism, a small group of users is interviewed about the pilot. This has many of the same advantages as one-on-one interviews at less cost. A great deal of expertise is still needed to conduct effective interviews, but the total time is reduced roughly in proportion to the size of the focus group. Finding a convenient time to schedule focus group sessions can be difficult, however.

  • Online comments.   Setting up an online service through which users can respond is another effective feedback mechanism. You need to find a balance between a completely free-form comment mechanism and a restricted feedback mechanism such as multiple choice. The free-form mechanism allows maximum flexibility, but users are often unclear in their comments if left to their own devices; be prepared to follow up on unclear comments. The restricted mechanism produces results that are easy to parse and interpret, but it requires you to ask the right questions and provide the correct choices, which is a difficult thing to get right.

A good approach may be to use some combination of these methods. You want good feedback, but avoid spending too much time developing fancy feedback mechanisms. Most of what you want can usually be achieved through a simple email-collection mechanism.

You can use the same techniques for collecting administrator and developer feedback that you use for user feedback. Of all the techniques, one-on-one interviews are probably the best fit for administrators and developers; because there are relatively few of them, interviews are more feasible. Keep in mind that you should ask administrators and developers different questions than you ask your users.

Tip

One important thing to keep in mind when asking for feedback from users and administrators is this: Do not ask for feedback unless you intend to listen. There is nothing more frustrating than being asked for feedback and then feeling that the feedback has been ignored. This makes your users feel like they've wasted their time. Of course, this does not mean you need to take every suggestion made during the pilot. Sometimes you will be unable to incorporate feedback for very good reasons. In these situations, be sure to let users know why you didn't incorporate their feedback. When you do incorporate feedback, be sure to tell users about that, too. Remember that your pilot users are busy people just like you. Be proactive in soliciting feedback.



When collecting system feedback, the techniques you use are quite different. Most of the techniques involve collecting data from your automated monitoring sources. Chapter 18, "Monitoring," discusses monitoring of your directory in more detail, but some of the more common and useful techniques are listed briefly here:

  • OS monitoring.   Use whatever tools your operating system provides to measure the general performance of your servers. This includes things such as disk activity, paging activity, memory usage, system calls, and the ratio between user time and system time on the CPU. (A minimal monitoring sketch follows this list.)

    This kind of monitoring is aimed at identifying system bottlenecks. For example, if you notice an inordinate amount of activity on one disk, you might consider switching some files with high write traffic to a second disk on your system. This distributes the write traffic more evenly, reducing the bottleneck. As another example, if you notice a lot of paging activity, you might either buy more memory for your machine or tune parameters on your software to make it use less memory.

    Another way that system monitoring is useful is in evaluating your directory software. Look for unusual activity, especially as you scale up your service. It's better to identify bottlenecks before they degrade your production system.

  • Directory software monitoring.   This technique involves using the directory's own monitoring and audit capabilities to determine how smoothly the system is operating. These capabilities include directory error and access log files, monitoring capabilities through SNMP or via LDAP (for example, using Netscape Directory Server's cn=monitor entry), and any other information the software provides.

  • Directory performance monitoring.   This technique involves directly monitoring the performance of the directory system by using scripts or other tools. The objective of collecting this kind of data is to get a feel for the performance of the directory system itself: the performance that end users experience. (A response-time measurement sketch follows this list.)

    Important things to look for here are a low average response time with a small standard deviation. Make sure you measure response time to typical queries so that the data you collect is meaningful.

  • Directory-enabled application monitoring.   This technique involves monitoring the performance of directory-enabled applications that depend on the directory for their own performance. Depending on the focus of your service, this can be very important feedback to collect. For example, if your directory serves the needs of a directory-enabled email delivery service, you'll want to know how well that service is functioning and whether the directory service is a bottleneck.
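
To make OS monitoring concrete, here is a minimal sketch using the third-party psutil Python package (our choice for illustration; vmstat, iostat, sar, or whatever your platform provides work just as well). It samples CPU user/system time, memory usage, paging, and disk activity at a fixed interval so you can correlate OS behavior with directory load:

    import time
    import psutil  # third-party package: pip install psutil

    INTERVAL = 60  # seconds between samples

    while True:
        cpu = psutil.cpu_times_percent(interval=1)  # user vs. system CPU time
        mem = psutil.virtual_memory()               # memory usage
        swap = psutil.swap_memory()                 # paging activity
        disk = psutil.disk_io_counters()            # cumulative disk I/O
        print(f"{time.strftime('%H:%M:%S')} "
              f"user={cpu.user:.1f}% sys={cpu.system:.1f}% "
              f"mem={mem.percent:.1f}% swapout={swap.sout} "
              f"reads={disk.read_count} writes={disk.write_count}")
        time.sleep(INTERVAL)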
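
For directory performance monitoring, a short script that times representative queries is often all you need. The sketch below uses the third-party ldap3 Python module; the hostname, search base, and filters are hypothetical placeholders for your own pilot environment. It reports the mean and standard deviation of response time, the two numbers suggested above:

    import statistics
    import time
    from ldap3 import Server, Connection, SUBTREE  # third-party: pip install ldap3

    # Hypothetical pilot parameters; substitute your own.
    HOST = "ldap.pilot.example.com"
    BASE = "dc=example,dc=com"
    TYPICAL_FILTERS = [
        "(cn=Barbara Jensen)",         # exact-name lookup
        "(sn=Smith)",                  # common-surname search
        "(mail=bjensen@example.com)",  # mail-address lookup
    ]

    conn = Connection(Server(HOST), auto_bind=True)  # anonymous bind for this sketch

    timings = []
    for f in TYPICAL_FILTERS * 20:  # repeat each query for a usable sample
        start = time.perf_counter()
        conn.search(BASE, f, search_scope=SUBTREE, attributes=["cn", "mail"])
        timings.append(time.perf_counter() - start)
    conn.unbind()

    print(f"mean response time: {statistics.mean(timings) * 1000:.1f} ms")
    print(f"standard deviation: {statistics.stdev(timings) * 1000:.1f} ms")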

When collecting data, try to be as analytical as possible. Your hunches about what's good and what's bad about the pilot may well be valid, but there's no substitute for hard data. If the conclusion of your pilot is that you need to increase your budget, for example, having objective data to back up that conclusion is especially valuable.

Scaling It Up

By definition, your pilot is conducted on a smaller scale than your production service. Of course, not every aspect of your pilot is necessarily scaled down from the production version. For example, your pilot may involve only a few users, but the pilot directory servers might contain as much data as the production servers. Be careful to keep this in mind as you interpret data from your pilot.

You should also try to find ways to scale up selected portions of your pilot to increase your confidence in how the system will scale in production. There are several areas in which you can scale up your pilot:

  • Number of entries

  • Size of entries

  • Number of connections

  • Number of queries

  • Number of replicas

Some of these dimensions can be tested in the laboratory. For example, you can test the number of entries and connections a single server can handle. But it's important to scale up some aspects of the service during the actual pilot, while real users are on the system. Sometimes several factors interact to produce unexpected results. The more you can simulate these real-world interactions, the more realistic your scaling tests will be.

You can use many techniques to conduct realistic laboratory scaling tests. First, formulate a model describing the kinds of loads and conditions you want to test, and then develop test clients that simulate those loads, as sketched below. Each test client may make many connections to the directory, simulating many real-world clients. Develop test data that increases the size of your directory. You can increase the number of entries along with their size, the number of values in each attribute, and so on (the entries don't have to be real). Think about your future needs in each area, and focus your testing on the areas where you expect growth.
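
As one way to build such a test client, the following sketch (ours; the host, base, and uid values are hypothetical) uses Python threads and the third-party ldap3 module to simulate many concurrent clients, each opening its own connection and issuing a burst of searches:

    import threading
    import time
    from ldap3 import Server, Connection, SUBTREE  # third-party: pip install ldap3

    HOST = "ldap.pilot.example.com"  # hypothetical pilot server
    BASE = "dc=example,dc=com"
    CLIENTS = 50                     # simulated concurrent clients; raise to scale up
    QUERIES_PER_CLIENT = 100

    def simulated_client(client_id, results):
        """One simulated client: its own connection, then a burst of searches."""
        conn = Connection(Server(HOST), auto_bind=True)
        for i in range(QUERIES_PER_CLIENT):
            # Vary the filter so the server cannot satisfy everything from cache.
            conn.search(BASE, f"(uid=testuser{client_id}-{i})",
                        search_scope=SUBTREE, attributes=["cn"])
        conn.unbind()
        results[client_id] = QUERIES_PER_CLIENT

    results = {}
    threads = [threading.Thread(target=simulated_client, args=(i, results))
               for i in range(CLIENTS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start

    total = sum(results.values())
    print(f"{total} searches by {CLIENTS} clients in {elapsed:.1f}s "
          f"({total / elapsed:.0f} searches/sec)")

Raising CLIENTS and QUERIES_PER_CLIENT scales the load up, and a similar script can emit LDIF for synthetic entries so you can grow the directory's size at the same time.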

Introduce other factors into the system. For example, load the network links between clients and servers with other traffic. You might accomplish this by writing a special-purpose client or by simply transferring some large files back and forth during your test runs. Most systems have good tools you can use to induce different kinds of loads. For example, spray is a good tool for loading the network, and mkfile is a good tool for loading the file system. Load the directory server machines with other processes; this will help you understand whether the machine can be shared with other services.

Simulate network and hardware failures during the test by unplugging network cables and power cords. How does the system react? Do clients fail over to directory replicas? Is the directory server able to recover its database? Does replication recover gracefully? These questions are important to answer in any kind of environment, but they become more crucial in a large-scale directory environment. For example, if your directory does not recover gracefully from a power outage, you may have to rebuild the database from scratch. This may be tolerable on a small scale, but for a big directory it can introduce unacceptable downtime.

Watch out for scaling behavior in which the directory system does not degrade gracefully. This kind of "brick wall," when met in a production environment, invariably comes as an unpleasant surprise. For example, consider a directory that can hold only a fixed number of entries or size of database: When these limits are reached, the directory ceases to function. Look for directory software that degrades gracefully as limits are reached.

Applying What You've Learned

Applying what you've learned is the most important aspect of your pilot. After all, the whole point of doing the pilot is to learn how well your design works in practice. Naturally, if you learn of flaws in your design, you should make changes to correct them.

This is especially important during your directory's pilot stage. You will get feedback from your pilot that will change your design. Be prepared to incorporate these changes into the pilot itself, providing a feedback loop to let you know when you get things right.

There are many areas in which you will receive feedback that you should incorporate. Following are some of the more important topics to listen for:

  • User experience.   The directory experience of your users is perhaps the most subjective design criterion your directory service has. Therefore, it is the most important to validate through continuous refinement and user feedback.

  • Operating system configuration.   If you find a problem with your operating system configuration, be sure to correct it and repilot. Do not assume that upgrading to a newer version of the OS will fix the problem you experienced; upgrading may work, but it could also introduce new problems. Be aware of the effect of configuration changes on your directory software.

  • Directory software configuration.   You may find that you need to change the configuration of your directory server software. For example, you may need to tune the software's database parameters to provide better performance. However, make sure that better performance in one area does not degrade performance in another. Does turning on indexing to make the database faster also make it larger? Does making the database smaller to conserve disk space also make it slower? Understand the tradeoffs you are making.

  • Directory-enabled application configuration.   You may find that one or more of the directory-enabled application clients needs to be reconfigured or even recoded. For example, an application may be making inefficient use of the directory by opening a new connection each time it wants to do a query instead of reusing an open connection (see the sketch following this list). Or the application may be using several searches when one would suffice. These kinds of inefficiencies can have a life-threatening impact on your directory's performance. Use any leverage you can to encourage application developers and maintainers to make their applications play nicely with the directory.

  • Hardware configurations.   You may discover that you need to upgrade your hardware because of capacity or other problems. Be sure to pilot with the upgraded hardware; it may introduce other problems, or it may not solve the problem you thought it was solving. Use your laboratory and your pilot to experiment with different hardware configurations and combinations of hardware and software.

  • Network configuration.   You may discover that the network topology of your directory servers is inadequate. For example, you may find when you do scale testing that you need to move your servers to a high-speed network.

  • Server topology.   You may find that your server topology is inadequate. For example, you may decide you need a replica on each subnet because the networks between clients and the main server subnet are not reliable enough. Or you may find that the distribution topology you planned leads to poor performance. In this case, you may need to cross-replicate or collocate the distributed partitions or alter your namespace design to allow a different partitioning strategy. Be sure to pilot these changes, and make sure your clients are configured to take advantage of the new topology.
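
To illustrate the connection-reuse point from the application-configuration item above, compare the two patterns in this minimal sketch (third-party ldap3 Python module; the host and base are hypothetical). The first pays TCP setup and bind costs on every lookup; the second binds once and reuses the open connection:

    from ldap3 import Server, Connection, SUBTREE  # third-party: pip install ldap3

    HOST = "ldap.pilot.example.com"  # hypothetical pilot server
    BASE = "dc=example,dc=com"

    # Inefficient: a new connection (TCP setup + bind) for every query.
    def lookup_wasteful(uid):
        conn = Connection(Server(HOST), auto_bind=True)
        conn.search(BASE, f"(uid={uid})", search_scope=SUBTREE, attributes=["mail"])
        entries = list(conn.entries)
        conn.unbind()
        return entries

    # Efficient: bind once, then reuse the open connection for every query.
    _conn = Connection(Server(HOST), auto_bind=True)

    def lookup_reusing(uid):
        _conn.search(BASE, f"(uid={uid})", search_scope=SUBTREE, attributes=["mail"])
        return list(_conn.entries)

A production application would also need logic to re-establish the shared connection if the server drops it, but the pattern is the point here.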

Make sure you prioritize the feedback you receive. It's important to incorporate as much feedback as possible and then repilot the service with the design changed accordingly. But you also need to be realistic. You will not be able to incorporate all the feedback you receive. Some feedback is so trivial that there is no need to repilot. Some of it will be bad advice. Some of it will not be practical. And some pieces of feedback may conflict with others. In the latter case, you have to make a choice about which feedback, if any, gets incorporated. Prioritizing feedback enables you to determine which suggestions to incorporate given your limited resources.

As mentioned earlier, be sure to return feedback to those users who have taken the time to provide it. This will make users happy and more likely to participate in future pilots. It may not be practical to answer each suggestion personally. If that is the case, publish a summary of the feedback you've received along with your planned response. This is often sufficient to make users feel they are involved in the pilot and that their feedback has been listened to, even if not implemented verbatim.


