A Piloting Road Map

The steps you will follow in establishing your pilot vary depending on your environment and the design of your service. The steps outlined in the following sections are typical and cover the most common scenarios. Don't worry if you find you need to deviate from these steps, or if you don't have the time or money to cover everything we suggest. Just make sure you cover all the bases that are important to you in your environment.

Defining Your Goals

What do you want to get out of your pilot? You will have different goals depending on the type of service you have and the environment you're in. How you define your goals leads you to focus your pilot on different aspects of the service. For example, consider the following goals:
Other potential areas of focus exist, of course. The goals in the preceding list typically have an even tighter focus. For example, an application directory might be targeted specifically at extranet applications. In practice, you probably want to focus on all of these areas to some degree, but it's important to realize that you cannot fully pilot every aspect of your service. If you could, you wouldn't have a pilot; you'd have a full-fledged directory service! The same goes for pre-pilot testing: You can't test everything, so focus your tests on the most important aspects of the service. Defining and prioritizing the goals you have in piloting your directory service helps you focus your efforts and make your pilot more effective.

Defining Your Scope

The goals you want to achieve by piloting naturally define the scope of your pilot. This scope has several dimensions: How much will users be involved in your pilot? Will your group of users be small and focused or large and diverse? What aspects of your service will you pilot? Will you try to pilot a few aspects thoroughly, or will you cover all aspects in less depth?

Part of your scope will be determined by the time and resources you have to devote to your pilot. You may have external constraints placed on you, or you may place constraints on yourself. A successful pilot is focused and has clear goals and objectives. Try to avoid piloting endlessly with no way of knowing when you are done. Your pilot may end because of either success or failure, and it's important to be able to recognize both outcomes. In the case of a successful pilot, your next step may be full deployment of the service. In the case of a failed pilot, your next step is probably to redesign, retest, and repilot.

A good practice is to draw a timeline showing the major milestones in your pilot. This timeline serves two purposes. First, it helps you map out the stages of your pilot, which helps you decide what needs to happen when. Second, it gives you a good reality check on the pilot itself. If your timeline leaves only a week for locating, training, and getting feedback from your pilot users, you know you haven't budgeted enough time. The sample timeline in Table 13.2 includes time for testing, locating pilot users, rolling out the pilot, gathering feedback, and even applying that feedback to the pilot itself. As you can see, this timeline takes just over 12 weeks, which is an aggressive schedule. Unless you have very dedicated and motivated pilot users, don't expect to be able to do things this quickly.

Table 13.2. An example of a pilot timeline
Constructing an explicit timeline also helps you work more efficiently. A timeline can help identify opportunities to perform tasks in parallel, decreasing the time necessary for the pilot. For example, you may decide that locating pilot users can be done in parallel with setting up the service, or that data gathering can completely overlap with pilot operation.

Remember that the purpose of restricting the scope of your directory pilot is to ensure that it actually happens in a reasonable amount of time with a reasonable amount of resources. Be as explicit as you can about your scope; avoid extending the pilot into areas that are beyond it.

Developing Documentation and Training Materials

Your pilot may involve users who are not familiar with the service being piloted. Therefore, you should develop documentation, training materials, and other information to prepare your pilot users to be effective participants. You might be able to revise these materials and give them to your production users, so it's important to pilot these materials along with the directory service itself. There are at least three broad categories of users you should be sure to address:
End user documentation is often tutorial in nature. You cannot assume that your users know much about your service or directories in general to begin with. You must educate them if you expect them to use the system and be effective pilot participants. The complexity of your end user documentation and training materials depends on the complexity of your directory service, the tasks it will be used for, and the sophistication of your users. A simple phonebook service, for example, may require only online help. A more complicated directory service may require a printed user manual. An even more complicated system may require users to attend some kind of training session. Be sure to determine the level of documentation and training required by your users.

Administrative users typically require a different kind of documentation. There are three types of administrators: directory system administrators, directory content administrators, and directory-enabled application administrators. You must document the procedures they follow, provide troubleshooting guides for when things go wrong, and perhaps train them in the use of the system. Don't scrimp on documentation for your administrative users; they are responsible in large part for making the system run smoothly.

Application developers are often the most sophisticated users of your directory. They also require the most extensive training and documentation materials. Application developers usually need to know everything users need to know, but they also have to understand how to access the directory from their application. Furthermore, they need to know about your directory's naming conventions, available schema, how to access the directory through an API, and more. You can usually count on developers to be willing to tolerate rougher edges than users, but do not underestimate the amount of information they need to do their job.

Selecting Your Users

Selecting your users is important, especially if your service is targeted at end users (as opposed to a small set of applications you control). The users of your directory service are the least predictable variable in your directory equation. Technical problems, such as inadequate capacity, can be solved relatively easily; problems involving user perceptions and expectations can be much harder to solve. It's important to detect and correct these problems early.

It's also important to make a good first impression with your directory service. This is important in a pilot, but it's absolutely critical in a production deployment. A bad first impression, no matter what the cause, can spell disaster and sometimes demise for an otherwise worthy project. The best way to be confident that you will make good first and subsequent impressions in production is to know that you've already made a favorable impression on a similar audience. This is where your pilot users come in. A good pilot provides a smooth transition to a production service. (Chapter 15, "Going Production," describes in detail the process of moving from pilot to production.)

Tip: When making the transition from pilot to production, make sure you have a backup plan in case things go wrong. Ideally, you should be able to switch back to the old production system quickly and seamlessly in case of trouble.

If you do a good job of selecting pilot users, you will have a representative sample of your ultimate directory service user community. Making your pilot users happy translates directly into making your production users happy.
No system is perfect, of course, but choosing your pilot users wisely goes a long way toward ensuring a successful directory deployment. If you do a poor job of selecting pilot users, you will not have a representative sample of your ultimate directory service user community. Making your pilot users happy may then have no relation to the happiness of your production users. From a user perspective, you might as well have not piloted your directory service at all. Naturally, this raises a question: How can you select a good set of pilot users? There is no foolproof method, but here are some guidelines that you should follow:
After you've selected an appropriate number of pilot users, there are a number of steps you should be sure to follow before, during, and after the pilot process. The following list is a minimal set of things to accomplish:
All of these steps can help you develop a successful and long-lasting relationship with your pilot users, which is important if you expect to conduct pilot activities in the future. It can be very handy, not to mention less expensive, to have a batch of willing pilot participants on hand.

Setting Up Your Environment

At the same time that you select your user population, you should set up the environment for your pilot. You want things ready to go as soon as your users are identified. It's important to set up the proper environment for your pilot. Remember, you are not piloting just to see whether users like the service; you are also piloting to test all the procedures you've designed for creating and maintaining the service and its content. You are also piloting to see whether the system works efficiently as a whole.

This makes your choice of pilot environment even more important. Procedures that work well in one environment may not work at all in another. For example, suppose you rely on a local disk for your directory database during the pilot, but the production service needs to run over Sun's Network File System (NFS). The product you select may not work over NFS, and even if it does, performance may not be acceptable. Similar concerns can arise with your networking, hardware, software, and so on. Try to duplicate your production environment as closely as you can. If necessary, do so on a smaller scale. For example, use fewer servers, but try to use the same type of servers found in your production environment.

The kind of environment you end up with for your pilot depends on the kind of environment you will have in production and the resources you have to duplicate it. Ideally, you would set up a pilot environment that exactly duplicates your production environment. This way, you could minimize problems resulting from environment changes from pilot to production. The same holds true for any test equipment you use during the pre-pilot testing phase: For the tests to be useful, try to make the test equipment match the production equipment as closely as possible.

Tip: When your pilot is concluded, try to maintain the pilot equipment as a testbed. As you make changes to your service, you can pilot them on your testbed hardware. The testbed provides a convenient staging service for improvements to your directory service.

More often than not, resource and financial restrictions prohibit the development of an exact duplicate of your production system. Instead, you are often forced to make do with what you have. This may run the gamut from bits and pieces of leftover equipment to less expensive versions of all your production machines. If you find yourself having to scrimp in this kind of situation, be aware that your pilots will likely not be as effective. Whatever equipment you have at your disposal, keep the following advice in mind when designing your pilot environment:
It's a good idea to make a map of your pilot system. Identify its major components and the network links between them. Label the map with the hardware, software, and type and speed of network links at each component. Identify links between replicas and the role each replica serves. Compare this to the similar map you have made for your production system. Look for any obvious differences, especially in the trouble areas just mentioned. An example of a pilot environment map is shown in Figure 13.1.

Figure 13.1 An example of a pilot environment map.

After you've designed your pilot environment, you need to build it. Make sure you do this in plenty of time to fix problems before the pilot's official start date. As with your production system, some problems will undoubtedly crop up only in the implementation phase. Be sure to leave yourself enough time to fix any glitches that arise.

Rolling Out the Pilot

There are several steps to rolling out your pilot. The actual steps you use may vary somewhat depending on how large your pilot is and the type of users involved. The most common steps are outlined here:
The basic steps are simple: Bring up the service, distribute clients and documentation, and you are off and running. We've added a couple of steps to show that you should test things at each step of the way and roll out the pilot incrementally, if possible. We recommend rolling out the pilot to system administrators before end users; this often provides a good incremental deployment. System administrators, who are relatively few in number and with whom you should already have a good working relationship, should go first. If things go smoothly for them, begin rolling out the pilot to a small group of end users. If things go well for that group, expand the scope of the pilot to include other end users.

Always make sure that your feedback mechanisms are in place before rolling out each stage of the pilot. Failure to do this may result in the loss of important feedback and, more importantly, a loss of confidence among your pilot users. Also, be sure to roll out documentation and training as you roll out the pilot to users. Failure to do this can result in confused users, which leads to a bad perception of the pilot.

Collecting Feedback

With your pilot up and running, you need to begin collecting feedback. Keep in mind that this is the whole point of your pilot, so this step is important. There are several different kinds of feedback you can collect:
You can use various methods to collect feedback, depending on the type. To collect user feedback, you might use any of the following methods:
A good approach may be to use some combination of both methods. You want good feedback, but avoid spending too much time developing fancy feedback mechanisms. Most of what you want can usually be achieved through a simple email-collection mechanism.

You can use the same techniques for collecting administrator and developer feedback that you use for user feedback. Of all the techniques, one-on-one interviews are probably the best choice for administrators and developers; because there are relatively few of them, this approach is more feasible. Keep in mind that you should ask administrators and developers different questions than you ask your users.

Tip: One important thing to keep in mind when asking for feedback from users and administrators is this: Do not ask for feedback unless you intend to listen. There is nothing more frustrating than being asked for feedback and then feeling that it has been ignored; it makes users feel they've wasted their time. Of course, this does not mean you need to take every suggestion made during the pilot. Sometimes you will be unable to incorporate feedback for very good reasons. In these situations, be sure to let users know why you didn't incorporate their feedback. When you do incorporate feedback, be sure to tell users about that, too. Remember that your pilot users are busy people just like you. Be proactive in soliciting feedback.

When collecting system feedback, the techniques you use are quite different. Most of the techniques involve collecting data from your automated monitoring sources. Chapter 18, "Monitoring," discusses monitoring of your directory in more detail, but some of the more common and useful techniques are listed briefly here:
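One illustration of such a technique is a lightweight probe that periodically performs a representative search against the pilot directory and records how long it takes. The following is a minimal sketch in Python, assuming the third-party ldap3 library and hypothetical values for the host, base DN, and probe entry; it shows the idea rather than prescribing a particular tool:

# probe_directory.py -- log how long a representative search takes against
# the pilot directory. Host, base DN, and filter are hypothetical values.
import time
from ldap3 import Server, Connection, SUBTREE

PILOT_SERVER = "ldap://pilot-ldap.example.com"   # assumed pilot host
BASE_DN = "dc=example,dc=com"                    # assumed directory suffix
PROBE_FILTER = "(uid=probe-user)"                # assumed entry reserved for probing
LOG_FILE = "directory-probe.log"

def probe():
    start = time.time()
    try:
        conn = Connection(Server(PILOT_SERVER), auto_bind=True)   # anonymous bind
        conn.search(BASE_DN, PROBE_FILTER, search_scope=SUBTREE, attributes=["cn"])
        status = conn.result["description"]        # e.g., "success"
        conn.unbind()
    except Exception as exc:                        # server down, network trouble, etc.
        status = f"error: {exc}"
    elapsed = time.time() - start
    with open(LOG_FILE, "a") as log:
        log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {elapsed:.3f}s {status}\n")

if __name__ == "__main__":
    probe()

Run from cron every few minutes, a probe like this accumulates exactly the kind of hard response-time data discussed next; spikes or failures in the log point to problems worth investigating before pilot users notice them.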
When collecting data, try to be as analytical as possible. Your hunches about what's good and what's bad about the pilot may well be valid, but there's no substitute for hard data. If the conclusion of your pilot is that you need to increase your budget, for example, having objective data to back up this conclusion is especially valuable.

Scaling It Up

By definition, your pilot is conducted on a smaller scale than your production service. Of course, not every aspect of your pilot is necessarily scaled down from the production version. For example, your pilot may involve only a few users, but the pilot directory servers might contain as much data as the production servers. Be careful to keep this in mind as you interpret data from your pilot. You should also try to find ways to scale up selected portions of your pilot to increase your confidence in how the system will scale in production. There are several areas in which you can scale up your pilot:
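As a concrete illustration, two of these areas, the number of simultaneous client connections and the number of queries, can be scaled up with a small test client that opens many connections and issues repeated searches. The sketch below is written in Python and assumes the third-party ldap3 library along with hypothetical host, base DN, and filter values; it shows the shape of such a client rather than a finished load-testing tool:

# load_client.py -- scale up the number of simultaneous connections and the
# query rate against a pilot directory server by opening many connections
# and issuing repeated searches. Host, base DN, and filters are hypothetical.
import random
import threading
import time
from ldap3 import Server, Connection, SUBTREE

PILOT_SERVER = "ldap://pilot-ldap.example.com"            # assumed pilot host
BASE_DN = "dc=example,dc=com"                             # assumed suffix
FILTERS = ["(sn=Smith)", "(uid=user42)", "(cn=*Jones*)"]  # assumed sample queries
NUM_CLIENTS = 50            # simulated simultaneous clients
QUERIES_PER_CLIENT = 200    # searches issued by each client

def simulated_client(latencies):
    """Bind once, then issue a stream of searches, recording each latency."""
    conn = Connection(Server(PILOT_SERVER), auto_bind=True)   # anonymous bind
    for _ in range(QUERIES_PER_CLIENT):
        start = time.time()
        conn.search(BASE_DN, random.choice(FILTERS),
                    search_scope=SUBTREE, attributes=["cn", "mail"])
        latencies.append(time.time() - start)
    conn.unbind()

def main():
    latencies = []
    threads = [threading.Thread(target=simulated_client, args=(latencies,))
               for _ in range(NUM_CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    latencies.sort()
    print(f"{len(latencies)} searches, "
          f"median {latencies[len(latencies) // 2]:.3f}s, "
          f"95th percentile {latencies[int(len(latencies) * 0.95)]:.3f}s")

if __name__ == "__main__":
    main()

Rerunning the client with progressively larger values of NUM_CLIENTS and QUERIES_PER_CLIENT, while watching the server's own monitoring output, shows where response time begins to degrade and whether it degrades gracefully.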
Some of these dimensions can be tested in the laboratory. For example, you can test the number of entries and connections a single server can handle. But it's important to scale up some aspects of the service during the actual pilot, while users are actually using the service. Sometimes the interaction among several factors may combine to produce unexpected results. The more you can simulate these real-world interactions, the more realistic your scaling tests will be.

You can use many techniques to conduct realistic laboratory scaling tests. First, formulate a model describing the kinds of loads and conditions you want to test, and then develop test clients that simulate these loads. Each test client may make many connections to the directory, simulating many real-world clients. Develop test data that increases the size of your directory. You can increase the number of entries along with their size, the number of values in each attribute, and so on (these don't have to be real entries). Think about your future needs in each area, and focus your testing on the areas where you expect growth.

Introduce other factors into the system. For example, load the network links between clients and servers with other traffic. You might accomplish this by writing a special-purpose client or by simply transferring some large files back and forth during your test runs. Most systems have good tools you can use to induce different kinds of loads. For example, spray is a good tool for loading the network, and mkfile is a good tool for loading the file system. Load the directory server machines with other processes; this will help you understand whether the machine can be shared with other services.

Simulate network and hardware failures during the test by unplugging network cables and power cords. How does the system react? Do clients fail over to directory replicas? Is the directory server able to recover its database? Does replication recover gracefully? These questions are important to answer in any kind of environment, but they become more crucial in a large-scale directory environment. For example, if your directory does not recover gracefully from a power outage, you may have to rebuild the database from scratch. This may be tolerable on a small scale, but for a big directory it can introduce unacceptable downtime.

Watch out for scaling behavior in which the directory system does not degrade gracefully. This kind of "brick wall," when met in a production environment, invariably comes as an unpleasant surprise. For example, consider a directory that can hold only a fixed number of entries or a fixed database size: When these limits are reached, the directory ceases to function. Look for directory software that degrades gracefully as limits are reached.

Applying What You've Learned

Applying what you've learned is the most important aspect of your pilot. After all, the whole point of doing the pilot is to learn how well your design works in practice. Naturally, if you learn of flaws in your design, you should make changes to correct them. This is especially important during your directory's pilot stage. You will get feedback from your pilot that will change your design. Be prepared to incorporate these changes into the pilot itself, providing a feedback loop to let you know when you get things right. There are many areas in which you will receive feedback that you should incorporate. Following are some of the more important topics to listen for:
Make sure you prioritize the feedback you receive. It's important to incorporate as much feedback as possible and then repilot the service with the design changed accordingly. But you also need to be realistic. You will not be able to incorporate all the feedback you receive. Some feedback is so trivial that there is no need to repilot. Some of it will be bad advice. Some of it will not be practical. And some pieces of feedback may conflict with others. In the latter case, you have to make a choice about which feedback, if any, gets incorporated. Prioritizing feedback enables you to determine which suggestions to incorporate given your limited resources.

As mentioned earlier, be sure to respond to the users who have taken the time to provide feedback. This will make users happy and more likely to participate in future pilots. It may not be practical to personally answer each suggestion that comes in. If this is the case, publish a summary of the feedback you've received along with your planned response. This is often sufficient to make users feel they are involved in the pilot and that their feedback has been listened to, even if not implemented verbatim.