While there are many techniques that deliver a successful CRM project, they derive from just a few general principles or success factors: "Think small, dream big" (have a general plan, but take small steps), "stay in the box" (don't overreach), "get users involved" (user buy-in is key), and "measure success" (set a solid goal and prove you are succeeding).

Think Small, Dream Big

While some see CRM projects as multi-year projects that affect the entire organization and require massive change to both infrastructure and process, I believe such mega-projects are very risky and that many of the public failures we hear about are caused by overlong, over-complex projects. Huge projects have many weaknesses.
Huge projects are risky. This is not to say that long-term plans are not useful. Without a long-term plan, the organization can make shortsighted mistakes. For instance, a single-minded focus on solving the issue of handling incoming customer e-mail may produce a great e-mail processing system that fails to connect with any of the other systems in place, creating an unhelpful island of information. With a long-term plan, the organization can select a tool that has an integration component, even if the integration doesn't happen for a while. Take the time to create a high-level plan (the "dream") to coordinate the various CRM efforts. At the same time, avoid analysis paralysis: it does not make sense to create a detailed five-year plan since one can't even begin to imagine where CRM vendors will be then. Don't wait for the perfect tool to plug into your perfect dream.

Once you have a long-term strategy in place, think small when it comes time to define an actual project. It's much easier to shepherd small projects to a successful conclusion, and over time they are more effective than large projects. They are much easier to manage: there are fewer people involved, so there is less potential for communication breakdowns, and there's also less time to mess up. It's also a lot easier to adapt to changing circumstances when each step is small. Small projects make it easier to meet the users' expectations. It's easier to define realistic expectations on small projects to begin with, and because of the short duration, expectations don't have much time to inflate or change significantly. As a consequence, small-scope projects are less likely to fail than larger projects. When something goes wrong, the feedback comes quickly, before much damage is done. It's much easier to fix a short-term deliverable than one that was months in the making. Small-scope projects do have drawbacks, although I think they are more than offset by the advantages described above.
Isn't it more expensive to work with multiple small deliverables compared to one big deliverable? It's true that there is a fixed minimum overhead associated with any deliverable, so a project with many small steps carries more overhead than one with fewer steps. However, large-scope projects have a much higher communication and coordination burden, so the difference is not that great in the end. Note that this optimistic assessment assumes the all-important point that you have created an overall strategy and you are deploying against that strategy. If you are taking small steps in an uncoordinated, haphazard way, you run the risk of having to redo some of the steps when you realize that they don't fit well together, and that would be a very great expense indeed.

Does it take more time to work through small steps? In a way, yes. If you could do everything perfectly with one large-scope project, you would probably do it faster than with multiple small steps. In reality, the likelihood that a single large-scope project will be done perfectly is very low, even with strong project management, because what makes a CRM system really work is the tuning that can only happen once the users are actually using it. It's more realistic to implement small steps quickly and go through several tuning exercises than to implement one big step that needs no tuning.

And now for the big question: can small steps really be used for large organizations with complex needs, or is it a technique suitable only for small-scale deployments? Certainly, if your needs are complex you will find that even your small steps are bigger than those of a small organization with simple needs. However, I would argue that it's especially important for large deployments to be structured through smaller steps, each providing a complete solution to a particular issue, and each allowing a realistic validation by the end-users before proceeding.
For large deployments, only the initial step should be significantly longer, to allow for the development of the overall data model (what data is tracked in the system and how it is organized) and of the system architecture. So go ahead and have big (coherent) dreams, but implement them in small steps.

Stay In the Box

There's the old story of the people who want to repaint their kitchen and fall prey to the "while we are at it" syndrome:
After many months and many, many times the cost of a fresh coat of paint, they get a new kitchen, the walls of which we can only hope are painted the right color. And they may keep going and decide to push a wall out (add two months), creating their own monster project. The same thing can happen to CRM projects. Here are some typical examples of the "while we are at it" syndrome.
It's important to address new issues and ideas encountered during the project, but it's usually best to keep pretty much within the scope defined at the beginning of the project and to defer new items to a second phase. Let's analyze the examples we just saw.
These examples illustrate situations where new requirements arise during the actual project, but the same rule of "stay in the box" applies to defining the initial requirements. For instance, if your CRM project is focused on adding an online sales tool but you also find that it would be good to improve the sales methodology, the marketing materials, and the color of the Palm Pilots, I very much recommend limiting the CRM project to implementing the online sales tool, at least if the sales methodology is clearly identified and agreed to before selecting the tool. Once the tool is launched (or in parallel, if you have enough resources), attack the other issues.

Don't torture the tool. If you're trying to do something that's simply alien to the way the tool was designed, the results will be 1) expensive and 2) never quite right. By conducting a reasonable selection process, as will be described in the upcoming chapters, you should end up with a tool that does most of what you need. Bring your current process with you when you evaluate systems and evaluate the demos through the lens of the process. Aim for a good overall fit between your process and the tool (without being obsessed by a complete, 100% fit: perfection is not of this world). In the same vein, if your process does not fit well with the tool, consider changing your process rather than the tool. This is particularly true if your process doesn't seem to fit with any tool that you see. Yes, it's possible that you have discovered a secret way to do things that's better than everyone else's. On the other hand, you could be driving the wrong way on the freeway, which is why everyone else is going the other way. If all the tools do things a certain way, chances are it is a best practice and you should simply adopt it.

Keep customizations to less than 10%. This is an arbitrary number, for sure, but it illustrates that you should shop for a good overall fit and keep customizations to a minimum.
Customizations are expensive up front and do not port well to new releases, so they are even more expensive in the long run. As long as you have a good overall fit between your process and the tool, first consider adapting your processes to the tool rather than automatically customizing the tool.

Get Users Involved

One of the key tests of the success of a CRM initiative is whether the users actually use the system. The problem is that, even when the new tool has clear advantages over the old one, it's difficult to switch to a new tool where familiar things are no longer familiar and even routine operations may require checking the handy cheat-sheet that was provided during the training sessions. By getting users involved early, you have a chance to build up enthusiasm and support for the benefits of the system that will help overcome the barriers to adoption.

Take care to avoid overselling the system. No, the new tool will probably not double the users' productivity (let's face it, it will probably lower productivity in the short term as users get used to it). No, the new tool will not allow 50% of customer requests to be fulfilled automatically (at least if the requests you get are reasonably complex). And no, the tool will not contain all the data the users ever need to do their job. Instead of overreaching claims, present a nuanced picture of realistic benefits: they will be able to see their pipeline at the touch of a button; they will be able to automatically attach interesting documents to customer e-mails, etc.

Besides the advantages of psychological preparation, the other benefit of involving users early is that they are the ones who know how to do the job and what's needed to do it. End-users (not their managers, not some mythical "super-user," and certainly not ersatz process managers) need to be a part of the entire project cycle, from selection to implementation, to keep everyone honest. Certainly it's neither feasible nor desirable to involve everyone in each step.
Have you ever tried to hold a demo for 500 people? And what about stopping all sales activities for two days while we debate the tool workflow? You can't get everyone involved every step of the way, but you need to involve actual end-users, and ideally some of the best-performing ones, at each crucial juncture.

You may find that end-users are not very interested in the project because, especially if they are top performers, they are busy selling to customers or helping them resolve service issues. Find the right levels of stick and carrot to get them to participate. In particular, understand that their time is valuable and make sure they are asked to participate only in high-value activities. For instance, if a vendor is doing a daylong dog-and-pony show, end-user input may be best spent on the demo portion. Even then, their participation may be required only if the tool is a real contender (did the CIO nix it because of vendor viability concerns?) and only if the demo can be interactive (we'll come back to techniques to get the vendor to deliver the information that you want in Chapter 6, "Shopping for a CRM System").

Another characteristic of top performers is that they are usually good readers. Don't spend 30 minutes giving them "background information" when they could read it in ten minutes at a time convenient for them. They also like to be kept informed (again, not in a long and tedious meeting). Why was the tool they liked so much not selected? Why are we behind schedule, and what is being done to catch up? They'll want to know.

In addition to top performers, it's well worthwhile to involve the group's informal leaders, who may not be top performers themselves but who have a lot of influence on the group and command respect and admiration. It's a challenge because they tend to be busy, but they will spread the word on the project very well, better than top performers, who may be more of the loner type.
There is tremendous value in getting individual contributors involved, not just managers. The nature of their job is different, and it's quite common that managers don't have a very good sense of what the individual contributors in their groups do all day. If there are subgroups that are likely to use the tool differently (say, focused on different markets or different products), make sure you get representatives from all groups, at least until you prove to yourself that the requirements are similar. We'll come back to this topic in more detail when we discuss the project team in Chapter 4.

It's a mistake to assume that IT staffers or even process managers can substitute for actual end-users. Even when these individuals are well aware of the end-users' requirements, they simply cannot have the same level of awareness of what really matters on a daily basis that the actual end-users have. Don't assume that IT or the process group can be a complete substitute.

Measure Success

Human beings like to have tangible feedback on their efforts. This is particularly true of long-term, large-scope projects where it's hard to tell whether one is really making a difference. Setting up a system to capture and communicate progress on the project will help sustain the enthusiasm of the participants. At the same time, since there is always a contingent of skeptics attached to any CRM project, the same information can be used to contain their criticisms, if not to generate their enthusiasm. Metrics are typically the responsibility of the project manager, although on larger projects this can be delegated. How does one measure the success of a CRM project? We'll see detailed metrics suggestions in Chapter 10, "Measuring Success," but here are some high-level points.

Start Early

Establishing metrics that are meaningful and acceptable to all parties before the start of the project makes them more credible in the long run, avoiding any suspicion of manipulation.
It's fine to set only high-level targets when you start. You can then assign specific quantitative targets when the implementation starts, once you have a better idea of what the tool will really be able to do.

Be Simple

Using 12 different calculations, or anything that requires knowledge of advanced statistics to understand, is counterproductive. Stick with three to five high-level measurements, each of them a simple arithmetic computation (you can use averages, but standard deviations are probably not required to make your point).

Measure Results, not just Activities

Having reviewed 10 vendors (an activity) is nowhere near as important as having narrowed down the list of candidates to two (a result). During the implementation phase, having completed 80 test items (an activity) is an interesting achievement, but even better is having passed 78 of the test items (a result). In the same vein, measuring projects by elapsed time is nowhere near as useful as using milestones, which, if well defined, refer to actual results. Measuring for results implies the use of targets, i.e., how much did we accomplish versus what we were planning? Make sure that the results you measure are meaningful both for customers and for internal users. Now for some examples of long-term (not project-oriented) business goals. For a marketing organization:
For a sales organization:
For a support organization:
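The "simple arithmetic, results not activities" principle above can be sketched in a few lines of code. This is only an illustration, not anything prescribed by the book: the metric names and the sample numbers (taken from the 78-of-80 test-item example, plus a hypothetical milestone count) are assumptions.

```python
# Minimal sketch: status metrics as simple percentages, comparing
# results against targets rather than counting activities.
# Numbers are illustrative; only 78-of-80 comes from the text.

def pct(numerator: int, denominator: int) -> float:
    """Simple percentage, the only math these metrics require."""
    return round(100.0 * numerator / denominator, 1)

# Activity: 80 test items executed. Result: 78 of them passed.
tests_run, tests_passed = 80, 78
test_pass_rate = pct(tests_passed, tests_run)  # 97.5

# Result vs. target: milestones completed against the project plan
# (hypothetical counts).
milestones_done, milestones_planned = 4, 5
milestone_completion = pct(milestones_done, milestones_planned)  # 80.0

print(f"Test pass rate: {test_pass_rate}%")
print(f"Milestones met: {milestone_completion}% of plan")
```

Three to five such figures, each a single division, are enough for a credible status report; anything fancier works against the "be simple" rule.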
Report Good and Bad

One eerie characteristic of failing CRM projects is the incredible disconnect between the status reports and reality. While the status reports mention only vaguely-worded, minor delays and difficulties, the implementation team is overwhelmed with problems, its members either screaming at each other or no longer communicating at all, and so frustrated they can barely drag themselves to the office in the morning. The disconnect can be sustained for amazingly long periods in larger organizations with many layers and many people involved; thankfully, that's not usually the case in smaller organizations.

To gain credibility, share both good and bad news. This includes areas of potential over-investment. For instance, if you planned for a testing period of four weeks and you're done in two, could it be that the planners were sandbagging? Is it possible that the testing that was accomplished was insufficient? Beating the goal by a mile should raise questions, not only congratulations.

Be Transparent

Status reports are meant to be shared widely. A wide distribution is a great incentive to create accurate reports, as well as a vehicle to get them corrected quickly when needed. In particular, special reports to the executives can bring confusion and misinformation. If they are required because the regular status reports are too long, then they should also be shared downwards to benefit from the effects of a wide distribution.

While we will see many more practical pieces of advice as we work our way through the book, the top four success factors: "think small, dream big," "stay in the box," "get users involved," and "measure success" are our guides to CRM success.