Reporting on Results

The keys to successful IT service delivery are, on the one hand, explicit standards and measures of performance and, on the other, the consistent and uniform reporting of performance results. Each side of this coin presents its own challenges. In terms of standards and measures, all too often IT personnel track activity rather than results: they measure what was done rather than how closely these activities meet the customer's needs. For example, a field service organization might report on the number of desktops it has installed in a given month. This statistic is a measure of activity. It does not tell us, however, whether any of these installations were done in keeping with customer timetables, whether the associated software installs worked afterwards, or whether the technician properly trained the customer to make the best use of the system before leaving the customer site. The answers to these questions are actual results measures.

Clearly, activity counts are important in reporting on the scope and volume of activity, but they do not tell the whole story. At the same time, the primary focus of the IT organization is actual product and service delivery, not reporting on it. In devising metrics, ask the following question: For any given IT service, what drives customer satisfaction, and how do I best measure this? At times, you will be obliged to settle for a surrogate metric because the true measure of customer satisfaction is too difficult to ascertain. By way of illustration, here are some of the metrics that the author has employed in measuring the performance of his IT team:

  • Network services

    • Number of problem tickets logged, closed, and remaining open in a given month

    • Network availability (by segment or location if appropriate)

    • Network response time (by segment or location if appropriate)

  • Call center

    • Calls received

    • Calls abandoned

    • Calls addressed without escalation

    • Number of problem tickets logged, closed, and remaining open in a given month

  • IT training

    • Number of courses offered

    • Number of seats available

    • Number of students who complete courses (i.e., stay to the end or receive certification in the specific subject)

All of these metrics allow for the ready capture of easily quantifiable performance measures. In the case of network services, overall availability and response time are what customers care about most. However, measures of availability can be deceiving: although the network as a whole may be up and running, a particular customer segment of service (i.e., a business application) may be down. Thus, you must be clear about what you are measuring and how you are measuring it. Monitoring the flow of problem tickets is one of those surrogate metrics. If your network services unit has few to no problem tickets open, that is a good sign; conversely, a spike or growing backlog in problem tickets suggests that something is amiss. For the call center, which is responsible for the intake of customer problems, measures of efficiency in call handling and ticket processing are also important. By the same token, growth in abandoned calls would suggest issues with the intake process. In some instances, you will have more concrete measures of service delivery effectiveness. For example, if you test students on their knowledge following an IT training session, the results serve as a clear indicator of instruction and course effectiveness.
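To make the arithmetic behind these surrogate measures concrete, here is a minimal sketch in Python. The data structures, field names, and sample figures are hypothetical, not drawn from the text; the point is simply how monthly availability percentages and the problem-ticket flow might be computed from raw operational records.

    # Illustrative sketch: field names and sample figures are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class SegmentOutage:
        segment: str         # network segment or location
        minutes_down: float  # downtime attributed to this outage

    def availability(outages, minutes_in_month=30 * 24 * 60):
        """Percent availability per network segment for the month."""
        downtime = {}
        for o in outages:
            downtime[o.segment] = downtime.get(o.segment, 0.0) + o.minutes_down
        return {seg: 100.0 * (1 - mins / minutes_in_month)
                for seg, mins in downtime.items()}

    def ticket_flow(logged, closed, open_at_start):
        """Tickets logged, closed, and remaining open in a given month."""
        return {"logged": logged,
                "closed": closed,
                "open_at_end": open_at_start + logged - closed}

    print(availability([SegmentOutage("HQ LAN", 42.0),
                        SegmentOutage("Branch WAN", 180.0)]))
    print(ticket_flow(logged=120, closed=110, open_at_start=15))

A month-over-month rise in the open-at-end figure is the growing backlog signal described above.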

Unfortunately, none of these metrics truly suffices because none indicates the satisfaction of the customer. To measure customer satisfaction, you must poll customers, asking them simple questions about what they liked or did not like about their interactions with IT. Here too, you must focus on those areas that get at what your customer expects from the service. For example, in surveying a recipient of audio/visual services, you might ask the following questions:

  • Was the equipment delivered in a timely fashion?

  • Was the equipment in proper working order?

  • Were the A/V personnel knowledgeable about equipment operation and uses?

  • Did the staff provide the user with sufficient instruction so that the user could operate the equipment on his or her own?

Similarly, in the case of desktop support, you could ask questions like these:

  • Was the support timely?

  • Did it address the need?

  • Did the IT employee seem knowledgeable?

  • What was the customer's overall satisfaction with the quality of the deliverable?

In each of these examples, a few questions get to the heart of the service instance and establish the customer's satisfaction with the experience. [10]
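By way of illustration, the short sketch below shows one way such a questionnaire might be scored so that satisfaction can be tracked from month to month. The question keys and the 1-to-5 response scale are assumptions made for the example; the text itself does not prescribe a scale.

    # Illustrative sketch: question keys and the 1-5 scale are assumptions.
    from statistics import mean

    QUESTIONS = ["timely_delivery", "working_order",
                 "staff_knowledge", "sufficient_instruction"]

    # Each dict is one completed survey, scored 1 (poor) to 5 (excellent).
    surveys = [
        {"timely_delivery": 5, "working_order": 4,
         "staff_knowledge": 5, "sufficient_instruction": 3},
        {"timely_delivery": 4, "working_order": 5,
         "staff_knowledge": 4, "sufficient_instruction": 4},
    ]

    def summarize(responses):
        """Average score per question plus an overall satisfaction score."""
        per_question = {q: mean(r[q] for r in responses) for q in QUESTIONS}
        return per_question, mean(per_question.values())

    per_question, overall = summarize(surveys)
    print(per_question)
    print(f"Overall satisfaction: {overall:.2f} / 5")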

Once you have designed your survey tool, you must consider how best to deploy it. Your approach will depend on your corporate culture. Enterprises accustomed to internal measurement will be more receptive to survey forms and questionnaires than those where the practice is brand new. For other organizations, the only means of obtaining feedback is to call or visit customers one on one. If you have an enterprise intranet site, you might compromise between these two extremes by employing a user-friendly survey tool, such as eSurveyor, to poll people online. Whatever mechanism you choose, the process must be focused, quick, and painless. To begin, draw your survey population from your trouble ticket management system (i.e., survey those who have actually received service). Next, tailor the query to the particulars of the deliverable (i.e., ask questions that pertain to the specific service delivered). If the responses are unfavorable, ask for details or examples and whether an appropriate service manager may follow up with the customer. Track and consolidate your results to get an overall sense of IT team performance. The mere act of asking the customer is a win-win situation for the enterprise. On the one hand, it keeps the IT team customer-focused and better aware of customer needs and issues. On the other hand, the obvious effort at continuous improvement wins IT friends and support on the business side of the house. Indeed, the typical customer response runs something like this: "They actually care!"
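As a sketch of the deployment steps just described, namely drawing the survey population from the trouble ticket system and tailoring the questions to the service delivered, consider the following Python fragment. The ticket fields, service categories, and question sets are hypothetical placeholders for whatever your own ticketing and survey tools provide.

    # Illustrative sketch: ticket fields and service categories are hypothetical.
    from datetime import date, timedelta

    QUESTION_SETS = {
        "desktop_support": ["Was the support timely?",
                            "Did it address the need?",
                            "Did the IT employee seem knowledgeable?"],
        "av_services":     ["Was the equipment delivered in a timely fashion?",
                            "Was the equipment in proper working order?"],
    }

    def survey_population(tickets, days=30):
        """Poll only customers whose tickets were closed in the last `days` days."""
        cutoff = date.today() - timedelta(days=days)
        return [t for t in tickets
                if t["status"] == "closed" and t["closed_on"] >= cutoff]

    def build_invitations(tickets):
        """Pair each recently closed ticket with the question set for its service."""
        return [(t["customer"], t["id"], QUESTION_SETS.get(t["service"], []))
                for t in survey_population(tickets)]

    tickets = [{"id": 101, "customer": "j.doe", "service": "desktop_support",
                "status": "closed", "closed_on": date.today() - timedelta(days=3)}]
    for customer, ticket_id, questions in build_invitations(tickets):
        print(customer, ticket_id, questions)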

The process does not end with data collection, however. Once collected, data should be aggregated, analyzed, and shared across IT, and perhaps with customers as well. My preferred mechanism for doing just that is an operations report. This is a monthly activity in which the IT management team comes together to scrutinize both SLA performance and project delivery. Because there tends to be a lot to talk about, you might consider dividing your own process into two segments: one for service performance and the other for project status. [11] The operations review and report focus the IT management team on customer service. Organizing and running the monthly sessions, as well as creating the report itself, fall to the PMO. The service review session should include all those with line responsibility for service level management; the project review should include senior IT managers and all project directors. The remainder of this chapter will consider the SLA side of the operations report, leaving the project side for detailed consideration in Chapter 5.

Meeting the standards set in the IT organization's SLAs is a communal responsibility. Therefore, both successes and failures should be explored collectively in the hope of identifying and addressing the root causes of service delivery problems. The focus is not on blame but on self-improvement. By participating in constructive inquiries into these matters, IT management demonstrates its commitment to a customer focus and to the team's continuous improvement. In larger and geographically dispersed IT organizations, the monthly meetings are also an opportunity to get all the players in the IT value chain around the table to share information and to coordinate assignments that cut across silos. For example, if data center management plans a shutdown on an upcoming weekend, the operations review session affords an opportunity to remind colleagues and to identify any issues that might arise from the planned event.

On a more interpersonal level, the prospect of a monthly review process encourages team members to get and keep their respective houses in order. No matter how friendly the atmosphere of the meeting might be, no one wants to parade bad news in front of his or her peers on a regular basis. Rather than allow things to fester or, worse, explode at the session itself, participants will sort out their issues with colleagues offline and bring forward solutions, not added conflict. If IT management sets the right tone at these sessions, over time the sessions will have a constructive impact on the culture of the organization, break down the siloed mentality among IT departments, and promote collaboration. Thus, what may at first appear to some as added administrative overhead will go a long way toward building a better appreciation of the challenges and opportunities faced by IT in consistently delivering high-quality services to customers. The actual operation of these processes falls to the IT organization's PMO function. Working with the CREs, PMO personnel will ensure that the activities and documentation described previously are well coordinated and delivered to IT management and IT's customers in a timely fashion.

The service delivery components of the operations report include both qualitative and quantitative customer service measures. Each service unit should report on outages and other events that impact IT's customers. Your report should require a clear measure of the impact (e.g., the duration of the service disruption and the audience impacted) as well as the steps taken by the IT organization to address the problem. At the operations review meeting, this information will serve as a basis for discussion on why such problems occur and how to prevent them from recurring. The report will also include quantitative data drawn from the problem ticket system, customer surveys, and other measures reflecting this month's IT team performance against SLA metrics. Here, too, the point of the meeting is to compare the current month's performance against past months and to speculate on how best to improve service performance. Such discussions are good for the team and ultimately for the customer. Make this continuous improvement process part of the way you do things.
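One simple way to present the quantitative side of such a report is to line up the current month against the prior month and against the SLA targets. The sketch below assumes hypothetical metric names, target values, and sample figures; it illustrates only the month-over-month comparison, not any particular reporting tool.

    # Illustrative sketch: metric names, targets, and figures are assumptions.
    SLA_TARGETS = {"network_availability_pct": 99.5,
                   "calls_abandoned_pct": 5.0,
                   "tickets_open_at_end": 20}

    history = {
        "2024-04": {"network_availability_pct": 99.7,
                    "calls_abandoned_pct": 6.1, "tickets_open_at_end": 25},
        "2024-05": {"network_availability_pct": 99.2,
                    "calls_abandoned_pct": 4.3, "tickets_open_at_end": 18},
    }

    def monthly_sla_summary(current_month, previous_month):
        """Print each metric with its target, status, and month-over-month change."""
        current, previous = history[current_month], history[previous_month]
        for metric, target in SLA_TARGETS.items():
            change = current[metric] - previous[metric]
            # Availability should meet or exceed its target; the others should stay at or below.
            met = (current[metric] >= target if metric == "network_availability_pct"
                   else current[metric] <= target)
            print(f"{metric}: {current[metric]} "
                  f"(target {target}, {'met' if met else 'missed'}, change {change:+.1f})")

    monthly_sla_summary("2024-05", "2024-04")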

The report findings will prove useful to the CREs as they meet with their customers and communicate the extent and quality of the IT organization's service commitments to each line of business. If the report identifies problems, the customer will already know about these but will take comfort that IT is also aware of them and is taking steps to correct the situation. In addition, as a body of information, the unit's operations reports serve as an excellent chronicle of day-to-day IT team performance. The PMO knowledge management process can employ this data to track improvements over time and to identify areas in need of greater management focus or technology investment. The results of best practice are captured for all to see, with the expectation that service teams will learn from one another. Finally, when next year's budgeting and planning process gets under way, the CIO will have easy access to information that demonstrates which IT investments paid the greatest return over the past year.

[10] For a complete example of how the various metrics for IT service delivery may be captured and reflected in a single management tool, see The Hands-On Project Office, http://www.crcpress.com/e_products/downloads/download.asp?cat_no=AU1991, chpt4~5~customer satisfaction measures~example.

[11] For an "Operations Report A" template, see The Hands-On Project Office, http://www.crcpress.com/e_products/downloads/download.asp?cat_no=AU1991, chpt4~6~monthly service delivery report~template. For an example of a completed template, see chpt4~7~monthly service delivery report~example.


