Part Two: A New, Rational Approach


In this section, we will illustrate a best-practice approach to IT portfolio alignment with an example application using Expert Choice to develop a portfolio of projects from those shown in Figure 23.1.

Figure 23.1. IT Project Proposals


Step 1: Design

Each project you propose for a portfolio should be the result of careful consideration by those proposing the project. Alternative project designs almost always exist, and quickly deciding on one design without searching for creative alternatives is usually short sighted. Each design will have its pros and cons, and a thorough evaluation of the alternatives with a process such as AHP not only will lead to the selection of the most preferred design, but also will lay a foundation for communicating the details and rationale for the project when it is evaluated as part of the project portfolio process. In some cases, it makes sense to design some projects at different levels of funding. If you do this, the benefits of the optimal designs at each funding level are evaluated and the subsequent optimization will determine not just whether to fund a particular project, but also at what level.

Step 2: Structuring Complexity by Focusing on Objectives

After the design phase is complete, the next step is to identify IT objectives to address the organization's strategic objectives from the IT perspective. Top-level decision makers then structure these objectives into an objectives hierarchy, as shown in Figure 23.2.

Figure 23.2. Objectives Hierarchy


Step 3: Measurement

The decision makers meet to discuss the relative importance of the objectives. Using radio frequency keypads, they enter judgments about the relative importance of the objectives, taken two at a time, from which priorities for the objectives and subobjectives are derived. Important aspects of this process include discussing and arriving at shared definitions of the objectives, as well as sharing information about the objectives, the organization's current capabilities and needs, and the competitive environment. Decision makers make pairwise judgments using a variety of input modes. The "verbal" mode, shown in Figure 23.3, entails stating which of a pair of objectives is more important and, using the verbal scale shown in the figure, by how much.

Figure 23.3. Pairwise Judgments of One Decision Maker for Top-Level Objectives


For example, the judgment of one of the participants shown in Figure 23.3 was that Leveraging Knowledge was "equal to moderately" more important than Improve Organizational Efficiency.

The verbal scale is essentially an ordinal scale, with "equal" represented by 1, "moderate" by 3, "strong" by 5, "very strong" by 7, "extreme" by 9, and intermediate judgments by even numbers. However, as explained earlier, ordinal measures are not adequate for allocating resources.

Ratio level measures are necessary! Fortunately, Expert Choice, which is based on Saaty's AHP, can derive ratio level measures from ordinal verbal judgments, provided there is enough variety and redundancy in the cluster of elements being compared.[1] Even when this is not the case, you can use Expert Choice's graphical pairwise comparisons, as shown in Figure 23.4, to derive ratio level priorities.

Figure 23.4. Pairwise Graphical Judgment for Two of the Subobjectives
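Expert Choice's internal computations are not shown in this chapter, but the derivation of ratio level priorities from a reciprocal matrix of pairwise judgments can be sketched with the geometric-mean method, one of the standard AHP prioritization techniques. The three-objective judgment matrix below is hypothetical, not taken from the figures:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Derive ratio-scale priorities from a reciprocal matrix of
    pairwise judgments using the geometric-mean method."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
    return gm / gm.sum()                       # normalize to sum to 1

# Hypothetical judgments on Saaty's 1-9 scale for three objectives:
# the first is moderately (3) more important than the second, etc.
# Entries below the diagonal are the reciprocals of those above it.
A = [[1,     3,     5],
     [1 / 3, 1,     2],
     [1 / 5, 1 / 2, 1]]
print(np.round(ahp_priorities(A), 3))  # roughly [0.648, 0.23, 0.122]
```

Because the resulting weights are ratio scale, statements such as "the first objective is about five times as important as the third" are meaningful, which is exactly what the later synthesis and optimization steps require.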


An "inconsistency" measure is displayed for each cluster of judgments (0.13 in Figure 23.3) to identify inconsistencies that can be caused by clerical errors (pressing the wrong key), lack of concentration or information, inadequate model structure, or real-world inconsistencies. The decision makers can examine which judgment is most inconsistent with their other judgments, and reflect whether they should revise this or any of their other judgments. The objective is not to achieve the lowest inconsistency, but to derive ratio measure priorities that best reflect the decision makers' knowledge and understanding. Figure 23.5 shows the priorities derived for the top-level objectives from the judgments shown in Figure 23.3.

Figure 23.5. Priorities Derived from Judgments in Figure 23.3
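The chapter does not specify exactly how Expert Choice computes its inconsistency measure; the standard AHP measure is Saaty's consistency ratio, which can be sketched as follows (the judgment matrix is again hypothetical, and the random-index table is limited here to orders 3 through 7):

```python
import numpy as np

def consistency_ratio(pairwise):
    """Saaty's consistency ratio CR = CI / RI, where
    CI = (lambda_max - n) / (n - 1) and RI is the random index
    for matrices of order n (values from Saaty's published table)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)  # principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    return ci / ri

# Nearly consistent hypothetical judgments give a small ratio,
# well under the 0.10 threshold commonly used in AHP practice
A = [[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]]
print(round(consistency_ratio(A), 2))
```

A perfectly consistent matrix has lambda_max equal to n, so the ratio is zero; values such as the 0.13 in Figure 23.3 prompt the decision makers to revisit their most inconsistent judgment.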


All participating decision makers can enter their judgments about the relative importance of a pair of objectives simultaneously. These judgments are then displayed (see Figure 23.6) and discussed. Geometric averages of the judgments are computed for each objective pair, and priorities are derived that reflect the best judgment of the group (see Figure 23.7). In large hierarchies, it is common for different decision makers to participate in making judgments at different levels, corresponding to their responsibilities and knowledge. This process produces a synthesis of knowledge that is virtually impossible to achieve in other ways.

Figure 23.6. Individual Decision Makers' Judgments for One of the Pairwise Comparisons


Figure 23.7. Priorities of Top-Level Objectives
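The geometric averaging of individual judgments is simple to sketch; the four judgments below are hypothetical:

```python
from math import prod

def group_judgment(judgments):
    """Combine individual pairwise judgments into a single group
    judgment via the geometric mean, which (unlike the arithmetic
    mean) preserves the reciprocal property of the comparison matrix."""
    return prod(judgments) ** (1.0 / len(judgments))

# Hypothetical: four decision makers judge the same objective pair
# as 3, 5, 1, and 3 on the 1-9 scale
print(round(group_judgment([3, 5, 1, 3]), 2))  # 2.59
```

The reciprocal property matters: if every participant's judgment were inverted, the geometric mean of the inverted judgments would be exactly the reciprocal of this result.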


The pairwise measurement process produces priorities, such as those shown in Figure 23.7. These priorities possess rank information as well as meaningful intervals and ratios (or proportions). The ratio level meaning of these priorities is particularly important because it would be mathematically meaningless to multiply priorities that were only interval or ordinal level measures by interval or ordinal level priorities at lower levels of the hierarchy. It would also be mathematically meaningless, and misleading, to use interval or ordinal measures in optimizing the IT portfolio of projects. Unfortunately, many organizations don't appreciate this requirement of their measures and wonder why the results don't make sense. They eventually become disillusioned with all numerical methods, opting to use their intuition to decide what should go in their portfolios. However, the complexities of competing objectives, trade-offs, and constraints preclude an organization that allocates resources intuitively from being competitive with one employing well-founded methods based on ratio level measures.
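Why arithmetic on ordinal scores is meaningless can be shown with a tiny example. The projects, weights, and codings below are all hypothetical; the point is that any rank-preserving recoding of an ordinal scale is equally "valid," yet different recodings produce different winners:

```python
# Two different, equally valid rank-preserving codings of the same
# ordinal ratings (hypothetical projects and equal weights)
coding_1 = {"poor": 1, "good": 2, "excellent": 3}
coding_2 = {"poor": 1, "good": 9, "excellent": 10}

weights = (0.5, 0.5)
project_x = ("excellent", "poor")  # X's ratings on two objectives
project_y = ("good", "good")       # Y's ratings

def score(ratings, coding):
    # Weighted sum -- meaningful only if the numbers are ratio scale
    return sum(w * coding[r] for w, r in zip(weights, ratings))

for coding in (coding_1, coding_2):
    print(score(project_x, coding), score(project_y, coding))
# Under coding_1 the projects tie (2.0 each); under coding_2,
# Y beats X (9.0 vs. 5.5). The ranking depends on an arbitrary
# recoding, so weighted sums of ordinal scores cannot be trusted.
```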

After the decision makers derive priorities for the objectives and subobjectives in the objectives hierarchy, they evaluate the anticipated contributions of the proposed projects with respect to the lowest-level subobjectives in the hierarchy. They can do this in a variety of ways. Figure 23.8 shows a ratings approach whereby a panel of decision makers uses a verbal ratings scale to evaluate the anticipated contribution of a customer service call center project to the customer access/service objective. Prior to using the ratings scale shown in Figure 23.8, the decision makers performed pairwise comparisons of the words, or "rating intensities," in this scale (for example, Excellent, Very Good, and so on) to derive ratio level priorities for the intensities. Respondents who chose not to use the scale provided could instead enter any priority between 0 and 1. You also can derive priorities for the projects with respect to each covering objective with pairwise comparisons, or by translating data using linear or nonlinear, increasing or decreasing utility curves, or step functions.

Figure 23.8. Ratings Approach
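Both translation routes described above can be sketched briefly. The intensity priorities and the ROI saturation point below are hypothetical, standing in for values that would actually come from the decision makers' pairwise comparisons:

```python
# Hypothetical ratio-scale priorities for the rating intensities,
# assumed to have been derived earlier by pairwise comparison
INTENSITIES = {"Excellent": 1.00, "Very Good": 0.62, "Good": 0.35,
               "Fair": 0.18, "Poor": 0.07}

def priority_from_rating(word):
    """Translate a verbal rating into its ratio-scale priority."""
    return INTENSITIES[word]

def priority_from_roi(roi, cap=0.40):
    """Translate hard data (here, projected ROI) through a linear
    increasing utility curve that saturates at `cap`."""
    return min(roi, cap) / cap

print(priority_from_rating("Very Good"))  # 0.62
print(priority_from_roi(0.25))            # 0.625
```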


Figure 23.9 shows a datagrid reflecting some of the ratings for one of the participants. The grid can contain hard data as well as verbal ratings. Either way, the entries are translated into ratio scale priorities and then combined with the priorities of the other participants, as shown in Figure 23.9.

Figure 23.9. Datagrid of Ratings for One of the Participants


Step 4: Synthesis

The measurement process, if performed as described, makes it easy to produce a synthesis or fusion of priorities using simple multiplication and addition. (Note: If the priorities are not ratio scale measures, as might be the case with simple weights-and-scores approaches, the multiplication and addition may be mathematically meaningless and lack credibility.) The priorities in the "Total" column in Figure 23.10 are proportional, or ratio scale, measures of each project's total anticipated benefit across all relevant objectives, qualitative as well as quantitative. The "units" of the benefits are immaterial; all that matters is that they are proportional. Multiplying all of the benefits by a constant would result in the same proportions. Figure 23.10 also shows the project costs.

Figure 23.10. Datagrid of "Combined" Priorities for All Participants
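The synthesis itself is just a weighted sum. The objective weights and project ratings below are hypothetical, but the arithmetic is exactly the multiplication and addition described above:

```python
# Hypothetical objective weights (from the objectives hierarchy) and
# one project's ratio-scale ratings against each covering objective
objective_weights = {"efficiency": 0.45, "knowledge": 0.35, "service": 0.20}
project_ratings = {"efficiency": 0.62, "knowledge": 0.18, "service": 1.00}

# Synthesis: multiply each rating by its objective's weight and add
total_benefit = sum(objective_weights[o] * project_ratings[o]
                    for o in objective_weights)
print(round(total_benefit, 3))  # 0.542
```

Because every factor is ratio scale, the 0.542 total is directly comparable with the totals of other projects computed the same way, which is what the "Total" column in Figure 23.10 relies on.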


Step 5: Optimization

Now that we have ratio scale (or proportional) measures of the projects' benefits and their costs, it might seem intuitive to determine our portfolio by sorting by benefit and funding projects until the budget is exhausted. This is not advisable, for two very important reasons. First, it will lead to a suboptimal portfolio of projects: the total anticipated benefit will be less than that achieved by a procedure that seeks the combination of projects maximizing the total anticipated benefit. Looking at Figure 23.11, we see that such a procedure would, with a budget of $10,000, result in a portfolio of projects with funding of $9,545 and an anticipated benefit of 4.339 "units," or 4.339/8.428 = 51 percent of what the anticipated benefit would be if the budget were enough to fund all of the projects. If we instead use an "optimization" algorithm to identify the combination of projects with the highest possible total benefit that does not exceed the specified budget, we obtain a portfolio (shown in Figure 23.12) with an anticipated benefit of 6.127 "units," or 6.127/8.428 = 72.7 percent of what the anticipated benefit would be if the budget were enough to fund all of the projects. Thus, the optimized portfolio is 41 percent more effective, at a slightly lower cost, than the "intuitive" approach!

Figure 23.11. Suboptimal Portfolio Obtained by Sorting and Allocating until Budget of $10,000 Is Exceeded


Figure 23.12. Optimal Portfolio: Anticipated Benefit 41 Percent Greater at Lower Cost
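The contrast between the two approaches can be sketched directly. The figures' actual benefit and cost data are not reproduced in this text, so the (benefit, cost) pairs below are invented for illustration; with only a handful of projects, an exhaustive search over subsets suffices in place of a commercial optimizer:

```python
from itertools import combinations

# Hypothetical (benefit, cost) pairs for five candidate projects
projects = {"A": (2.1, 4000), "B": (1.9, 5500), "C": (1.2, 2500),
            "D": (1.0, 1500), "E": (0.6, 2000)}
BUDGET = 10000

def greedy(projects, budget):
    """Sort by benefit and fund until the budget is exhausted."""
    chosen, spent = [], 0
    for name, (benefit, cost) in sorted(projects.items(),
                                        key=lambda kv: kv[1][0],
                                        reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

def optimize(projects, budget):
    """Exhaustively search all subsets for the maximum total benefit
    within budget (fine for a handful of projects)."""
    best, best_benefit = [], 0.0
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(projects[p][1] for p in combo)
            if cost > budget:
                continue
            benefit = sum(projects[p][0] for p in combo)
            if benefit > best_benefit:
                best, best_benefit = list(combo), benefit
    return best, best_benefit

print(greedy(projects, BUDGET))    # funds A and B: benefit 4.0, cost 9500
print(optimize(projects, BUDGET))  # funds A, C, D, E: benefit 4.9, cost 10000
```

Even in this toy instance, greedy sorting locks up the budget in the two biggest projects, while the optimizing search finds a mix of smaller projects worth substantially more in total, mirroring the difference between Figures 23.11 and 23.12.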


A second reason that the "rack and stack" or intuitive approach is deficient is that it is extremely difficult (if not impossible with a large number of projects) to ensure that various constraints, such as funding pools, dependencies among projects, and limits on other resources besides money (such as personnel, machinery, building space, and so on) are taken into account. On the other hand, accounting for such constraints is straightforward and efficient when you determine the portfolio using an optimization approach. Figure 23.13 shows an optimal portfolio of the projects in our example, when sundry constraints are taken into account.

Figure 23.13. Optimal Portfolio with Additional Constraints
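Adding such constraints to an exhaustive search is a one-function change: feasibility becomes a test of dependencies as well as budget. The project data and the two dependency constraints below are invented for illustration:

```python
from itertools import combinations

# Hypothetical (benefit, cost) data with two invented constraints:
# D can be funded only if B is, and E depends on C
projects = {"A": (2.1, 4000), "B": (1.9, 5500), "C": (1.2, 2500),
            "D": (1.0, 1500), "E": (0.6, 2000)}
BUDGET = 10000

def feasible(combo):
    if "D" in combo and "B" not in combo:
        return False  # dependency: D requires B
    if "E" in combo and "C" not in combo:
        return False  # dependency: E requires C
    return sum(projects[p][1] for p in combo) <= BUDGET

best = max((c for r in range(len(projects) + 1)
            for c in combinations(projects, r) if feasible(c)),
           key=lambda c: sum(projects[p][0] for p in c))
print(sorted(best))  # the dependencies shift the optimum to B, C, and D
```

Note that the unconstrained optimum from the previous sketch (A, C, D, E) is now infeasible because D requires B; the constrained optimum delivers less total benefit, which is exactly the kind of impact quantified in the next paragraph.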


Not only can the optimization approach ensure that all relevant constraints are satisfied, but it also can tell decision makers how much an individual or a set of constraints reduces the portfolio's anticipated benefits. For example, since the optimal portfolio with a budgetary constraint of only $10,000, shown in Figure 23.12, has anticipated benefits of 72.7 percent of the base case maximum, and the optimal portfolio with additional constraints shown in Figure 23.13 has anticipated benefits of 61.49 percent of the base case maximum, the additional constraints reduce the anticipated benefits by (72.7 - 61.49)/72.7, or 15.4 percent.

Decision makers sometimes impose constraints without realizing the impact of such edicts. For example, a decision maker might demand that the Oracle Upgrade project is a "must." Doing so results in the optimal portfolio shown in Figure 23.14, where the anticipated benefits are reduced from 61.49 to 28.39 percent of the base case maximum, or by 54 percent! If this constraint is political rather than absolutely necessary, the decision maker would most likely think twice before imposing such a "must." On the other hand, decision makers can take comfort in imposing political constraints in situations where the impact on anticipated benefits is shown to be minimal.

Figure 23.14. Optimal Portfolio with "Must" for Oracle Upgrade





Design for Trustworthy Software: Tools, Techniques, and Methodology of Developing Robust Software
ISBN: 0131872508
Year: 2006
Pages: 394