Project Management Strategies and Risk

Spending money for information, or MFI, is generally considered part of the plan-driven management strategy. Reserving money for flexibility, or MFF, is considered part of the RAD/Agile management strategy. In the real world, I have observed that every project strikes some balance between MFI and MFF. Moreover, testing is a critical element of both strategies. So, whether management wants to plan a little or a lot, they still need to know what the risks are and where they lie. I find it ironic that, whether or not managers believe they need testers to determine risk, they still perform their own kind of risk analysis to decide how to divvy up the budget and balance their MFI against their MFF. This managerial risk analysis is usually conducted informally, using a "gut feel" approach, which is just another flavor of the I-feel-lucky approach.

Examples from the Real World

Plan-driven management is willing to spend MFI to mitigate risk when the outcome is uncertain, as with new technologies or systems. MFI is used for unpredictable and "all or nothing" scenarios, when we don't know actual usage patterns or system limits. These are things that can be measured or simulated in advance of development, so that there is time to change course before the project hits the wall. Oddly enough, the best example I ever saw of good use of MFI was in a RAD project.

Real-World Money for Information

My RAD client was developing a specialized messaging server to run on a new carrier-class unified communications platform. From the beginning, we were suspicious that a core server component, built by a third-party provider, would not scale to meet the required service levels. The MITs risk analysis showed that the performance of this single core component was the most important factor in the success of the entire product development effort. Because of this information, my RAD client was willing to spend quite a bit of money trying to establish the real-world limits of this core system before they invested heavily in developing their own server.

It was fortunate that they did invest in a good MFI test effort, because the core system did seem to scale, but it proved to be fatally flawed in a way that no one expected: It was built on a nonstandard protocol that was incompatible with the standards used by most carriers. The MFI test effort to establish the credibility of the core system probably saved this RAD division's financial life. The platform was never deployed; had my client built their server on top of it, their development efforts would have been for nothing, and all their investment would have been lost.

This was an extreme use of MFI. Normally, risk analysis conducted by the testers doesn't venture into the "actually test it and find out" stage. The more common use of MITs risk analysis is to identify the areas that testing should concentrate on.

On the other end of the spectrum, and in the interest of balance, is one of the best examples of MFF that I ever saw. It was accomplished by a plan-driven development manager from an office supply company. The office supply company had purchased an e-commerce engine from a heavyweight third-party vendor that promised to ease their transition from client/server to an HTML-based service running on a Web server. We look at this real-world scenario in the next section.

Real-World Money for Flexibility

The office supply company had implemented and integrated most of the system, and the customer service department had begun user testing, when they realized that they had major performance problems in the browser. While they were trying to establish exactly what was causing the problems, both major browser makers released new versions, and it was discovered that the system did not work on either new browser.

My company had been commissioned to perform the user acceptance testing and a human factors review on the product for one of the office supply company's business partners. I looked at the code that was being sent to the browsers and quickly realized that the system was one enormous applet that simply took over the browser. Their "Web" pages were not HTML, but rather, huge scripted files that were being processed by the applet in the client machine. Further, the third-party front-end bypassed the Web server and all things Internet, and relied entirely on its own monolithic proprietary application to process every screen for every client.

This client system, which was supposed to be browser-based, contained almost no HTML. Instead, it was packed with Visual Basic (VB) code. Apparently, the third-party programmers hadn't bothered to learn any HTML. For example, one screen that displayed query results in a table used over 70 lines of scripted code to set the font for the table and a background color for the table's heading row. This feat can be accomplished in one line of HTML with two style definitions: font and background color. Once I had seen the code, it was clear why there were performance problems.
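The actual markup from that project is not reproduced here, but as a minimal sketch of the kind of one-liner I mean (the font and color values are placeholders of my own, not the real ones), an inline style on the table's heading row can carry both definitions:

    <tr style="font: bold 10pt Verdana; background-color: #ccccff">

A single style attribute like this sets the font and the background color, and the browser does the rendering itself; no script has to run at all.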

The browser incompatibility problem had a similar cause. To take control of the browser, the developers had written code instructions specific to the browser versions then in use. The newer releases did not accept this code, and so the applet did not work at all. My company's client failed the product, and our report was passed along to the office supply company, where it caused a firestorm in the development group.

The development manager from the office supply company handed the third-party contract to his legal department with a sticky note on it reading, "Get our money back. This isn't HTML and it isn't a Web application." Then, he came to visit my test team. He said he wanted to see how we tested the product, and he wanted to borrow a book on HTML. He spent several hours with us going over our test results. When he left, he took several books and the names of some reputable Web programming consultants.

My company completed its testing and human factors review for the business partner, and some very good consultants joined the office supply company's development team. The consultants took the functional requirements and the results of our user acceptance testing and human factors reviews, and they turned out functioning screens within three weeks. This was just about the time frame we had estimated for the bug fixes to be turned around.

Meanwhile, the office supply company's own developers did some crash learning on integrating their own systems, which they knew well, with the standards-based Web server's Internet Server Application Programming Interface (ISAPI). The system was finished on time, and the business partners were very happy with its features and the maturity of its workflow.

If you are wondering where the money came from, the rewrite actually cost less than the original contract with the heavyweight third-party vendor. The developers at the office supply company learned some new skills, and the office supply company bootstrapped itself into the Web on its own.

So contrary to popular myth, RAD/Agile developers can plan, and plan-driven developers can change the plan. Understanding risk and being ready to deal with the outcome, or, better still, to avoid failures altogether, is independent of the development methodology. It's just good management.

Agile and plan-driven teams use different approaches. Agile teams set themselves up to absorb late-breaking changes; plan-driven teams try to plan for them early by charting contingencies. We testers don't need to be much concerned with how the developers design; all we need to know is what the system is supposed to do. Once you have your inventory to keep track of the testable bits, you can prioritize and reprioritize them at will. I will talk about this again in Case Study: The 401k Web Project later in this chapter.

MITs helps the plan-driven effort predict where the difficulties will be because it is a formal methodology that supports a thorough and flexible analysis of project elements. This approach helps the Agile project retain flexibility because it uses as much or as little collaboration and measurement as befits the project at hand, and it produces visible results very quickly. MITs also makes it easy to rebalance the focus of the test effort quickly. Even though this falls into the MFF category, it is popular with sensible managers from all parts of the development rainbow.

As with the inventory, the risks will be different depending on the type of project. However, the tools you, the tester, need to succeed are pretty much the same in both types of projects. In the end, it doesn't matter which development methodology is in use. Risk is risk, and the better the project risks are understood, the better they can be mitigated.
