The Matrix of Pain

Let's assume that you've made the decision to support more than one platform. Perhaps the market is fragmented, and supporting multiple platforms is the only way you can generate sufficient revenue. Perhaps your target market simply demands multiple platforms; this is common in enterprise software, where each customer standardizes on a single platform for its IT infrastructure and expects its vendors to support it. Or perhaps your development organization has chosen a very low-cost and highly portable implementation, such as a Web solution written in Perl that interfaces to MySQL, making the actual cost of supporting an additional platform extremely low.

One of the most important things you can do to minimize the pain of portability is to make certain that development and QA understand the relative priorities of the market. Without clear priorities, everyone is going to waste precious time developing and testing the wrong parts of the system, or not sufficiently testing the parts that matter most to customers. The best technique for prioritizing is to create market-driven configuration matrices for use by development and QA (the matrix of pain). These matrices are a variant of the all pairs technique used by QA for organizing test cases. The marketect should drive this process, with participation from development, QA, services, sales, and support.

Suppose that you're building a Web-based system and you want to support the following:

  • Five operating systems on two platforms (Solaris 2.6 and 2.7; MS Windows NT 3.51, NT 4.0, and XP Server)

  • Two Web servers (IIS and Apache, omitting versions for simplification)

  • Two browsers (Netscape and Internet Explorer)

  • Four databases (Oracle 8i and 9i, SQLServer 7.0 and 2000)

The total number of possible combinations for development and testing is thus {OS × Web server × Browser × DB} = 5 × 2 × 2 × 4 = 80, which, I'll take as a given, is too many for your QA staff. (If your QA staff is large enough to handle all 80 combinations, it's too large.) Your QA manager estimates that his team of three people can handle perhaps 7 to 9 configurations. Your development manager follows the QA manager's lead, under the agreement that development will work on the primary configuration and QA will certify all the others. As is common in cross-platform development, the strongest constraints come from QA, not development. Thus, you have to trim the 80 possible configurations down to 7 to 9 in a way that ensures that your most important configurations are covered.
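To make the arithmetic concrete, here is a minimal Python sketch (the component lists simply transcribe the bullets above) that enumerates the full, untrimmed cross product:

    from itertools import product

    # Component choices, transcribed from the list above.
    operating_systems = ["Solaris 2.6", "Solaris 2.7", "NT 3.51", "NT 4.0", "XP Server"]
    web_servers = ["Apache", "IIS"]
    browsers = ["Netscape", "IE"]
    databases = ["Oracle 8i", "Oracle 9i", "SQLServer 7", "SQLServer 2000"]

    # Every complete configuration is one choice from each list.
    all_configs = list(product(operating_systems, web_servers, browsers, databases))
    print(len(all_configs))  # 5 * 2 * 2 * 4 = 80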

Step 1: Remove Configurations

A simple reading of the list of supported components demonstrates that many combinations don't make sense. No company that I know of runs SQLServer 2000 on Solaris! In addition, the marketect may explicitly choose not to support a possible configuration. For example, she may know that on Solaris 2.6 only Oracle 8i needs to be supported, because an important customer has not yet migrated to Solaris 2.7. Discussions with this customer indicate that when it migrates to Solaris 2.7 it will still use Oracle 8i until the system has proven itself stable, after which it will migrate to Oracle 9i.

Table 6-1 identifies the impossible configurations, and it reveals an interesting effect: because you need one of each element to create a complete configuration, ruling out individual cells dramatically reduces the total number of configurations. There is only one supported configuration for Solaris 2.6 (Apache, Netscape, and Oracle 8i) and only two for Solaris 2.7.

Table 6-1. Impossible Configurations

                 Solaris 2.6   Solaris 2.7   NT 3.51   NT 4.0   XP Server
  Apache         PC            PC            PC        PC       PC
  IIS            NA            NA            PC        PC       PC
  Netscape       PC            PC            PC        PC       PC
  IE             NA            NA            PC        PC       PC
  Oracle 8i      PC            PC            NS        PC       PC
  Oracle 9i      NS            PC            NS        PC       PC
  SQLServer 7    NA            NA            PC        PC       PC
  SQLServer 2000 NA            NA            NS        PC       PC
  Totals (39)    1             2             4         16       16

PC = possible configuration; NS = not supported; NA = not applicable
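Step 1 is mechanical enough to sketch in code. Assuming we encode Table 6-1 as per-OS lists of the components marked PC, expanding each OS's lists reproduces the totals row:

    from itertools import product

    # Per-OS component support, transcribed from Table 6-1 (PC cells only).
    supported = {
        "Solaris 2.6": (["Apache"], ["Netscape"], ["Oracle 8i"]),
        "Solaris 2.7": (["Apache"], ["Netscape"], ["Oracle 8i", "Oracle 9i"]),
        "NT 3.51":     (["Apache", "IIS"], ["Netscape", "IE"], ["SQLServer 7"]),
        "NT 4.0":      (["Apache", "IIS"], ["Netscape", "IE"],
                        ["Oracle 8i", "Oracle 9i", "SQLServer 7", "SQLServer 2000"]),
        "XP Server":   (["Apache", "IIS"], ["Netscape", "IE"],
                        ["Oracle 8i", "Oracle 9i", "SQLServer 7", "SQLServer 2000"]),
    }

    # Expand each OS's supported components into complete configurations.
    possible = [
        (os_name,) + combo
        for os_name, components in supported.items()
        for combo in product(*components)
    ]
    print(len(possible))  # 1 + 2 + 4 + 16 + 16 = 39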

Step 2: Rank-Order Configurations

Although 39 is smaller than 80, this matrix is still not sufficiently prioritized. The next step in making it manageable is to work with other groups to prioritize every possible/supported configuration. In the process you will gain insight into a variety of things, including which configurations

  • Are actually installed in the field, by actual or perceived frequency of installation

  • Are used by the largest, most profitable, or otherwise "most important" customers

  • Are going to be most heavily promoted by marketing in the upcoming release (these have to work)

  • Are most easily or capably supported by the support organization

  • Are most likely to provide you with the coverage you need to test full functionality

A variety of techniques can achieve a suitable prioritization. I've spent the bulk of my career working in enterprise-class software, and for most of the products I worked on we were able to prioritize the most important configurations pretty quickly (usually in one afternoon). Once you're finished, consider color-coding individual cells red, yellow, and blue for "must test," "should test," and "would like to test but we know we probably won't get to it" to convert your matrix into an easily referenced visual aid.

For larger, more complex software, or for developers or managers who insist on numbers (presumably because they believe that numbers will lead to a better decision), assign to each major area a number between 0 and 1 so that all the areas add up to 1. These numbers represent consensually created priorities. Suppose that in discussing the matrix it becomes apparent that no known customers actually use Netscape or Oracle 8i on Windows XP Server, nor are any expected to. This results in a further reduction of the configurations to be tested, as shown in Table 6-2.
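One way to put such weights to work (this combining rule is my assumption; the chapter prescribes only the weights themselves, not how to combine them) is to score each surviving configuration by the product of its component weights and sort descending. A fragment, reusing the possible list from the Step 1 sketch and transcribing only Table 6-2's NT 4.0 column:

    # (component, OS) -> consensus weight; the NT 4.0 cells of Table 6-2.
    # Cells absent from the dict (blank in the table) score zero.
    weights = {
        ("Apache", "NT 4.0"): 0.5,      ("IIS", "NT 4.0"): 0.5,
        ("Netscape", "NT 4.0"): 0.2,    ("IE", "NT 4.0"): 0.8,
        ("Oracle 8i", "NT 4.0"): 0.1,   ("Oracle 9i", "NT 4.0"): 0.3,
        ("SQLServer 7", "NT 4.0"): 0.5, ("SQLServer 2000", "NT 4.0"): 0.1,
        # remaining columns omitted here for brevity
    }

    def score(config):
        """Product of the component weights for one (OS, web, browser, DB) tuple."""
        os_name, web, browser, db = config
        result = 1.0
        for component in (web, browser, db):
            result *= weights.get((component, os_name), 0.0)
        return result

    ranked = sorted(possible, key=score, reverse=True)
    print(ranked[0], round(score(ranked[0]), 3))  # top configuration under these weights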

Step 3: Make the Final Cut

You still have more work to do, as 29 configurations are clearly too many. You have to finalize this matrix and get the configurations down to a manageable number. Be forewarned: This will take at least two passes. As you begin this process, see if you can find additional information on the distribution of customer configurations, organized as a Pareto chart. This will prove invaluable as you make your final choices.

Table 6-2. Configurations to Be Tested

                 Solaris 2.6   Solaris 2.7   NT 3.51   NT 4.0   XP Server
  Apache         1             1             0.2       0.5      0.3
  IIS            -             -             0.8       0.5      0.7
  Netscape       1             1             0.2       0.2      -
  IE             -             -             1         0.8      1
  Oracle 8i      1             0.5           -         0.1      -
  Oracle 9i      -             0.5           -         0.3      0.3
  SQLServer 7    -             -             1         0.5      0.2
  SQLServer 2000 -             -             -         0.1      0.5
  Totals (29)    1             2             4         16       6

A dash marks a cell that is not applicable, not supported, or removed from testing.

In the first pass, consider the impact of the installed base. Suppose that only 20 percent of it uses Solaris, but that this 20 percent accounts for 53 percent of overall revenue. In other words, you're going to be testing all three Solaris configurations! This leaves you with four to six possible configurations for MS Windows.

Your product and technical maps (see Appendix B) tell you that NT 3.51 is going to be phased out after this release and that only a few customers are using it. Based on this information, everyone agrees that one configuration (NT 3.51 with IIS, IE, and SQLServer 7) will be okay.

You know you need to test Apache and Netscape. Furthermore, you believe that most customers are going to be on NT 4.0 for quite some time and so you want QA to concentrate its efforts here.

With your knowledge of the architecture you believe that the database access layer uses the same SQL commands for Oracle and SQLServer on any given operating system. Just to be sure, you ask your tarchitect or development manager to run a quick binary comparison on the source code. Yes, the SQL is the same. This doesn't mean that the databases will operate in the same manner but just that if you certify your code on one of them, such as Oracle 8i, you have reasonable evidence that it should work on the other. This knowledge produces Table 6-3. Note that all major configurations are tested at least once.
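The chapter doesn't say how that comparison was performed. One plausible way to make such a check repeatable, assuming the generated SQL for each backend lives in a file (the paths below are hypothetical), is to compare digests:

    import hashlib
    from pathlib import Path

    def digest(path):
        """SHA-256 of a file's bytes; identical digests mean identical SQL."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    # Hypothetical per-backend SQL files.
    if digest("sql/oracle/queries.sql") == digest("sql/sqlserver/queries.sql"):
        print("SQL is byte-for-byte identical across backends")
    else:
        print("SQL differs; certify each backend separately")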

Unfortunately, you have 14 configurations, which is at least five more than your estimates allow. Removing IIS from NT 4.0 removes four more configurations. The final set of 10 is listed in Table 6-4, which is still more than you think you can test in the allotted time. From here, you'll have to find additional ways to either test the product as you wish or further reduce the number of configurations. The most likely way to make the cut is by obtaining more hardware, either internally or from your customers, or by running a beta program in which your beta customers test the configurations that QA won't.
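The counting behind these cuts is easy to verify. A quick sketch, writing each OS column of Table 6-3 as web servers × browsers × databases:

    # Configurations per OS in Table 6-3: web servers x browsers x databases.
    per_os = {
        "Solaris 2.6": 1 * 1 * 1,
        "Solaris 2.7": 1 * 1 * 2,
        "NT 3.51":     1 * 1 * 1,
        "NT 4.0":      2 * 2 * 2,
        "XP Server":   1 * 1 * 2,
    }
    print(sum(per_os.values()))  # 14

    per_os["NT 4.0"] = 1 * 2 * 2  # drop IIS on NT 4.0: one web server remains
    print(sum(per_os.values()))  # 10, the final set of Table 6-4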

Table 6-3. Simplifying Configurations

                 Solaris 2.6   Solaris 2.7   NT 3.51   NT 4.0   XP Server
  Apache         ✓             ✓             -         ✓        -
  IIS            -             -             ✓         ✓        ✓
  Netscape       ✓             ✓             -         ✓        -
  IE             -             -             ✓         ✓        ✓
  Oracle 8i      ✓             ✓             -         ✓        -
  Oracle 9i      -             ✓             -         -        ✓
  SQLServer 7    -             -             ✓         ✓        -
  SQLServer 2000 -             -             -         -        ✓
  Totals (14)    1             2             1         8        2

✓ = component included in the tested configurations for that operating system

Table 6-4. Final Configuration Set

                 Solaris 2.6   Solaris 2.7   NT 3.51   NT 4.0   XP Server
  Apache         ✓             ✓             -         ✓        -
  IIS            -             -             ✓         -        ✓
  Netscape       ✓             ✓             -         ✓        -
  IE             -             -             ✓         ✓        ✓
  Oracle 8i      ✓             ✓             -         ✓        -
  Oracle 9i      -             ✓             -         -        ✓
  SQLServer 7    -             -             ✓         ✓        -
  SQLServer 2000 -             -             -         -        ✓
  Totals (10)    1             2             1         4        2

I have intentionally simplified this example. In a real Web-based system, you would probably support more versions of the browser (4 to 8) and more versions of the Web server (2 to 4), and you would probably specify other important variables, such as proxies and firewalls. You also need to add an entry for each version of the application that might exist on the same platform in order to check for any negative interactions between them. These would dramatically increase the potential number of configurations. In one multiplatform, multilingual, Internet-based client application, we had more than 4,000 possible configurations. We were able to reduce this to about 200 tested configurations, with a lot of work.

This approach to prioritizing works for smaller numbers of configurations, but it starts to break down when there are a lot of configuration elements or a lot of possible values for each element. When this happens, you need a more automated approach for organizing your initial set of configurations and then prioritizing them based on the market parameters I mentioned earlier. This is referred to as the all pairs technique, and it ensures that all pairings of parameters are covered in the test cases without covering all combinations. James Bach has posted a free tool to calculate all pairs test cases at www.satisfice.com. I want to stress that the output of all pairs should be reviewed to ensure that the market issues presented earlier are covered. Specifically, when you get into a "don't care" condition, prioritize based on marketing needs.
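To show the shape of the idea, here is a toy greedy sketch of pairwise selection; it is not Bach's tool and makes no attempt to be optimal. It simply keeps adding configurations until every cross-parameter pair of values appears at least once, operating on the raw component lists from the first sketch (ignoring the impossibility screen of Step 1, which you would apply first in practice):

    from itertools import combinations, product

    parameters = [operating_systems, web_servers, browsers, databases]

    def pairs_of(config):
        """Every cross-parameter (position, value) pair one configuration covers."""
        return {frozenset(pair) for pair in combinations(enumerate(config), 2)}

    # Collect every pair that must be covered at least once.
    uncovered = set()
    for config in product(*parameters):
        uncovered |= pairs_of(config)

    # Greedily keep any configuration that still covers something new.
    chosen = []
    for config in product(*parameters):
        newly_covered = pairs_of(config) & uncovered
        if newly_covered:
            chosen.append(config)
            uncovered -= newly_covered

    assert not uncovered  # all pairs are covered
    print(len(chosen), "pairwise cases versus", 5 * 2 * 2 * 4, "full combinations")

Even this naive greedy pass lands well below the full cross product; a real pairwise tool does considerably better. Either way, the output still needs the market-driven review described above.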

Understanding the matrix of pain makes it crystal clear that every time you add another platform you increase the total workload associated with your system.


