Metrics That You'll Use in Your Daily Testing

Probably the most frequently used feature of a bug-tracking database (besides entering bugs) is performing queries to obtain specific lists of the bugs you're interested in. Remember, bug databases can potentially hold many thousands of bugs. Manually sorting through such a huge list would be impossible. The beauty of storing bugs in a database is that performing queries becomes a simple task. Figure 20.1 shows a typical query-building window with a sample query ready to be entered.

Figure 20.1. Most bug-tracking databases have a means to build queries that return the specific information you're looking for. (Mantis bug database images courtesy of Dave Ball and HBS International, Inc.)

This bug database's query builder, like most others, uses standard Boolean ANDs, ORs, and parentheses to construct your specific request. In this example, the tester is looking for a list of all bugs that match the following criteria:
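Under the hood, a query builder like this evaluates a Boolean expression against every bug record in the database. The sketch below mimics that idea in Python with a few made-up bug records; the field names (`status`, `severity`, and so on) are hypothetical and will differ in any real bug-tracking database.

```python
# Hypothetical bug records; a real database would hold thousands of these.
bugs = [
    {"id": 3238, "product": "Calc-U-Lot", "severity": 1, "status": "Open"},
    {"id": 3247, "product": "Calc-U-Lot", "severity": 3, "status": "Resolved"},
    {"id": 3250, "product": "Calc-U-Lot", "severity": 2, "status": "Open"},
]

# The Boolean expression a query builder might construct:
# (status = Open) AND ((severity = 1) OR (severity = 2))
matches = [b for b in bugs
           if b["status"] == "Open"
           and (b["severity"] == 1 or b["severity"] == 2)]

for b in matches:
    print(b["id"], b["status"], "Severity", b["severity"])
```

The ANDs, ORs, and parentheses in the query window map directly onto the `and`, `or`, and parentheses in the filter expression.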
Clicking the Run Query button searches the database for all the bugs that match these criteria and returns a list of bug ID numbers and titles for review. The types of queries you can build are bounded only by the database's fields and the values they can hold. It's possible to answer just about any question you might have regarding your testing and how it relates to the project. For example, here's a list of questions easily answered through queries:
The results of your query will be a list of bugs, as shown in the bug-tracking database window in Figure 20.2. All the bugs that matched the criteria in your query are returned in numerical order. The gaps you see between the numbers (for example, the gap between 3238 and 3247) are simply bugs in the database that didn't match the query.

Figure 20.2. The results of a query are returned as a list of bugs in the bug database's main window.

Performing queries is a powerful feature of a bug-tracking database and can be very useful in providing the information you need to perform your job and measure your success. Despite their power, though, you can take one more step to make the information even more useful: take the results of a query, or of multiple queries, and turn them into printable reports and graphs. Figure 20.3 shows the method this database uses for outputting its query results.

Figure 20.3. This bug database allows you to export all the database fields to either a common tab-delimited raw data file or a word processing file.

In Figure 20.2 you saw that the query results list showed the bug ID number, title, status, priority, severity, resolution, and product name. In many cases that may be all the information you need, but in others you might want more or less detail. By exporting the data using the export window shown in Figure 20.3, you can pick and choose the exact fields you want to save to a file. If you're just interested in the bugs assigned to you, you could export a simple list of bug ID numbers and their titles. If you're going to a meeting to discuss open bugs, you might want to save the bug ID number, its title, priority, severity, and who it's assigned to. Such a list might look like the one in Table 20.1.
Rather than save the query results in word processor format suitable for printing, you can save the data in a raw, tab-delimited form that's easily read into another database, spreadsheet, or charting program. For example, if your database supports SQL, you could create the following query:
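Such a query might look like the following sketch, which uses SQLite purely for illustration. The table name (`bugs`) and column names (`product`, `version`, `opened_by`, `severity`) are hypothetical; a real bug-tracking database would have its own schema.

```python
import sqlite3

# Build a tiny in-memory stand-in for a bug-tracking database.
# The schema here is invented for illustration only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bugs (id INTEGER, product TEXT, version TEXT, "
            "opened_by TEXT, severity INTEGER)")
con.executemany("INSERT INTO bugs VALUES (?, ?, ?, ?, ?)", [
    (3238, "Calc-U-Lot", "2.0", "Pat",   1),
    (3247, "Calc-U-Lot", "2.0", "Terry", 2),
    (3250, "Calc-U-Lot", "2.0", "Pat",   3),
])

# All bugs against Calc-U-Lot v2.0 that were opened by Pat:
rows = con.execute(
    "SELECT id, severity FROM bugs "
    "WHERE product = 'Calc-U-Lot' AND version = '2.0' "
    "AND opened_by = 'Pat' ORDER BY id"
).fetchall()
print(rows)
```

Only Pat's two bugs come back; Terry's bug fails the `opened_by` condition and is filtered out, just as non-matching bugs were filtered out of the query in Figure 20.2.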
This would list all the bugs against a (fictitious) software product called Calc-U-Lot v2.0 that were opened by someone named Pat. If you then exported the results of this query with the bug severity data field, you could generate a graph such as the one shown in Figure 20.4.

Figure 20.4. A bug-tracking database can be used to create individualized graphs showing the details of your testing.

This pie chart has no bug title or description information, no dates, no resolutions, not even bug ID numbers. What you have is a simple overview of all the bugs that Pat has logged against the Calc-U-Lot v2.0 software project, broken out by severity. Of Pat's bugs, 45 percent are Severity 1, 32 percent are Severity 2, 16 percent are Severity 3, and 7 percent are Severity 4. There are a lot of details behind these numbers, but on the surface you could say that most of the bugs Pat finds are fairly severe. Similarly, Figure 20.5 shows another kind of graph, generated by a different query, that shows Pat's bugs broken out by their resolution. The query to generate this data would be:
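A sketch of that second query follows, again using SQLite and the same invented schema (here with a hypothetical `resolution` column instead of `severity`). Grouping by resolution produces exactly the kind of counts a charting program would turn into the Figure 20.5 pie chart.

```python
import sqlite3

# Same illustrative stand-in database, this time tracking resolutions.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bugs (id INTEGER, product TEXT, version TEXT, "
            "opened_by TEXT, resolution TEXT)")
con.executemany("INSERT INTO bugs VALUES (?, ?, ?, ?, ?)", [
    (3238, "Calc-U-Lot", "2.0", "Pat",   "Fixed"),
    (3250, "Calc-U-Lot", "2.0", "Pat",   "Fixed"),
    (3251, "Calc-U-Lot", "2.0", "Pat",   "Duplicate"),
    (3260, "Calc-U-Lot", "2.0", "Terry", "Deferred"),
])

# Pat's bugs broken out by resolution -- the raw data behind a pie chart.
counts = dict(con.execute(
    "SELECT resolution, COUNT(*) FROM bugs "
    "WHERE product = 'Calc-U-Lot' AND version = '2.0' "
    "AND opened_by = 'Pat' GROUP BY resolution"
).fetchall())
print(counts)
```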
Figure 20.5. Different queries can generate different views of the bug data. In this case, you can see how one tester's bugs were resolved.

Exporting the resolution field to a charting program would generate the graph in Figure 20.5, showing that most of Pat's bugs end up getting fixed (a good sign for a tester) and that only a small percentage are resolved as not reproducible, duplicates, deferred, or, for whatever reason, not a problem.

Once you start testing, you'll find certain metrics that you like to use, or that your team uses, to measure how the testing process is going. You might find that counting your bug finds per day is useful or, as in the previous example, tracking what your "fix ratio" is. The important thing is that by extracting information from the bug database, you can build just about any metric you want. This leads to the next part of this chapter, which describes a few of the common higher-level metrics that measure how the entire project is doing.
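A metric such as the fix ratio is easy to compute from a tab-delimited export like the one described in Figure 20.3. The sketch below assumes a hypothetical two-column export (bug ID and resolution); the column names and data are invented for illustration.

```python
import csv
import io

# A tab-delimited export as it might come out of the export window.
# Column names and values here are hypothetical.
export = ("id\tresolution\n"
          "3238\tFixed\n"
          "3250\tFixed\n"
          "3251\tDuplicate\n"
          "3252\tNot Reproducible\n")

rows = list(csv.DictReader(io.StringIO(export), delimiter="\t"))

# Fix ratio: the fraction of logged bugs that were resolved as Fixed.
fixed = sum(1 for r in rows if r["resolution"] == "Fixed")
fix_ratio = fixed / len(rows)
print(f"Fix ratio: {fix_ratio:.0%}")
```

Because the export is plain tab-delimited text, the same file can feed a spreadsheet, a charting program, or a short script like this one.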