Your game code size is 200,000 AELOC. It had 35 defects you knew about when you released it. The people who bought it have reported 17 more. What sigma level is your code at?
Describe the differences between the Leader role in a walkthrough and the Moderator role in a Fagan Inspection.
Add the following defects found in Beta testing to the data in Figure 6.4: Requirements – 5, Design – 4, Coding – 3. What are the updated code PCEs for the requirements, design, and coding phases?
Using the SPC Tool demo, create a control chart of the following test case review rates, measured in pages per hour:
Which reviews, if any, fall above or below the control limits? Describe which are "good" and which are "bad." How might a high or low review rate impact the number of faults found in those reviews?
Answers
Your total released defects are 35 + 17 = 52. The table in Figure 6.1 has a column for 100,000 AELOC but not for 200,000, so double the defect counts in the 100,000 column. After doubling, a defect count of 66 corresponds to a 4.9 sigma level and 48 corresponds to 5 sigma. Your 52 defects exceed 48, so the code falls short of 5 sigma; it is at the 4.9 sigma level.
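The lookup above can be sketched in a few lines. This is a minimal illustration, assuming only the two doubled thresholds quoted from Figure 6.1 (48 defects for 5 sigma, 66 for 4.9 sigma at 200,000 AELOC); the real table has more columns and more sigma levels.

```python
def sigma_level(total_defects):
    """Return the highest sigma level whose defect budget still covers
    total_defects, using the doubled 200,000-AELOC thresholds quoted
    from Figure 6.1 (only two levels are shown here)."""
    thresholds = [(5.0, 48), (4.9, 66)]  # (sigma, max defects)
    for sigma, budget in thresholds:
        if total_defects <= budget:
            return sigma
    return None  # below the lowest level listed

released = 35 + 17  # known at release + customer-reported = 52
print(released, sigma_level(released))  # 52 at 4.9 sigma
```

Note that 52 fails the 5-sigma budget (48) but fits within the 4.9-sigma budget (66), matching the answer above.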
The Fagan Inspection Moderator has the extra responsibility of scheduling and conducting an Overview meeting prior to the actual peer review of the work. The walkthrough Leader actively presents the material during the peer review, while the inspection Moderator's main purpose is to see that the meeting is conducted properly and to collect inspection metrics. Because the walkthrough Leader is busy presenting, he or she is poorly positioned to take notes during the meeting, whereas the inspection Moderator typically has the bandwidth to do so.
New PCEs: Requirements = 0.69, Design = 0.73, Code = 0.66.
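Phase containment effectiveness is the fraction of a phase's problems that are caught within that same phase: in-phase faults divided by in-phase faults plus later escapes (including the Beta defects added in this exercise). A minimal sketch of that calculation, using hypothetical counts rather than Figure 6.4's actual data:

```python
def pce(in_phase_faults, escaped_defects):
    """Phase containment effectiveness: the fraction of a phase's
    problems caught within that phase rather than escaping to later
    phases (or Beta)."""
    return in_phase_faults / (in_phase_faults + escaped_defects)

# Hypothetical illustration (not Figure 6.4's counts): a phase that
# caught 20 faults but let 9 defects escape has PCE = 20/29.
print(round(pce(20, 9), 2))  # 0.69
```

Adding Beta defects to a phase's escape count is what drives each PCE down, which is why all three updated values are lower than before.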
Your control chart should look like Figure A.1 (control chart for review rates). The value for Review 5 falls outside the Upper Control Limit (UCL), so this rate is considered "too high." Reviewing material too quickly can be an indication of any of the following:
Whatever the reasons, the potential consequence of a high review rate is missed faults. On the other hand, going too slow could be the result of:
The consequence of a low review rate is that the team could have covered more material in the same time, which would have caught more faults. Managers take quick notice when reviews become unproductive, and too many bad experiences can jeopardize the use of reviews for the remainder of the project, or water the process down to the point where it is ineffective.
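The control limits behind a chart like Figure A.1 can be sketched as follows. This assumes an individuals (XmR) chart, where sigma is estimated from the average moving range divided by the 1.128 bias constant; the SPC Tool demo may compute its limits differently, and the review rates below are hypothetical since the exercise's data is not reproduced here.

```python
import statistics

def xmr_limits(values):
    """Individuals (XmR) chart limits: centerline at the mean, sigma
    estimated as (average moving range) / 1.128, limits at +/- 3 sigma."""
    mean = statistics.mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128
    return mean - 3 * sigma, mean, mean + 3 * sigma

# Hypothetical review rates in pages/hour; Review 5 is deliberately a
# fast outlier, mirroring the answer above.
rates = [10, 11, 10, 12, 30, 11, 10, 12, 11, 10]
lcl, cl, ucl = xmr_limits(rates)
flagged = [i + 1 for i, r in enumerate(rates) if not lcl <= r <= ucl]
print(flagged)  # [5]
```

A review rate below the LCL would be flagged the same way; for rates, a negative LCL is simply treated as zero.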