This coming semester (February 2001) will see the start of the next Software Hut exercise. As usual, three clients will each deal with four to six teams. We plan to divide the teams into two groups: one will be given some reinforcement in traditional software design techniques, and the other will get a crash course in XP. We will then monitor the progress of the two cohorts of students, some using XP and others not, as they attempt to build their solutions. We will do the monitoring by studying the way the students manage their projects: each team must produce and maintain realistic plans, keep minutes of all their project meetings, and be interviewed weekly. We will also collect all their working documents: requirements documents, analyses, test cases, designs, code, and test reports. These will provide many further opportunities for measuring the attributes of their output, ranging from function and object point analysis to bug densities. The XP experiments suggested by Ron Jeffries will be helpful in this respect.[3]
At the end of the semester, the clients will evaluate and mark all the delivered solutions. They will use a structured marking scheme that we construct for them, which provides a final level of measurement of how well the solutions perform in terms of usability, installability, functionality, robustness, and so on. These are the key attributes because they apply to all the solutions no matter how they were built. We will use this information in a statistical analysis to see whether there are any significant differences in the quality of the final products between XP and traditional "heavyweight" methods. Finally, we will require each student to give both a team evaluation and a personal commentary on how the project went, describing the strengths and weaknesses of what they did and how they did it. In the past, this has been useful in identifying issues and problems with approaches to software development. After delivery, we will be able to track the performance of the delivered systems to gain further information about their quality in their working environment. The three clients are as follows.
The overall arrangements are described in Figure 22.1.

Figure 22.1. The organization of the teams and clients

In all of this, the students will base their approach on what they have learned in the course so far. In the first year, they will have taken part in a group project that involves building a small software system specified by the course lecturers. The students do this as one-sixth of their work over the year, and it integrates what they have been taught in formal lectures on requirements and specification, Java programming, and systems analysis and design (essentially UML). This exercise helps them start to understand some of the issues of working in teams, keeping accurate records, and producing high-quality documents; some of the problems of dealing with clients (a role played by their academic tutors) and of delivering quality; and the need for thorough review and testing activities. Before they start on the Software Hut projects, they attend a practical course on teamwork, organized by the university's Careers Services Department. They will then be split into two cohorts, the XP teams and the Trad (traditional) teams, for further specific training in a methodology and an approach to software construction.

One area we must address is the advice we give about the form of the project plan. The XP-based plans will be very different from the traditional approach, and it will be a new experience for the tutors to manage a set of projects that are at very different stages at any one time. The students will also compare notes to some extent, and we hope that the teams using XP will be discreet about what they are doing so that they do not influence the other teams too much. We have found in the past that the competitive element minimizes such cross-influence. Part of this trial run will be learning about the sorts of metrics and data we need to enable us to carry out proper comparisons.
We will then be able to run better experiments.
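To illustrate the kind of statistical analysis described above, the clients' marks for the two cohorts could be compared with a non-parametric test. The sketch below computes the Mann-Whitney U statistic by hand; the choice of test and all the marks are invented for illustration and are not something the chapter specifies.

```python
# Hypothetical sketch: comparing client marks for XP vs. Trad teams
# with a Mann-Whitney U statistic. All figures below are invented.

def mann_whitney_u(a, b):
    """Return the U statistic for sample `a` versus sample `b`.

    Counts, over every cross-pair, how often a value from `a` exceeds
    one from `b` (ties count as half). Suitable for the small samples
    here; no normal approximation or p-value is computed.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Invented marks (out of 100) from the clients' structured scheme:
xp_marks = [72, 65, 80, 70, 68]
trad_marks = [60, 75, 58, 66, 62]

u = mann_whitney_u(xp_marks, trad_marks)
n1, n2 = len(xp_marks), len(trad_marks)
print(f"U = {u} out of a maximum of {n1 * n2}")
```

With these invented marks the statistic is 20.0 out of a maximum of 25 cross-pairs; in a real analysis one would go on to derive a p-value (from exact tables or the normal approximation) before claiming any significant difference between the cohorts.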