Working with Customers


In large projects, you're likely to have stakeholders scattered across multiple departments, each perhaps with a different agenda with respect to your project. On one of Lisa's large projects, where the customers weren't on site, a subteam of testers, analysts, and tech leads had regular "face time" with the customers. Customers were available via e-mail, phone, and instant messaging for ad hoc questions, but most planning and feedback happened in one or two customer meetings per week. A weekly meeting was set up to go over stories and acceptance tests for the iteration. Analysts spent additional time meeting individually with stakeholders as much as possible. A subteam of analysts rotated this duty, so one analyst was always on site with the development team to act as a customer proxy.

The weekly meeting was also a time to show the customers the working code produced in each iteration. Their feedback resulted in updated or new stories. Additional meetings were scheduled each week whenever necessary. This limited interaction worked, because several team members were familiar with customer needs, the customers and the rest of the team trusted each other, and the customers were flexible.

In the paper quoted earlier, Gregory Schalliol points out that when your customer has several distinct and different "customers," each with peculiar requirements that aren't always compatible, you can have trouble. We've experienced this ourselves and have met this challenge by going to each customer for the stories that were most important to him and having him invest in the acceptance testing. You can't expect all customers to care about all the stories and tests.

Here's an example from one of Lisa's projects: "Once I went to various stakeholders with a list of issues left over after a release. I asked them to prioritize. Each stakeholder came up with a completely different list of priorities." This is no surprise. The tricky part is getting a consensus: what defects are important enough to take precedence over new functionality in the next iteration/release? What workarounds can we live with? You have to get all the customers together in a room for this type of discussion.

Getting customers to define acceptance tests can be a major challenge. As Gregory Schalliol says in his paper, customer teams

needed not just to specify the functionality to build, but also to develop the tests to verify its completion. They eventually did so, but only after having relied on many, many samples from our own analysts for a long time.

… There was a clear difference … between devising a business scenario and devising a functional test…. Our customer team did not need much coaching to provide us with business scenarios, but the functional test itself, in all of its detail, required us to do much more training with the customer than we had anticipated.

Having an analyst as intermediary here is no doubt a huge help. You, as the tester, can provide the same function. You can guide the customer team to think of, in Schalliol's words, "the proper functioning of all negative and atypical actions that could occur in the process, widget action on screens, behind-the-scenes dependencies…."
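To illustrate the gap Schalliol describes, here is a minimal sketch of how a customer's business scenario ("a member withdraws funds") might be fleshed out into a functional test. The `withdraw` function and its rules are hypothetical, invented for this example; the point is that the happy path comes from the customer readily, while the negative and atypical cases are what testers must coach them to specify.

```python
# Hypothetical sketch: a business scenario turned into a functional test.
# The account model and business rules here are illustrative only.

def withdraw(balance, amount):
    """Return the new balance, or reject atypical input with ValueError."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Happy path: the business scenario the customer provides easily.
assert withdraw(100, 30) == 70

# Negative and atypical actions: the detail that needs coaching.
for bad_amount in (0, -5, 150):
    try:
        withdraw(100, bad_amount)
        assert False, "expected the withdrawal to be rejected"
    except ValueError:
        pass  # rejected as the test requires
```

A tester can drive exactly this kind of conversation: for each scenario the customer names, ask what should happen with zero, negative, and too-large values, and capture each answer as an explicit assertion.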



Testing Extreme Programming
ISBN: 0321113551
Year: 2005
Pages: 238