Using MSF for Agile Software Development

Given the nature of the team, the project, and the shortened time frame (three months), Glenn has chosen to follow the MSF for Agile Software Development methodology. Its tight integration with Team System supports a rapid, iterative development environment. Short iterations will allow the team to reduce the margin of error in its estimates, and they provide quicker feedback about the accuracy of the project plans.

In MSF 4.0, a team of peers advocates for the seven constituencies of the MSF Team Model. The Team Model captures all the viewpoints on a project that must be represented and monitored to reduce risk and increase the likelihood of a successful project. Here is how this development team maps to the MSF Team Model.

Program Management Advocates for Solution Delivery

The focus of program management is to meet the goal of delivering the solution within project constraints. This group ensures that the right solution is delivered at the right time and that all stakeholders' expectations are understood, managed, and met throughout the project.

Tamara (Stakeholder)

Jay (Sponsor)

Product Management Advocates for the Customer Business

Product management has to understand, communicate, and ensure success from the standpoint of the economic customer requesting the solution.

Robert (Business Analyst)

Glenn (Project Manager)

Architecture Advocates for the Solution

Architecture includes the services, technologies, and standards with which the solution will interoperate; the infrastructure in which it will be deployed; its place in the business or product family; and its roadmap of future versions. The architecture group has to ensure that the deployed solution will meet all qualities of service as well as the business objectives and remain viable in the long term.

Martin (Architect)

Development Advocates for the Technical Solution

In addition to being the primary solution builders, development is responsible for thoughtful technical decisions, clean design, good bottom-up estimates, high-quality maintainable code, and unit tests.

Jeff (Lead Developer)

Amy, Joe, Donovan (Developers)

Test Advocates for Solution Quality from the Customer Perspective

Test anticipates, looks for, and reports on any issues that diminish the solution quality in the eyes of the users or customers.

Hubert (Tester—Code)

Mandy (Tester—Web applications and Web services)

Release/Operations Advocates for the Smooth Delivery and Deployment of the Solution into the Appropriate Infrastructure

This group ensures timely readiness and compatibility of the infrastructure for the solution.

Tim (Operations Manager)

User Experience Advocates for the Most Effective Solution in the Eyes of the Intended Users

User experience must understand the users' context as a whole, appreciate the subtleties of their needs, and ensure that the whole team stays conscious of usability from the users' point of view.

Mandy (Tester—Web applications and Web services)

Project Timeline

This small project will take three months to complete. In that time frame, the team will work through several iterations and development tracks, as you can see in Figure A-1. Iteration 0 consists of setup and planning and will take three weeks. Development will span Iterations 1 and 2 and take six weeks. Stabilization, final test, and release will take the remaining three weeks.


Figure A-1 Project timelines

Iteration 0: Project Setup and Planning (3 weeks)

Envisioning Track—March 1st to March 12th (2 weeks)

The envisioning track addresses one of the most fundamental requirements for project success: unifying the project team behind a common vision. The team must have a clear vision of what it wants to accomplish for the customer and be able to state it in terms that will motivate both the entire team and the customer. By creating a high-level view of the project's goals and constraints, envisioning serves as an early form of planning. It typically occurs during the setup iteration.

Storyboard

  • March 1st

    Tamara and Jay finalize the budget and time frame expectations.

  • March 2nd

    Jay approves plans and funding within his department.

  • March 3rd

    Jay meets with Robert to discuss specific needs and goals. A vision statement is created that summarizes the project background, explains the driving factors for the project, defines the application's key value, and identifies the application's users.

  • March 5th

    Robert and Jay begin defining the personas, one for each group of users who will use the application. The two also start putting together a list of functional requirements in Excel.

  • March 12th

    Robert and Jay finalize the persona definitions and functional requirements.

Final Question

Are we doing the right thing?

Track Deliverables

  • Vision statement and persona document

Plan Track—March 15th to 19th (1 week)

The plan track is where the bulk of the planning for the project is completed. Within this track, the team prepares the functional specification; works through the design process; and prepares work plans, cost estimates, and schedules for the various deliverables. Each iteration also includes a planning cycle, in which the team revisits the plan, makes adjustments as needed, and performs the planning specific to the iteration.

Storyboard

  • March 15th

    Glenn, Robert, and Jay meet to discuss the new project. They review the vision statement, functional requirements, and persona documents and start putting together a project plan. They discuss objectives, time frames, and resources and put the team together.

  • March 16th

    Using Team Explorer, Glenn creates the new Team Project, selecting MSF for Agile Software Development as the methodology. Version control is configured as well. Glenn provides Robert with the URL to the project portal, and Robert uploads his vision statement and persona documents. Martin then uploads the datacenter Visio diagram that he and Tim previously constructed, in both VSD and JPG formats for good measure. An e-mail is sent to the entire team with the URL to the project portal.

  • March 17th

    Glenn, Robert, Jay, Martin, and Jeff meet and define the list of scenarios in Excel; each scenario records a single path of user interaction through the proposed system. The meeting is part technical and part functional, so Jay attends only the first part to ensure that things are progressing.

  • March 18th

    Tim is invited to the meeting to help address specific QoS requirements and whether those requirements will be satisfied by the capabilities of his datacenter. A list of requirements is defined in Excel. Martin makes several architectural notes during the meeting.

  • March 19th

    Robert and Glenn finalize the project plan and the other lists, uploading them to the project portal.

Final Question

Can we do this within time and budget, and is the business case justified?

Track Deliverables

  • Project plan, list of scenarios, and list of QoS requirements

Iteration 1: Release Candidate 1 (3 weeks)

Build Track—March 22nd to April 9th (3 weeks)

The build track is where the team accomplishes most of the construction of solution components (documentation as well as code). However, some additional development work might occur in the stabilization track in response to testing. The build track involves more than code development and software developers. The infrastructure is also developed during this track, and all roles are active in building and testing deliverables.

Storyboard

  • March 22nd

    Glenn uses Project to create a list of tasks. Initially, only Martin, Jeff, and Tim are tasked to begin architecting the new blog application. Glenn publishes these tasks to Team System, along with the personas and QoS requirements that were entered into Excel the prior week. In addition, some risks that came up during the meetings are added. Glenn uses Team Explorer to verify and fine-tune all these work items. He then sends an e-mail to the team explaining that work items have been entered and asking each member to configure alerts within Visual Studio 2005 so that he doesn't have to send additional e-mails manually.

  • March 23rd

    Martin, Jeff, and Tim review their work items. Tim has been asked to review the datacenter JPG diagram for completeness and to work with Martin to revise the model in Visual Studio. Martin is asked to begin creating the blog application diagram and to update the logical datacenter diagram. Jeff is asked to begin researching the latest improvements to Web application and Web service security and performance.

  • March 24th

    Martin finishes his preliminary models and requests Tim's assistance in fine-tuning the datacenter diagram. Because some recent Internet Information Services (IIS) upgrades and new firewalls have been put in place, they sit down in front of the distributed system designers and ensure that the model is accurate. A couple of new risk work items are added to the project, and the diagrams are checked in to version control for safekeeping.

  • March 25th

    Martin creates a system diagram that composes aspects of the blog application diagram into a security system, which will simplify deployment. Together, Tim and Martin perform a trial deployment and successfully validate the models that they created. A deployment report is printed for Tim to study. It is also saved as a Web archive (.mht) file and uploaded to the project portal. All the team members can now review it. All files are checked in.

  • March 26th

    Martin meets with Jeff to decide upon important technical requirements, such as language, namespaces, project templates, file locations, object-oriented techniques, and service orientation for the project. These notes are saved into a Microsoft Office Word document and uploaded to the project portal. The appropriate properties are then set in the application diagram and saved. Martin checks in all files to version control. Jeff returns to his desk and opens up the application diagram from version control and implements the various applications and services. Jeff checks in all files to version control.

  • March 29th

    Jeff opens and reviews the blog solution and related projects; all projects have been stubbed out and are ready to be implemented. Because the project has now used one third of its hours, Jeff uses Class Designer to create some of the project's classes in the interest of time (a hypothetical class skeleton of this kind is sketched after this storyboard). Jeff asks Martin to help him create the other classes that they discussed. As they work through the design, they create task work items for each of the developers, and some for Jeff, too.

  • March 31st

    After some back and forth on design and philosophy, the object-oriented design and application framework are finally complete. Martin and Jeff come to an understanding and have all the classes, interfaces, and inheritance designed correctly.

  • April 1st

    Jeff meets with his developers, Amy, Joe, and Donovan, to go over the designs, expectations, and tasks to be assigned to the team. Jeff splits up the scenarios and QoS requirements among the developers based on their backgrounds and skill sets. Jeff explains TDD once again and expects the developers to abide by it. Meanwhile, Glenn configures the team project security and classifications for the development ahead. Because a month has passed, Jay and Robert want an update, so Glenn tells them which reports to run from the project portal.

  • April 2nd

    Coding begins. Amy, Joe, and Donovan review their work items and begin construction of their respective classes, services, and applications. Jeff mentors the developers individually to make sure that they are using the features of Team System correctly and following the proper TDD process (a minimal unit test along these lines is sketched after this storyboard). Unit tests, for the most part, are executed regularly, and the code coverage numbers improve every day. The developers create bug work items, sometimes assigned to themselves as "to do" items and sometimes to other developers whose code is responsible for the bug. All files are checked in regularly, and the changesets are associated with the appropriate work items.

  • April 8th

    Enough coding has been completed that meaningful builds can start taking place. Glenn, Jeff, and Tim meet to discuss the specifics of a build server and QA environment. Glenn configures Team Build while Jeff performs some test builds.

  • April 9th

    A release candidate version of the blog application is created and given to Tim for installation onto the QA server. Other copies are archived for safekeeping in Jay's office and Robert's office.
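
As a rough illustration of the class skeletons that Jeff and Martin rough out in Class Designer (see the March 29th entry), the following C# sketch shows what one such stub might look like. The namespace, class names, and members are assumptions made for illustration only, not the actual Adventure Works design.

    using System;
    using System.Collections.Generic;

    namespace AdventureWorks.Blog
    {
        // Hypothetical class skeleton; names and members are illustrative only.
        public class BlogEntry
        {
            private Guid id;
            private string title;
            private List<Comment> comments = new List<Comment>();

            public Guid Id
            {
                get { return id; }
                set { id = value; }
            }

            public string Title
            {
                get { return title; }
                set { title = value; }
            }

            // Additional members (Body, PublishedOn, and so on) would follow the same pattern.

            public IList<Comment> Comments
            {
                get { return comments; }
            }
        }

        // Hypothetical supporting type for reader feedback.
        public class Comment
        {
            private string author;
            private string text;

            public string Author
            {
                get { return author; }
                set { author = value; }
            }

            public string Text
            {
                get { return text; }
                set { text = value; }
            }
        }
    }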
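
The storyboard also describes the developers following TDD with Team System's unit testing support (see the April 2nd entry). A minimal sketch of such a test, written against the Microsoft.VisualStudio.TestTools.UnitTesting framework that ships with Visual Studio 2005 Team System, might look like the following; the BlogEntry type and the behavior asserted are the same hypothetical names used in the class sketch above, not the real code base.

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using AdventureWorks.Blog;

    // A minimal test-first example; the type it exercises is hypothetical.
    [TestClass]
    public class BlogEntryTests
    {
        [TestMethod]
        public void NewEntryStartsWithNoComments()
        {
            // A freshly created entry should have an empty comment collection.
            BlogEntry entry = new BlogEntry();
            Assert.AreEqual(0, entry.Comments.Count);
        }
    }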

Iteration 2: Release Candidate 2 (3 weeks)

Build Track—April 12th to April 30th (3 weeks)

Storyboard

  • April 12th

    Glenn reviews the Quality Indicators, Velocity, and Bug reports to get a feel for the project's pulse. He stops by Robert's office and then Jay's office to help them bookmark these reports on the portal; Jay does the same for Tamara.

  • April 13th

    Jeff notices that some of Joe's checked-in projects compile with errors, which is causing the nightly builds to fail. An e-mail goes out to the development team about quality control, but to be safe, Jeff asks Glenn to enable the "clean build" check-in policy on the team project.

  • April 16th

    The blog application is starting to take shape; the nightly builds are occurring without error, and it's time to start functional testing. Glenn calls a meeting with Jeff, Hubert, and Mandy. Because Robert knows the functional requirements and expectations of the software, he is invited as well. Hubert is asked to work with the developers, writing additional unit tests and ensuring that all code is covered during those tests. Hubert is also asked to load test certain unit tests to ensure that the underlying data access layer and services are performing to their QoS requirements. Mandy is asked to start user interface testing of the blog Web application.

  • April 19th

    Mandy begins Web testing the blog application and finds a number of errors. She right-clicks each error and creates a bug work item. Glenn, seeing a spike in the number of bugs that day, visits the development team to find their source; the bugs turn out to stem from an oversight in the architecture. Later that day, Jay and Robert see the same reports and ask Glenn for an update. (A sketch of what one of Mandy's Web tests might look like in coded form appears after this storyboard.)

  • April 20th

    Martin reopens the application diagram and splits a key Web service into two services for better interoperability and scalability (a hypothetical sketch of the resulting services appears after this storyboard). He revalidates the new diagram against the logical datacenter and generates a new deployment report, both for Tim and for the project portal. Everything looks good from the distributed system designers' point of view. Martin implements the new Web service and then works with Jeff to split the appropriate code between the services. Martin checks in all diagrams, associating the changes with the bug work items.

  • April 21st

    Jeff modifies the unit tests and then creates tasks for each of the developers to execute their unit tests. Amy, Joe, and Donovan rerun the tests and get green lights all the way down—they are learning to appreciate TDD more and more. All code is checked in, associating the changes with the bug work items. A clean build is generated that night.

  • April 22nd

    Mandy re-runs her tests, getting far fewer bugs this time. She closes out some open bug work items and generates a few new ones. Reports run later that day show a sharp drop in the number of bugs. Mandy and Hubert review what they've seen so far. Hubert makes some adjustments to the unit tests to accommodate Mandy's bugs. He checks in all code, associating the unit test changes with Mandy's bug work items.

  • April 23rd

    Glenn calls a meeting with Jay and Robert to discuss testing the blog application with some actual users. To this point, testing has been performed mostly through automation or in whatever time the developers and testers could spare for functional testing. Ideally, Jay wants to have two groups of users: one internal and one external. Robert agrees that both groups are needed, so he starts organizing the beta testers. Glenn creates a few task work items: for Hubert and Mandy to create manual tests and documentation, for Jeff to prepare another release candidate build, and for Tim to enable the QA environment for both internal and external testing.

  • April 26th

    Mandy and Hubert get alert e-mails from Team System; these alerts are generated by the tasks that Glenn sent out on Friday. They spend the day creating manual testing scripts and documentation and then uploading them to the portal. All files are checked in. Jeff's alert instructs him to create .MSI packages and share them with Tim, who will deploy them to the new QA server he is setting up.

  • April 29th

    Jay, Robert, and Glenn review the beta tester documentation and run through a few of the manual test cases together; some changes are made to the documentation on the project portal. Glenn submits a task to Mandy to update her manual tests as well.

  • April 30th

    Primary coding is complete, and the blog application is feature complete. Unit tests, code coverage, load testing, and Web testing are all showing positive results. Amy and Donovan get pulled away to another project, leaving only Joe to make any final modifications.
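
Mandy's Web tests (see the April 19th entry) would normally be recorded in the Web test designer, but Visual Studio 2005 Team System also lets a recorded test be converted to a coded Web test. The following is a minimal sketch of that coded form; the class name and URLs are placeholders, not the real QA environment.

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // A minimal coded Web test sketch; URLs are illustrative placeholders.
    public class BrowseBlogWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Request the blog home page.
            yield return new WebTestRequest("http://qa-server/blog/default.aspx");

            // Follow a link to an individual entry.
            yield return new WebTestRequest("http://qa-server/blog/entry.aspx?id=1");
        }
    }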
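
The Web service split that Martin makes on April 20th could look roughly like the following pair of ASMX services, with read and publish responsibilities separated for interoperability and scalability. The class names, namespace URI, and methods are assumptions for illustration; the actual division of code is whatever Martin and Jeff decide.

    using System.Web.Services;

    // Hypothetical read-only service; names and members are illustrative only.
    [WebService(Namespace = "http://adventure-works.com/blog/")]
    public class BlogReaderService : WebService
    {
        [WebMethod]
        public string[] GetRecentEntryTitles(int count)
        {
            // Placeholder; the real implementation would call the data access layer.
            return new string[0];
        }
    }

    // Hypothetical publishing service, split out from the read-only service.
    [WebService(Namespace = "http://adventure-works.com/blog/")]
    public class BlogPublishingService : WebService
    {
        [WebMethod]
        public void PublishEntry(string title, string body)
        {
            // Placeholder; the real implementation would validate and store the entry.
        }
    }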

Final Question

Have we built the scope envisioned in the business case and product vision statement?

Track Deliverables

  • Logical datacenter diagram, application diagram, system diagram, class diagram, code, changesets, builds, unit tests, load tests, Web tests, test results

Iteration 3: Stabilize and Deploy (3 weeks)

Stabilizing Track—May 3rd to May 21st (3 weeks)

The stabilizing track occurs when the team conducts testing on a solution whose features are complete. Testing during this phase emphasizes usage and operation under realistic environmental conditions. The team focuses on resolving and triaging (prioritizing) bugs and preparing the solution for release.

Storyboard

  • May 3rd

    The two testing groups meet at Adventure Works; one group is composed of various internal folks, and the other is a public group, friendly to the cause. All the testers are given user IDs and passwords to the testing environment. They also meet Mandy, who will be supporting the beta testing.

  • May 4th

    The two groups begin testing; they have many questions and begin finding bugs. Mandy enlists Hubert and Joe to help classify the feedback as bugs, feature requests, or simply lapses in the documentation. Joe makes any fixes that are warranted, checking in the code and associating it with the appropriate work items. Regression testing is performed on any code changes; this process continues for the next two weeks.

  • May 18th

    The entire team (Jay, Robert, Glenn, Jeff, Joe, Hubert, Mandy, and Tim) meets to discuss the progress of the beta testing. This triage session includes going through the feature requests from the users and deciding which are critical to this release and which can be pushed back to "version 2." Only a few features are deemed critical, and Joe assures everyone that he can complete the coding by close of business tomorrow. Everyone agrees that the product is stable enough to deliver and decides to shoot for a release this coming weekend. Jeff offers to come in over the weekend to help Tim deploy the application to the production datacenter. Glenn creates the appropriate work items for Joe's programming tasks and for Jeff and Tim's deployment tasks.

Final Question

Is the product stable enough to release, or will it be stable enough in the foreseeable future?

Track Deliverables

  • Bugs, code, changesets, test results

Deploy Track—May 22nd to 23rd (Weekend)

The deploy track occurs when the team deploys the core technology and site components, stabilizes the deployment, transitions the project to operations and support, and obtains final customer approval of the project. After the deployment, the team conducts a project review and a customer satisfaction survey.

Storyboard

  • May 22nd

    With the deployment report in hand, and the .MSI files on a public share, Jeff and Tim successfully install and configure the Adventure Works blog application on time, on budget, to specification, and to a high level of quality. Done by noon, they take the afternoon off and go golfing.


