Real-World Example: Heavyweight, Middleweight, and Lightweight Development Efforts

The case we will discuss was a real project that was extremely large. It included many subcomponents that were each large systems in their own right. These systems all had their own development groups, platforms, hardware, and so on. In the real project, the main integration effort was responsible for integrating 12 major systems. For this book I have selected three of these projects that were particularly good examples of what I characterize as heavyweight, middleweight, and lightweight development efforts.

The heavyweight project was an old-style mainframe application. The middleweight project was an object-oriented multi-tier client/server application. The lightweight project, my personal favorite, was a first-generation Web application that migrated an old proprietary document management behemoth to a cutting-edge standards-based architecture. Each of these efforts used MITs methods somewhere in their process. You will see many of the examples in this book taken from these projects.

Overview of the Project

This 1998 project involved two shipping companies. One company was buying the other and integrating the assets, employees, customers, and business systems of the two. The day-to-day business conducted by the companies was highly dependent on some very large, complex scheduling systems. The object of this project was to integrate the business-critical scheduling system of the parent company with that of the acquired company.

The scheduling system was linked to several other systems for the purposes of sharing or accessing data and business logic. Most notably in this example, scheduling accessed the billing system and the automatic document generation system. It performed tasks like checking schedules against contract agreements and generating documents, which included everything from weight tables to contracts, bills of lading, and invoices. These other systems were also being modified or created and then integrated as part of the buyout. The integration of all of these systems was in the scope of the overall integration test effort.

The various development groups at the parent company operated in what is called a silo structure. That is, each one was autonomous with its own budget, management, programmers, culture, and methodology. Integrating the many systems that ran the business had been accomplished one at a time over a period of years. The interaction of these many systems was only partially understood in any given silo.

To minimize the impact on the day-to-day business activities, upper management wanted all existing and new systems to come online at virtually the same time. The buyout and the subsequent integration of the acquired business was to be undertaken as a big bang effort with all the new systems scheduled to come online on the same day; this first day was called the "split" date. The integration test effort was chartered to make this big bang a success from the very first day.

The system integration effort was to be conducted by an independent group reporting to their own vice president in charge of split day integration. They were to have the full cooperation and support of the operations group and all the development groups. They were responsible for planning and designing a test effort that would integrate and test all of the new systems. The system integration team came from many different parts of the company and various contract testing organizations.

A change control board was formed, and procedures were written to control the migration of code, modules, and systems from development through the test environments and finally into production. Operations supplied the test systems and the support personnel to maintain these test systems. The membership of the change control board included the directors of each development silo and operations.

The Scheduling System

The scheduling system processed millions of transactions each day. There were several major subsystems. For example, there were hundreds of freight containers being tracked by the system at any given time, not all of them belonging to either the parent company or the acquired company. The contents in these containers could be almost anything that was not perishable. Central to the integration effort was the system that actually scheduled the freight containers to be in a certain place by a certain time so that they could be routed onward. There was a subsystem that tracked the engines that moved the containers, as well as subsystems that scheduled maintenance for both the freight containers that belonged to the company and the engines.

Failure of any part of this scheduling system was considered unacceptable. Further, the scheduling system had to be 100 percent available, and so was fully redundant. The production system and data center were "owned" by the operations group and staffed by some of the most senior technical staff in the company.

The scheduling system ran in an expanded high-reliability mainframe environment. The project did not include any architectural changes to the platform or hardware environment other than the expansion of data storage capacity and the addition of processing power to handle the predicted new loads.

The company commissioned a complete analysis of the data center under the existing and the projected new loadings. The analysis was conducted by the mainframe vendor, and the system modifications were made in accordance with their recommendations. The newly expanded system was put into service eight months before the expected big bang on the split date. Three integration test systems were also constructed and provisioned at the same time.

The scheduling system project had in excess of 160 individual Project Development Requirement documents (PDRs); each was a development effort in its own right, with its own budget and developers. The senior developers were all Subject Matter Experts, or SMEs, in one or more areas of the system. These SMEs typically participated at some level in the development of all PDRs in their area of expertise. Experienced programmers wrote most of the code, and junior programmers performed the testing and maintained the documentation.

Testing the Scheduling System

The scheduling system was implemented using a traditional plan-driven approach with a rigorous contract- and requirements-based development process, overseen by outside auditors. It was built and tested using traditional formal requirements documents, quality assurance practices, and change management. A multiphase bottom-up approach to testing was used. Code reviews, unit testing, module testing, and preliminary system testing were all conducted by developers prior to turnover to the change management board for integration testing. The methodology was about as heavyweight as it gets, except that there was no independent test group prior to the systems integration phase. No metrics were made available on these test phases.

Best practice was observed throughout, and requirements were not allowed to creep; only minor corrections were accepted by the change board. The test inventory for the integration test included thousands of test cases.

The original system integration test effort was scheduled to take nine months. The system did deploy on time but with a reduced feature set. Testing and integration continued for an additional 18 months before the parent company had actually integrated the acquired scheduling system. The end cost was 300 percent greater than the original budget. No serious failures occurred in the deployed production system. We will discuss various aspects of this test effort and of the integration test effort throughout the rest of this book.

The Billing System

The parent company's billing system was being modified to accommodate the new billing rates from the acquired company, the contractual differences between the two companies, and the predicted higher systems requirements due to the increase in volume. Each company had a proprietary system in place. Consequently, the database schema and structure of the two systems were quite different. This meant that all the data (an enormous amount of data) from the acquired company would have to be normalized, or converted into an acceptable form (or both), and then assimilated by the parent system in a short time. Failure to convert and integrate the billing data would translate into millions of dollars in lost revenue for the parent company and possible losses to the customers as well.

Because of United States antitrust laws and other legal constraints, the parent company could not start processing any data from the acquired company before the actual acquisition. The parent system was required to process and incorporate all the billing data of the acquired company within a 30-day window once the acquisition was final.

Several special data conversion programs were written and tested to prepare for the acquisition date. These were tested first by the developers and then by a special integration team that specialized in testing all types of data and message flows into the new system.

The billing system itself was an object-oriented, multi-tier client/server application system. The billing system projects were developed using Dynamic Systems Development Method (DSDM).

start sidebar
My Perspective on DSDM

DSDM is a descendant of RAD that was developed in the United Kingdom. It uses an iterative approach to develop each phase of the product: functional model, design and build, and implement.

I consider DSDM to be a good middleweight methodology. It is usually listed with the Agile technologies, but I would describe the DSDM projects I have worked on to be flexible, plan-driven, highly disciplined, and well-documented efforts, which also feature some of the best trained professionals of any development or test effort. Based on my experiences, I would describe DSDM as a very effective methodology for bringing middle-sized, object-oriented, business-critical projects to completion on time.

end sidebar

The system used a SQL-based relational database management system (RDBMS). The business logic layer ran on servers built on the UNIX platform. The clients ran on high-end PCs running the Windows Workstation operating system. The clients required standard IP LAN network connections to the server system.

Testing the Billing System

The billing system was tested using a planned incremental delivery top-down approach by the test group inside the billing silo. Major features of the new billing system were delivered to the change board and integrated one at a time by the integration test group. The code was assumed to be ready to be integrated when it was delivered. There were only a few occasions when the code was turned back for failure to pass a smoke test.

An independent contract test organization was hired to perform the system testing for the project. There was one tester for every two or three developers. In addition to evolving a very comprehensive top-down test suite, these testers successfully implemented a large number of automated test suites that ran from the PC client environment. Some of these automated suites were also used by the integration test team and in production after the live date.

Bug reporting was accomplished in a private SQL-based reporting facility that lived in the billing silo and was visible only to project personnel. The business partners, who were the customers of the system, reviewed the system at milestones. If they had issues, they reported them to the testers, who then logged the issues.

Some flexibility was allowed in the systems integration testing phase, since the original plan had called for testing the billing system against the new scheduling system currently in the cleanest integration test environment. This was not possible, because the billing system was delivered on time, while the scheduling system lagged behind. So, the billing system arranged to set up their own integration test system with feeds from the real production system. There were several plan changes of this type. The eventual cost of the test effort was higher than planned, but the time frame for delivery did not slip. The test inventory included 80 PDRs and hundreds of test cases. One serious program failure did occur with the billing system, but it did not impact any scheduling system functions.

There were, however, serious deployment problems with the client application at the acquired company field sites due to the lack of a LAN network infrastructure and, more importantly, due to the older DOS-based equipment at the field offices where the client would need to run.

Various PDRs covered the hardware upgrade issues, but the human factors turned out to be the limiting factor. Newly assimilated personnel had their hands full simply getting used to all the changes in their daily routines without having to learn how to operate an entirely new mouse-driven computer system and billing application. For example, if the bill of lading for a freight container was not present when the cargo was loaded, the freight container could not be moved. The yard operator had to wade through some 50 menu choices before they could print the bill of lading for a particular container to a network printer (located they weren't sure where). In the first weeks after the split date, the client-side failure rate was significant.

The Document Generation System

The documentation generation facility was a new Web-based system that was being implemented using an eXtreme approach (although the method was not called by a particular name at the time).

Management had agreed to try a new RAD approach to developing this system because it was Web-based and it relied on several new database technologies that had not been proven in commercial environments. It was recognized that there were substantial risks in trusting this new technology. It was also recognized that even though the end product documents were well understood, no one really knew what the final system would look like or how it would perform the task of generating these documents. The business partners had a long list of things that they did not like about the legacy system, and that list, coupled with the required end product documents, served as the beginning requirements for the project.

Management understood that a flexible, lightweight approach to design and development with a heavy emphasis on prototyping would be most likely to succeed in this environment. The director, whose brainchild it was, was given a free hand to organize her resources any way she chose. The code developed and tested by her group joined the traditional process-oriented integration effort when it was turned over to the change control board. From that point forward, all the code, modules, databases, and systems from all the projects were subject to the same rules and procedures.

Testing the Documentation Generation Facility

The documentation generation facility was developed using a methodology that was very similar to what is now called eXtreme Programming (see the following sidebar). The development community insists that individuals following the latest Agile approaches, like eXtreme Programming, are not abandoning discipline, but rather the excessive formality that is often mistaken for discipline; even so, it is not hard to imagine how a method of this type could become an "I-feel-lucky" approach. In this case, however, it did not. This project was so remarkable that I published two papers on it in 1998 and 1999: "A Team Approach to Software Development" and "The Team with the Frog in Their Pond."

start sidebar
eXtreme Programming (XP): A Thumbnail Sketch

XP is an iterative development methodology that is based on 12 basic tenets:

  1. Customer is at the center of the project.

  2. Small releases.

  3. Simple design.

  4. Relentless testing.

  5. Refactoring (adjust code to improve the internal structure, make it clean and simple, remove redundancies, etc.).

  6. Pair programming.

  7. Collective ownership.

  8. Continuous integration.

  9. 40-hour work week.

  10. On-site customer.

  11. Coding standards.

  12. Metaphor (development is guided by a shared story of how the system will work).

XP is a "small project" methodology; it's not known to scale well (6 to 12 developers is considered ideal).

Source: Adapted from Kent Beck, Extreme Programming Explained: Embrace Change (Addison-Wesley, 1999).

end sidebar

The project was housed on one whole floor of a large building. The only rooms with doors were the bathrooms. All team members were given cubicles. Each developer was teamed with a tester and a business partner (the customer). This unit was called a feature team. The entire project had 10 to 15 feature teams. Feature teams working on related PDRs became a cluster. Each cluster had a dedicated leader who was responsible for administrative and reporting tasks, along with escalation activities if they were needed.

The cubicles of a feature team were always touching. If one team member wanted to talk to another member of his or her team, all he or she had to do was stand up and look over the cubicle wall. How a feature team used their three cubicles was up to them. Some teams chose to remove the dividers altogether so that they could share one large space.

Each developer submitted new code to the tester and business partner as soon as it became available. The tester was responsible for integrating the new code into the larger documentation system and for testing (finding bugs in) the code. The business partner was responsible for validating the functionality of the code. If the business partner didn't like the way the user interface worked or how the workflow process worked, they reported it as a bug. There weren't very many arguments about "is it a bug or not?" Because of the proximity of the team, most solutions were devised and implemented so quickly that there was no need to log the bug.

Status meetings were held every morning. All members of the feature team were jointly responsible for meeting the delivery schedule and reporting progress.

Even though the development methodology was one that could easily lack discipline, these teams successfully implemented some of the most difficult of the MITs' methods and metrics during the effort. They were able to implement and use S-curves to track their progress and estimate when the code was ready to deliver, a task that requires a high degree of maturity. They credited these methods with keeping them on track, preventing excessive feature creep, and providing an excellent communications tool for upper management. However, the methods could not have succeeded without the gifted and insightful management that governed the project.
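
An S-curve of this kind is essentially a running total plotted against time: cumulative tests attempted and tests passed ramp up slowly, climb steeply through the middle of the effort, and flatten as only the hardest cases remain. The following is a minimal sketch, not taken from the project, showing in Python how daily pass counts from an inventory might be accumulated and a delivery date projected from the recent rate; the dates, counts, and three-day projection window are invented for illustration.

# Minimal sketch (illustrative data, not the project's): an S-curve of
# cumulative tests passed, with a naive projection of the delivery date.
from datetime import date, timedelta

# Daily counts of tests passed, as logged from the test inventory.
daily_passed = {
    date(1998, 3, 2): 4,
    date(1998, 3, 3): 9,
    date(1998, 3, 4): 17,   # the steep middle of the S
    date(1998, 3, 5): 14,
    date(1998, 3, 6): 8,    # the tail: fewer new passes as hard cases remain
}
total_planned = 80          # total tests in the inventory for this feature

cumulative = 0
points = []
for day in sorted(daily_passed):
    cumulative += daily_passed[day]
    points.append((day, cumulative))
    print(f"{day}  passed so far: {cumulative:3d} / {total_planned}")

# Naive readiness estimate: extrapolate the average rate of the last three days.
recent = points[-3:]
rate = (recent[-1][1] - recent[0][1]) / (recent[-1][0] - recent[0][0]).days
remaining = total_planned - points[-1][1]
if rate > 0:
    eta = points[-1][0] + timedelta(days=round(remaining / rate))
    print(f"Projected completion at the current rate: {eta}")

A flattening curve, and a slipping projected date, are immediately visible on a plot like this, which is part of what makes the S-curve useful as a communications tool.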

The team added functions to the test inventory so that it would support several innovative ways to leverage the work they had put into it. They added columns to track when tests were run and categorized tasks so that sort routines could automatically generate lists of tests for the testers to perform when that code was being integrated. These "test lists" were what I would call test suites of related short-form test scripts, with blank spaces for the testers to add notes and outcomes. The forms were filled out as the tester executed the various tests, and at the end of the day, they had a complete log of their testing activities. Record keeping was a matter of putting a check mark and the outcome in the date column of the main online test inventory and logging any open issues.
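
The sort-and-checklist idea is easy to picture in code. The following sketch is hypothetical: the project's inventory was an online document rather than a program, and the field names, test IDs, and area labels here are invented. It only illustrates the filter-and-log pattern described above.

# Hypothetical sketch of the inventory's filter-and-log pattern; the fields,
# IDs, and area names are invented for illustration.
from datetime import date

inventory = [
    {"id": "DOC-031", "area": "weight tables", "test": "Print weight table", "runs": []},
    {"id": "DOC-047", "area": "bills of lading", "test": "Generate bill of lading", "runs": []},
    {"id": "DOC-052", "area": "bills of lading", "test": "Reprint to network printer", "runs": []},
]

def checklist(area):
    """Return the test list for the code being integrated today."""
    return [item for item in inventory if item["area"] == area]

# The tester works through the list, adding notes and outcomes as they go.
for item in checklist("bills of lading"):
    print(f"[ ] {item['id']}  {item['test']}")

# End of day: the check mark and outcome go into the date column of the
# main inventory, and any open issues are logged.
def log_result(test_id, outcome, notes=""):
    for item in inventory:
        if item["id"] == test_id:
            item["runs"].append({"date": date.today(), "outcome": outcome, "notes": notes})

log_result("DOC-047", "pass")
log_result("DOC-052", "fail", "printer not reachable from the test client")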

This effort also developed an excellent, highly visible, and fast method of tracking bugs, as well as a method that prioritized bugs based on their cost to fix and severity. They called this the "Z form for bug ranking." (I discuss this methodology in the paper, "The Team with the Frog in Their Pond.")

Briefly, bugs were entered into a lightweight, LAN-based, commercially available bug tracking system, but more importantly, open issues (bugs) were posted on the wall. Each feature team had its own poster on the main hall wall, showing a frog sitting on a lily pad on a pond. Bugs were posted in one of four quadrants in the pond that represented the severity of and cost to fix the bug. Everyone knew where the bugs were all the time-how many and how serious. If a group needed help, they got it, right away.
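
The paper describes the Z form itself; the sketch below is only my own illustration of the quadrant idea, with invented thresholds, labels, and bug IDs. The point is that a bug's position on the poster follows directly from its severity and its estimated cost to fix, so anyone walking past could see how many serious, expensive bugs a team was carrying.

# Illustrative sketch of the quadrant idea behind the pond posters; the
# thresholds and labels are assumptions, not the project's actual scale.

def pond_quadrant(severity, cost_to_fix_days):
    """Place a bug in one of four quadrants by severity (1 = worst, 4 = minor)
    and estimated cost to fix in days."""
    serious = severity <= 2
    expensive = cost_to_fix_days > 2
    if serious and expensive:
        return "Q1: serious and costly (fix first, get help if needed)"
    if serious:
        return "Q2: serious but cheap"
    if expensive:
        return "Q3: minor but costly"
    return "Q4: minor and cheap"

open_bugs = [
    {"id": "FROG-12", "severity": 1, "cost": 5},
    {"id": "FROG-19", "severity": 3, "cost": 0.5},
]
for bug in open_bugs:
    print(bug["id"], "->", pond_quadrant(bug["severity"], bug["cost"]))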

The original test inventory for the documentation facility included 100 PDRs. The final test inventory included 110 PDRs. No tests were written in advance. Test checklists were generated from the PDR-based inventory and were used by everyone testing. Once a feature or function had been tested and passed, it was checked off.

The system suffered several setbacks due to poor performance from the SQL database system that had been chosen to support the Web-based logic. This was a result of overoptimistic claims made by the database manufacturer and the newness of the entire architecture. To solve the problems, database programming consultants were hired to write and optimize SQL queries and stored procedures to bypass the poor-performing features of the database. The documentation generation facility was delivered slightly over budget, but complete and on time with some known performance issues.

The other major challenge that they faced was an integration issue that involved code collisions in the business logic of a major shared component in the scheduling system. The component was a business rule nexus used by several of the different groups. The silos rarely interacted, and so the integration team had to intervene and arbitrate a set of procedures to govern this type of component so that all changes to it were coordinated. The change management group was able to control versioning so that the collisions could be identified and minimized.

Integrating the Entire System

One internal group and two different consulting firms were invited to bid on the systems integration test effort. The internal group had experts from all the major silos on their team and boasted the most technically competent group of experts extant for the systems that would be integrated. However, they lacked formal testing experience and integration experience outside their own silos.

One of the consulting firms proposed a RAD-oriented, "test fast and furious" approach, and the other firm proposed a fairly traditional top-down, risk-based approach to the integration effort. Management inside the silos feared that the traditional approach would not be able to work quickly enough or be flexible enough to accomplish the mission and voted for the RAD-oriented approach to testing.

Upper management at both companies felt that the RAD approach could not provide as stable a system as the more conservative risk-based approach. They opted to go with the risk-based approach because, first, they felt it would provide better assurance that there would not be any major failures when the system went live and, second, they felt that the traditional risk-based approach was more defensible as a best-practice approach in the event that a failure did occur.

This integration effort was huge, and it used much of the MITs methodology. I will be giving detailed examples from this effort throughout the rest of the book. Here are some of the main points.

When the initial test inventory was submitted by the risk-based test group, it became clear that the internal group of systems experts had only estimated about 30 percent of all integration tasks in their scope. Further, they did not have the testing resources or expertise to accomplish the integration of all the systems in the given time frame. However, no one wanted to deprive these experts of the opportunity to test the systems in their scope, so upper management opted to run both integration efforts in parallel. The experts tested the message flows to whatever level of detail they deemed appropriate and necessary.

The integration test group developed a master test plan for the effort. A major component of the master test plan was the test inventory. The test inventory was prepared, and the items were prioritized and cross-referenced. Sections of this inventory are discussed at length in Chapter 7, "How to Build a Test Inventory." This inventory was used by all groups for several purposes.

Initially, the inventory was used to gather information about test needs and relative priorities. The interview process was used to gather information from the various development groups. The idea was to establish the risks associated with each project directly from the developers. In reality, it didn't work quite like that. The risk analysis process for this project was a real learning experience, and we will talk about it in detail in Chapter 7 and Chapter 9, "Risk Analysis."

When the director of the integration test effort was named, he declared that the inventory was the most important single document in the effort. It became the basis for reporting and tracking throughout the integration effort. Also, and particular to this effort, the inventory became the source of delivery dependency information, contact/owner information, relative priority information, and much, much more.

Because of the size and complexity of the effort, no one had ever constructed a single Gantt chart that all groups could agree upon. The closest thing to it lived on the wall in the SME integration test room, but it covered only the modules in their scope, about 30 percent of the project. Also, when code delivery dates changed, there was no way to update a Gantt chart conclusively, nor could anyone guarantee that all involved parties would see it.

The inventory provided a solution to this problem that ensured that delivery-driven dependencies would be called out and noticed. We will talk about this in more detail in Chapter 7 when we discuss the interview process.

A change management board was created and tasked with overseeing and approving each movement of the code through the various integration test systems. Each new module passed a unit test phase and function test phase before being accepted into the integration test phase. Three different integration test systems were maintained throughout the effort, each one progressively cleaner and more a mirror of the actual production environment. Integration testing was conducted using captured production data and a past environment clock.

Because of the distributed nature of this integration effort, or perhaps simply because of the plan-driven nature of the effort, higher reporting functions and more complex metrics were never used. Status reporting was done using the inventory, not using the S-curves.


