Engineering Process Areas

Requirements Management and Requirements Development

The CAU project patterns its requirements-generation and evaluation process after the robust PASS requirements evaluation process. This accelerates the deployment of practices related to the Requirements Development and Requirements Management process areas.

USA and NASA work closely together on requirements elicitation, development, and management. For the CAU project, the Space Shuttle Cockpit Council, composed of USA representatives and shuttle astronauts, identifies the desired features to be included in the CAU system, particularly in the area of shuttle cockpit displays. The customer provides USA with data dictionary documents to ensure that a clear understanding of the requirements is communicated.

USA then transforms the Cockpit Council products into detailed software requirements. The highest-level requirements document is the System Specification Document. The lowest level of software requirements is documented in the Software Requirements Specification. Use cases are created to analyze and validate both the System Specification Document and lower-level specifications.

The project uses a tool to track traceability from the Software Requirements Specification to the System Specification Document. The same tool will also be used during design, code, and test to document traceability to the requirements. USA tracks metrics for requirements-generation schedules, issues, readiness status, and quality. The PASS project uses similar forums, such as technical mode teams, to formulate requirements concepts.
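
The case study does not describe the traceability tool itself. As a rough sketch of the kind of bidirectional trace such a tool maintains, the following hypothetical Python fragment (names such as SrsRequirement and untraced are illustrative, not part of the USA toolset) links each Software Requirements Specification item to its parent System Specification Document item and to downstream design, code, and test artifacts, and flags items that are not yet fully traced:

    # Hypothetical sketch of a requirements traceability matrix: each Software
    # Requirements Specification (SRS) item traces up to a System Specification
    # Document (SSD) item and down to design, code, and test artifacts.
    from dataclasses import dataclass, field

    @dataclass
    class SrsRequirement:
        req_id: str                      # e.g., "SRS-1042"
        parent_ssd_id: str               # e.g., "SSD-17"
        design_refs: list = field(default_factory=list)
        code_refs: list = field(default_factory=list)
        test_refs: list = field(default_factory=list)

    def untraced(requirements):
        """Return SRS items missing a downstream trace to design, code, or test."""
        return [r.req_id for r in requirements
                if not (r.design_refs and r.code_refs and r.test_refs)]

    reqs = [
        SrsRequirement("SRS-1042", "SSD-17",
                       design_refs=["DS-3.1"], code_refs=["display_mgr"],
                       test_refs=["L6-214"]),
        SrsRequirement("SRS-1043", "SSD-17"),   # not yet traced downstream
    ]
    print(untraced(reqs))                        # -> ['SRS-1043']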

Scenario reviews of preliminary requirements concepts are conducted prior to formal requirements inspections. The purpose of the scenario review is to present community-identified scenarios to increase the team's understanding of intended functionality and usage and to develop other scenarios that might have been overlooked by the technical community. The review is designed to stimulate thought about scenarios to provoke "what-if" analysis. The scenario review is also designed to help increase the understanding of the requirements and to reduce the number of scenario-related discrepancies in requirements and design/code. The review includes scenarios from the following perspectives:

  • User: Overview of impacts on operational scenarios and the user interface

  • Performance: Overview of intended vehicle-performance impacts and algorithmic accuracy requirements

  • Software/System Integration: New interfaces and interactions

  • Off-nominal: System response to failures and how it might be used for secondary purposes

  • Known discrepancies and maintenance traps: Explanations of known maintenance traps associated with the software areas being updated

  • Applicable user notes: Insight into potential traps and possibly an opportunity to retire some nuisance user notes

The case study team identified a gap: the requirements-generation process and procedures are deficient. While the evaluation, inspection, approval, and change-management aspects of requirements are covered in depth, there are limited guidelines for actually writing the requirements. An existing process improvement action plan focuses on requirements-writing guidelines and the deployment of a corresponding course.

All requirements must go through formal requirements inspection. To ensure their commitment to the requirements, project participants attend the requirements inspections. Requirements inspections identify issues and errors with the documented requirements. After sections of the requirements are formally inspected and issues resolved, the requirements sections are submitted to control boards for approval.

In summary, projects' processes are in place to elicit customer requirements, translate those requirements into product requirements, obtain commitments to requirements, analyze and verify requirements, and maintain and control those requirements.

Technical Solution

Operational concepts, alternative solutions, and architectures are developed, and make/buy/reuse decisions are made, in the early phases of a program; these typically require minimal adjustment during maintenance.

The PASS project has been in existence for more than 20 years and operates in a predominantly maintenance mode. The project performs design maintenance activities for each proposed software change. However, specific goals 1 and 2 of the Technical Solution process area are geared toward new projects in their early stages and do not seem well suited to the PASS long-term maintenance project.

For the CAU project, the decision to acquire two parts of the software through a subcontract agreement was the only item described in the Technical Solution process area that was applicable at the time of the case study. Many of the practices called for by the Technical Solution process area will be more beneficial to the CAU project during its design phase.

For make/buy decisions, white papers are developed and reviewed to scope the software development effort. A make/buy decision panel is formed in accordance with the USA make/buy process.

One noted deficiency of the design process was that the documented rationale for design decisions was weak (i.e., not comprehensive or consistently applied). However, projects develop designs, code, and end-use documentation using defined standards, guidelines, and processes.

Product Integration

Detailed procedures are used for integrating software components as part of the project's build process. The build function is a highly tooled process that is controlled by the overall configuration management system and is managed through project schedules by the project Baseline Control Board.

The planning performed for each software release includes an integration strategy that specifies the software functions to be implemented for each build, the ordering, and the component dependencies. Tools support the detailed interface documentation.

The development load modules and the build release testing ensure the integrity of the initially compiled units. The detailed product specifications and delivery procedures support the packaging and delivery of the final software assembly.

Verification and Validation

The objective of software system verification is to perform an independent test, analysis, and evaluation of the software to ensure full conformity to specification and satisfaction of operational needs. Within the PASS verification/validation process, four levels of testing are performed: subsystem testing (Level 3), detailed/functional testing (Level 6), performance testing (Level 7), and reconfiguration testing (Level 8). USA uses the term verification for all four levels of testing; however, Level 7 and Level 8 verification activities represent validation as that term is used in the Validation process area. The testing cycle for the PASS project is illustrated in Figure 7.2.

Figure 7.2. PASS Software Testing


Detailed/functional testing verifies that the software system meets all specified requirements. The testing is organized by functional area along the lines of the software requirements. In addition to dynamic testing, complementary static analyses may be performed. Examples of static analyses are multipass analysis and standards compliance audits. Level 6 verification focuses on verifying the letter of the requirements.

For each software system, verification plans define the planned testing, scenarios, and test environments. Detailed plans are established and maintained for the verification and validation of selected work products. For Level 6 verification, the verification test procedures describe each detailed functional test to be performed. They are based on the requirements but also consider software design decisions and test facility capabilities. Each verification test procedure is reviewed prior to final testing.

Performance testing, on the other hand, is directed at how the software system as a whole performs in nominal, off-nominal, and stressed situations. Performance testing is organized primarily by flight phases and attempts to duplicate as closely as possible actual trajectory and environmental conditions. Level 7 testing concentrates on demonstrating that the software produces the functional intent of the requirements and meets the desired performance constraints. Level 7 tests are performed by simulating user operational scenarios.

For Level 7 verification, pretesting, analysis, and planning activities culminate in the documentation of the verification approach in the performance test plan. The performance test plan includes the overall testing objectives and the set of individual performance verification test specifications. It defines test requirements based on the requirements and interface control documents and provides a functional, narrative description of the testing to be conducted. Included in the performance test plan are (a) identification of the requirements against which each computer program end item will be tested, (b) the methods employed in testing those requirements, and (c) the selected success criteria.
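
As an illustration only, the three elements listed above could be captured as a structured record per performance verification test specification; the field names and values below are hypothetical, not taken from the project's actual plan:

    # Hypothetical sketch of one performance verification test specification
    # entry, capturing the three elements the plan is said to contain:
    # requirements tested, test method, and success criteria.
    from dataclasses import dataclass

    @dataclass
    class PerformanceTestSpec:
        spec_id: str
        requirements_tested: list   # (a) requirements against which the end item is tested
        test_method: str            # (b) method employed in testing the requirements
        success_criteria: str       # (c) selected success criteria

    ascent_spec = PerformanceTestSpec(
        spec_id="L7-ASCENT-01",
        requirements_tested=["SSD-17", "SSD-23"],
        test_method="Closed-loop simulation of a nominal ascent trajectory",
        success_criteria="Guidance errors remain within specified tolerance bands",
    )
    print(ascent_spec.spec_id, ascent_spec.requirements_tested)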

Level 8 testing consists of a standard set of verification test cases that are executed against the software after it has been reconfigured to support the specific mission objectives. This testing is performed according to predefined schedule templates for each flight.

Additional validation is performed after the software release. During this testing phase for the PASS project, the software is integrated as an element of the overall avionics system. Testing and integration are performed in the Shuttle Avionics Integration Laboratory. This laboratory contains avionics hardware and software systems as well as environmental and vehicle models, forming a closed-loop system. The purpose of this validation testing is to ensure that all subsystems interface correctly and that the system functions properly when integrated.

Quality control of testing is a key element of the verification process. Measures taken to ensure quality include the use of checklists, audits, and test coverage analysis of both requirements and code. Furthermore, reported software anomalies not found by verification testing are analyzed and reviewed with management in an effort to identify and correct deficiencies of the verification process.
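
The coverage tooling is not described in the case study. A minimal sketch of requirements coverage analysis, assuming each executed test case records the requirement identifiers it exercises, might look like this (all data is made up for illustration):

    # Illustrative requirements-coverage check: flag any requirement that no
    # executed test case claims to exercise.
    def uncovered_requirements(all_requirements, executed_tests):
        covered = {req for test in executed_tests for req in test["covers"]}
        return sorted(set(all_requirements) - covered)

    requirements = ["SRS-1042", "SRS-1043", "SRS-1044"]
    executed = [
        {"test_id": "L6-214", "covers": ["SRS-1042"]},
        {"test_id": "L7-ASCENT-01", "covers": ["SRS-1042", "SRS-1044"]},
    ]
    print(uncovered_requirements(requirements, executed))   # -> ['SRS-1043']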

In addition to the testing, rigorous formal inspections are performed for most project work products. Formal inspections are conducted on identified work products (e.g., requirements, design, code, development test scenarios) and issues are identified that must be addressed before the inspection can be closed.

Preparation for inspections includes developing a schedule for conducting reviews (inspection prescheduling) and defining required lead time for preparation. Inspection moderators verify adequate preparation of inspectors at the start of each inspection. Criteria are defined and followed for entry and exit requirements, re-review thresholds, issue closure, required participants, and responsibilities of each role.

The inspection technique has proved to be the bread and butter of the projects' processes; in the words of the associate program manager, "PASS depends heavily on this technique throughout the project life cycle for defect identification." The CAU project similarly plans to adopt formal inspections for its work products and is currently applying them to requirements products. Because of the importance placed on this verification method, the project requires every participant to complete a mandatory computer-based training course on the inspection process.

For each software system, formal verification begins with the completion of software development marked by the First Article Configuration Inspection milestone. Tests are executed and analyzed in accordance with the approved performance test plan, verification test procedure, and project standards and procedures. In addition, desk analysis of noncode change items, including documentation-only items, tool inputs, and review of program notes, is performed in accordance with procedures.

Software errors identified by testing are analyzed, formally documented as discrepancy reports, and tracked in the configuration management system. Regression testing is scheduled, as required, to reverify software areas affected by program corrections.
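
The case study does not detail how affected software areas are mapped to regression tests. A minimal sketch of that kind of selection, assuming each test case in a catalog is tagged with the functional areas it exercises, could look like the following (all identifiers are hypothetical):

    # Hypothetical regression-test selection: given the functional areas affected
    # by a program correction, pick the test cases tagged with any of those areas.
    def select_regression_tests(affected_areas, test_catalog):
        areas = set(affected_areas)
        return [t["test_id"] for t in test_catalog if areas & set(t["areas"])]

    catalog = [
        {"test_id": "L6-214", "areas": ["guidance"]},
        {"test_id": "L6-307", "areas": ["displays"]},
        {"test_id": "L7-ASCENT-01", "areas": ["guidance", "navigation"]},
    ]
    print(select_regression_tests(["guidance"], catalog))   # -> ['L6-214', 'L7-ASCENT-01']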

Results of verification testing are reflected in verification test reports. All test results are reviewed and approved prior to the software delivery milestone at configuration inspection.

The appropriate technical community and NASA participate in the development of the test plans and the review of the test results.

Many tools, such as the simulator/models, test analysis and reporting tools, and test status and tracking tools, are used to support the verification processes.

As with the other engineering process areas, the outputs of the processes (work products) are declared as quality records, placed under configuration management, and handled according to detailed NASA records retention requirements.

Because the USA policy was formulated using the SW-CMM as its process-improvement model, the policy does not yet fully address the engineering process areas. The corresponding company assets, such as the standard software process, likewise do not yet fully address these areas. For example, there is no company-level, documented, standard integration process; this will be one of the areas to address.


