Evaluating Application Systems


The IS auditor should review the application system to ensure the control aspects of the system development and implementation process meet the needs of the users as well as management objectives. IS auditors performing this audit should be independent and should not be involved in the system-development process. The IS auditor should assess whether the prescribed project management, SDLC, and change-management processes were followed. The IS auditor should ensure that proper testing procedures were used to validate user requirements and that the system's internal controls are working as intended.

Application Architecture

In addition to ensuring that systems within the application architecture meet the requirements and objectives of the organization, the IS auditor should review them to ensure that controls protect the confidentiality, integrity, and availability of both applications and data. In reviewing the application architecture, the IS auditor should confirm that application controls ensure that information stored, delivered, and processed by applications is complete, accurate, and authorized. In developing control objectives, the IS auditor should keep in mind the following control categories:

  • Security

  • Input

  • Processing

  • Output

  • Databases

  • Backup and recovery

Section AI5 of COBIT ("Install and Accredit Systems") provides specific activities that an IS auditor should perform to ensure the effectiveness, efficiency, confidentiality, integrity, availability, compliance, and reliability of the IT system. Accreditation is the process by which an organization, using internal staff or third parties, verifies that adequate security and controls exist for its IT services or systems. The COBIT activities associated with this objective are listed next.

The IS auditor should obtain the following:

  • Organizational policies and procedures relating to system-development life cycle planning

  • IT policies and procedures relating to security policies and committees; systems-development life cycle planning; systems-development testing procedures for programs, unit, and system test plans; training of users; migration of systems from test to production; and quality assurance and training

  • System-development life cycle plan and schedule, and system-development life cycle programming standards, including the following:

    • Change-request process

    • Sample system-development effort status reports

    • Post-implementation reports from earlier developmental efforts

In reviewing this information, the IS auditor should consider whether the following are true:

  • Policies and procedures relating to the system-development life cycle process exist.

  • A formal system-development life cycle methodology is in place for system installation and accreditation, including, but not limited to, a phased approach covering training, performance sizing, a conversion plan, testing of programs, groups of programs (units), and the total system; a parallel or prototype test plan; acceptance testing; security testing and accreditation; operational testing; change controls; and implementation and post-implementation review and modification.

  • User training as part of each developmental effort is occurring.

  • Program and system controls are consistent with security standards of the organization and IT policies, procedures, and standards.

  • Various development, test, and production libraries exist for in-process systems.

  • Predetermined criteria exist for testing success, failure, and termination of further efforts.

  • The quality-assurance process includes independent migration of developed systems into production libraries and verification that required user and operations groups' acceptance is complete.

  • Test plans for simulation of volumes, processing intervals, and output availability, installation, and accreditation are part of the process.

  • For a sample of system-development efforts, the associated training program covers the differences from the prior system, including changes affecting input, data entry, processing, scheduling, distribution, interfaces to other systems, and error handling and resolution.

  • Automated tools are used to optimize developed systems once they are in production and to identify efficiency opportunities.

  • Problem resolution occurs when system performance is less than optimal.

  • User involvement and formal approval exist at each phase of the system-development process.

  • A test plan is in place for programs, units, systems (including parallel or prototype), conversion, and implementation and post-implementation review.

  • Appropriate consistency is being maintained with security and internal control standards.

  • Appropriate data-conversion tasks and schedules exist.

  • Testing occurs independently from development, modification, or maintenance of the system.

  • Formal user acceptance of system functionality, security, integrity, and remaining risk has been obtained.

  • Operations manuals include procedures for scheduling, running, restoring/restarting, backing up/backing out, and handling error resolution.

  • Production libraries are physically and logically separate from development or test libraries.


Test and development environments should be separated to control the stability of the test environment.


Software Quality-Assurance Methods

The organization should ensure quality during the SDLC through testing. Testing should take place at a variety of levels and degrees, but the testing process should follow an accepted methodology. The testing process can follow either a bottom-up or a top-down approach. Bottom-up testing starts at the smallest level of the application (modules and components) and works up through the application until full system testing is completed. The advantage of bottom-up testing is that it provides the capability to test single modules or components before the system is complete and allows for the early detection of errors. Top-down testing is usually used in RAD or prototype development and provides the capability to test complete functions within the system. It also allows for the early correction of interface errors. The approach for testing should include the following:

  • Development of a test plan Should include specific information on testing (I/O tests, length of test, expected results).

  • Testing Utilizes personnel and testing software and then produces testing reports that compare actual results against expected results (a minimal sketch of this comparison follows this list). Testing results remain part of the system documentation throughout the SDLC.

  • Defect management Defects are logged and corrected. Test plans are revised, if required, and testing continues until the tests produce acceptable results.
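
To make the expected-versus-actual comparison concrete, here is a minimal sketch using Python's standard unittest framework. The calculate_gross_pay function, its pay rules, and the test values are hypothetical illustrations, not material from the CISA syllabus; the point is that each case pairs a documented input with its expected result.

  import unittest

  # Hypothetical payroll module under test; any small unit with a
  # defined input/output contract would serve equally well.
  def calculate_gross_pay(hours, rate):
      """Regular pay plus time-and-a-half for hours over 40."""
      overtime = max(0, hours - 40)
      regular = min(hours, 40)
      return regular * rate + overtime * rate * 1.5

  class GrossPayTestPlan(unittest.TestCase):
      """Each case pairs a documented input with its expected result,
      mirroring the I/O tests called for in the test plan."""

      def test_no_overtime(self):
          self.assertEqual(calculate_gross_pay(38, 10.0), 380.0)

      def test_overtime_boundary(self):
          # Exactly 40 hours: no overtime should be paid.
          self.assertEqual(calculate_gross_pay(40, 10.0), 400.0)

      def test_with_overtime(self):
          # 42 hours: 40 regular plus 2 at 1.5 times the rate.
          self.assertEqual(calculate_gross_pay(42, 10.0), 430.0)

  if __name__ == "__main__":
      unittest.main()  # the report compares actual against expected results

A failing case here would be logged as a defect, corrected, and the suite rerun until the results are acceptable, as the defect-management step describes.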

In addition to testing, the quality-assurance activities include ensuring that the processes associated with the SDLC meet prescribed standards. These standards can include documentation, coding, and management standards. The IS auditor should ensure that all activities associated with the SDLC meet the quality-assurance standards of the organization.


Using a bottom-up approach to software testing often allows earlier detection of errors in critical modules.


Testing Principles, Methods, and Practices

To ensure that applications function as expected, meet the requirements of the organization, and implement proper controls, they must be tested. These tests ensure that both the modules and the entire system function as designed and that individual modules will not malfunction, negatively affecting the system. Whenever an application is modified, the entire program, including any interface systems with other applications or systems, should be tested to determine the full impact of the change. The following are the general testing levels:

  • Unit testing Used for testing individual modules, and tests the control structure and design of the module. Unit testing pertains to components within a system; testing the interactions between application programs belongs to the interface and system levels.

  • Interface/integration testing Used for testing modules that pass data between them. These tests are used to validate the interchange of data and the connections among multiple system components (a brief sketch appears after this list).

  • System testing Used for testing all components of the system, and usually comprised of a series of tests. System testing is typically performed in a nonproduction environment by a test team.

  • Final acceptance testing Used to test two areas of quality assurance. Quality assurance testing (QAT) tests the technical functions of the system, and user acceptance testing (UAT) tests the functional areas of the system. These tests are generally performed independently from one another because they have different objectives.
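
As an illustration of the interface/integration level, the short Python sketch below passes a record from one hypothetical module to another and checks that the data survives the hand-off intact. The module names, field names, and values are assumptions made for this example.

  import unittest

  # Hypothetical hand-off: an order module produces a record that a
  # billing module consumes. Interface testing validates the exchange.
  def export_order(order_id, quantity, unit_price):
      """Order module: builds the record passed to billing."""
      return {"order_id": order_id, "amount": round(quantity * unit_price, 2)}

  def create_invoice(order_record):
      """Billing module: consumes the record from the order module."""
      return {"invoice_for": order_record["order_id"],
              "total_due": order_record["amount"]}

  class OrderToBillingInterfaceTest(unittest.TestCase):
      """Validates the interchange of data between the two modules,
      not the internal logic of either one."""

      def test_record_passes_intact(self):
          record = export_order("A-100", 3, 19.99)
          invoice = create_invoice(record)
          self.assertEqual(invoice["invoice_for"], "A-100")
          self.assertEqual(invoice["total_due"], 59.97)

  if __name__ == "__main__":
      unittest.main()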


Of all testing omissions, failing to perform user acceptance testing typically has the greatest negative impact on the implementation of new application software.


It is important for an IS auditor to know the testing levels as well as the different types of tests that can be performed. As part of an audit, the auditor should review the results of each type of test performed to ensure that controls are operating per the specification. As an example, you might observe that individual modules of a system perform correctly during development testing. You would then inform management of the positive results and recommend further comprehensive integration testing. Testing at different levels and with separate testing elements ensures that the system meets the detailed requirements, does not malfunction, and meets the needs of the users. The following are some of the specific types of tests that can be performed within the testing levels:

  • Whitebox testing Logical paths through the software are tested using test cases that exercise specific sets of conditions and loops. Whitebox testing is used to examine the internal structure of an application module during normal unit testing.

  • Blackbox testing This testing examines an aspect of the system without regard to the internal logical structure of the software. As an example of blackbox testing, the tester might know the inputs and expected outputs, but not the system logic that derives the outputs. Whereas a whitebox test is appropriate for application unit testing, blackbox testing is used for dynamically testing software modules (the sketch after this list contrasts the two views).

  • Regression testing A portion of the test scenario is rerun to ensure that reported bugs have been fixed and that changes or corrections have not introduced new errors or adversely affected existing system modules. Regression testing should reuse data from previous tests so that conclusions about the effects of changes are drawn against a known baseline (a brief sketch follows the note below).
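
To contrast the whitebox and blackbox views, consider the minimal Python sketch below. The shipping_fee function and its rates are hypothetical; the whitebox cases are chosen by reading the branch structure of the code, while the blackbox case is derived only from the published pricing rule.

  # Hypothetical function with two logical paths; names and rates
  # are illustrative only.
  def shipping_fee(weight_kg):
      if weight_kg <= 2.0:                       # path 1: flat fee
          return 5.00
      return 5.00 + (weight_kg - 2.0) * 1.50     # path 2: surcharge

  # Whitebox view: cases chosen from the code so every branch,
  # including the boundary between them, is exercised.
  assert shipping_fee(1.0) == 5.00     # exercises path 1
  assert shipping_fee(2.0) == 5.00     # boundary condition
  assert shipping_fee(4.0) == 8.00     # exercises path 2

  # Blackbox view: the tester knows only the stated rule ("flat fee
  # up to 2 kg, surcharge above"), not the code, and checks inputs
  # against the outputs the specification predicts.
  assert shipping_fee(3.0) == 6.50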


Regression testing is used in program development and change management to determine whether new changes have introduced any errors in the remaining unchanged code.
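
A brief Python sketch of the idea: after a change adds a new discount tier, the regression suite reruns the cases and data from the previous test cycle alongside one new case covering the change. The discount_rate function and its tiers are hypothetical illustrations.

  import unittest

  # Module after a change: a 10% tier for large orders was added.
  def discount_rate(order_total):
      if order_total >= 500:      # new tier introduced by the change
          return 0.10
      if order_total >= 100:      # pre-existing behavior
          return 0.05
      return 0.0

  class RegressionSuite(unittest.TestCase):
      """Reruns prior cases to confirm the change did not break
      existing behavior, plus one case covering the change itself."""

      def test_existing_small_order(self):    # data from the prior cycle
          self.assertEqual(discount_rate(50), 0.0)

      def test_existing_mid_order(self):      # data from the prior cycle
          self.assertEqual(discount_rate(150), 0.05)

      def test_new_tier(self):                # covers the new change
          self.assertEqual(discount_rate(600), 0.10)

  if __name__ == "__main__":
      unittest.main()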



