Testing Enterprise Applications

Testing is the process of exercising the software under controlled conditions and assessing the result. The controlled conditions should involve normal and abnormal data and events. Testing should strive to introduce the unexpected to determine how the system will react.

Ultimately, the role of testing is not just to find bugs, but to assure quality. Because the ultimate definition of quality is "meeting the customer's needs by solving the business problem," the testing process should support that goal by validating what needs to be done and verifying that it is being done correctly.

The testing process is not limited to the Stabilizing Phase of the MSF Development Process Model, but is also an integral part of the Developing Phase. At the Project Plan Approved Milestone (at the end of the Planning Phase), the project team establishes a baseline for the test plan and begins work on the more detailed testing specifications that describe how individual features will be tested. The testing specification is baselined at the Scope Complete Milestone (at the end of the Developing Phase), because at that point the feature set should not grow or change.

During the Developing Phase, coverage testing attempts to thoroughly test each feature of the product as well as the actual code base of the product in a relatively closed environment. During the Stabilizing Phase, testing shifts from coverage testing to usage testing, which validates the application's fulfillment of the use cases and usage scenarios developed during the Envisioning Phase. This stage of testing usually involves actual users of the product in beta tests, and preferably occurs in the application's production environment. Tolerance for bugs decreases as testing progresses through the Stabilizing Phase, and because the focus is on shipping during this phase, being able to successfully manage bugs is paramount.

Because of the multiple dependencies within a distributed application environment, testing requirements for this type of application are extensive. Each dependency (every Web page, component, and database), as well as elements such as the GUI code, middleware, and network infrastructure, must be tested not only for functionality but also for compatibility in multiple configurations.

The best way to test a distributed enterprise application is to use a bottom-up approach. Each component is first tested individually outside the MTS environment. When the basic functionality of a component is working, the component is tested within MTS on a single computer. Finally, the application as a whole is tested in the distributed enterprise environment.

Component-Level Testing

The first step is to test each component individually, outside the MTS environment (the same kind of unit testing that is done for any other kind of code). The easiest way to test components is to write a simple test harness that exercises all the functionality exposed by the COM classes. Scripting languages and rapid application development (RAD) tools, such as Microsoft Visual Basic, Scripting Edition (VBScript) and Microsoft Visual Basic, are great ways to build simple test harnesses. Multithreaded test harnesses can be used to make sure that the component has no concurrency problems. The goal is to verify that the application logic of each component works correctly before the component is placed in the distributed application environment. Many programming tools provide only limited support for debugging components running within MTS, so the more bugs that can be eliminated up front, the better.
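The following Visual Basic sketch illustrates such a harness for a hypothetical Bank.Account component that exposes a Post method; the ProgID, method, and arguments are illustrative only.

Sub Main()
    Dim objAccount As Object
    Dim lngBalance As Long

    On Error GoTo ErrorHandler

    ' Create the component outside MTS, just as any COM client would.
    Set objAccount = CreateObject("Bank.Account")

    ' Exercise normal and abnormal inputs.
    lngBalance = objAccount.Post(1, 100)
    Debug.Print "Posted 100; new balance: " & lngBalance

    lngBalance = objAccount.Post(1, -200000)
    Debug.Print "Posted overdraft; new balance: " & lngBalance

    Set objAccount = Nothing
    Exit Sub

ErrorHandler:
    Debug.Print "Error &H" & Hex(Err.Number) & ": " & Err.Description
End Sub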

One potential problem with testing a component outside the MTS environment is that the component's code typically uses the object context. When a component runs outside the MTS environment, the object context is not available. If the release version of the component might run both within and outside the MTS environment, the object context must be checked to see whether it exists at run time before any method calls are made. If the released version always runs within MTS, checking for the object context before every method call may not be necessary. In this case, conditional compilation is a useful approach if the programming language being used supports it.
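The following Visual Basic sketch illustrates the run-time check; it assumes a hypothetical Bank.Account component and a project reference to the Microsoft Transaction Server Type Library, which supplies GetObjectContext and the ObjectContext type.

Public Function Post(ByVal AccountNo As Long, ByVal Amount As Long) As Long
    Dim ctx As ObjectContext

    On Error GoTo ErrorHandler

    ' GetObjectContext returns Nothing when the component is running
    ' outside MTS, so check before calling any context methods.
    Set ctx = GetObjectContext()

    ' ... application logic (hypothetical) ...
    Post = Amount

    If Not ctx Is Nothing Then ctx.SetComplete
    Exit Function

ErrorHandler:
    If Not ctx Is Nothing Then ctx.SetAbort
    Err.Raise Err.Number, "Bank.Account", Err.Description
End Function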

The disadvantage of this approach is that a special version of each component must be built to run outside MTS. There is a slight risk that the application logic might be correct in this version and incorrect in the version built for MTS, but it is the best approach available for testing components written in languages such as Microsoft Visual C++ or Microsoft Visual Basic outside the MTS environment.

Local Integration Testing

After the components have been tested outside MTS, they should be tested again within MTS on a single computer, beginning with single, independent components and gradually building up to the entire application. Testing on a single computer eliminates network errors and reduces security problems while the application is being constructed. Getting the entire application working on a single computer verifies correct transactional behavior and security checking before the application is set up across a distributed environment.

Initial testing should focus on whether transactions interact as expected. After the normal code paths are validated, the error paths should be executed. Appropriate calling of SetAbort, SetComplete, EnableCommit, and DisableCommit should be verified, including whether the correct error codes are being returned and whether errors are being handled correctly in the clients. In some cases, the original component won't be able to reproduce all the errors, and it may be necessary to build a special test version that uses the same interfaces but does produce the errors so that all the error paths are exercised.
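For example, a special test version might look like the following Visual Basic sketch, in which a hypothetical Bank.AccountErrorStub class exposes the same Post method as the real component but always votes to abort and raises a deliberate error, so that the client's error paths can be exercised.

' Class module AccountErrorStub in a test project.
Public Function Post(ByVal AccountNo As Long, ByVal Amount As Long) As Long
    Dim ctx As ObjectContext
    Set ctx = GetObjectContext()

    ' Vote to abort the transaction, then return a deliberate error
    ' so the client's error handling can be verified.
    If Not ctx Is Nothing Then ctx.SetAbort

    Err.Raise vbObjectError + 1001, "Bank.AccountErrorStub", _
              "Simulated database failure"
End Function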

To reduce the initial configuration work of setting up a test environment, all components should be running in the security context of the interactive user with authorization checking disabled. When the application has been validated in this environment, the components should be tested to make sure they work for a particular user with authorization checking enabled. Finally, any role-based security checks, declarative or programmatic, should be verified to ensure that they work as expected.
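For programmatic checks, the object context's IsCallerInRole and IsSecurityEnabled methods can be exercised directly. The following Visual Basic sketch assumes a hypothetical Managers role defined on the package and a hypothetical CloseAccount method.

Public Sub CloseAccount(ByVal AccountNo As Long)
    Dim ctx As ObjectContext
    Set ctx = GetObjectContext()

    ' IsCallerInRole returns True whenever security is not enabled,
    ' so verify IsSecurityEnabled as well when testing with
    ' authorization checking turned on.
    If ctx.IsSecurityEnabled() Then
        If Not ctx.IsCallerInRole("Managers") Then
            ctx.SetAbort
            Err.Raise vbObjectError + 2001, "Bank.Account", _
                      "Caller is not in the Managers role"
        End If
    End If

    ' ... privileged work (hypothetical) ...
    ctx.SetComplete
End Sub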

Debugging Tools

If a component doesn't execute as anticipated, it may need to be run in a debugger so that each line of code can be examined. The primary concerns are building components with debug information and configuring the debugger so that the MTS surrogate process is launched correctly.

Traces allow output information to be viewed while the component is executing. This information is primarily useful when the code is not running under a debugger or the source code is not available. However, traces can cause problems if a component does not have access to the interactive user's desktop and a message box is displayed where it cannot be seen or closed.
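One way to avoid the message-box problem is to write trace output to a log file instead, as in the following Visual Basic sketch; the log file path is illustrative.

' Minimal trace routine that appends to a log file rather than
' displaying a message box, so output is captured even when the
' component cannot reach the interactive desktop.
Public Sub Trace(ByVal Message As String)
    Dim intFile As Integer
    intFile = FreeFile
    Open "C:\Temp\MtsTrace.log" For Append As #intFile
    Print #intFile, Now & " " & Message
    Close #intFile
End Sub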

Return values from all COM method calls should be checked to determine whether COM is reporting information about the system or about specific errors generated by the component. For example, COM may report access violations and communication errors in the method return value when the component is executing in a distributed or secure environment.
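The following client-side Visual Basic sketch illustrates distinguishing such system-reported failures from errors raised by the component itself; the server name and component ProgID are placeholders.

' Common system HRESULTs surfaced through the COM return value.
Private Const E_ACCESSDENIED As Long = &H80070005
Private Const RPC_S_SERVER_UNAVAILABLE As Long = &H800706BA

Private Sub CallRemoteComponent()
    Dim objAccount As Object

    On Error GoTo ErrorHandler
    ' "AppServer01" is a placeholder for the remote server name.
    Set objAccount = CreateObject("Bank.Account", "AppServer01")
    objAccount.Post 1, 100
    Debug.Print "Call succeeded"
    Exit Sub

ErrorHandler:
    Select Case Err.Number
        Case E_ACCESSDENIED
            Debug.Print "Access denied - check DCOM and MTS security settings"
        Case RPC_S_SERVER_UNAVAILABLE
            Debug.Print "RPC server unavailable - check network connectivity"
        Case Else
            Debug.Print "Error &H" & Hex(Err.Number) & ": " & Err.Description
    End Select
End Sub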

Data Access Testing

If data access components are not able to access their data sources, database management system (DBMS) tools and the tools provided by ODBC should be used to track down the problem. If SQL Server is being used, SQL Enterprise Manager can be used to test connecting to a database, issuing queries, and so on. The SQL Trace program can be used to watch operations against the database. Also useful are the Visual Data Tools and the SQL debugging feature of Microsoft Visual Studio, Enterprise Edition.

If a data source is accessed via ODBC, the data source driver may allow use of the ODBC driver manager to test data source access with a particular data source name (DSN). ODBC also provides a trace facility that can help troubleshoot ODBC errors. Trace messages are written to a log file that can be examined for details of the ODBC commands that were executed.
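Data source access can also be exercised from code by opening an ADO connection over the ODBC driver, as in the following Visual Basic sketch; the BankDSN data source, credentials, and Accounts table are hypothetical.

Private Sub TestDSN()
    Dim cn As Object
    Dim rs As Object

    On Error GoTo ErrorHandler
    Set cn = CreateObject("ADODB.Connection")
    cn.Open "DSN=BankDSN;UID=tester;PWD=secret"

    ' Issue a trivial query to confirm the DSN, driver, and
    ' permissions are all working.
    Set rs = cn.Execute("SELECT COUNT(*) FROM Accounts")
    Debug.Print "Rows in Accounts: " & rs.Fields(0).Value

    rs.Close
    cn.Close
    Exit Sub

ErrorHandler:
    Debug.Print "Data source error &H" & Hex(Err.Number) & ": " & Err.Description
End Sub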

If data sources can be accessed manually but not from MTS components, data source compatibility with MTS must be verified. In particular, the ODBC drivers must support MTS. If data sources can be accessed from within MTS but transactions aren't working correctly, the Microsoft Distributed Transaction Coordinator (MS DTC) might not be running or might not be properly configured on all computers involved in the transaction.

Integration Testing

When the application is working on one computer, it should be tested in a distributed environment before being released for deployment. At this point, the testing is done within the certification environment. In general, MTS applications should not require any special coding to work in these scenarios, but setting up a certification environment is a great way to test package settings and deployment instructions before an application is actually deployed.

Integration testing should start with a simple deployment and build up to more complex deployments. For example, the application should be tested without a firewall in place before it is tested with the firewall. In addition, the application should be tested both with a single client and with multiple concurrent clients, either by using multiple client computers or by using a test harness that simulates multiple clients.
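One simple way to simulate multiple concurrent clients is to launch several copies of a compiled test harness, as in the following Visual Basic sketch; the executable path and command-line switch are illustrative.

Private Sub SimulateClients(ByVal ClientCount As Long)
    Dim i As Long
    Dim dblTaskId As Double

    ' Launch several copies of a hypothetical TestHarness.exe, each of
    ' which loops through the component's methods.
    For i = 1 To ClientCount
        dblTaskId = Shell("C:\Test\TestHarness.exe /iterations:100", _
                          vbMinimizedNoFocus)
    Next i
End Sub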

The techniques described for local testing also apply to distributed testing. Most administrative tools available with Windows NT allow administrators to operate on remote machines as well as local machines. For example, event logs for multiple machines can be viewed from a single workstation. However, some techniques apply specifically to the distributed environment. If the application works locally but objects cannot be created remotely, network connectivity between the computers may be interrupted or DCOM may not be enabled. Checking the event log will indicate whether security problems are preventing object creation or access.

The exact mechanism used to test network connectivity depends on the network protocols available between computers. If TCP/IP is used for DCOM communication, the Ping utility can be used to determine whether a particular computer can be reached, although a successful ping does not guarantee that DCOM communication will work. The DCOM Configuration utility, DCOMCNFG.EXE, can be used to determine whether DCOM is enabled on a particular machine. This test is particularly important on Microsoft Windows 95 and Windows 98 clients, where DCOM is not enabled by default.

If the application's basic COM or MTS functionality is not working at all, the test computers should be examined to ensure that they are correctly configured and in good working order.


