Testing Web Services


Like any other application, Web services will require testing. A corporation's Web site goes through several quality assurance and testing scenarios, and a Web service will at minimum require the same level of effort. A Web service, like a corporate Web site, has high visibility, supports unpredictable workloads, and therefore must be robust.

As we have indicated earlier in the book, there are two types of Web services: those that serve internal customers (intranet) and those that serve the general public (Internet). Each poses its own difficulties for testing. An internally exposed Web service will not require some forms of testing: it has a known maximum number of users, the organization can enforce policies for its use, and it can make broad, simplifying assumptions about security. A service exposed to the Internet can typically be accessed by anyone, which may demand additional security, reliability, and scalability testing.

Many organizations have used testing tools that record macros of a user's actions for later playback. This approach will not work with Web services, which have no user interface (UI) to speak of. The lack of a UI also makes one-off manual testing more difficult. Successfully testing a Web service without a UI may require that the testing team have some programming skills. Simply put, a Web service cannot be tested by monkeys banging on the keyboard.

Types of tests include functional, regression, load/stress, and proof of concept, each with a different goal. Let us look at each type and the best way to realize the goals.

Functional Testing

A functional test ensures that the Web service meets all business requirements and works as expected. Typically, functional tests are based on information contained in a UML use-case diagram or similar notation. A test scenario may look at whether your Web service properly implements authentication, supports multiple communication protocols (e.g., HTTP and messaging), and properly handles alternate scenarios.

This is the introductory form of testing. For a Web service, you will need to know how the service can be invoked, what information is sent as part of the request, and what the appropriate response is. Flute Bank may wish to test its stock-trading Web service against multiple data input situations. Let us look at some alternate case scenarios that test the order price:

  1. Entering an order price that is not numeric: [five dollars] instead of [5.00]

  2. Entering an order price that is valid but awkward: [5 0/32] instead of [5]

  3. Entering an order price where the denominator is not evenly divisible: [5 16/33]

  4. Entering a negative order price: [-5.25]

  5. Entering an order price where 0 is the denominator: [5 3/0]

Functional testing makes sure not only that the system works as specified but also that it handles errors appropriately. A complete functional testing plan will include bounds testing and error checking.
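
Because there is no UI to drive, tests like these are easiest to express directly in code. The sketch below shows one way this might look with JUnit, written against a hypothetical generated client stub; TradingService, TradingServiceLocator, placeOrder(), OrderConfirmation, and InvalidOrderException are illustrative names, not Flute Bank's actual interface, and the price is passed as a string so that the malformed values above can be sent at all.

import junit.framework.TestCase;

/**
 * Functional test sketch for the order-price scenarios listed above.
 * All service and type names are hypothetical stand-ins for the
 * generated client stubs.
 */
public class OrderPriceFunctionalTest extends TestCase {

    private TradingService service;

    protected void setUp() throws Exception {
        service = new TradingServiceLocator().getTradingServicePort();
    }

    public void testValidPriceIsAccepted() throws Exception {
        OrderConfirmation conf = service.placeOrder("FLUT", 100, "5.00");
        assertTrue(conf.isAccepted());
    }

    public void testNonNumericPriceIsRejected() throws Exception {
        assertRejected("five dollars");
    }

    public void testZeroDenominatorIsRejected() throws Exception {
        assertRejected("5 3/0");
    }

    public void testNegativePriceIsRejected() throws Exception {
        assertRejected("-5.25");
    }

    private void assertRejected(String badPrice) throws Exception {
        try {
            service.placeOrder("FLUT", 100, badPrice);
            fail("Expected the service to reject price: " + badPrice);
        } catch (InvalidOrderException expected) {
            // The service reported the error as a fault, as required.
        }
    }
}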

As we learned in Chapter 4, a SOAP message is made up of an envelope, a header, and a body. For functional testing, it is useful to trap the SOAP messages sent, because requests and responses are part of interacting with a service. SOAP extensions—code that modifies the contents of a SOAP message and can take actions on it—can be used for this. An extension can compress/decompress, add data, and so on. Sometimes it is useful to have a SOAP extension catch the messages exchanged between services and log them to a text file for analysis.
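
In a Java environment the equivalent interception point is a message handler registered in the service's (or client's) handler chain. The following is a minimal sketch using the JAX-RPC GenericHandler class; the log file name is an arbitrary choice, and registration of the handler through the deployment descriptor is assumed.

import java.io.FileOutputStream;
import java.io.OutputStream;

import javax.xml.namespace.QName;
import javax.xml.rpc.handler.GenericHandler;
import javax.xml.rpc.handler.MessageContext;
import javax.xml.rpc.handler.soap.SOAPMessageContext;
import javax.xml.soap.SOAPMessage;

/**
 * A JAX-RPC handler that appends every SOAP request and response to a
 * text file so testers can inspect the exact messages exchanged.
 */
public class MessageLogHandler extends GenericHandler {

    private static final String LOG_FILE = "soap-messages.log"; // arbitrary location

    public boolean handleRequest(MessageContext context) {
        log(context, "REQUEST");
        return true; // returning true lets the call proceed unchanged
    }

    public boolean handleResponse(MessageContext context) {
        log(context, "RESPONSE");
        return true;
    }

    public QName[] getHeaders() {
        return new QName[0]; // this handler does not process any SOAP headers
    }

    private void log(MessageContext context, String direction) {
        try {
            SOAPMessage message = ((SOAPMessageContext) context).getMessage();
            OutputStream out = new FileOutputStream(LOG_FILE, true);
            out.write(("\n---- " + direction + " ----\n").getBytes());
            message.writeTo(out);
            out.close();
        } catch (Exception e) {
            // A logging failure should never break the service call itself.
            e.printStackTrace();
        }
    }
}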

Flute Bank, in looking at the logs, determined it could gain additional performance by using simple SOAP data types rather than custom data types for its portfolio management service. The bank learned that user-defined types reduce the accessibility of the service, because they require special client knowledge to interpret the data. The bank also determined that it was taking a performance hit, because the proxy class has more work to do in serializing and deserializing the SOAP messages.

Flute Bank also analyzed its auto-insurance-quote Web service and found additional optimizations. A smart tester, along with help from the architect, noted that a quote from Flute's Web site required two calls to the same Web service. The Web site would pass the vehicle identification number (VIN) once to determine the age of the vehicle and then again to determine the car's owner. Flute decided to optimize its service interface to return both pieces of information in a single method call.
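
In code, the optimization amounts to collapsing two fine-grained operations into one coarse-grained operation that returns a small value object. A sketch of what the revised service endpoint interface might look like (the names are illustrative, not Flute Bank's actual interface):

import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

/**
 * Coarse-grained endpoint interface: one call returns everything the Web
 * site needs for a quote, replacing separate getVehicleAge(vin) and
 * getVehicleOwner(vin) calls. (Illustrative names only.)
 */
interface VehicleInfoService extends Remote {
    VehicleInfo getVehicleInfo(String vin) throws RemoteException;
}

/** Value object carried back in the single response message. */
class VehicleInfo implements Serializable {
    private int vehicleAge;
    private String ownerName;

    public int getVehicleAge() { return vehicleAge; }
    public void setVehicleAge(int vehicleAge) { this.vehicleAge = vehicleAge; }

    public String getOwnerName() { return ownerName; }
    public void setOwnerName(String ownerName) { this.ownerName = ownerName; }
}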

Regression Testing

Regression testing ensures that the target Web service still works between builds or releases of different versions. Regression testing starts with the assumption that the service worked in the past and checks that it still works as advertised. A regression test is usually a scaled-back version of a functional test; it is intentionally not a full functional test.

A regression test for Flute Bank's stock-trading Web service may check whether valid and invalid stock purchase prices generate the appropriate responses. It may also check whether performance is within the normal operating range. Regression tests, by their nature, are repetitive and therefore should be automated.
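
As an illustration, an automated regression test might be a scaled-back subset of the functional test sketched earlier: one valid order, one invalid order, and a rough response-time check. It again uses the hypothetical TradingService stub names, and the two-second threshold is an arbitrary placeholder.

import junit.framework.TestCase;

/**
 * Regression test sketch: a scaled-back subset of the functional tests,
 * intended to run automatically against every new build.
 */
public class TradingServiceRegressionTest extends TestCase {

    private TradingService service;

    protected void setUp() throws Exception {
        service = new TradingServiceLocator().getTradingServicePort();
    }

    public void testValidOrderStillAccepted() throws Exception {
        assertTrue(service.placeOrder("FLUT", 100, "5.00").isAccepted());
    }

    public void testInvalidPriceStillRejected() throws Exception {
        try {
            service.placeOrder("FLUT", 100, "5 3/0");
            fail("Service accepted an invalid price");
        } catch (InvalidOrderException expected) {
            // Still rejected, as in previous releases.
        }
    }

    public void testResponseTimeWithinNormalRange() throws Exception {
        long start = System.currentTimeMillis();
        service.placeOrder("FLUT", 100, "5.00");
        long elapsed = System.currentTimeMillis() - start;
        assertTrue("Call took " + elapsed + " ms", elapsed < 2000); // arbitrary threshold
    }
}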

Load and Stress Testing

The primary goal of load and stress testing is to determine how well a Web service will scale, based on simulated users accessing it. Functional and regression testing prove that a service works with a single user. Load and stress testing prove that it will work with multiple concurrent users.

Flute Bank may want to test each of its Web services with varying numbers of concurrent users to find the breaking point. This information is useful for capacity planning. The bank may also want to determine how response time changes when the number of users triples, and whether tripling the capacity of its application server farm will really yield triple the throughput.

Load and stress tests are usually executed in controlled environments. If Flute Bank wants to determine how a particular service responds with increased use, it must isolate or at least stabilize other factors that can skew the results, such as hardware and networking. A successful load and stress test should result in a statement such as "Web service X will respond within Y seconds for up to Z clients making N requests per second."

Load and stress testing can also be helpful in creating reasonable service level agreements. You may already know that the service can handle 50 requests a second but need to know what will happen with a massive peak in usage. Will the Web service slow down, crash, or simply return garbage or inaccurate data?

A load and stress testing scenario should capture at least the metrics in Table 16.3 for further analysis. These are the first steps in measuring your service from the load and stress perspective. The ideal scenario is one in which these numbers stay flat as the number of simulated users increases; the worst scenario is one in which they grow linearly with the number of simulated users.

Table 16.3: Load Testing Metrics

  Connection time: The time it takes to complete a connection from the client to the Web service. (Lower is better.)

  First byte time: The time it takes for the client to receive the first byte of the response from the Web service. This indicates whether the service requires a lot of think time.

  Last byte time: The time it takes for the client to receive the last byte of the response from the Web service. This will increase based on the amount of data transmitted.
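
For a first pass, these three metrics can be captured for a single request with nothing more than the standard java.net classes. The sketch below uses a placeholder endpoint URL and request envelope; a real load test would run many copies of this measurement in parallel and aggregate the results.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Measures connection time, first byte time, and last byte time for one
 * SOAP request. The endpoint URL and request envelope are placeholders.
 */
public class LoadTestClient {

    public static void main(String[] args) throws Exception {
        String endpoint = "http://flutebank.example.com/services/trading"; // placeholder
        byte[] request = "<soap:Envelope>...</soap:Envelope>".getBytes();   // placeholder envelope

        long start = System.currentTimeMillis();

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");

        OutputStream out = conn.getOutputStream();   // opening the stream connects to the server
        long connectionTime = System.currentTimeMillis() - start;
        out.write(request);
        out.close();

        InputStream in = conn.getInputStream();
        in.read();                                   // blocks until the first response byte arrives
        long firstByteTime = System.currentTimeMillis() - start;

        while (in.read() != -1) { /* drain the rest of the response */ }
        long lastByteTime = System.currentTimeMillis() - start;
        in.close();

        System.out.println("connection: " + connectionTime + " ms, first byte: "
                + firstByteTime + " ms, last byte: " + lastByteTime + " ms");
    }
}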

Testing Web services presents unique challenges that are not necessarily encountered in traditional Web-based applications. It is more important than ever to validate the capacity, scalability, and reliability of your service-oriented architecture at the design stage and to perform a full load and stress test before production release.

Proof-of-Concept Testing

One of the biggest mistakes continually repeated at corporations large and small is waiting until it is too late to start testing. One of the basic tenets of Kent Beck's Extreme Programming Explained (Addison-Wesley, 1999) and Scott Ambler's Agile Modeling (Wiley, 2002) is the need for a test plan before a single line of code is written. Some will argue about the right time to start testing, but the author team recommends at least conducting proof-of-concept testing early in a Web service's development lifecycle.

From the testing perspective, an architect responsible for developing robust Web services will most likely be faced with concerns related to scalability. A realistic scenario is a business sponsor inquiring whether a particular architecture can scale to 1,000 simultaneous users. The smartest thing an architect can do at this stage is to execute a reduced load test (described above). This form of load testing does not need to be run on production-level hardware or provide exact answers. Its sole purpose is to indicate whether you are headed in the right direction.
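
Such a reduced load test can be as simple as a handful of threads hammering the service from a developer workstation and reporting an average response time. A minimal sketch follows; timeOneRequest() is a placeholder for an actual timed service call, such as the single-request measurement shown earlier, and the thread and request counts are arbitrary.

/**
 * A bare-bones proof-of-concept load driver: a fixed number of threads each
 * issue a fixed number of requests, and the average response time is printed
 * at the end. timeOneRequest() is a placeholder for a real service call.
 */
public class ProofOfConceptLoadTest {

    private static final int THREADS = 50;              // simulated concurrent users
    private static final int REQUESTS_PER_THREAD = 20;

    private static long totalMillis = 0;
    private static int completed = 0;

    public static void main(String[] args) throws Exception {
        Thread[] workers = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < REQUESTS_PER_THREAD; j++) {
                        try {
                            record(timeOneRequest());
                        } catch (Exception e) {
                            System.err.println("Request failed: " + e);
                        }
                    }
                }
            });
            workers[i].start();
        }
        for (int i = 0; i < THREADS; i++) {
            workers[i].join();
        }
        System.out.println(completed + " calls completed, average "
                + (totalMillis / Math.max(completed, 1)) + " ms");
    }

    private static synchronized void record(long elapsed) {
        totalMillis += elapsed;
        completed++;
    }

    private static long timeOneRequest() throws Exception {
        // Placeholder: invoke the Web service here and return the elapsed time
        // in milliseconds, for example by reusing the measurement code above.
        throw new UnsupportedOperationException("wire up the real service call");
    }
}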



