Introducing the Interoperability Performance Tests


Before concluding the chapter, I want to introduce the interoperability performance tests, a common thread in many chapters in this book. One of the primary aims of this book is to show a broad range of technology options that span the .NET and J2EE platforms. I'm frequently asked questions such as, "How does option x perform?" and "When it comes to performance, how does using option x compare with using option y?" Giving an exact answer to such questions is difficult because the performance of any solution can depend on a variety of factors. These factors can include the passing of data between platforms, network latency, machine speed, processor utilization, application tuning, and vendor selection. This book aims to illuminate the differences among these solutions by including sample interoperability code and descriptions that you can modify for your own programming environment.

To further this goal, many interoperability samples in this book include a modified sample that allows you to run a simple performance test. This modified sample resides in the same structure as the sample code, under a PerformanceTest subdirectory. The parsing and serialization samples presented in this chapter include performance tests.

How Do the Tests Work?

Most of the interoperability performance tests work by simply repeating the same instructions for a number of iterations and then calculating the average time taken to complete one iteration. For example, to execute the performance test version of the parsing sample code presented in this chapter, run the following from the command line:

 SimpleParser ..\..\Shared\portfolio.xml 1000 

This instructs the program to run the simple parser code 1000 times and report on the average time taken to process each request.
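The repeat-and-average approach can be sketched as follows. This is a hypothetical illustration, not the book's actual source: the harness name `TimingLoopSketch` and the `AverageMilliseconds` helper are invented for this example, and it uses the standard `System.Diagnostics.Stopwatch` for brevity, whereas the book's tests use the `QueryPerformanceCounter` API described later in this section.

```csharp
// Hypothetical sketch of a repeat-and-average test harness.
// Names here are placeholders, not the book's actual code.
using System;
using System.Diagnostics;

class TimingLoopSketch
{
    // Runs the supplied work 'iterations' times and returns the
    // average wall-clock time per iteration, in milliseconds.
    public static double AverageMilliseconds(Action work, int iterations)
    {
        Stopwatch timer = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            work();
        }
        timer.Stop();
        return timer.Elapsed.TotalMilliseconds / iterations;
    }

    static void Main(string[] args)
    {
        string file = args[0];               // e.g. ..\..\Shared\portfolio.xml
        int iterations = int.Parse(args[1]); // e.g. 1000

        double avg = AverageMilliseconds(
            () => { /* parse or serialize 'file' here */ },
            iterations);

        Console.WriteLine("Average: {0:F2} ms per iteration", avg);
    }
}
```

Averaging over many iterations smooths out one-time costs such as JIT compilation and file-cache warm-up, which would otherwise dominate a single measurement.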

Note

Because many of the tests read, modify, and write the same file many times per second, you should disable any antivirus software while the tests run. This prevents the shared file from being locked during a test and makes the reported results more accurate.

How Do You Ensure Accuracy of the Tests?

Most of the tests in the book complete very quickly. To ensure accurate results, the tests use a Win32 API named QueryPerformanceCounter. QueryPerformanceCounter obtains the current reading of the system's high-performance counter, which ticks at a frequency derived from the machine's CPU clock. By measuring the number of cycles that occur between two points in time and then dividing that number by the frequency of the counter, you can obtain very accurate readings (in most cases, to a fraction of a millisecond [ms]).

The QueryPerformanceCounter implementation is wrapped by the Counter class (in Counter.cs), which is included in all the performance test samples in this book. The class exposes two methods that are used frequently throughout the tests: Value and TimeElapsed. The Value method returns a long count of the cycles that have elapsed since a specific point in time. By calling the Value method twice (once when the test begins, and again when it's complete), you can calculate the number of counter cycles that have passed. The TimeElapsed method takes these two long values and returns the elapsed time in milliseconds (as a float).
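A Counter-style wrapper along these lines can be built with P/Invoke. This is a speculative sketch based on the description above; the book's actual Counter.cs may be structured differently, and the kernel32.dll imports mean it runs on Windows only.

```csharp
// Speculative sketch of a Counter-style wrapper around the Win32
// high-performance counter. Windows-only (P/Invoke into kernel32.dll).
using System;
using System.Runtime.InteropServices;

class Counter
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long frequency);

    // Current reading of the high-performance counter, in raw cycles.
    public static long Value()
    {
        long v;
        QueryPerformanceCounter(out v);
        return v;
    }

    // Converts a start and stop reading into elapsed milliseconds.
    public static float TimeElapsed(long start, long stop)
    {
        long freq;
        QueryPerformanceFrequency(out freq);
        return (stop - start) * 1000.0f / freq;
    }
}

class CounterUsage
{
    static void Main()
    {
        long start = Counter.Value();
        // ... the work being measured goes here ...
        long stop = Counter.Value();
        Console.WriteLine("{0} ms", Counter.TimeElapsed(start, stop));
    }
}
```

Dividing by the counter frequency is what converts raw cycle counts into wall-clock time, exactly as the text describes.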

Why Not Use System.DateTime and System.TimeSpan?

In a word: accuracy. In similar tests, System.DateTime and System.TimeSpan (two types included in the .NET Framework for measuring time) are accurate only to within about 10 milliseconds. This is perfectly acceptable for most general-use applications, but in our tests, the serialization of objects and XML normally falls within that 10-millisecond range. Rather than guess or aggregate an average, we can use the QueryPerformanceCounter API to obtain a more accurate representation of how much time has elapsed.
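The resolution problem can be demonstrated directly. This small illustration is not from the book: it samples DateTime.Now in a tight loop and counts how many distinct readings appear. On a system where the clock advances in coarse steps (historically around 10-15 ms on Windows), thousands of consecutive samples collapse into only a handful of distinct values, so any interval shorter than one step measures as zero.

```csharp
// Illustration (not from the book): DateTime.Now advances in discrete
// steps, so rapid consecutive readings are often identical.
using System;
using System.Collections.Generic;

class ResolutionDemo
{
    // Samples DateTime.Now 'samples' times and returns how many
    // distinct tick values were observed; a coarse clock yields few.
    public static int DistinctReadings(int samples)
    {
        var seen = new HashSet<long>();
        for (int i = 0; i < samples; i++)
        {
            seen.Add(DateTime.Now.Ticks);
        }
        return seen.Count;
    }

    static void Main()
    {
        int distinct = DistinctReadings(100000);
        Console.WriteLine("{0} distinct readings out of 100000", distinct);
    }
}
```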

Why Are All the Performance Tests in .NET and Not Java?

All the performance tests are in .NET mainly for consistency and for immediate access to the QueryPerformanceCounter Win32 API. The aim of this book is to encourage the selection of the correct interoperability solution for a specific scenario, not to serve as a showdown between .NET and Java. The performance tests should be used to measure a delta between various interoperability solutions (for example, serialization commands vs. parsing commands), not between the platforms themselves.

Interoperability Performance Test: XML Parsing

The performance test can be found in the following directory:

 C:\Interoperability\Samples\Data\XML\Parsing\PerformanceTest\dotNET 

After compilation, run the following command to execute the test:

 SimpleParser ..\..\Shared\stock-stress.xml 1000 

The performance test measures the time required to read each of the 20 records in the XML file and append a record to the end of the file. (Incidentally, the stock-stress.xml file is similar to the stock.xml file that was used in the previous example, but it contains a few more companies for test purposes.)

On my development machine (a modest machine with a Pentium 4, 1.6 GHz processor), this test takes about 4.1 ms to complete a single parsing of the document. Again, this figure should be used to determine the difference between the interoperability options presented throughout the book, not the performance of a particular machine.

Interoperability Performance Test: XML Serialization

The performance test can be found in the dotNET subdirectory of the following directory:

 C:\Interoperability\Samples\Data\XML\Serialization\PerformanceTest 

After compilation, run the following command to execute the test:

 SimpleSerializer ..\..\Shared\portfolio-stress.xml 1000 

The performance test measures the time required to serialize and deserialize a portfolio of 20 records in the XML file. (The portfolio-stress.xml file that's created is similar to the portfolio.xml file that was used in an earlier example, but it contains a few more stocks for test purposes.)

In my tests, this test takes about 3.6 ms to complete. Again, these tests are most beneficial when customized to your own environment. That said, my results show that serialization of this particular file offers a slight performance increase over regular parsing. For additional results, you might want to add more companies and data to the serialization test.




Microsoft .NET and J2EE Interoperability Toolkit (Pro-Developer)
ISBN: 0735619220
Year: 2003
Pages: 132
Authors: Simon Guest