Foreword


About a year ago I was sent out to a large Fortune 500 WebSphere customer to solve a critical "WebSphere performance" problem. The customer was close to putting a WebSphere application into production, and believed they had discovered, with less than a week to go before the application was to go into production, that WebSphere "did not perform well."

No one seemed to have many details about the problem, but we were assured by the highest levels of management at both the customer's company and IBM that this was indeed a critical situation. So I dropped everything and headed out the next morning on a 6:30 AM flight. At the company I met with the customer representative, who showed me an impressive graph (the output of a popular load-testing tool) that demonstrated that their application reached a performance plateau at five simultaneous users, and that response times increased dramatically as more load was placed on the system.

I asked if they could run the test while I watched so that I could see the numbers myself. I was told no: the hardware they were using for performance testing was also being used for user-acceptance testing. It would not be available until after 4 PM that day. So I asked if I could see the test scripts themselves, to see how they were testing the application. Again the answer was no. The fellow who wrote the scripts wouldn't return until 5 PM, and no one else knew where he kept them.

Not wanting to seem like I was wasting time, I next asked for the source code for the application. They were able to provide it, and I spent the next eight hours reading through it and making notes about possible bottlenecks. When the script author returned at 5 PM, we reconfigured the test machine and ran the script. Sure enough, the performance curve looked like what the test had caught the previous night. I asked him to walk me through the code of the test script. He showed me what each test did, and how the results were captured. I then asked him about one particular line of code in the middle of the script: "So, here you seem to be hard-coding a particular user ID and password into the test. You never vary it, regardless of the number of simultaneous users the load testing tool simulates?"

He said that this was true and asked if that could be a problem. I explained to him that their test setup used a third-party security library, and that one of the "features" of this library was that it restricted users with the same user ID and password from logging in twice. In fact, it "held" requests for the second login until the first user using that login had logged out. I had picked up on this fact by reading the code that morning. I then asked if he could rewrite the script to use more than one login ID. In fact, if they wanted to test up to a hundred simultaneous logins, could he rewrite the script so that it used a hundred different login IDs? He ended up doing just that, and the next night, after a few more such adventures, we reran the modified test.

This time WebSphere performed like a champ. There was no performance bottleneck, and the performance curve that we now saw looked more like what I had expected in the first place. There were still some minor delays, but the response times were much more in line with other, untuned customer applications I had seen.

So what was wrong here? Why did this company have to spend an enormous amount of money on an expensive IBM consultant just to point out that their tests weren't measuring what they thought they were measuring? And why were we working under such stressful, difficult circumstances, at the last possible moment, with a vendor relationship on the line?

What it came down to was a matter of process. Our customer did not have a proper process in place for performance testing. They did not know how to go about discovering performance problems so that they could be eliminated. The value that this company placed on performance testing was demonstrated by the fact that the performance tests were scheduled for after hours, and were done on borrowed hardware. Also, the fact that this problem was not discovered until less than a week before the planned deployment date of the application showed the priority that performance testing had among other development activities; it was an "afterthought," not a critical, ongoing part of development.

I have repeatedly seen large, expensive systems fail, and thousands or millions of dollars lost, because of this attitude. As a wise man once said, "failing to plan is planning to fail." The book you hold in your hand can help you to avoid such failures. It offers concise, easy-to-follow explanations of the different kinds of performance problems that large-scale web applications face. More important, it provides you with a process and methodology for testing your systems in order to detect and fix such problems before they become project-killers.

The authors of this book are all respected IBM consultants and developers, with years of collective experience in helping solve customer problems. They've dealt with the foibles of application servers, customer application code, network configuration issues, and a myriad of other performance-stealing problems. They convey their experiences and recommendations in a laid-back, easy-to-understand way that doesn't require you to have a Ph.D. in stochastic modeling. I believe their greatest contribution to the world is a process for injecting performance testing into all stages of the development process, making it, appropriately, a key part of web site development.

If you are building a large web site using J2EE technologies (or even just a small, departmental application), buy this book. Performance problems can creep into applications of all sizes, and the time that you will save by following the advice given here will easily repay the purchase price of this book many times over. I've come to rely on the authors for advice in this area, and I'm sure you will too.


Kyle Brown
Senior Technical Staff Member
IBM Software Services for WebSphere


