Preemptive Avoidance

Everything we have discussed so far refers to fixing an existing application. This is all well and good, but it relies on the end users of your product pointing out the poor performance they are experiencing, which can have quite serious commercial ramifications.

A more productive, as well as more professional, approach is to design with performance in mind from the outset. This is, of course, a whole book in its own right. Generally speaking, the principles set out in this book have had performance as a priority, so with luck, none of them will adversely affect your application if used appropriately and with reasonable volumes of data. It is in your own architecture that you need to be careful.

Tips for High-Performance Architecture

Here are a few tips you can employ when architecting your application that may help ensure a fast, efficient design from the outset:

  • Get the hardware right. The fastest PHP in the world will still run like a dog on the wrong hardware. Serving Web pages is relatively simple, so put your weaker hardware to work as Apache workhorses. Keep the big guns for running databases. Also, try to equip all servers with high-performance SCSI disks in a RAID configuration where possible.

  • Use caching at the lowest level possible. If some of your more intensive scripts are producing the same output time and time again, should you really hit the database every time? True, the database will cache the responses to queries at some level, but if the HTML remains the same each time, why not cache the HTML itself? Various third-party caching packages of variable quality are available in PECL, but you can easily write your own using the serialization of the GET, POST, and COOKIE parameters passed to your script. By comparing that serialization to those made on previous requests, you can determine the "uniqueness" of each request; requests identical to those made previously can be satisfied from the cache, rather than having to hit the database again. A sketch of this approach follows this list.

  • Perform unpredictable processes offline. If some process in one of your scripts depends on an unpredictable third party, strongly consider taking it offline. The most obvious example is processing credit cards through a payment service provider. If you must have real-time authorization (for example, if you are allowing customers to purchase access to online content), use a simple, automatically refreshing page that diverts to a success page once the database has been updated to indicate authorization. Then give an external script, run as a scheduled task even if only once every 60 seconds, the job of batch-processing the outstanding card authorization requests and updating the database to reflect the success or failure of each one. A sketch of such a script also follows this list.

  • Use databases judiciously. Not all data needs to be stored in a database. For example, a content-management system that stores its content as XML may be better off keeping that data on disk than in a database; databases are generally poor at storing and retrieving large chunks of text.

  • Optimize database queries. Learn what does and doesn't run quickly in your particular flavor of database, and err toward the quicker techniques. PostgreSQL, for example, is noticeably slow when handling subselects, and an INNER JOIN is likely to prove a more efficient choice (a sample rewrite follows this list). In addition, make sure that all necessary indices are in place and, equally important, remove any unnecessary indices, because these will slow down inserts and updates.

  • Load test. Load test individual components using realistic traffic levels and data sets to ensure that real-world performance will match performance at development time. We discuss this in more detail shortly.
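
To illustrate the caching tip above, here is a minimal sketch of parameter-based output caching; the cache directory, lifetime, and hashing scheme are illustrative assumptions rather than features of any particular PECL package:

    <?php
    // Derive a cache key from the "uniqueness" of this request, i.e. the
    // serialized GET, POST, and COOKIE parameters passed to the script.
    $cacheDir  = '/tmp/page-cache';   // assumed to be writable by the Web server
    $cacheLife = 300;                 // seconds for which cached HTML stays valid

    $key  = md5(serialize(array($_GET, $_POST, $_COOKIE)));
    $file = $cacheDir . '/' . $key . '.html';

    // An identical request was answered recently: replay the stored HTML
    // and skip the database entirely.
    if (file_exists($file) && (time() - filemtime($file)) < $cacheLife) {
        readfile($file);
        exit;
    }

    // Otherwise, build the page as normal, capturing the output as we go.
    ob_start();
    // ... expensive database work and HTML generation goes here ...
    $html = ob_get_contents();
    ob_end_flush();

    // Store the generated HTML for subsequent identical requests.
    file_put_contents($file, $html);
    ?>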
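
The offline-processing tip can be realized with a script along these lines, run by the system scheduler (cron, for example). The card_requests table and the authorize_card() helper are hypothetical stand-ins for your own schema and your payment service provider's API:

    <?php
    // Batch-process card authorizations outside the request/response cycle.
    // Connection details, the card_requests table, and authorize_card()
    // are assumptions for the purposes of this sketch.
    $db = mysql_connect('localhost', 'shop_user', 'secret');
    mysql_select_db('shop', $db);

    // Fetch every request still awaiting authorization.
    $result = mysql_query("SELECT id, card_token, amount
                             FROM card_requests
                            WHERE status = 'pending'", $db);

    while ($row = mysql_fetch_assoc($result)) {
        // Contact the payment service provider -- the slow, unpredictable part.
        $approved = authorize_card($row['card_token'], $row['amount']);

        // Record the outcome; the customer-facing page simply polls this status.
        $status = $approved ? 'authorized' : 'declined';
        mysql_query("UPDATE card_requests
                        SET status = '" . $status . "'
                      WHERE id = " . (int) $row['id'], $db);
    }
    ?>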
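
Finally, here is the kind of subselect-to-join rewrite suggested in the query-optimization tip, using hypothetical orders and customers tables and assuming customers.id is unique:

    <?php
    // Fetch all orders placed by UK customers.

    // Version using a subselect, which can be slow on some databases.
    $subselect = "SELECT * FROM orders
                   WHERE customer_id IN
                         (SELECT id FROM customers WHERE country = 'UK')";

    // The same result expressed as an INNER JOIN, which is often cheaper.
    $join = "SELECT orders.*
               FROM orders
                    INNER JOIN customers ON orders.customer_id = customers.id
              WHERE customers.country = 'UK'";

    // And the supporting index on the joined column, if it does not already exist.
    $index = "CREATE INDEX idx_orders_customer_id ON orders (customer_id)";
    ?>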

Load Testing

Functional testing is an important part of QA, but load testing is of equal importance in ensuring the overall quality of your finished project.

In a nutshell, load testing entails simulating a number of simultaneous connections to the Web server and carrying out typical user flow scenarios over each one. At the same time, the performance of the scripts should be measured as the load increases, as should the impact on the server as a whole.

A table of data can then be produced that, properly analyzed, tells you that at a given number of simultaneous users, the response time will be n. As the number of simultaneous users increases, n will obviously tend toward infinity; the key questions are at what point it reaches an unacceptable value and, hence, what the maximum number of simultaneous users is that the setup in question can support.

Excellent packages such as ApacheBench (http://codeflux.com/ab/) can simulate the very simplest of scenarios, while more sophisticated commercial software such as LoadRunner can recreate more realistic ones by randomly varying the interval between each user's requests and requesting varying sequences of pages, better matching the behavior of a real human user.
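
For instance, a basic ApacheBench run that simulates 50 concurrent clients making 1,000 requests in total looks like this (the staging URL is, of course, a placeholder):

    ab -n 1000 -c 50 http://staging.example.com/catalog.php

The resulting report includes figures such as requests per second and the mean time per request, which feed directly into the table of results described earlier.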

When load testing, you should endeavor to use another server, or ideally several servers, to act as clients connecting to the live Web server(s). This ensures that the load-testing software itself does not place a burden on your Web server. In addition, you should always try to test against the live server running in a staging mode or, if that is not feasible, against an accurate recreation of the live environment and configuration on another server, to ensure that the results are representative.

It can often be helpful to present the results of your tests (assuming that they are positive) in a nontechnical manner to your client. It will help the client plan for future expansion against his or her own commercial or operational objectives.


