This section reviews the important checkpoints you should complete before beginning your performance analysis.
Each component in your hardware environment plays an important role in the responsiveness of your MySQL solutions. Ideally, your production and performance testing environments will match as many of the following items as closely as possible:
In an ideal world, your test system will be a mirror image of your production server. Unfortunately, most readers will be fortunate if they even have a test machine, let alone a dedicated performance analysis platform. If this is the challenge you face, it makes more sense to run your tests on the production platform during periods of low activity.
Regrettably, this is likely to translate into a series of ruined nights and weekends for you. Testing on the production systems is discussed in the "Organizational Support" section later in this chapter.
Many MySQL installations service a wide cross section of users, including those connected via a local area network (LAN), a corporate wide area network (WAN), and over the Internet. Accurate performance testing demands that you attempt to replicate the connectivity profiles of all your users.
For example, suppose that remote users are connecting to your application via a web browser over the Internet and complaining of performance bottlenecks. To further complicate things, only a certain percentage of all remote browser users have performance complaints; the majority of the user community has no objections.
Before you even begin testing, you should do all that you can to describe and simulate all facets of the user experience, especially the connectivity components. You might be surprised to learn that the users who complain the most about browser access suffer from external performance impediments beyond your control, such as firewall issues, slow dial-up lines, proxy server problems, and so on. In certain cases, the users might even be running old, underpowered, or badly configured client machines. MySQL tuning will have little impact on these types of predicaments. Knowing these handicaps in advance will save you a tremendous amount of time, as well as greatly reduce the risk of taking action based on incorrect analysis.
After you've obtained the right hardware and connectivity profile for your testing, the next step is to locate, install, and configure the correct software.
At this point, you have done your best to replicate your production hardware, software, and connectivity profiles onto a test platform. Your next task is to assemble a data set to examine during the testing process. The ideal scenario is to replicate the entire production database, including tables, storage engine selection, indexes, views, triggers, and stored procedures.
Using the production schema with a small data set is a common error during testing. If your test database is too small, you might get an incorrect view of your true performance profile: The most inefficient database operation will blaze through a tiny set of information, even if the underlying database itself is poorly designed. In addition, the MySQL query optimizer might generate different plans for two identical databases if the number of rows is vastly different between them.
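The optimizer's sensitivity to row counts is easy to observe with EXPLAIN. The following sketch uses a hypothetical 'customer' table and an assumed index on 'lastName'; the names are illustrative, not from any real schema:

```sql
-- Hypothetical table and column names, for illustration only.
-- Run against a tiny test copy, EXPLAIN often reports type = ALL
-- (a full table scan) even though an index on lastName exists.
-- Run against a production-sized copy, the same statement typically
-- reports type = ref and uses the index, so timings gathered on the
-- small table tell you little about production behavior.
EXPLAIN SELECT *
FROM customer
WHERE lastName = 'Bernbaum';
```

Comparing the EXPLAIN output from both instances before trusting any timing numbers is a quick way to confirm that your test data set is large enough.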
Fortunately, it's easy to make a full copy of a database. Use MySQL's mysqldump utility to export the data from the production instance to the test instance. Of course, if you're only concerned with a performance problem that affects a small subset of your production tables, it's probably safe to replicate only that portion of your database. However, it's still essential to duplicate the indexes, views, and so on.
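A minimal sketch of this export/import cycle follows; the host names, user, and database names are placeholders you would replace with your own. The --routines and --triggers options ask mysqldump to include stored procedures and triggers along with the tables:

```shell
# Copy the full production database to the test instance.
# Host, user, and database names are placeholders.
mysqldump --host=prod-host --user=dba --password \
          --routines --triggers \
          production_db > production_db.sql

mysql --host=test-host --user=dba --password test_db < production_db.sql

# To replicate only the tables involved in a problem query,
# name them after the database:
mysqldump --host=prod-host --user=dba --password \
          production_db customer problem > subset.sql
```

Remember that a partial export like the last command still needs the supporting indexes, views, and so on to be present on the test side.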
Your Test Plan
Putting together a structured test plan takes time and discipline. You might feel that this work is a distraction from your performance-tuning tasks. However, if you invest the effort up front, you will realize significant benefits throughout the entire testing cycle. Your test plan will give you a clear sense of direction. If you have a multiperson testing team, a test plan is essential for avoiding the duplication of work and confusion that typically plague group performance analysis projects. The test plan will also form the nucleus of your report after you finish your analysis.
Test plans don't need to be masterpieces of design that take weeks to implement: You only need a road map that shows the steps you plan to take when conducting your experiments, along with any external assistance you might need.
For example, look at these entries from a sample test plan:
Test #: 203
Date: 5-October-2005
Start time: 11:14
Finish time: 11:19
Platform: Production 2
Action: Changed join syntax between 'customer' and 'problem' table
Special needs: None
Test dependencies: 175, 182
Team members: Bernard Bernbaum, Edward Dane
Test status: Complete
Test results: Query response improved 27%

Test #: 204
Date: 6-October-2005
Start time: 09:29
Finish time: 09:47
Platform: Testing 1
Action: Raised 'sort_buffer_size' by 25%; reran marketing's problem 'ORDER BY'
Special needs: None
Test dependencies: 201
Team members: Tom Reagan
Test status: Complete
Test results: No impact on problem query

Test #: 205
Date: 17-October-2005
Start time: 17:00
Finish time: 18:00
Platform: Production 1
Action: Create additional index on 'salesProbability' column; rerun problem query
Special needs: May impact reports
Test dependencies: <None>
Team members: Dale Levander, John Caspar
Test status: <not yet started>
Test results: <not completed>
You can store your test plan in a spreadsheet or within a MySQL database for widespread access.
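If you choose the database route, a table mirroring the fields in the sample entries might look like the following sketch. The table name, column names, and sizes are assumptions for illustration:

```sql
-- Illustrative schema for shared test-plan storage; the columns
-- mirror the sample test-plan entries above, and the sizes and
-- ENUM values are assumptions.
CREATE TABLE test_plan (
    test_number   INT UNSIGNED NOT NULL PRIMARY KEY,
    test_date     DATE NOT NULL,
    start_time    TIME,
    finish_time   TIME,
    platform      VARCHAR(40) NOT NULL,
    action        VARCHAR(255) NOT NULL,
    special_needs VARCHAR(255),
    dependencies  VARCHAR(100),
    team_members  VARCHAR(255),
    test_status   ENUM('not started','in progress','complete')
                  NOT NULL DEFAULT 'not started',
    test_results  VARCHAR(255)
) ENGINE = InnoDB;
```

A shared table like this lets every team member query which tests are complete, pending, or dependent on others.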
Change One Variable at a Time
The fastest way to an ulcer is to attempt to make sense of your results after changing two, three, or even more performance-related variables at one time. Your testing experience will be much more successful and pleasant if you methodically change one variable at a time, run your test(s), record results, and then move on to the next candidate.
For example, suppose that you are faced with a transaction that takes too long to complete. You suspect a number of problems, including the following:
In your eagerness to fix all of these problems at once, you make radical alterations in each of the preceding areas. You are happy to see a dramatic boost in responsiveness of your slow transaction. Your glee is short-lived, however: Your changes have damaged performance for other types of operations. You must now go back and undo your changes, and determine which ones heightened performance and which ones caused collateral damage. It would have been far better to make one change and then measure its impact before proceeding.
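Test #204 in the sample plan is a good model of this discipline: alter a single server variable, rerun the problem query, and then restore the original setting. A sketch, with illustrative values:

```sql
-- Record the current value so the change can be undone later.
SELECT @@session.sort_buffer_size;

-- Change exactly one variable, and only for this session, so other
-- connections are unaffected while you measure. The new value here
-- is illustrative.
SET SESSION sort_buffer_size = 4 * 1024 * 1024;

-- Rerun the problem ORDER BY query, record its timing in the test
-- plan, and then restore the value you noted above before moving on
-- to the next candidate variable.
```

Session-level changes are especially convenient for this style of testing because they disappear when you disconnect, leaving the server as you found it.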
By this time, it's fairly safe to say that you've invested a significant amount of effort obtaining and configuring hardware, installing the correct software and patches, ensuring connectivity, and building your test plan. The last hurdle is often the most difficult to overcome: gaining sufficient organizational support for your performance-tuning efforts.
Management might be reluctant to grant you adequate time and resources. Users might be unwilling to help isolate a system performance problem, even though their daily lives are negatively affected by these types of issues. The original designers and developers of the MySQL-based solution might erect roadblocks, fearing the ramifications of your report on their careers. These obstacles might be magnified if the solution was provided by a third-party vendor.
Your performance testing and tuning mission will fail without ample support from all of these communities. Before beginning your work, you have the right to ask for assistance in each of these areas: