MyISAM and Memory

Aside from the engine-neutral server settings described in the preceding chapter, the MyISAM key cache is the primary memory-based structure that administrators can configure. This section explores its architecture, as well as how to set up, monitor, and tune its behavior.

Before getting started, recall that there actually are several memory caches built in to MySQL:

  • Key cache. This is the cache that buffers index information for tables that use the MyISAM storage engine.

  • Buffer pool. This is InnoDB's answer to MyISAM's key cache, except it holds both indexes and data. This is discussed more in Chapter 12, "InnoDB Performance Enhancement."

  • Memory pool. This is a MySQL internal buffer that caches data dictionary information and other server structures. It only benefits tables that use the InnoDB storage engine, so it is also discussed in Chapter 12.

  • Query cache. This cache holds queries and their results, and is of use to all MySQL users, regardless of their choice of storage engine. This cache was discussed in Chapter 10, "General Server Performance and Parameters Tuning," in its review of general engine settings.

The Key Cache

Because accessing information from a disk drive is orders-of-magnitude more expensive than the same operation performed in memory, it's always beneficial to leverage memory and reduce the interaction between the database server and its related disks. The MyISAM key cache is specially designed for this purpose, and focuses exclusively on indexed information.

Although it's beyond the scope of this book to deliver a detailed architectural description of this highly sophisticated engine module, it's still worthwhile to explore it at a high level before discussing its configuration and monitoring.

It's best to think of the key cache as an optional, administrator-defined and configured block of memory that holds index information from one or more tables. The primary unit of storage is a block, which can contain index details from a single table or even a mixture of tables.

When attempting to create, update, or retrieve indexed information from a particular table, MySQL first looks into this cache to see if the relevant information is obtainable in memory. If it is, MySQL performs an extremely fast read from, or write operation into, the key cache. If not, MySQL identifies one or more key cache entries for replacement. These replacement candidates are then removed from the key cache; if their index information has changed, MySQL writes the new values back to disk. After they are out of the way, MySQL uses the space for the new information.

The following lists a few more significant facts about the key cache:

  • All index blocks are implicitly time stamped. MyISAM uses a queue mechanism to store blocks. This lets MySQL know which blocks are the "oldest" (that is, are at the front of the queue and therefore have been present in the key cache for the longest amount of time). This is important because the key cache has limited space, and MySQL might need to swap out an existing block to make room for a new block. This is known as the "Least Recently Used" (LRU) approach.

  • Blocks might change while in the cache. For example, if you change someone's last name from Smith to Smythe, and this column is indexed, the key cache's index block is updated. As soon as it becomes necessary to remove the updated block from the cache (to satisfy a new request), MySQL writes the updated block back to disk, thereby making the index change permanent.

  • As long as you have sufficient memory, you can elect to deploy multiple key caches in addition to the default key cache offered by MySQL. This is typically done to separate different processing profiles into their own caches, a feature introduced in MySQL 4.1.

    For example, you can choose to create one key cache to support highly dynamic, transactional usage, and another designed for decision support, in which the data doesn't change very often. Separating the key caches means that these very different applications aren't likely to interfere with each other's usage of the key cache.

  • To improve concurrency, multiple threads may work with the cache all at once. However, if changes are being made to a particular block, other threads that need to access that block must briefly wait until the update is complete.

  • In addition to the LRU method driven by the ordered queue, MySQL offers an optional, sophisticated algorithm to control replacements in the key cache. Known as the "midpoint insertion strategy," it distinguishes between different degrees of index block utilization, which has a significant effect on which blocks are identified for replacement.

  • When enabled, MyISAM divides the queue of potential candidates into two segments: a "hot" list (that is, blocks that are being heavily accessed and, hence, should remain in the key cache) and a "warm" list, containing less "popular" blocks. Although this chapter doesn't devote much space to this topic, its configuration is explored momentarily.
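The server variables that govern these behaviors can be inspected from any client session. A quick sketch: in MySQL 4.1 and later, the output of the following statement includes key_buffer_size, key_cache_block_size, key_cache_division_limit, and key_cache_age_threshold, all of which are covered in the configuration steps that follow:

     mysql> SHOW VARIABLES LIKE 'key%';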

Configuring the Key Cache

Now that you've seen how the key cache works, this section discusses how to configure it and control its behavior. You'll create a group of sample key caches at the end of this section.

To get started, follow these steps:

1. Decide if you want to deploy multiple key caches, and if so, how many. Whether you employ one, two, or ten key caches isn't as important as understanding the performance role they play, as well as how to configure, monitor, and then tune them.

2. Set the appropriate variables. You have several decisions to make for each of the key caches you create:

  • Sizing the in-memory key cache blocks. The key_cache_block_size setting determines how many bytes make up each block in the key cache. The default of 1,024 bytes should be good enough for most applications; future releases of MySQL will make changing this parameter more meaningful.

  • Allocating memory to your key cache(s). This is an important decision: Setting the key_buffer_size variable too low means that you won't realize the full benefits of key caching. Setting it too high wastes precious memory, and might introduce performance anomalies elsewhere.

    As stated previously, every environment is different, so off-the-cuff recommendations are hard to make. With that said, a good starting point is to allocate approximately 5% to 10% of your system's main memory to your combined group of key caches; you can grant more space if your InnoDB memory requirements are low. It's always better to start small and then request more resources later.

  • Determining whether to use the midpoint insertion strategy. This optional algorithm was described earlier. If you don't want to employ this approach, simply ensure that key_cache_division_limit is set to 100. Otherwise, set it to the desired dividing line between the hot and warm index block lists. For example, when set to 30, MySQL ensures that the warm list contains no less than 30% of the blocks in the key cache, whereas the remaining blocks are associated with the hot list.

Alas, some blocks are not meant to live out their existence in the hot list and face downgrading onto the warm list, where they will likely be quickly swapped out for a newer block. The key_cache_age_threshold determines how long a block can remain in the most-unused part of the hot list before it is cast out onto the warm list.
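To pull these settings together, here is a minimal sketch of how they might appear in the server's option file. The cache names (oltp, dss) and all of the numbers are illustrative assumptions for the example developed later in this section, not recommendations:

     [mysqld]
     # Default key cache: 256MB, made up of 1KB blocks
     key_buffer_size           = 256M
     key_cache_block_size      = 1024

     # Midpoint insertion strategy: the warm list holds at least 30% of blocks,
     # and unused hot blocks are demoted once the age threshold is reached
     key_cache_division_limit  = 30
     key_cache_age_threshold   = 300

     # Two additional named key caches, set via structured system variables
     oltp.key_buffer_size      = 75M
     dss.key_buffer_size       = 75M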

3. Associate your indexes with their designated caches. The CACHE INDEX statement does the actual assignment between your key caches and the indexes that will be placed in them. Each key cache has its own copy of the structured system variables that control it, so those settings have scope only in the context of that particular cache.
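Because each named cache has its own copy of these variables, they are referenced with the cache name as a prefix. A brief sketch, using a purely hypothetical cache named my_cache and an arbitrary 64MB size:

     mysql> SET GLOBAL my_cache.key_buffer_size = 64 * 1024 * 1024;
     mysql> SELECT @@my_cache.key_buffer_size;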

4. Decide if you want your key cache preloaded. MySQL gives you the choice of having your key cache loaded with data "on the fly" (that is, the key cache fills in as rows are requested from the server) or preloaded. If you decide to preload your key cache, use the LOAD INDEX statement. You can configure the amount of memory allocated for this load operation via the preload_buffer_size setting.
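For example, you might enlarge the preload buffer for your session before issuing the load. A minimal sketch, in which the 64KB figure is simply an assumption (the default is smaller) and customer_master is the sample table used later in this section:

     mysql> SET SESSION preload_buffer_size = 64 * 1024;
     mysql> LOAD INDEX INTO CACHE customer_master IGNORE LEAVES;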

5. Monitor your key caches. You have a number of ways to keep an eye on your key cache performance. The most effective is by using the MySQL Administrator, which is explored in a moment.

6. If you want to clear the contents of your key cache(s), you can either stop and then restart MySQL or simply alter its key_buffer_size setting. When faced with this kind of request, MySQL synchronizes the key cache with the disk in an orderly manner, and then clears the cache's contents.
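As a sketch, either of the following statements triggers that behavior; the dss cache name and the new size are carried over from this chapter's examples. Setting a nondefault cache's size to zero flushes and destroys that cache, whereas assigning the default cache a new size flushes and rebuilds it:

     mysql> SET GLOBAL dss.key_buffer_size = 0;
     mysql> SET GLOBAL key_buffer_size = 128 * 1024 * 1024;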

Now that you've identified the steps to follow to configure your key cache, let's walk through an example. For the purposes of this illustration, let's use three caches: one key cache for OLTP-style access, one for DSS applications, and the default key cache.

Our database server has 4GB of memory, so using the 10% rule discussed earlier, you can allocate a total of approximately 400MB to the combined group of key caches:

  • The default key cache will receive 256MB via the key_buffer_size parameter. You'll then preload the cache with indexes for your largest, widely accessed, mixed-use table. Both steps are shown below.
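If the allocation is made at runtime rather than in the option file, it might look like the following sketch; the preload statement and its output then follow:

     mysql> SET GLOBAL key_buffer_size = 256 * 1024 * 1024;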

     mysql> LOAD INDEX INTO CACHE customer_master IGNORE LEAVES;
     +---------------------------+--------------+----------+----------+
     | Table                     | Op           | Msg_type | Msg_text |
     +---------------------------+--------------+----------+----------+
     | high_hat.customer_master  | preload_keys | status   | OK       |
     +---------------------------+--------------+----------+----------+
     1 row in set (0.02 sec)

  • Grant 75MB apiece to the OLTP and DSS key caches, assign the indexes from the appropriate tables to those caches, and then preload the indexes. The memory grants are sketched first, followed by the server's output for the assignments and preloads.
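Again assuming the grants are made at runtime (option-file entries such as oltp.key_buffer_size=75M would work equally well):

     mysql> SET GLOBAL oltp.key_buffer_size = 75 * 1024 * 1024;
     mysql> SET GLOBAL dss.key_buffer_size  = 75 * 1024 * 1024;

With the memory in place, assign and preload the indexes: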

     mysql> CACHE INDEX customer_address in dss;
     +---------------------------+--------------------+----------+----------+
     | Table                     | Op                 | Msg_type | Msg_text |
     +---------------------------+--------------------+----------+----------+
     | high_hat.customer_address | assign_to_keycache | status   | OK       |
     +---------------------------+--------------------+----------+----------+
     1 row in set (0.01 sec)

     mysql> LOAD INDEX INTO CACHE customer_address IGNORE LEAVES;
     +---------------------------+--------------+----------+----------+
     | Table                     | Op           | Msg_type | Msg_text |
     +---------------------------+--------------+----------+----------+
     | high_hat.customer_address | preload_keys | status   | OK       |
     +---------------------------+--------------+----------+----------+
     1 row in set (0.00 sec)

     mysql> CACHE INDEX transactions in oltp;
     +-----------------------+--------------------+----------+----------+
     | Table                 | Op                 | Msg_type | Msg_text |
     +-----------------------+--------------------+----------+----------+
     | high_hat.transactions | assign_to_keycache | status   | OK       |
     +-----------------------+--------------------+----------+----------+
     1 row in set (0.01 sec)

     mysql> LOAD INDEX INTO CACHE transactions IGNORE LEAVES;
     +-----------------------+--------------+----------+----------+
     | Table                 | Op           | Msg_type | Msg_text |
     +-----------------------+--------------+----------+----------+
     | high_hat.transactions | preload_keys | status   | OK       |
     +-----------------------+--------------+----------+----------+
     1 row in set (0.00 sec)

Monitoring and Tuning the Key Cache

After configuring your key caches, you can then create a set of MySQL Administrator graphs to help you understand their activity, as shown in Figure 11.1.

Figure 11.1. A collection of MySQL Administrator graphs designed to report about the key cache.


These graphs include information about the following:

  • Key cache consumption. To determine how much of the key cache is currently being used, multiply the ever-changing value of Key_blocks_used by the static key_cache_block_size. Then, compare the result with the overall amount of key cache space, represented by key_buffer_size; this yields the current key cache resource consumption. (A sketch showing how to gather and compute these figures by hand appears after this list.)

    Use this bar graph to help determine if your key cache is sized correctly. For example, if it quickly fills up, there is a decent possibility that you need to allocate more memory, unless you are preloading the cache or your access patterns vary over time. On the other hand, if it remains relatively unused, you might be able to return some of its memory to the operating system or redistribute it to other MySQL caches. Remember to check these values over time; making a snap decision based on 10 minutes of activity is not wise.

  • Read hit rate percentage. To determine the key cache hit rate (that is, the number of times that MySQL was able to find the required information in the key cache rather than having to go to disk), use the following formula:

     100 - (Key_reads / Key_read_requests) * 100

    How should you interpret these results? If you consistently see a high hit rate (in excess of 90%), chances are that your key cache is tuned efficiently.

    A very low hit rate can mean that your key cache is undersized relative to your needs, and that you should allocate more memory. It's also possible, however, that your database is simply so big and/or accessed so randomly that MySQL can't efficiently place index information in the key cache. In this case, it's likely that no key cache will be big enough (unless your memory and data are equivalent in size), so your goal should be to boost the size of the key cache (along with the hit rate) with the understanding that it will never be perfect.

    Finally, a hit rate of 99% or 100% might paradoxically mean that you have oversized your key cache, usually at the expense of another component of your server. When you see such high hit rates, consider gradually reducing the key cache's memory until you reach the point at which the hit rates begin to deteriorate.

  • Write hit rate percentage. This graph measures the relationship between key writes (the number of key block writes to disk) and their associated key write requests (the number of key block writes into the key cache) as follows:

     100 - (Key_writes / Key_write_requests) * 100

    Because key write operations typically touch a more dispersed, random set of data, it's natural to expect a lower cache hit rate than for read operations. You will notice exceptions to this rule for large-scale write operations, such as bulk data loads or large index creation.

  • Key cache reads versus disk reads. The next two graphs display the varying number of times that MySQL was able to read information from the key cache (Key_read_requests) rather than the disk (Key_reads).

  • Key cache writes versus disk writes. These next two graphs also track the number of times the key cache was used. In this case, the successful key cache write requests (Key_write_requests) are contrasted with those that were written to disk (Key_writes).
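If you prefer to compute these figures yourself rather than relying on the graphs, the raw numbers are available from any client session. A minimal sketch that simply restates the formulas above; the arithmetic is left to you, because these status values can't be referenced directly within a single query in this release:

     mysql> SHOW STATUS LIKE 'Key%';
     mysql> SHOW VARIABLES LIKE 'key_buffer_size';
     mysql> SHOW VARIABLES LIKE 'key_cache_block_size';

     -- consumption    = (Key_blocks_used * key_cache_block_size) / key_buffer_size * 100
     -- read hit rate  = 100 - (Key_reads  / Key_read_requests)  * 100
     -- write hit rate = 100 - (Key_writes / Key_write_requests) * 100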

Be careful as you monitor your system: Unfortunately, there is no way to observe the hit rates for any nondefault key caches.

Take a look at a few real-life MyISAM activity profiles. In Figure 11.2, a series of read-only queries are run against a disparate group of MyISAM tables.

Figure 11.2. MySQL Administrator graphs showing key cache metrics on an active, read-intensive system.


As would be expected, there are only indicators of read activity; no write operations are under way. In addition, according to this graph, the hit rate is respectable even though the key cache hasn't filled up. If this profile persists, you might be able to reduce the default key cache from its current 256MB level to somewhere around 200MB, and adjust the customized key caches as well.

Next, a mixture of read-focused and write-focused applications are launched, as shown in Figure 11.3.

Figure 11.3. MySQL Administrator graphs showing key cache metrics on an active, mixed read-and-write system.


Why is there such a disparity between the near-perfect read cache hit rate and the (apparently) awful write cache hit rate? Recall from earlier in this section that in the absence of large, sequential, key-affecting operations like bulk inserts or index building, write operations are typically more random and dispersed than read operations. Hence, the discrepancy between read and write cache hit rates.

Watch what happens in Figure 11.4, though, in which a series of write-intensive applications that update blocks of approximately 300 rows at a time are launched.

Figure 11.4. MySQL Administrator graphs showing key cache metrics on an active, write-intensive system.


Now you see great hit rates for both read and write access to the key cache. The writing applications' WHERE clauses are meeting with success in their quest to locate index information in the key cache, and MyISAM is able to buffer their updates in this cache as well.
