InnoDB and Memory


This section describes how to configure, monitor, and tune InnoDB's memory-based buffering and caching capabilities for optimal performance.

Buffer Pool

Most modern database servers take advantage of system memory to increase performance by reducing and/or delaying interaction with the much slower disk drives. InnoDB is no exception: Its buffer pool offers sophisticated data analysis and caching algorithms that translate into significant response-time improvements.

The buffer pool contains many internal structures that determine what information is placed in the cache, how it is synchronized to disk, and what should be removed from the cache. The design and details of these internals are beyond the scope of this book, but it's important to understand that they exist and play a vital role in augmenting database speed.

Database administrators set a number of key server variables that control the overall size of the buffer pool, along with its behavior. The following sections look at how to configure, monitor, and tune this essential module.

Buffer Pool Configuration

The server variable that controls the amount of memory to be allocated to the buffer pool is innodb_buffer_pool_size. How much memory should you apportion to your buffer pool? The conventional wisdom ranges from 50% to 80% of all available memory, which is a good starting point. However, before picking a value, you should ponder several key considerations:

  • Server purpose You can safely allocate more memory to the buffer pool if your server is dedicated to MySQL. On the other hand, if your server is tasked with many responsibilities (for example, web server, application server, development, and so on), you should be careful not to consume more memory than is absolutely necessary.

  • Overall available system memory Even if your server is dedicated to MySQL, memory is still a finite resource. Devoting up to 80% of available RAM to the buffer pool can place a heavy tax on overall system responsiveness.

  • Processing profile A write-intensive database server has very different buffer pool needs than one that primarily retrieves information. This is one reason why it's so important to monitor your buffer pool; you might discover that you have allocated excessive memory to this cache, which penalizes other aspects of your server.

    If you're faced with the need to load a large amount of data, you might also decide to temporarily boost this value until after the load is complete.

  • Storage engine mix Finally, as you go about the task of deciding on a buffer pool size, it's vital that you consider your environment's unique blend of storage engines. For example, if you are heavily skewed toward the MyISAM storage engine, allocating abundant memory to the InnoDB buffer pool doesn't help things at all. In fact, it might make them worse. It's easy to imagine a scenario in which the InnoDB buffer and memory pool vie with the MyISAM key cache and all-purpose query cache.

    Conversely, an InnoDB-intensive situation warrants an extra dose of buffer pool memory, but this might also cause conflict with other caches; this is especially true when the other caches themselves are oversized relative to their actual workload.
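Once you have weighed these factors, the setting itself is a single line in your configuration file. The following my.cnf fragment is only a sketch for a hypothetical dedicated server with 4GB of RAM; the value shown is an illustrative assumption, not a recommendation:

```ini
# my.cnf -- illustrative values for a hypothetical dedicated
# 4GB server running a mostly InnoDB workload
[mysqld]
# Roughly 50% of RAM as a conservative starting point;
# monitor the buffer pool hit rate before raising it further
innodb_buffer_pool_size = 2048M
```

Note that changing this variable requires a server restart, so it pays to monitor for a meaningful period before each adjustment.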

After settling on a buffer pool size, you then need to manage additional necessities for this cache. Given that the buffer pool is the main in-memory cache for InnoDB index and data information, two important requirements spring to mind:

  • It's vital that altered pages are periodically persisted to disk. Otherwise, data will be lost if the system is unexpectedly shut down.

  • There must be room in the pool for new data/index information to be loaded from disk and/or entered by user interaction with the database.

The innodb_max_dirty_pages_pct server variable helps satisfy both of these obligations. With a range from 0 to 100, it instructs the primary InnoDB thread to synchronize (that is, flush) dirty pages to disk. For example, if set to 80, InnoDB does not allow more than 80% of the pages in the buffer pool to be marked as dirty. Instead, it flushes enough pages to bring the dirty page percentage in line with this parameter.
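For instance, you could adjust and observe this setting from the mysql client. This is a sketch; the value of 75 is purely illustrative:

```sql
-- Check the current ceiling on dirty pages
SHOW VARIABLES LIKE 'innodb_max_dirty_pages_pct';

-- Lower the ceiling so the main InnoDB thread flushes more
-- aggressively; this variable is dynamic, so no restart is needed
SET GLOBAL innodb_max_dirty_pages_pct = 75;

-- Watch the effect on the dirty-page count over time
SHOW STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
```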

However, in reality, it's fairly unlikely that adjusting this setting will make much of a difference in resolving a performance bottleneck. For example, if your system is configured with insufficient memory for the computing tasks at hand, the buffer pool page contention issue that you are attempting to address with this parameter is merely a symptom of the underlying problem: inadequate memory for the server's processing load.

In recent years, Microsoft introduced enhanced virtual memory capabilities into its server operating systems. Known as Address Windowing Extensions (AWE), this feature gives administrators access to substantially larger amounts of memory than the previous 4GB limit of 32-bit Windows.

To make this memory available to the InnoDB buffer pool, set the innodb_buffer_pool_awe_mem_mb server variable to the amount of additional memory (in megabytes) that you want to use, up to a limit of 64GB.
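A minimal configuration sketch, assuming a 32-bit Windows server with AWE enabled; the figure of 8192MB is purely illustrative:

```ini
# my.cnf on a 32-bit Windows server with AWE enabled (illustrative)
[mysqld]
# Amount of AWE memory, in megabytes, devoted to the buffer pool
innodb_buffer_pool_awe_mem_mb = 8192
```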

Chapter 10, "General Server Performance and Parameters Tuning," explains the differences among MySQL's in-memory caches. Recall that in addition to the buffer pool, InnoDB also employs a cache (known as the memory cache) to accumulate RAM-based information about database structure as well as other internal details.

As an administrator, you have control over the size of this cache by setting the innodb_additional_mem_pool_size parameter. If you are running a very large and/or complex database and insufficient memory is assigned to the memory cache, MySQL requisitions supplementary memory from the operating system. In most cases, however, the out-of-the-box default value should suffice.

Buffer Pool Monitoring and Tuning

To help give you a better idea of how well your buffer pool is being used, you can create a collection of custom MySQL Administrator graphs that use the status variables introduced in version 5.0.2, as shown in the following two examples. The first explores a number of buffer pool specific indicators, whereas the second reports on InnoDB page and row operations.

To begin, Figure 12.5 shows an example of monitoring a 256MB buffer pool's state just after InnoDB has started, but prior to any activity.

Figure 12.5. Monitoring a quiet InnoDB buffer pool.


Because the next part of this chapter also refers to this custom MySQL Administrator graph, take a moment to explore its components. From top to bottom, they include the following:

  • Buffer pool used pages This bar graph tracks the value of innodb_buffer_pool_pages_data (the total number of pages in use) versus the total amount of available buffer pool space, represented by innodb_buffer_pool_pages_total.

    Notice how much space is available in the buffer pool after starting the server, but before any activity. At this point, any used pages are controlled by InnoDB for its own internal purposes.

  • Dirty pages in the buffer pool Recall that a dirty buffer pool page is simply a page where the underlying data or index has been modified, so it must eventually be written back to disk to preserve database integrity. This is tracked via innodb_buffer_pool_pages_dirty, again comparing it to the total amount of space in the buffer pool: innodb_buffer_pool_pages_total.

  • Buffer pool hit rate This is a very important graph, one that serves as the foundation of determining whether the buffer pool is sized correctly for your needs. The formula used for this graph is as follows:

    100 - (100 * (^[innodb_pages_read] / ^[innodb_buffer_pool_read_requests]))

    What does this mean? This simply tracks the proportion of page reads that were satisfied by examining the buffer pool, rather than by going to disk.

    A high value means that MySQL was frequently able to find what it needed in the buffer pool; a lower number means that the required pages were often missing from the buffer pool, forcing reads from disk. Some examples of buffer pool hit rates appear a little later in this chapter.

  • Buffer pool read requests This graph monitors the changes from moment to moment of the innodb_buffer_pool_read_requests value. It helps you to see if you are experiencing spikes or lulls in demand for access to information in the buffer pool.

  • Buffer pool sequential read-ahead activity As you saw earlier, InnoDB offers sophisticated algorithms to determine if a program is requesting significant amounts of sequential data, typically via a table scan. To monitor this activity, you can check the relative values of the innodb_buffer_pool_read_ahead_seq status variable. A rising trend means that InnoDB is performing more table scans.

  • Buffer pool random read-ahead activity InnoDB's read-ahead algorithm is also useful when the user is requesting large amounts of nonsequential information. You can monitor this by watching the value of innodb_buffer_pool_read_ahead_rnd.

  • Write requests into the buffer pool To examine the relative rates of buffer pool modifications, you can track the continually changing values of the innodb_buffer_pool_write_requests status variable.

  • Buffer pool pages flushed As discussed earlier, MySQL periodically synchronizes pages from the buffer pool to disk: This ensures that data is not lost. To watch the relative amounts of page synchronization activity, you can screen the value of innodb_buffer_pool_pages_flushed.

  • Number of waits for space in the buffer pool If there is insufficient space in the buffer pool, InnoDB needs to wait until a buffer pool flush event has occurred before it can write information onto a newly freed page. The innodb_buffer_pool_wait_free status variable counts the number of times this type of undesirable episode has happened. A high value here means that your buffer pool is undersized for your needs.

    The error log receives a message similar to the following one if InnoDB has repeated difficulty in finding free blocks in the buffer pool:

 061230 15:55:47  InnoDB: Warning: difficult to find free blocks from
 InnoDB: the buffer pool (285 search iterations)! Consider
 InnoDB: increasing the buffer pool size.
 InnoDB: It is also possible that in your Unix version
 InnoDB: fsync is very slow, or completely frozen inside
 InnoDB: the OS kernel. Then upgrading to a newer version
 InnoDB: of your operating system may help. Look at the
 InnoDB: number of fsyncs in diagnostic info below.
 InnoDB: Pending flushes (fsync) log: 0; buffer pool: 0
 InnoDB: 241586 OS file reads, 279151 OS file writes, 48757 OS fsyncs
 InnoDB: Starting InnoDB Monitor to print further
 InnoDB: diagnostics to the standard output.

If you prefer character-oriented utilities, you can run SHOW INNODB STATUS. The section of its results that's relevant for the buffer pool is as follows:

 BUFFER POOL AND MEMORY
 ----------------------
 Total memory allocated 317682874; in additional pool allocated 1529728
 Buffer pool size   16384
 Free buffers       1
 Database pages     16298
 Modified db pages  57
 Pending reads 63
 Pending writes: LRU 0, flush list 2, single page 0
 Pages read 343297, created 572, written 22277
 1.00 reads/s, 0.00 creates/s, 0.00 writes/s
 Buffer pool hit rate 999 / 1000
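You can also retrieve the raw counters behind the hit-rate graph directly from the mysql client. This sketch assumes the 5.0.2+ status variables described earlier:

```sql
-- Counters used by the hit-rate graph
SHOW STATUS LIKE 'Innodb_pages_read';
SHOW STATUS LIKE 'Innodb_buffer_pool_read_requests';

-- The graph's calculation, restated:
--   hit rate = 100 - (100 * innodb_pages_read
--                          / innodb_buffer_pool_read_requests)
-- For example, 1,000 page reads against 500,000 read requests
-- works out to a hit rate of 99.8%.
```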

In Figure 12.6, a collection of heavy read-oriented processes is started; you can examine what things look like in the buffer pool.

Figure 12.6. Buffer pool activity for a read-intensive server load.


A second group of graphs, which focus on InnoDB page and row access, are shown in Figure 12.7.

Figure 12.7. Page and row activity for a read-intensive server load.


This second page tracks the following indicators:

  • Pages created The innodb_pages_created status variable tells us how many pages InnoDB is creating within the buffer pool. These pages are created when you insert new data or index values. Being interested in the trend, you can track relative values.

  • Pages read The innodb_pages_read status variable was described earlier. Recall that its purpose is to track the number of pages that InnoDB has read; relative values are tracked here as well.

  • Pages written InnoDB increments the innodb_pages_written counter each time it writes (and then flushes) an InnoDB page.

  • Rows inserted/read/updated/deleted These important indicators are tracked by watching the innodb_rows_inserted, innodb_rows_read, innodb_rows_updated, and innodb_rows_deleted status variables, respectively.
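These page and row counters can also be sampled from the mysql client with a wildcard pattern; a quick sketch:

```sql
-- Page-level activity counters (created, read, written)
SHOW STATUS LIKE 'Innodb_pages_%';

-- Row-level activity counters (inserted, read, updated, deleted)
SHOW STATUS LIKE 'Innodb_rows_%';
```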

The SHOW INNODB STATUS command generates a character-based counterpart to these graphs:

 --------------
 ROW OPERATIONS
 --------------
 8 queries inside InnoDB, 0 queries in queue
 Main thread process no. 7818, id 1839139760, state: sleeping
 Number of rows inserted 10552, updated 0, deleted 0, read 14515057
 5000.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 9000.00 reads/s
 ----------------------------

Observe that the graphs correctly reflect the heavy read activity on the server, whereas all indications of write activity are quiescent. You can see good buffer pool cache hits, especially when InnoDB is performing large, sequential read operations (particularly prevalent on the left third of the graphs).

Although you can see a good hit rate, if this was the typical activity profile for your server, it might be worthwhile to boost the size of the buffer pool to allow even more pages to be cached. However, make sure to let your statistics accumulate over a meaningful period of time before finalizing this kind of change.

To balance the load, you can introduce a collection of database processes that perform significant amounts of data alterations, as shown in Figure 12.8.

Figure 12.8. Buffer pool activity from a more balanced server load.


Things are still going well: The buffer pool cache hit rate remains very high, pages are being synchronized to disk regularly, and there have been no waits for the buffer pool to be flushed. With a consistent hit rate of 99%, in fact, you might even be able to reduce buffer pool memory if it didn't significantly lower the caching percentage. Again, however, it's important to let enough time pass before making such a change.

Finally, you can completely flip the processing profile of your server by converting its primary workload into INSERT/UPDATE/DELETE operations, as shown in Figure 12.9.

Figure 12.9. Buffer pool activity for a write-intensive server load.


The buffer pool cache hit rate is high, but observe how few used pages there are in the cache. Assuming that this pattern holds up over time, this is a good indicator of a situation in which a relatively small amount of data is heavily accessed (and consequently placed into the buffer pool) to support other operations.

A good example of this kind of scenario is a lookup table that drives data alterations in other tables. If this was the typical profile of your server, you could probably reduce the size of the buffer pool given its underutilization. Meanwhile, the write performance appears optimal: Page flushing is proceeding normally, and there have been no wait states on the buffer pool.

The previous three examples all show a well-configured, smooth-running buffer pool. How can you tell if you should alter your buffer pool settings? There are several key symptoms that indicate a problem:

  • Very low buffer pool cache hit rate Generally, but not always, a consistently low value for this indicator means that InnoDB is laboring harder than necessary: Instead of finding information in the superefficient buffer pool, it must perform the time-consuming task of retrieving data from the disk. If you can spare the memory, it is wise to augment the buffer pool with more resources and then carefully monitor the cache hit rate, along with any possible side effects.

    Unfortunately, if your database is very large, and/or features hard-to-predict access patterns, you might have to accept that your cache hit rates may never get very high.

  • Very high buffer pool cache hit rate Paradoxically, a consistent, very high (95%+) buffer pool cache hit rate might mean that you've allocated too much memory. Because memory is never free, you might be shortchanging other equally needy structures, such as the key cache, query cache, or the operating system itself. If any of these other system components are complaining of insufficient memory, consider gradually decreasing the amount of memory dedicated to the buffer pool.

    Of course, another explanation for the sky-high hit rate might be that your databases are simply relatively small in comparison to available memory. In this case, you might just have to accept your good fortune.

  • Excessive buffer pool waits The innodb_buffer_pool_wait_free indicator lets us know how many times InnoDB's buffer pool access threads had to pause while awaiting already-resident pages to be flushed to disk.

    If you observe a steadily (or rapidly) rising value for this server status variable, it's likely an indicator that you are shortchanging your buffer pool; slowly increasing its size is a good idea.

An Alternative to the Buffer Pool

Before leaving the subject of the InnoDB buffer pool, it's worth taking a moment to discuss one possible alternative to using InnoDB tables and their associated buffer pool: MEMORY tables.

These types of tables (formerly known as HEAP) are discussed in Chapter 4, "Designing for Speed." Recall that they are extremely fast and based completely in memory. Although they have numerous restrictions and limitations that keep the number of potential replacement scenarios relatively low, one very useful situation in which you can substitute them for InnoDB tables is when you have large, static lookup tables that you want to be always present in memory.

For example, suppose you are building an application that will rely on a very large, relatively unchanging table to provide lookup values. You are concerned that because of heavy system activity, it's likely that data from this table will frequently be pushed out of the buffer pool to make room for other information, thereby slowing the application. Under these circumstances, you could permanently store the underlying data in an InnoDB or MyISAM table, and then load a corresponding MEMORY table when your server boots, application starts, and so on. Your application clients would then look to this MEMORY table to find their lookup values. Assuming that there's sufficient memory to both store all data and index information in the MEMORY table and, thereby, avoid swapping or other resource contention, all necessary lookup data will be resident in memory.
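A minimal sketch of this pattern follows; the table and column names are hypothetical:

```sql
-- Permanent, disk-based copy of the lookup data
CREATE TABLE country_lookup (
    country_code CHAR(2)     NOT NULL PRIMARY KEY,
    country_name VARCHAR(60) NOT NULL
) ENGINE = InnoDB;

-- In-memory twin, rebuilt at server or application startup
CREATE TABLE country_lookup_mem (
    country_code CHAR(2)     NOT NULL PRIMARY KEY,
    country_name VARCHAR(60) NOT NULL
) ENGINE = MEMORY;

-- Populate the MEMORY table from the permanent copy;
-- application clients then query country_lookup_mem
INSERT INTO country_lookup_mem
SELECT country_code, country_name
FROM country_lookup;
```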



    MySQL Database Design and Tuning
    ISBN: 0672327651
    Year: 2005
    Pages: 131