Column Type Choices and Query Efficiency

This section provides some guidelines for choosing column types that can help your queries run more quickly.[1]

[1] In this discussion, "BLOB types" should be read as meaning both BLOB and TEXT types.

Don't use longer columns when shorter ones will do. If you are using fixed-length CHAR columns, don't make them unnecessarily long. If the longest value you store in a column is 40 bytes long, don't declare it as CHAR(255); declare it as CHAR(40). If you can use MEDIUMINT rather than BIGINT, your table will be smaller (less disk I/O), and values can be processed more quickly in computations. If the columns are indexed, using shorter values gives you even more of a performance boost: not only does the index speed up queries, but shorter index values can also be processed more quickly than longer ones.
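
For example, a minimal sketch with hypothetical table and column names:

 CREATE TABLE customer
 (
     id   MEDIUMINT UNSIGNED NOT NULL, # not BIGINT; saves 5 bytes per row
     code CHAR(40) NOT NULL            # not CHAR(255); longest value is 40 bytes
 );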

If you have a choice about row storage format, use one that is optimal for your table type. For MyISAM and ISAM tables, use fixed-length columns rather than variable-length columns. This is especially true for tables that are modified often and therefore more subject to fragmentation. For example, make all character columns CHAR rather than VARCHAR. The tradeoff is that your table will use more space, but if you can afford the extra space, fixed-length rows can be processed more quickly than variable-length rows.
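
A sketch of such a conversion, using a hypothetical table. Note that every string column must be converted, because the presence of even one variable-length column keeps the table in variable-length row format:

 ALTER TABLE mytbl
     MODIFY name    CHAR(40) NOT NULL,
     MODIFY address CHAR(80) NOT NULL;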

For InnoDB tables, the internal row storage format does not treat fixed-length and variable-length columns differently (all rows use a header containing pointers to the column values), so using CHAR is not intrinsically better than using VARCHAR. In fact, because CHAR will on average take more space than VARCHAR, it's preferable to use VARCHAR to minimize the amount of storage and disk I/O needed to process rows.

For BDB tables, it usually doesn't make much difference either way. You can try a table both ways and run some empirical tests to check whether there's a significant difference for your particular system.

Declare columns to be NOT NULL. This gives you faster processing and requires less storage. It can also simplify some queries because you don't need to check for NULL as a special case.
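
For example, with a hypothetical table:

 CREATE TABLE city
 (
     id   SMALLINT UNSIGNED NOT NULL,
     name CHAR(30) NOT NULL  # NOT NULL: less storage, no IS NULL special cases
 );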

Consider using ENUM columns. If you have a string column that contains only a limited number of distinct values, consider converting it to an ENUM column. ENUM values can be processed quickly because they are represented as numeric values internally.
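
A hypothetical conversion for a column known to hold only three distinct strings:

 ALTER TABLE mytbl MODIFY size ENUM('small','medium','large') NOT NULL;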

Use PROCEDURE ANALYSE(). If you have MySQL 3.23 or newer, run PROCEDURE ANALYSE() to see what it tells you about the columns in your table:

 SELECT * FROM tbl_name PROCEDURE ANALYSE();
 SELECT * FROM tbl_name PROCEDURE ANALYSE(16,256);

One column of the output will be a suggestion for the optimal column type for each of the columns in your table. The second example tells PROCEDURE ANALYSE() not to suggest ENUM types that contain more than 16 values or that take more than 256 bytes (you can change the values as you like). Without such restrictions, the output may be very long, and suggested ENUM declarations with many values are difficult to read.

Based on the output from PROCEDURE ANALYSE(), you may find that your table can be changed to take advantage of a more efficient type. Use ALTER TABLE if you decide to change a column's type.
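
For example, if the output suggests that a hypothetical access_count column always holds small non-negative values, you could apply the suggestion like this:

 ALTER TABLE mytbl MODIFY access_count TINYINT UNSIGNED NOT NULL;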

Use OPTIMIZE TABLE for tables that are subject to fragmentation. Tables that are modified a great deal, particularly those that contain variable-length columns, are subject to fragmentation. Fragmentation is bad because it leads to unused space in the disk blocks used to store your table. Over time, you must read more blocks to get the valid rows, and performance is reduced. This is true for any table with variable-length rows, but is particularly acute for BLOB columns because they can vary so much in size. The use of OPTIMIZE TABLE on a regular basis helps keep performance on the table from degrading.
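
The statement takes the name of the table to defragment (mytbl here is a placeholder):

 OPTIMIZE TABLE mytbl;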

OPTIMIZE TABLE works with MyISAM and BDB tables, but defragments only MyISAM tables. A defragmentation method that works for any table type is to dump the table with mysqldump and then drop and recreate it using the dump file:

 % mysqldump --opt db_name tbl_name > dump.sql
 % mysql db_name < dump.sql

Pack data into a BLOB column. Using a BLOB to store data that you pack and unpack in your application may allow you to get everything with a single retrieval operation rather than with several. This can also be helpful for data that are not easy to represent in a standard table structure or that change over time. In the discussion of the ALTER TABLE statement in Chapter 3, one of the examples dealt with a table being used to hold results from the fields in a Web-based questionnaire. That example discussed how you could use ALTER TABLE to add columns to the table whenever you add questions to the questionnaire.

Another way to approach this problem is to have the application program that processes the Web form pack the data into some kind of data structure, and then insert it into a single BLOB column. For example, you could represent the questionnaire responses using XML and store the XML string in the BLOB column. This adds application overhead on the client side for encoding the data (and decoding it later when you retrieve records from the table), but simplifies the table structure and eliminates the need to change the table structure when you change your questionnaire.
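
A sketch of this approach; the table name and the XML layout are invented for illustration:

 CREATE TABLE response
 (
     id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
     answers BLOB NOT NULL  # packed questionnaire responses
 );

 INSERT INTO response (answers)
 VALUES ('<answers><q1>yes</q1><q2>blue</q2></answers>');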

On the other hand, BLOB values can cause their own problems, especially if you do a lot of DELETE or UPDATE operations. Deleting a BLOB may leave a large hole in the table that will be filled in later by one or more records of probably different sizes. (The preceding discussion of OPTIMIZE TABLE suggests how you might deal with this.)

Use a synthetic index. Synthetic index columns can sometimes be helpful. One method is to create a hash value based on other columns and store it in a separate column. Then you can find rows by searching for hash values. However, note that this technique is good only for exact-match queries. (Hash values are useless for range searches with operators such as < or >=.) Hash values can be generated in MySQL 3.23 and up by using the MD5() function. Other options are to use SHA1() or CRC32(), which were introduced in MySQL 4.0.2 and 4.1, respectively.
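
A minimal sketch of the technique, with hypothetical table and column names:

 CREATE TABLE lookup
 (
     id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
     content BLOB NOT NULL,
     hash    CHAR(32) NOT NULL,  # MD5() returns a 32-character hex string
     INDEX (hash)
 );

 SET @val = '...some large value...';
 INSERT INTO lookup (content, hash) VALUES (@val, MD5(@val));

 # Exact-match search on the short indexed hash rather than on the BLOB itself:
 SELECT id FROM lookup WHERE hash = MD5(@val);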

A hash index can be particularly useful with BLOB columns. For one thing, you cannot index these types prior to MySQL 3.23.2. But even with 3.23.2 or later, it may be quicker to find BLOB values using a hash as an identifier value than by searching the BLOB column itself.

Avoid retrieving large BLOB values unless you must. For example, a SELECT * query isn't a good idea unless you're sure the WHERE clause is going to restrict the results to just the rows you want. Otherwise, you may be pulling potentially very large BLOB values over the network for no purpose. This is another case where BLOB identifier information stored in a synthetic index column can be useful. You can search that column to determine the row or rows you want and then retrieve the BLOB values from the qualifying rows.
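
Continuing the hypothetical lookup table from the preceding sketch, the two-step retrieval might look like this:

 # Determine which rows qualify, without transferring any BLOB data:
 SELECT id FROM lookup WHERE hash = MD5(@val);

 # Fetch the BLOB values only for the qualifying rows (assume row 37 matched):
 SELECT content FROM lookup WHERE id = 37;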

Segregate BLOB values into a separate table. Under some circumstances, it may make sense to move BLOB columns out of a table into a secondary table if that allows you to convert the table to fixed-length row format for the remaining columns. This will reduce fragmentation in the primary table and allow you to take advantage of the performance benefits of having fixed-length rows. It also allows you to run SELECT * queries on the primary table without pulling large BLOB values over the network.
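
A sketch of such a split, with hypothetical names; both tables share the same ID values:

 CREATE TABLE item            # primary table: fixed-length rows only
 (
     id   INT UNSIGNED NOT NULL PRIMARY KEY,
     name CHAR(40) NOT NULL
 );

 CREATE TABLE item_data       # secondary table holding the BLOB values
 (
     id   INT UNSIGNED NOT NULL PRIMARY KEY,
     data BLOB NOT NULL
 );

 # Pull the BLOB over the network only when it is actually needed:
 SELECT item.name, item_data.data
 FROM item, item_data
 WHERE item.id = item_data.id AND item.id = 37;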


