MySQL distributions come with a benchmark suite called run-all-tests, which you can use to test how well various DBMSs perform. To use it, you need Perl, the Perl DBI module, and the DBD module for the DBMS you want to test. Table 13.5 explains the options for run-all-tests.
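Before running the suite, it can help to verify that the Perl prerequisites are in place. The following is a minimal pre-flight sketch; it assumes you are testing MySQL (so it looks for the DBD::mysql driver), and you would substitute the DBD module for whichever DBMS you plan to benchmark:

```shell
# Check for the Perl DBI module and the MySQL DBD driver.
# Each check prints the installed version (or a "missing" note).
DBI_STATUS=$(perl -MDBI -e 'print "DBI $DBI::VERSION"' 2>/dev/null || echo "DBI missing")
DBD_STATUS=$(perl -MDBD::mysql -e 'print "DBD::mysql ok"' 2>/dev/null || echo "DBD::mysql missing")
echo "$DBI_STATUS"
echo "$DBD_STATUS"
```

If either line reports a missing module, install it (for example from CPAN) before running the benchmark.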
Option | Description |
---|---|
--comments | Adds a comment to the benchmark output. |
--cmp=server[,server...] | Runs the test with limits from the specified servers. By running all servers with the same --cmp, the test results will be comparable between the different SQL servers. |
--create-options=# | Specifies an extra argument to all create statements. For example, to create all tables as BDB tables, you would use --create-options=TYPE=BDB. |
--database | Specifies the database in which the test tables are created. The default is the test database. |
--debug | Displays debugging information. You normally only use this when debugging a test. |
--dir | Indicates where the test results should be stored. The default is the output directory. |
--fast | Allows the use of SQL commands that are not ANSI-standard to make the test go faster. |
--fast-insert | Uses fast inserts where possible, which include multiple value lists, such as INSERT INTO tablename VALUES (values),(values) or simply INSERT INTO tablename VALUES (values) rather than INSERT INTO tablename(fields) VALUES (values). |
--field-count | Specifies how many fields there are to be in the test table. Usually only used when debugging a test. |
--force | Continues the test even when encountering an error. Deletes tables before creating new ones. Usually only used when debugging a test. |
--groups | Indicates how many different groups there are to be in the test. Usually only used when debugging a test. |
--lock-tables | Allows the use of table locking to get more speed. |
--log | Saves the results to the --dir directory. |
--loop-count=# | Indicates how many times each test loop is executed. Usually only used when debugging a test. |
--help | Displays a list of options. |
--host='host name' | Specifies the host where the database server is located. The default is localhost. |
--machine="machine or os_name" | The machine/operating system name that is added to the benchmark output filename. The default is the operating system name + version. |
--odbc | Uses the DBI ODBC driver to connect to the database. |
--password='password' | Specifies the password for the user the test connects as. |
--socket='socket' | Specifies the socket to connect with (if sockets are supported). |
--regions | Specifies how AND levels should be tested. Usually only used when debugging a test. |
--old-headers | Gets the old benchmark headers from the old RUN- file. |
--server='server name' | Specifies the DBMS on which to run the test. Possible values include Access, Adabas, AdabasD, Empress, Oracle, Informix, DB2, mSQL, MS-SQL, MySQL, Pg, Solid, and Sybase. The default is MySQL. |
--silent | Does not output information about the server when the test starts. |
--skip-delete | Specifies that the test tables created are not deleted. Usually only used when debugging a test. |
--skip-test=test1[,test2,...] | Excludes the specified tests when running the benchmark. |
--small-test | Speeds up the tests by using smaller limits. |
--small-tables | Uses fewer rows to run the tests. This would be used if the database cannot handle large tables for some reason (they could have small partitions, for example). |
--suffix | Adds a suffix to the database name in the benchmark output filename. Used when you want to run multiple tests without overwriting the results. When using the --fast option, the suffix is automatically _fast. |
--random | Generates random initial values for the sequence of test executions, which could be used to imitate real conditions. |
--threads=# | Specifies the number of threads to use for multiuser benchmarks. The default is 5. |
--tcpip | Uses TCP/IP to connect to the server. This can keep the test from making many new connections in a row, because the TCP/IP stack may fill up. |
--time-limit | Specifies a time limit in seconds for a test loop; when it is reached, the loop is ended and the result estimated. The default is 600 seconds. |
--use-old-results | Uses the old results from the --dir directory instead of actually running the tests. |
--user='user_name' | Specifies the user to connect as. |
--verbose | Displays more info. Usually only used when debugging a test. |
--optimization='some comments' | Adds comments about optimizations done before the test. |
--hw='some comments' | Adds comments about hardware used for this test. |
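A common use of the --cmp option is to benchmark two servers under identical limits so the results are directly comparable. The following sketch is hypothetical (the server names and suffix are illustrative); it only prints the commands it builds, so you can review them before replacing echo with eval and running them from the sql-bench directory:

```shell
# Hypothetical comparison run: benchmark MySQL and PostgreSQL with the
# same limits (--cmp) so their results can be compared directly.
for SRV in mysql pg; do
  CMD="perl run-all-tests --server=$SRV --cmp=mysql,pg --log --suffix=_cmp"
  echo "$CMD"    # replace echo with eval to actually run each benchmark
done
```

The --suffix option keeps the two result files from overwriting each other in the output directory.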
To run run-all-tests, change to the sql-bench directory from the base directory. The following is a sample output of the benchmark:
```
% cd sql-bench
% perl run-all-tests --small-test --password='g00r002b'
Benchmark DBD suite: 2.14
Date of test: 2002-07-21 21:35:42
Running tests on: Linux 2.2.5-15 i686
Arguments: --small-test
Comments:
Limits from:
Server version: MySQL 4.0.1 alpha max log
Optimization: None
Hardware:

ATIS: Total time: 19 wallclock secs ( 5.23 usr 0.96 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
alter-table: Total time: 2 wallclock secs ( 0.12 usr 0.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
big-tables: Total time: 1 wallclock secs ( 0.43 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
connect: Total time: 8 wallclock secs ( 2.90 usr 0.66 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
create: Total time: 0 wallclock secs ( 0.15 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
insert: Total time: 31 wallclock secs ( 8.47 usr 1.43 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
select: Total time: 55 wallclock secs (17.76 usr 1.71 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
transactions: Test skipped because the database doesn't support transactions
wisconsin: Total time: 42 wallclock secs ( 9.55 usr 1.84 sys + 0.00 cusr 0.00 csys = 0.00 CPU)

All 9 test executed successfully

Totals per operation:
Operation seconds usr sys cpu tests
alter_table_add 1.00 0.07 0.00 0.00 92
alter_table_drop 0.00 0.03 0.00 0.00 46
connect 0.00 0.22 0.02 0.00 100
connect+select_1_row 1.00 0.27 0.04 0.00 100
connect+select_simple 1.00 0.27 0.04 0.00 100
count 1.00 0.13 0.00 0.00 100
count_distinct 1.00 0.13 0.02 0.00 100
count_distinct_2 1.00 0.16 0.02 0.00 100
count_distinct_big 1.00 0.12 0.04 0.00 30
count_distinct_group 1.00 0.17 0.00 0.00 100
count_distinct_group_on_key 1.00 0.13 0.01 0.00 100
count_distinct_group_on_key_parts 1.00 0.16 0.01 0.00 100
count_distinct_key_prefix 1.00 0.12 0.01 0.00 100
count_group_on_key_parts 1.00 0.09 0.00 0.00 100
count_on_key 20.00 6.11 0.60 0.00 5100
create+drop 0.00 0.01 0.00 0.00 10
create_MANY_tables 0.00 0.01 0.00 0.00 10
create_index 1.00 0.00 0.00 0.00 8
create_key+drop 0.00 0.12 0.01 0.00 100
create_table 1.00 0.03 0.00 0.00 31
delete_all_many_keys 1.00 0.08 0.00 0.00 1
delete_big 0.00 0.01 0.00 0.00 1
delete_big_many_keys 1.00 0.07 0.00 0.00 128
delete_key 0.00 0.06 0.01 0.00 100
delete_range 1.00 0.01 0.00 0.00 12
drop_index 0.00 0.00 0.00 0.00 8
drop_table 0.00 0.01 0.00 0.00 28
drop_table_when_MANY_tables 0.00 0.00 0.00 0.00 10
insert 57.00 13.90 2.51 0.00 44768
insert_duplicates 1.00 0.29 0.04 0.00 1000
insert_key 0.00 0.04 0.01 0.00 100
insert_many_fields 0.00 0.12 0.00 0.00 200
insert_select_1_key 0.00 0.00 0.00 0.00 1
insert_select_2_keys 0.00 0.00 0.00 0.00 1
min_max 1.00 0.06 0.01 0.00 60
min_max_on_key 17.00 8.12 0.64 0.00 7300
multiple_value_insert 0.00 0.03 0.00 0.00 1000
order_by_big 1.00 0.30 0.10 0.00 10
order_by_big_key 1.00 0.29 0.14 0.00 10
order_by_big_key2 1.00 0.28 0.12 0.00 10
order_by_big_key_desc 0.00 0.35 0.08 0.00 10
order_by_big_key_diff 0.00 0.35 0.05 0.00 10
order_by_big_key_prefix 1.00 0.31 0.09 0.00 10
order_by_key2_diff 0.00 0.01 0.00 0.00 10
order_by_key_prefix 0.00 0.02 0.00 0.00 10
order_by_range 0.00 0.03 0.00 0.00 10
outer_join 1.00 0.01 0.00 0.00 10
outer_join_found 1.00 0.01 0.01 0.00 10
outer_join_not_found 1.00 0.03 0.01 0.00 10
outer_join_on_key 0.00 0.01 0.00 0.00 10
select_1_row 1.00 0.27 0.06 0.00 1000
select_1_row_cache 1.00 0.18 0.07 0.00 1000
select_2_rows 1.00 0.43 0.05 0.00 1000
select_big 0.00 0.31 0.10 0.00 17
select_big_str 1.00 0.55 0.22 0.00 100
select_cache 4.00 0.95 0.18 0.00 1000
select_cache2 4.00 1.28 0.11 0.00 1000
select_column+column 1.00 0.35 0.06 0.00 1000
select_diff_key 0.00 0.02 0.00 0.00 10
select_distinct 1.00 0.30 0.06 0.00 80
select_group 4.00 0.61 0.09 0.00 391
select_group_when_MANY_tables 0.00 0.00 0.00 0.00 10
select_join 1.00 0.06 0.03 0.00 10
select_key 0.00 0.02 0.01 0.00 20
select_key2 0.00 0.02 0.00 0.00 20
select_key2_return_key 1.00 0.12 0.00 0.00 20
select_key2_return_prim 0.00 0.00 0.00 0.00 20
select_key_prefix 0.00 0.05 0.00 0.00 20
select_key_prefix_join 2.00 0.62 0.16 0.00 10
select_key_return_key 0.00 0.02 0.00 0.00 20
select_many_fields 1.00 0.31 0.10 0.00 200
select_range 2.00 0.23 0.05 0.00 41
select_range_key2 1.00 0.43 0.02 0.00 505
select_range_prefix 1.00 0.42 0.05 0.00 505
select_simple 0.00 0.21 0.04 0.00 1000
select_simple_cache 0.00 0.14 0.05 0.00 1000
select_simple_join 0.00 0.13 0.05 0.00 50
update_big 1.00 0.01 0.00 0.00 10
update_of_key 1.00 0.20 0.03 0.00 500
update_of_key_big 0.00 0.02 0.01 0.00 13
update_of_primary_key_many_keys 0.00 0.12 0.02 0.00 256
update_with_key 4.00 1.03 0.12 0.00 3000
update_with_key_prefix 1.00 0.58 0.01 0.00 1000
wisc_benchmark 2.00 0.70 0.16 0.00 34
TOTALS 156.00 43.84 6.55 0.00 76237
```
The benchmark suite is useful for comparing various platforms. MySQL comes with a set of results, but these are dated and not particularly useful; I suggest rerunning the tests yourself so the results are meaningful for your situation.
It's also important to benchmark your own applications (under the highest possible load) before you roll them out. An application that can help you impose load on your server is super-smack, downloadable from the MySQL site.
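If super-smack is more than you need, the idea behind a multiuser load test is simple: run many concurrent clients, each issuing queries in a loop, and measure wall-clock time. The following is a minimal sketch of that idea, not super-smack itself; the THREADS, LOOPS, and QUERY_CMD defaults (including the mysql client invocation and its credentials) are placeholders you would adapt to your own application:

```shell
# Minimal concurrent load driver (a sketch, not super-smack):
# start THREADS background clients, each running the query LOOPS
# times, and report the total elapsed wall-clock time.
THREADS=${THREADS:-5}
LOOPS=${LOOPS:-100}
QUERY_CMD=${QUERY_CMD:-"mysql -utest -pg00r002b test -e 'SELECT 1'"}  # hypothetical credentials

start=$(date +%s)
i=0
while [ "$i" -lt "$THREADS" ]; do
  (
    j=0
    while [ "$j" -lt "$LOOPS" ]; do
      eval "$QUERY_CMD" >/dev/null 2>&1   # errors ignored; we only time the run
      j=$((j + 1))
    done
  ) &
  i=$((i + 1))
done
wait                      # block until every client finishes
end=$(date +%s)
elapsed=$((end - start))
echo "elapsed: ${elapsed}s"
```

Replace the query with one representative of your application, and raise THREADS until you see how the server behaves at your expected peak load.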
Another useful script distributed with MySQL is crash-me, which verifies the functionality of a specific installation and tests the reliability of the server under stress (see Table 13.6). It gets its name from the results when an installation fails the test. It's also portable and can test multiple database platforms for comparison purposes. Because of this behavior, it should never be run in a live environment: not only can it crash the database server, it also consumes significant amounts of memory, which can affect other programs running on the server. Be aware, though, that MySQL developed it, so it naturally highlights MySQL's strengths and downplays MySQL's weaknesses for comparison purposes. For example, triggers and procedures, which MySQL does not currently implement, may appear from the crash-me output to be merely as important as nonstandard MySQL features, such as using || for OR instead of string concatenation.
Option | Description |
---|---|
--help, --Information | Displays a help list of options. |
--batch-mode | Runs the test without asking for input and exits if it encounters errors. |
--comment='some comment' | Adds the specified comment to the crash-me limit file. |
--check-server | Makes a new connection to the server every time it checks whether the server is still running. This can be useful if a previous query causes wrong data to start being returned. |
--database='database' | Creates the test tables in this database. The default is test. |
--dir='directory_name' | Saves the output to this directory. |
--debug | Displays lots of output to assist in debugging if there is a problem. |
--fix-limit-file | Reformats the crash-me limit file. This does not rerun crash-me. |
--force | Begins the test immediately, without warning and without waiting for input. Use this option to automate the test. |
--log-all-queries | Displays all executed queries. Mostly used for debugging crash-me. |
--log-queries-to-file='filename' | Logs full queries to the specified file. |
--host='hostname' | Runs the test on the specified host. The default is localhost. |
--password='password' | Specifies the password for the current user. |
--restart | Saves states during each test, making it possible, in the case of a crash, to continue from where it left off by restarting with the same options. |
--server='server name' | Specifies the server on which to run the test. These include Access, Adabas, AdabasD, Empress, Oracle, Informix, DB2, Mimer, mSQL, MS-SQL, MySQL, Pg, Solid, and Sybase. The default is MySQL. Other servers will not have their names reported. |
--user='user_name' | Specifies the username to connect as. |
--start-cmd='command to restart server' | Will use the specified command to restart the database server in the case of it dying. (The availability of this option says everything!) |
--sleep='time in seconds' | Specifies the time in seconds to wait before restarting the server. The default is 10 seconds. |
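The options above make it possible to run crash-me unattended: --force and --batch-mode suppress the interactive prompt, while --restart plus --start-cmd let the run pick up where it left off after killing the server. The sketch below only builds and prints such a command line for review; the init-script path and password are assumptions you would replace with your own:

```shell
# Hypothetical unattended crash-me invocation. The echo prints the
# command so you can inspect it; replace echo with eval to run it.
START_CMD="/etc/init.d/mysql start"   # assumption: adjust to your system
CRASH="perl crash-me --force --batch-mode --restart --sleep=15 --start-cmd='$START_CMD' --password='g00r002b'"
echo "$CRASH"
```

With --restart, a crashed run saves its state, so re-issuing the same command continues from where it stopped rather than starting over.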
A sample display of crash-me follows:
```
% perl crash-me --password='g00r002b'
Running crash-me 1.57 on 'MySQL 4.0.1 alpha max log'

I hope you didn't have anything important running on this server....

Reading old values from cache:
/usr/local/mysql-max-4.0.1-alpha-pc-linux-gnu-i686/sql-bench/limits/mysql.cfg

NOTE: You should be familiar with 'crash-me --help' before continuing!

This test should not crash MySQL if it was distributed together with the
running MySQL version. If this is the case you can probably continue
without having to worry about destroying something.

Some of the tests you are about to execute may require a lot of memory.
Your tests WILL adversely affect system performance. It's not uncommon that
either this crash-me test program, or the actual database back-end, will
DIE with an out-of-memory error. So might any other program on your system
if it requests more memory at the wrong time.

Note also that while crash-me tries to find limits for the database server
it will make a lot of queries that can't be categorized as 'normal'. It's
not unlikely that crash-me finds some limit bug in your server so if you
run this test you have to be prepared that your server may die during it!

We, the creators of this utility, are not responsible in any way if your
database server unexpectedly crashes while this program tries to find the
limitations of your server. By accepting the following question with 'yes',
you agree to the above!

You have been warned!

Start test (yes/no) ?

Tables without primary key: yes
SELECT without FROM: yes
Select constants: yes
Select table_name.*: yes
Allows ' and " as string markers: yes
Double '' as ' in strings: yes
Multiple line strings: yes
" as identifier quote (ANSI SQL): error
` as identifier quote: yes
[] as identifier quote: no
Column alias: yes
Table alias: yes
Functions: yes
Group functions: yes
Group functions with distinct: yes
Group by: yes
Group by position: yes
Group by alias: yes
Group on unused column: yes
Order by: yes
Order by position: yes
Order by function: yes
Order by on unused column: yes
Order by DESC is remembered: no
Compute: no
INSERT with Value lists: yes
INSERT with set syntax: yes
allows end ';': yes
LIMIT number of rows: with LIMIT
SELECT with LIMIT #,#: yes
Alter table add column: yes
Alter table add many columns: yes
Alter table change column: yes
Alter table modify column: yes
Alter table alter column default: yes
Alter table drop column: yes
Alter table rename table: yes
rename table: yes
truncate: yes
Alter table add constraint: yes
Alter table drop constraint: no
Alter table add unique: yes
Alter table drop unique: with drop key
Alter table add primary key: with constraint
Alter table add foreign key: yes
Alter table drop foreign key: with drop foreign key
Alter table drop primary key: drop primary key
Case insensitive compare: yes
Ignore end space in compare: yes
Group on column with null values: yes
Having: yes
Having with group function: yes
Order by alias: yes
Having on alias: yes
binary numbers (0b1001): no
hex numbers (0x41): yes
binary strings (b'0110'): no
hex strings (x'1ace'): no
Value of logical operation (1=1): 1
Simultaneous connections (installation default): 101
query size: 1048574
Supported sql types
Type character(1 arg): yes
Type char(1 arg): yes
Type char varying(1 arg): yes
Type character varying(1 arg): yes
Type boolean: no
Type varchar(1 arg): yes
Type integer: yes
Type int: yes
Type smallint: yes
Type numeric(2 arg): yes
Type decimal(2 arg): yes
Type dec(2 arg): yes
Type bit: yes
Type bit(1 arg): yes
Type bit varying(1 arg): no
Type float: yes
Type float(1 arg): yes
Type real: yes
Type double precision: yes
Type date: yes
Type time: yes
Type timestamp: yes
Type interval year: no
Type interval year to month: no
Type interval month: no
Type interval day: no
Type interval day to hour: no
Type interval day to minute: no
Type interval day to second: no
Type interval hour: no
Type interval hour to minute: no
Type interval hour to second: no
Type interval minute: no
Type interval minute to second: no
Type interval second: no
Type national character varying(1 arg): yes
Type national character(1 arg): yes
Type nchar(1 arg): yes
Type national char varying(1 arg): yes
Type nchar varying(1 arg): yes
Type national character varying(1 arg): yes
Type timestamp with time zone: no
Supported odbc types
Type binary(1 arg): yes
Type varbinary(1 arg): yes
Type tinyint: yes
Type bigint: yes
Type datetime: yes
Supported extra types
Type blob: yes
Type byte: no
Type long varbinary: yes
Type image: no
Type text: yes
Type text(1 arg): no
Type mediumtext: yes
Type long varchar(1 arg): no
Type varchar2(1 arg): no
Type mediumint: yes
Type middleint: yes
Type int unsigned: yes
Type int1: yes
Type int2: yes
Type int3: yes
Type int4: yes
Type int8: yes
Type uint: no
Type money: no
Type smallmoney: no
Type float4: yes
Type float8: yes
Type smallfloat: no
Type float(2 arg): yes
Type double: yes
Type enum(1 arg): yes
Type set(1 arg): yes
Type int(1 arg) zerofill: yes
Type serial: no
Type char(1 arg) binary: yes
Type int not null auto_increment: yes
Type abstime: no
Type year: yes
Type datetime: yes
Type smalldatetime: no
Type timespan: no
Type reltime: no
Type int not null identity: no
Type box: no
Type bool: yes
Type circle: no
Type polygon: no
Type point: no
Type line: no
Type lseg: no
Type path: no
Type interval: no
Type serial: no
Type inet: no
Type cidr: no
Type macaddr: no
Type varchar2(1 arg): no
Type nvarchar2(1 arg): no
Type number(2 arg): no
Type number(1 arg): no
Type number: no
Type long: no
Type raw(1 arg): no
Type long raw: no
Type rowid: no
Type mlslabel: no
Type clob: no
Type nclob: no
Type bfile: no
Remembers end space in char(): no
Remembers end space in varchar(): no
Supports 0000-00-00 dates: yes
Supports 0001-01-01 dates: yes
Supports 9999-12-31 dates: yes
Supports 'infinity dates: error
Type for row id: auto_increment
Automatic row id: _rowid
Supported sql functions
Supported odbc functions
Supported extra functions
Supported where functions
Supported sql group functions
Group function AVG: yes
Group function COUNT (*): yes
Group function COUNT column name: yes
Group function COUNT(DISTINCT expr): yes
Group function MAX on numbers: yes
Group function MAX on strings: yes
Group function MIN on numbers: yes
Group function MIN on strings: yes
Group function SUM: yes
Group function ANY: no
Group function EVERY: no
Group function SOME: no
Supported extra group functions
Group function BIT_AND: yes
Group function BIT_OR: yes
Group function COUNT(DISTINCT expr,expr,...): yes
Group function STD: yes
Group function STDDEV: yes
Group function VARIANCE: no
mixing of integer and float in expression: yes
No need to cast from integer to float: yes
Is 1+NULL = NULL: yes
Is concat('a',NULL) = NULL: yes
LIKE on numbers: yes
column LIKE column: yes
update of column= -column: yes
String functions on date columns: yes
char are space filled: no
DELETE FROM table1,table2...: no
Update with sub select: no
Calculate 1--1: yes
ANSI SQL simple joins: yes
max text or blob size: 1048543 (cache)
constant string size in where: 1048539 (cache)
constant string size in SELECT: 1048565 (cache)
return string size from function: 1047552 (cache)
simple expressions: 1837 (cache)
big expressions: 10 (cache)
stacked expressions: 1837 (cache)
tables in join: 63 (cache)
primary key in create table: yes
unique in create table: yes
unique null in create: yes
default value for column: yes
default value function for column: no
temporary tables: yes
create table from select: yes
index in create table: yes
null in index: yes
null in unique index: yes
null combination in unique index: yes
null in unique index: yes
index on column part (extension): yes
different namespace for index: yes
case independent table names: no
drop table if exists: yes
create table if not exists: yes
inner join: yes
left outer join: yes
natural left outer join: yes
left outer join using: yes
left outer join odbc style: yes
right outer join: yes
full outer join: no
cross join (same as from a,b): yes
natural join: yes
union: no
union all: no
intersect: no
intersect all: no
except: no
except all: no
except: no
except all: no
minus: no
natural join (incompatible lists): yes
union (incompatible lists): no
union all (incompatible lists): no
intersect (incompatible lists): no
intersect all (incompatible lists): no
except (incompatible lists): no
except all (incompatible lists): no
except (incompatible lists): no
except all (incompatible lists): no
minus (incompatible lists): no
subqueries: no
insert INTO ... SELECT ...: yes
atomic updates: no
views: no
foreign key syntax: yes
foreign keys: no
Create SCHEMA: no
Column constraints: no
Table constraints: no
Named constraints: no
NULL constraint (SyBase style): yes
Triggers (ANSI SQL): no
PSM procedures (ANSI SQL): no
PSM modules (ANSI SQL): no
PSM functions (ANSI SQL): no
Domains (ANSI SQL): no
many tables to drop table: yes
drop table with cascade/restrict: yes
-- as comment (ANSI): yes
// as comment: no
# as comment: yes
/* */ as comment: yes
insert empty string: yes
Having with alias: yes
table name length: 64 (cache)
column name length: 64 (cache)
select alias name length: +512 (cache)
table alias name length: +512 (cache)
index name length: 64 (cache)
max char() size: 255 (cache)
max varchar() size: 255 (cache)
max text or blob size: 1048543 (cache)
Columns in table: 3398 (cache)
unique indexes: 32 (cache)
index parts: 16 (cache)
max index part length: 255 (cache)
index varchar part length: 255 (cache)
indexes: 32
index length: 500 (cache)
max table row length (without blobs): 65534 (cache)
table row length with nulls (without blobs): 65502 (cache)
number of columns in order by: +64 (cache)
number of columns in group by: +64 (cache)
crash-me safe: yes
reconnected 0 times
```