We conducted an experiment using these two slightly different table definitions:
CREATE TABLE Table1 (
  column1 INTEGER NOT NULL,
  column2 CHAR(64) NOT NULL,
  column3 INTEGER NOT NULL,
  column4 INTEGER NOT NULL)

CREATE TABLE Table2 (
  column1 DECIMAL(3),
  column2 CHAR(64) NOT NULL,
  column3 INTEGER NOT NULL,
  column4 INTEGER NOT NULL)
The purpose of the experiment was to see whether the two definitions of column1 (INTEGER NOT NULL versus DECIMAL(3)) would make a difference to performance, especially with Microsoft. We thought it would. Our reasoning was this:
We weren't testing to see whether operations on column1 would go faster. Instead, we were testing to see whether accesses involving column3 would go faster! Our suspicion was that column3 would be misaligned. On an Intel machine, if (offset within page) is not divisible by four, there is a penalty for reading a four-byte integer. So we inserted 10,000 rows into both tables, then scanned for column3 = 51700 (a condition that was always false). And, as expected, the scan was faster on Table1, where column1 was defined as INTEGER NOT NULL (GAIN: 4/8).
Data type does make a difference!