Automatic Configuration Features of SQL Server


SQL Server is packed with automatic features that are designed to reduce the overhead normally associated with configuring and tuning a relational database management system (RDBMS). In this section, you'll be introduced to some of these features. (Those who are familiar with Microsoft SQL Server 7 will recognize these features because they originated in SQL Server 7.) You'll learn how they work, how to use them to reduce your work as a DBA, and how to override these automatic features, if necessary.

Dynamic Memory Management

Dynamic memory management enables SQL Server to configure the amount of memory it will use for the buffer cache and for the procedure cache dynamically, based on available system memory. Because SQL Server includes dynamic memory management, the DBA does not have to manually control the size of these caches. But in certain situations, you will want to restrict the amount of memory used by SQL Server, and this is possible as well.

How Dynamic Memory Management Works

The dynamic memory management feature works by constantly monitoring the amount of physical memory available in the system. SQL Server increases or decreases the SQL Server memory pool (described in the next section) based on its needs and on the amount of available memory. This capability can be quite useful in systems where the amount of memory used is relatively stable, but it can be problematic when the amount of memory used by a system's non-SQL Server processes varies, because SQL Server will be constantly changing its memory allocation.

A computer system that is used primarily as a SQL Server database server is a good candidate for dynamic memory management. In this type of system, the amount of memory used by processes other than SQL Server is stable, so SQL Server will automatically allocate the amount of memory necessary for it to work effectively, up to the point where no more physical memory is available. Then it will maintain that amount as long as no other process requires memory. If another process does require memory, SQL Server will deallocate the amount of memory that is needed so that it is available to the other process.

A computer system that runs processes with constantly changing memory requirements and on which processes are frequently added or removed is not a good candidate for dynamic memory management. On such a system, memory usage is constantly changing, requiring SQL Server to continually allocate and deallocate memory as the other processes need it—a procedure that can be inefficient because of the extra overhead produced. This type of system might perform better if you manually allocate a fixed amount of memory for SQL Server or if you specify the minimum and maximum amounts of memory that SQL Server can allocate. You'll see how to do this later in this chapter.

Thus, whether your system will perform better using dynamic or manual memory management is determined by the volatility of the memory usage in the system. By monitoring SQL Server memory allocation, you can determine whether the amount of memory used is changing on a regular basis or remaining fairly stable. To monitor the memory usage, you can use Microsoft Windows 2000 Performance Monitor. The Total Server Memory (KB) counter within the SQLServer: Memory Manager object displays the amount of memory, in kilobytes (KB), that SQL Server is currently consuming. To find out how memory usage has changed over time, look at this counter in the chart window.
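If you prefer a query window to Performance Monitor, the same counter is exposed through the master..sysperfinfo system table in SQL Server 2000. The following is a minimal sketch, assuming a default instance (on a named instance, the object name includes the instance name):

 -- Report the memory SQL Server is currently consuming, in KB.
 -- This is the same value as the Total Server Memory (KB) counter.
 SELECT cntr_value AS total_server_memory_kb
 FROM master..sysperfinfo
 WHERE object_name = 'SQLServer:Memory Manager'
 AND counter_name = 'Total Server Memory (KB)'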

The Memory Pool

SQL Server dynamically allocates and deallocates memory in a pool. The memory pool consists of memory that is divided among the following components:

  • Buffer cache Holds the database data pages that have been read into memory. The buffer cache usually takes up most of the memory pool.
  • Connection memory Used by each connection into SQL Server. Connection memory consists of data structures that keep track of each user's context and includes cursor-positioning information, query parameter values, and stored-procedure information.
  • Data structures Consist of global information about both locks and database descriptors, including information about lock holders, about the types of locks that are held, and about various files and filegroups.
  • Log cache Used to hold log information that will be written to the transaction log. It is also used when log information that has recently been written into the log cache is read. The log cache improves performance of log writes. The log cache is not the same as the buffer cache.
  • Procedure cache Used to store the execution plans for Transact-SQL (T-SQL) statements and stored procedures when they are being executed.

Because memory allocation changes dynamically, if dynamic memory management is allowed, the memory pool might be constantly increasing or decreasing. Also, the five components of the memory pool can change their individual sizes dynamically. This capability is not configurable and is controlled by SQL Server. For example, if more memory is needed so that more T-SQL statements can be stored in the procedure cache, SQL Server might take some memory from the buffer cache and use it for the procedure cache.

Using Additional Memory

The amount of memory that SQL Server can access depends on the Windows operating system used. Microsoft Windows NT Server 4 supports 4 gigabytes (GB) of memory, of which 2 GB is allocated for user processes and 2 GB is reserved for system use. This 2-GB limit represents the maximum amount of memory that can be allocated for SQL Server with NT 4. But with Windows NT Server 4 Enterprise Edition, the amount of virtual memory allocated for a process is 50 percent larger—3 GB. This increase is possible because the system allocation is reduced to 1 GB. This increase in virtual memory allocated to processes allows you to increase the size of the memory pool to close to 3 GB. To enable this support in Windows NT 4 Enterprise Edition, you must add the flag /3GB to the boot line in the Boot.ini file, which can be done through the System icon that appears in Control Panel.

For two editions of the Windows 2000 operating system, SQL Server 2000 Enterprise Edition can use the Windows 2000 Address Windowing Extensions (AWE) API to support larger address spaces. SQL Server supports close to 8 GB on Windows 2000 Advanced Server and close to 64 GB on Windows 2000 Datacenter Server. AWE is supported on only these two operating systems; it is not supported on Windows 2000 Professional. (See Chapter 2 of this book and the topic "Using AWE Memory on Windows 2000" in Books Online for more details.)

SQL Server Memory Configuration Options

The following SQL Server configuration parameters are associated with specific aspects of memory allocation. You can set these parameters by using either SQL Server Enterprise Manager or the sp_configure stored procedure. To view all of these parameters by using sp_configure, you must have the option show advanced options set to 1.

  • awe enabled Enables SQL Server to use extended memory (the AWE memory mentioned previously). Setting this option to 1 enables it. This option is available only in SQL Server Enterprise Edition and can be set only by using sp_configure.
  • index create memory Limits the amount of memory used for sorts during index creation. The index create memory option is self-configuring. It should not require adjustment in most cases. However, if you experience difficulties creating indexes, you might want to try increasing the value of this option from its default value.
  • max server memory Sets the maximum amount of memory that SQL Server can allocate to the memory pool. Leave the default setting if you want to allow SQL Server to dynamically allocate and deallocate memory. If you want to allocate memory statically (so the amount used will not change), set this option and min server memory to the same value.
  • min memory per query Specifies the minimum amount of memory (in kilobytes) that will be allocated to execute a query.
  • min server memory Sets the minimum amount of memory that SQL Server can allocate to the memory pool. Leave the default value to allow dynamic memory allocation. If you want to allocate memory statically, set this option and max server memory to the same value.
  • set working set size Specifies that the memory that SQL Server has allocated cannot be swapped out, even if that memory can be more effectively used by another process. The set working set size option should not be used when SQL Server is allowed to allocate memory dynamically. It should be used only when min server memory and max server memory are set to the same value. In this way, SQL Server will allocate a static amount of memory as nonpageable. (A sketch of this configuration appears after this list.)
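For example, to fix the memory pool at a static size, you could set min server memory and max server memory to the same value with sp_configure (described in more detail later in this chapter) and apply the change with RECONFIGURE. The 512-MB figure below is purely illustrative; choose a value appropriate for your system:

 -- These are advanced options, so enable show advanced options first.
 sp_configure 'show advanced options', 1
 RECONFIGURE
 GO
 -- Set min and max to the same value (in MB) for a static memory pool.
 sp_configure 'min server memory', 512
 GO
 sp_configure 'max server memory', 512
 GO
 RECONFIGURE
 GO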

NOTE


To take advantage of the AWE memory options, you must be running Windows 2000 Advanced Server or Windows 2000 Datacenter Server along with SQL Server 2000 Enterprise Edition.
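Given those prerequisites, enabling AWE is a matter of setting the option and restarting the server. A sketch follows; it assumes show advanced options is already set to 1, and it pairs the change with an explicit max server memory value (the 6144-MB figure is hypothetical, sized for an 8-GB server with headroom left for the operating system), because AWE memory is not managed dynamically:

 -- Enable AWE support; the change takes effect after a restart.
 sp_configure 'awe enabled', 1
 RECONFIGURE
 GO
 -- Cap SQL Server's memory use explicitly (value in MB; hypothetical).
 sp_configure 'max server memory', 6144
 RECONFIGURE
 GO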

Other Dynamic Configuration Options

Several dynamic configuration options that do not pertain to server memory are available in SQL Server. If you leave the default values of these options, SQL Server will dynamically configure them. The default values can be overridden. This is not usually necessary, but you should understand how the options work in case you do want to configure them manually.

You set configuration options by using either SQL Server Enterprise Manager (not all options can be set through Enterprise Manager) or sp_configure. To set an option by using sp_configure, open Query Analyzer or an osql connection in a command prompt window and run the stored procedure with its parameters, as follows:

 sp_configure "option name", value 

The parameter option name is the name of the configuration option, and value is the value you want to set it to. If you run this command without including the value parameter, SQL Server will return the current value for the specified option. To see a list of all the options and their values, run sp_configure with no parameters. Several options are considered to be advanced options. In order to view and configure these options by using sp_configure, you must first set the option show advanced options to 1, as shown here:

 sp_configure "show advanced options", 1 

The options that are configurable through Enterprise Manager are not affected by show advanced options.

To set an option by using Enterprise Manager, first open the Properties window for a server in Enterprise Manager by right-clicking the server name and choosing Properties from the shortcut menu. A sample Properties window is shown in Figure 30-1.

Figure 30-1. The General tab of a server's Properties window in Enterprise Manager.

You can then access certain dynamic options on the window's tabbed pages. The following sections describe the non-memory-related dynamic options of SQL Server; each section notes whether the option can be set in Enterprise Manager and, if so, where it is found in the Properties window.

The locks Option

SQL Server dynamically configures the number of locks used in the system according to the current needs. You can set the locks option to indicate the maximum number of available locks, thus limiting the amount of memory SQL Server uses for locks. The default setting is 0, which allows SQL Server to allocate and deallocate locks dynamically based on changing system requirements. SQL Server allows up to 40 percent of its total memory to be used for locks. You should leave the locks parameter at the default value of 0 and allow SQL Server to allocate locks as necessary. This option is an advanced option and can be set only by using sp_configure.

The recovery interval Option

The recovery interval value indicates the maximum number of minutes that SQL Server can spend per database to recover from a failure. (See the section "Automatic Checkpoints" later in this chapter for more details about how checkpoints work.) The time SQL Server needs to recover a database depends on when the last checkpoint occurred. Therefore, the recovery interval value is used by SQL Server to determine dynamically when to run automatic checkpoints.

For example, each time SQL Server is shut down cleanly, checkpoints are run on all databases, and therefore, when SQL Server is restarted, recovery takes little time. But if SQL Server is forced to stop without shutting down cleanly (because of a power failure or some other type of failure), when SQL Server starts up again, it must recover each database by rolling back transactions that did not commit and rolling forward transactions that did commit but whose changes were not yet written to disk at the time of the failure. If the last checkpoint on a particular database occurs shortly before the system failure, the recovery time for that database will be shorter. If the last checkpoint occurs a long time before the system fails, the recovery time will be longer.

SQL Server determines how often to run checkpoints according to a built-in algorithm and, as mentioned, according to the recovery interval setting. So if you set recovery interval to 5, for example, SQL Server will run checkpoints on each database often enough such that the recovery of a database in case of a failure would take about 5 minutes. The default recovery interval value is 0, indicating automatic configuration by SQL Server. When the default setting is used, the recovery time is less than 1 minute, and a checkpoint occurs approximately every minute for active databases. In many cases, the benefit provided by frequently running checkpoints is outweighed by the performance degradation caused by running the checkpoints. Thus, over time, you should reduce the number of checkpoints performed by increasing the recovery interval value. The value you choose will depend on your business requirements concerning how long users can wait for the system to recover in case of a failure. Generally, a value of 5 to 15, indicating a recovery time of 5 to 15 minutes, should be used.
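For example, the following sketch sets a 10-minute recovery interval by using sp_configure (recovery interval is an advanced option, so show advanced options must already be set to 1):

 -- Allow recovery to take up to about 10 minutes per database.
 sp_configure 'recovery interval', 10
 GO
 RECONFIGURE
 GO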

The recovery interval option is an advanced option. You can set it in Enterprise Manager by clicking the Database Settings tab of the Properties window and typing a value in the Recovery Interval (Min) box, as shown in Figure 30-2.

Figure 30-2. Setting the recovery interval.

The user connections Option

SQL Server dynamically configures the number of user connections allowed into SQL Server. SQL Server allows a maximum of 32,767 user connections. By setting the user connections option to a value other than 0, you specify the maximum number of simultaneous user connections allowed into SQL Server. (The number of user connections allowed also depends on the limits of your applications and hardware.) User connections will still be dynamically configured, up to the maximum.

For example, if only 10 users are logged in, only 10 user connection objects are allocated. If the maximum is reached and SQL Server needs more user connections, you will get an error message stating that the maximum number of user connections has been reached.

In most cases, the default value for the user connections option does not need to be changed. Note that each connection does require about 40 KB of memory.
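If you do need to cap connections (for example, to bound the roughly 40 KB of memory each one consumes), the following sketch uses a purely illustrative limit of 500; the new value takes effect when the server is restarted:

 -- Limit SQL Server to at most 500 simultaneous user connections.
 sp_configure 'user connections', 500
 GO
 RECONFIGURE
 GO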

You can use SQL Server Query Analyzer and the following T-SQL statement to determine the maximum number of user connections that your system allows:

 SELECT @@MAX_CONNECTIONS 

The user connections option is an advanced option. You can set it in Enterprise Manager by clicking the Connections tab of the server's Properties window and entering a number in the Maximum Concurrent User Connections spin box, as shown in Figure 30-3.

Figure 30-3. Setting user connections.

The open objects Option

The open objects option is an advanced option and can be set only by using sp_configure. Setting this option determines the maximum number of database objects—such as tables, views, stored procedures, triggers, rules, and defaults—that can be open at the same time. The default value of 0 indicates that SQL Server will dynamically adjust the number of open objects allowed on the system. You should leave the default setting. If you do change it and SQL Server needs more open objects than you have configured, you will get an error message from SQL Server stating that it has exceeded the allowed number of open objects. Also, each open object consumes some memory, so your system might need more physical memory to support the number of open objects needed.

Statistics

Column statistics are needed to achieve better query performance on your system. SQL Server is able to gather statistical information regarding the distribution of values in a table column. Query Optimizer then uses this information to determine the optimal execution plan for a query. Statistics can be gathered on two types of columns: those that are part of an index and those that are not in an index but are used in a predicate of a query (in the WHERE clause). By leaving the default SQL Server settings for a database, you allow both types of statistics to be created automatically by SQL Server. Indexed-column statistics are created when the index is created. Non-indexed-column statistics are created when they are needed for a query (on a single column only, not multiple columns, as you will see in the section "CREATE STATISTICS" later in this chapter). Once the statistics are aged (not used for a period of time), SQL Server will automatically drop them.

To create both non-indexed-column statistics and indexed-column statistics, SQL Server uses only a sample of the data in the table, not every row in the table. This minimizes the overhead required by the operation, but in some cases, sampling does not characterize the data well, and the statistics will not be completely accurate.
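You can see how well a sample characterizes a table by examining a set of statistics with DBCC SHOW_STATISTICS, which reports the number of rows sampled along with the distribution data. For example, for the PK_Customers index in the Northwind database:

 USE Northwind
 GO
 -- Display distribution statistics, including rows sampled,
 -- for the statistics associated with the PK_Customers index.
 DBCC SHOW_STATISTICS (Customers, PK_Customers)
 GO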

In Enterprise Manager, you can enable or disable the automatic creation of statistics for a database. To do this, first open the Properties window for a database. Click the Options tab, where you will see the Auto Create Statistics check box. (Figure 30-4 shows this check box for an example database, MyDB.) This option is selected (enabled) by default.


Figure 30-4. The Properties window for the MyDB database.

In the database's Properties window, you will also see an option called Auto Update Statistics. This option, which is also enabled by default per database, indicates that SQL Server will automatically update the statistics on table columns when necessary. These statistics need to be updated when a large number (or high percentage) of rows in the table have changed (through update, insert, or delete operations), because the current statistics then become less accurate. SQL Server will automatically determine when the statistics should be updated. If you disable automatic creation or updating of statistics, you must perform those tasks manually to ensure that your database performs well. The following sections show you how to both create and update statistics manually.
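Both settings can also be changed with T-SQL. A sketch using the ALTER DATABASE SET clauses available in SQL Server 2000, with the example MyDB database:

 -- Enable automatic creation and updating of statistics for MyDB.
 ALTER DATABASE MyDB SET AUTO_CREATE_STATISTICS ON
 GO
 ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS ON
 GO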

CREATE STATISTICS

You can create statistics manually on specific columns in a table by using the T-SQL command CREATE STATISTICS. Creating statistics manually differs from automatic creation in that it allows you to combine statistics on multiple columns, which generates information such as the average number of duplicate values and distinct values for the combination of columns. The syntax of CREATE STATISTICS is as follows:

 CREATE STATISTICS stats_name
 ON table_name ( column [ , column... ] )
 [ WITH [ FULLSCAN | SAMPLE size PERCENT ] [ , NORECOMPUTE ] ]

You must provide a name for the statistics collection you are creating, a table name, and at least one column name. You can specify multiple column names in order to gather statistics on a combination of columns. Note that you cannot specify computed columns or columns of the ntext, text, or image data types as statistics columns. Either a full scan or a sampling of the data can be used for statistics gathering. A full scan takes longer to complete than a sample because it reads every row of the table, but it can be more accurate. If you use sampling, you must specify the percentage of the data to be sampled. The NORECOMPUTE keyword disables automatic updating of these statistics, which means that statistics that no longer represent the data well might continue to be used.

You might want to create statistics on columns that are used together in a predicate of a query. For example, you can create statistics on the FirstName and LastName columns of the Employees table of the Northwind database to search for an employee name by first name and last name. The T-SQL code used to do this is as follows:

 CREATE STATISTICS name
 ON Northwind..Employees (FirstName, LastName)
 WITH FULLSCAN, NORECOMPUTE

This statement calculates statistics for all rows in the FirstName and LastName columns, and disables automatic recomputation of the statistics.

If you want to manually create statistics for all columns in all tables of a database without having to type CREATE STATISTICS statements for each column of each table, you can use the sp_createstats stored procedure. This stored procedure is described in the next section.

sp_createstats

You can create statistics on all eligible columns of all user tables in a database by using the stored procedure sp_createstats. Statistics will be created for all columns that do not already have statistics. Each set of statistics will be on a single column only. The syntax for sp_createstats is as follows:

 sp_createstats [ 'indexonly' ] [ , 'fullscan' ] [ , 'norecompute' ] 

The indexonly parameter specifies that only columns that participate in an index will have statistics created on them. The fullscan parameter specifies that a full scan of all rows, rather than a random sampling, will be performed—in other words, 100 percent of the data is sampled. The norecompute parameter specifies that automatic statistics updating be disabled on the new statistic. The new statistic is given the name of the column on which it is created.
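For example, the following sketch creates statistics on indexed columns only, using a full scan, in the Northwind database:

 USE Northwind
 GO
 -- Create single-column statistics on every indexed column that
 -- does not already have statistics, scanning all rows.
 sp_createstats 'indexonly', 'fullscan'
 GO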

UPDATE STATISTICS

SQL Server automatically updates statistics by default. You can disable this option and then manually update the statistics to make them current by using the UPDATE STATISTICS command. This command allows you to update indexed-column and non-indexed-column statistics. You might want to create a script that executes UPDATE STATISTICS on your most modified tables and then run that script as a SQL Server job periodically. This will help keep statistics up-to-date and ensure better query performance. (See the section "Rebuilding Indexes" in Chapter 17 for more details about the syntax and options of UPDATE STATISTICS.) To enable or disable the automatic update status of a particular statistic, you can use the sp_autostats stored procedure, as explained in the next section.
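A minimal sketch of such a script follows, refreshing statistics with a full scan on two Northwind tables that are assumed, purely for illustration, to be heavily modified:

 USE Northwind
 GO
 -- Refresh all statistics on each table, reading every row.
 UPDATE STATISTICS Orders WITH FULLSCAN
 GO
 UPDATE STATISTICS [Order Details] WITH FULLSCAN
 GO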

sp_autostats

By using the system stored procedure sp_autostats, you can enable or disable the automatic updating of particular statistics. Running this procedure does not cause the statistics to be updated; rather, it determines whether or not automatic updating should occur. This stored procedure is called with one, two, or three parameters: a table name and optionally a flag and a statistic name. The flag indicates the status of automatic updating and can be set to ON or OFF. To display the current update status for all statistics in a table (indexed-column and non-indexed-column statistics), run the command with the table name specified. The following command displays the status for statistics in the Customers table:

 USE Northwind
 GO
 sp_autostats Customers
 GO

The output shows the name of each statistic, whether automatic updating is set to ON or OFF, and when the statistics were last updated. Do not be confused by the column heading "Index Name" above the first column of the output. It refers to all statistics, not just indexes. If you have not manually turned off updating for these statistics, they should be displayed with a status of ON, as this is the SQL Server default.

To disable automatic updating of all statistics in the Customers table in the Northwind database, use the following command:

 USE Northwind
 GO
 sp_autostats Customers, 'OFF'
 GO

You can reenable automatic statistics updating by setting the flag value to ON. To change the status of a particular statistic or the statistics for an index, include the statistic name or index name, respectively. For example, the following command enables automatic statistics updating for the PK_Customers index:

 USE Northwind
 GO
 sp_autostats Customers, 'ON', 'PK_Customers'
 GO

The status of all other statistics on the Customers table will be unchanged.

File Growth

With SQL Server 2000, you can configure data files to grow automatically as needed. This feature is useful because it prevents your files from inadvertently running out of space. However, it does not relieve you of the need to monitor your database size and perform occasional capacity-planning exercises. You should always be aware of how fast tables are growing so that you can determine whether regularly deleting unnecessary, possibly out-of-date data from some tables would slow their growth. As the amount of data in a table increases, queries take more time and performance declines. Configuring automatic file growth when you create a database was touched on in Chapter 9; here, you will learn how to alter the growth options for an existing database. The automatic file growth option can be configured from within Enterprise Manager (a T-SQL alternative is sketched after these steps). To do so, follow these steps:

  1. In the left pane of Enterprise Manager, expand a server and then click the Databases folder. Right-click the database that you want to modify (in this example, we'll modify the MyDB database), and choose Properties from the shortcut menu to open the database's Properties window.
  2. Click the Data Files tab (Figure 30-5) to see the properties of the data files for that database. The options in the File Properties area are designed to let you control how the data file will expand. To enable automatic file growth, select the Automatically Grow File check box. If you are using automatic file growth, you should set limits so that the file cannot grow uncontrollably.


    Figure 30-5. The Data Files tab of the MyDB Properties window.

    You specify a maximum file size by using the options in the Maximum File Size area. You should click Restrict File Growth and enter a maximum size in the spin box. If instead you click Unrestricted File Growth, you might later find that your entire disk subsystem has filled up without notice, causing both performance and operational problems.

    You specify how fast a file grows by using the options in the File Growth area. If you click In Megabytes, whenever the data file is filled, SQL Server will increase its size by the specified amount. If you click By Percent, SQL Server will increase the data file's size by the specified percentage of its current size.

  3. Click the Transaction Log tab, shown in Figure 30-6, to set the automatic growth options for the transaction log. These options are used the same way as the corresponding options in the Data Files tab are used. For transaction log files, as for data files, you should set limits so that they do not grow uncontrollably.


    Figure 30-6. The Transaction Log tab of the MyDB database's Properties window.

    The automatic file growth feature is convenient in many cases. Just be sure that you do not accidentally let a file consume all of the disk space on your system.
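As an alternative to Enterprise Manager, you can script these growth settings. The following sketch uses ALTER DATABASE with MODIFY FILE; the logical file name MyDB_Data is hypothetical, so run sp_helpfile in the database to find your actual logical file names. Because MODIFY FILE typically changes one file property at a time, the sketch uses two statements:

 -- Cap the data file at 500 MB (hypothetical logical file name).
 ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, MAXSIZE = 500MB)
 GO
 -- Grow the file in fixed 50-MB increments whenever it fills.
 ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 50MB)
 GO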

Checkpoints

SQL Server performs checkpoint operations automatically. The frequency of checkpoints will be automatically calculated according to the value you specify for the SQL Server configuration option recovery interval. This option specifies how long, in minutes, you can wait for a database recovery in the event of a system failure. Checkpoints will occur often enough to ensure that a system recovery will never take longer than the specified number of minutes. Checkpoints also occur automatically whenever SQL Server is shut down by a SHUTDOWN statement or by Service Control Manager. You can also issue checkpoints manually by using the CHECKPOINT statement.
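For example, to force a checkpoint in a particular database (the statement always applies to the database in use):

 USE MyDB
 GO
 -- Write all dirty pages for the current database to disk now.
 CHECKPOINT
 GO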

If you want your system to perform optimally and if you can tolerate a long recovery time, you can set recovery interval to a high value, such as 60. This means that if your system fails, automatic recovery could take up to 60 minutes to complete. Checkpoints cause a large number of disk writes when they occur, and they can take away processing power from user transactions, thus slowing user response times. This is why running checkpoints less often can help improve transaction performance overall. Of course, setting this value too high could result in an excessively long downtime if a failure does occur. A common setting for recovery interval is between 5 and 15 (minutes).

By default, recovery interval is set to 0. This setting allows SQL Server to determine the best time to perform checkpoints based on the system load. Generally, when the default setting is used, checkpoints occur about once a minute. If you notice that checkpoints are occurring this often, you might want to adjust the recovery interval setting. To determine whether SQL Server is performing checkpoints excessively, use the SQL Server trace flag -T3502. This flag causes checkpoint information to be written to the SQL Server error log. Note that checkpoints occur per database.


