Maintenance


Besides backup and restore, there are other tasks you will need to perform from time to time. These largely involve maintaining your indexes for performance and performing routine checks with DBCC commands. However, some of these maintenance tasks can be intrusive; by intrusive, we mean that they might affect performance, availability, or both.

Table 14-1 lists these commands, along with best practices for running them so that there is minimal impact on online processing.

Table 14-1: Routine Maintenance Commands

DBCC                                                  Note
---------------------------------------------------   ---------------------------------------------------------
CHECKDB, CHECKALLOC, CHECKFILEGROUP, and CHECKTABLE   Does not block updates, with the recommended parameters
DBREINDEX (nonclustered index)                        Shared lock (took an exclusive lock in previous versions)
SHOWCONTIG                                            Has an option not to take a shared lock on the base table
DBREINDEX (heap)                                      Exclusive lock
UPDATEUSAGE                                           Shared lock
CHECKCONSTRAINTS                                      Shared lock
INDEXDEFRAG                                           Runs while the database is fully operational

The DBCC CHECK commands (CHECKDB, CHECKALLOC, CHECKFILEGROUP, and CHECKTABLE) and DBCC INDEXDEFRAG can each run online without blocking updates, but they do prevent log truncation while they are running. The impact is minimal when you use the recommended settings, but you still need to understand the effect these commands have on your system, weigh the different options, and consider carefully what you are getting out of the process.
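For example, a low-impact consistency check might look like the following sketch (the database name Sales is a placeholder):

    -- Lightest-weight form of CHECKDB: checks physical consistency only
    -- and suppresses informational messages.
    DBCC CHECKDB ('Sales') WITH PHYSICAL_ONLY, NO_INFOMSGS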

Update your indexes and statistics regularly. Your goal is to find out how long each task takes, and then decide which action provides the best benefit. (You can create scripts that track the elapsed time of any maintenance task you run regularly.) Be sure to review the documentation for these traditional maintenance tasks thoroughly before you use them in SQL Server 2000; they have been significantly improved and enhanced specifically to support online maintenance. You cannot use the same cost/benefit grid you used for Microsoft SQL Server 6.5 or Microsoft SQL Server 7.0.
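A minimal sketch of such a timing script follows; the Orders table and the dbo.MaintenanceLog logging table are hypothetical, and you would substitute whatever task you are measuring:

    -- Record the elapsed time of a maintenance task (SQL Server 2000 syntax).
    DECLARE @start datetime
    SET @start = GETDATE()

    -- The task being measured; 0 = current database, index ID 1 = clustered index.
    DBCC INDEXDEFRAG (0, Orders, 1) WITH NO_INFOMSGS

    -- dbo.MaintenanceLog is an assumed table:
    -- (TaskName varchar(100), RunDate datetime, DurationSec int)
    INSERT INTO dbo.MaintenanceLog (TaskName, RunDate, DurationSec)
    VALUES ('INDEXDEFRAG Orders', @start, DATEDIFF(ss, @start, GETDATE()))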

Calculating the Cost of Maintenance

You might find that some or all of your tasks are so lightweight (good examples include DBCC CHECKDB with the PHYSICAL_ONLY option and DBCC INDEXDEFRAG) that you notice only a minor effect on performance during the scan.

On the other hand, if you drop and recreate an index, the build process performs faster than DBCC INDEXDEFRAG (especially if you have Microsoft SQL Server 2000 Enterprise Edition and take advantage of parallel index builds). But there is a cost: if it is a vital index, you will see degraded performance until it is rebuilt, and if it is a clustered index, your data is completely unavailable while the index is rebuilt. In most cases, use the lightweight DBCC INDEXDEFRAG; it might take a little longer to complete because it runs without interfering as much with online processing.

Table 14-2 provides some tips about planning and running these medium impact commands. Capture a SQL Profiler trace of your workload on your test server to establish a performance baseline, then run each maintenance task and record the results. Compare them with the baseline and begin to put together a plan for maintenance and administration.

Table 14-2: Medium Impact Commands

Process: UPDATE STATISTICS
Impact: Depends on how the command is used; the higher the sample ratio, the greater the impact. Monitor your system carefully to determine optimal usage.
Recommendation: Use autocreate and autoupdate where possible. (Test this before changing your system's settings.) In high-transaction-volume systems, or in any situation where autostats are not used, explicitly update statistics on specific tables at scheduled times. The higher the sample ratio, the more useful your statistics will be, but the longer they will take to build.

Process: DBCC SHOWCONTIG
Impact: Minimal (with the WITH FAST option).
Recommendation: Use this to help determine whether decreasing query performance is linked to fragmentation.

Process: DBCC INDEXDEFRAG
Impact: Minimal (system transactions); no blocking.
Recommendation: If you determine that you have fragmentation, this is one method you can use to defragment your indexes.

Process: DBCC CHECKDB (note that CHECKDB includes CHECKALLOC)
Impact: Minimal with PHYSICAL_ONLY and NO_INFOMSGS; uses log analysis to achieve transactional consistency; makes heavy use of CPU and tempdb.
Recommendation: Corruption in the database stems primarily from problems at the hardware layer. Run this periodically, and before and after upgrades or hardware changes. Do not run it simultaneously with other CPU- and disk-intensive tasks.
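As an illustration of the low-impact fragmentation check described in the table, the following sketch assumes a hypothetical Orders table:

    -- Quick fragmentation scan; WITH FAST performs a fast scan that does not
    -- read the leaf-level pages and returns minimal information.
    DBCC SHOWCONTIG (Orders) WITH FAST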

Another potential problem is any need to truncate and refresh a table in the production system. You can create two tables and alternate between them.
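One way to implement this alternation, sketched here with hypothetical names (ProductsA and ProductsB hold the data, ProductsStaging is the refresh source, and readers query the dbo.Products view):

    -- Readers always query the view; repoint it after each refresh.
    -- Assumes the view was created initially as:
    --   CREATE VIEW dbo.Products AS SELECT * FROM dbo.ProductsA
    TRUNCATE TABLE dbo.ProductsB
    INSERT INTO dbo.ProductsB SELECT * FROM dbo.ProductsStaging
    GO

    -- Swap readers over to the freshly loaded copy.
    ALTER VIEW dbo.Products AS SELECT * FROM dbo.ProductsB
    GO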

This is just one example, but you can introduce many design changes to make sure that services are not interrupted. The following is a brief history of how things have changed:

  • In SQL Server version 6.5, CHECKALLOC had to run offline to avoid spurious errors. In SQL Server 7.0, you could run it with read-only access.

  • From SQL Server 7.0 on, CHECKDB, CHECKALLOC, CHECKFILEGROUP, and CHECKTABLE all became fully online operations.

  • A nonclustered index rebuild now takes only a shared (S) lock on the base table; it used to take an exclusive lock.

Intrusive Maintenance

Some maintenance operations are more intrusive than others. By intrusive, we mean that a table, or possibly the entire database, will be unavailable while the command is running. Notable examples include creating an index, rebuilding an index, updating statistics with a full scan (auto update statistics does not cover all customer scenarios), and restoring a database. You must plan for these intrusions.

Table 14-3 provides some tips about planning and running these processes should you decide to run them; you should try less intrusive versions before running some of these commands.

Table 14-3: Intrusive Maintenance Commands

Process: UPDATE STATISTICS FULLSCAN
Recommendation: Use autocreate and autoupdate where possible. (Always test changes prior to implementing them in production.) In high-transaction-volume systems, or in any situation where autostats are not used, explicitly update statistics on specific tables at scheduled times. The higher the sample ratio, the more useful your statistics will be, but the longer they will take to build.

Process: DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES
Recommendation: Another variation of SHOWCONTIG that can help you determine whether fragmentation is behind poorly performing queries. Using TABLERESULTS or ALL_INDEXES may take some time, depending on the results.

Process: DBCC CHECKDB with options other than PHYSICAL_ONLY (note that CHECKDB includes CHECKALLOC)
Recommendation: As with the CHECKDB usage described earlier, this can help you uncover possible corruption, but checking more than PHYSICAL_ONLY might take considerable time. Run it only as directed by PSS so that you do not cause availability problems.

Process: DBCC DBREINDEX (clustered index) WITH NO_INFOMSGS, or DROP INDEX/CREATE INDEX
Recommendation: Do this during the lowest levels of usage. It should be a very infrequent process and is covered here because of its former prominence. DBREINDEX is more optimized than dropping and recreating an index. Use the white paper mentioned in "Defragmenting Indexes" later in this chapter to determine whether you need to run DBREINDEX.
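A sketch of the full-scan statistics update from the first row of Table 14-3, assuming a hypothetical Orders table:

    -- Reads every row to build the most accurate statistics; schedule this
    -- for the lowest-usage window because of the scan cost.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN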

Defragmenting Indexes

Defragmenting indexes, whether small or large, might become a problem for you at some point. You do have a few options, but you need to weigh them carefully. All of them will have some impact on your system. The size of your database will definitely affect index defragmentation, since the larger the database is, the more complex the solution has to be.

More Info

An excellent resource on defragmenting indexes is the white paper Microsoft SQL Server 2000 Index Defragmentation Best Practices, located at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/Optimize/SS2KIDBP.asp. Use this section as a primer, but make that white paper your main guide.

Before you even attempt to defragment an index, you need to determine if you have fragmentation and if you need to actually do something about it. This is documented in the white paper.

If you need to defragment your indexes, one option is to run DBCC INDEXDEFRAG, but it might not be optimal for you. It can defragment only a single index at a time. It can span files, but it processes only one file at a time if you are using files and filegroups, although you have to issue only one command. DBCC INDEXDEFRAG is an online operation, so users can keep working while the defragmentation takes place.
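A sketch, assuming a hypothetical Orders table with a nonclustered index named IX_Orders_OrderDate:

    -- Online defragmentation of a single index; 0 means the current database.
    DBCC INDEXDEFRAG (0, Orders, IX_Orders_OrderDate)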

DBCC DBREINDEX can rebuild a single index or all indexes on one table. However, it is an offline operation; that is, the underlying table will be unavailable while the DBREINDEX is happening, and you cannot generate any transaction log backups. Its advantage over an explicit DROP INDEX and CREATE INDEX is that you can rebuild indexes that enforce PRIMARY KEY or UNIQUE constraints without having to recreate the constraints manually. Statistics are also rebuilt automatically, and DBREINDEX uses multiple processors if they are available. You will also need to account for extra disk space during a DBREINDEX operation.
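A sketch of an offline rebuild of all indexes on a hypothetical Orders table:

    -- '' rebuilds every index on the table; 90 is the fill factor.
    -- The table is unavailable while the rebuild runs.
    DBCC DBREINDEX ('dbo.Orders', '', 90) WITH NO_INFOMSGS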

More Info

The white paper mentioned in the previous reader aid has a good comparison of the two options.

Logical vs. Physical Fragmentation

Logical fragmentation means that the data stored inside a SQL Server file is fragmented even though the physical file on the file system is not. Your data can become logically fragmented if you perform many inserts and deletes. If your data is fragmented, you need to find a way to compact it so that it is contiguous. DBCC SHRINKDATABASE can shrink the database, even one that spans multiple files, but it is not coded to know about or reduce fragmentation; it only compacts the files.
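For completeness, a sketch of that compaction-only behavior, with a hypothetical database name:

    -- Compacts the database files, leaving 10 percent free space.
    -- This does not detect or repair logical fragmentation.
    DBCC SHRINKDATABASE ('Sales', 10)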

Physical fragmentation can also occur if you are using your server for multiple applications and each one is using the same disk(s). Physical fragmentation at the disk level can occur for SQL Server as well. Releases prior to SQL Server 7.0 could not grow files automatically, so you physically had to create additional segments for your data or log, which could cause physical fragmentation over time. With SQL Server 7.0 and SQL Server 2000, you can allow your databases to grow and shrink automatically, but realize that automatic growth and shrinkage can themselves cause physical fragmentation at the disk level, because the database files are repeatedly expanded and contracted.

This is directly related to the placement of your data and log files. If you place all your files on the same spindle, each database grows (or contracts, if shrinking) into the next available disk block(s). Over time, this can fragment your files, which is one good reason to place your database files on separate spindles. If you encounter severe physical fragmentation, you will have to consider defragmenting at the file system level. If you choose to do this, make sure that no users are connected to SQL Server, and possibly that SQL Server is shut down, so you do not encounter conflicts while the disk pages are moved around. It might be easier, however, to use sp_detach_db so you can copy and move the data and log files safely, reformat the disk so that it has a clean slate, and then copy the files back and reattach the database.
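A sketch of that detach-and-reattach sequence, with hypothetical database and file names:

    -- Detach so the files can be copied safely.
    EXEC sp_detach_db 'Sales'

    -- (Copy the .mdf and .ldf files off, defragment or reformat the volume,
    --  then copy the files back to their locations.)

    EXEC sp_attach_db @dbname = 'Sales',
        @filename1 = 'D:\SQLData\Sales_Data.mdf',
        @filename2 = 'E:\SQLLogs\Sales_Log.ldf'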

How you approach physical fragmentation will vary depending on the size of your data, on whether other things will be affected, and on how fragmented things are. Make sure you back up your databases prior to initiating such an action.

Note

Just because you have fragmentation does not necessarily mean you have to do something about it. Some storage area networks (SANs) actually have logic to prevent such a problem. That said, as part of your monitoring, determine whether you are actually seeing problems that might indicate fragmentation, such as rising average disk seconds per read and write, and poor throughput (disk reads and writes per second).

In the end, you will have to test what works best in your environment to give you availability and performance, as well as the windows of opportunity you will need to perform the necessary maintenance.

Example: Defragmenting a VLDB That Uses Log Shipping

Consider the following scenario: you have a database that log ships to another server every 15 minutes. Once a week, it is necessary to defragment the indexes on the database. Unfortunately, at the end of the process you now have a 5 GB transaction log that takes too long to send to and apply on your warm standby. To make matters worse, the database has 12 files. Also, you cannot control the fill factor directly with INDEXDEFRAG. If you consider DBCC DBREINDEX instead, you can control the fill factor and it will span files, but it will cause a lot of blocking on the database while it runs, and that blocking only increases with database size.

In this case, you might want to think about the following:

  • Concern yourself with fill factor rather than fragmentation. You would do well to scan the same amount of data with less I/O.

  • You might want to switch the recovery model to BULK_LOGGED, do a SELECT INTO and a rename for the data, and then switch back to the FULL recovery model (a sketch of this appears after this list). This keeps the integrity of the transaction log chain, and you get the benefits of minimally logged operations.

  • Consider disabling log shipping before the defragmentation, and then resynchronize after the defragmentation. This might potentially take less time; however, you will be exposed during that time.

  • DBCC INDEXDEFRAG is definitely slower, but it has less impact on end users. Because the transactions are smaller with INDEXDEFRAG, the transaction logs will be much smaller. It is likely that the latency between the log-shipped database and the production database will be negligible.

  • Alternate DBCC INDEXDEFRAG and DBCC DBREINDEX.

Ultimately, the right answer is the one that suits a company's conditions. Of the options listed previously, adjusting the fill factor of your indexes might be too granular, and unless you really know what you are doing, you might cause more harm than good. Switching recovery models and moving data around is too much movement for some and might be risky for others. Disabling log shipping is a viable option, but you will still have the problem of a large transaction log after the defragmenting is complete, and you are exposed from an availability perspective during that time, so you need to think about that. Alternating between DBCC DBREINDEX and DBCC INDEXDEFRAG is certainly viable, but unless you have a maintenance window that allows for DBREINDEX, just doing DBCC INDEXDEFRAG might be your best option from an availability standpoint. At some point you might have to do a full DBREINDEX or DROP and CREATE INDEX, but you might be able to stave that off.
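A minimal sketch of the recovery model switch from the second bullet, assuming a hypothetical Sales database and Orders table; note that SELECT INTO copies only the data, so indexes and constraints must be recreated on the new table afterward:

    -- Switch to minimally logged operations; the log chain stays intact.
    ALTER DATABASE Sales SET RECOVERY BULK_LOGGED
    GO

    -- Under BULK_LOGGED, SELECT INTO is minimally logged.
    SELECT * INTO dbo.Orders_New FROM dbo.Orders
    GO

    EXEC sp_rename 'dbo.Orders', 'Orders_Old'
    EXEC sp_rename 'dbo.Orders_New', 'Orders'
    GO

    ALTER DATABASE Sales SET RECOVERY FULL
    GO

    -- Back up the log right away so the standby can be brought current.
    BACKUP LOG Sales TO DISK = 'E:\Backups\Sales_log.trn'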

Database Corruption

Although it is not common, you can still encounter database corruption with SQL Server 2000. Corruption usually stems from a hardware-level error (a bad or improper disk driver is a good example) that corrupts your SQL Server databases as data is written to them. A good indicator of corruption is 605 errors in your SQL Server error log. If you suspect disk corruption, scour Event Viewer as well as any hardware-specific monitoring tools to find out whether you are seeing errors outside of SQL Server. Whether or not you see other messages, the best way to handle corruption is to call Microsoft PSS and have them help you assess the problem and walk you through the next steps. You do not want to accidentally cause additional problems by running DBCC commands that might not be necessary. At this point, you might be reliant on your backups and disaster recovery plans.



