17.4. Protecting Your Database

When working with any database, the old maxim "an ounce of prevention is worth a pound of cure" holds true. Before covering the details of how to perform a Sybase backup, we cover maintenance tasks that are normally performed on your servers on a nightly or weekend schedule. The primary maintenance operations are dbcc, which checks the health of your server, and update statistics, which updates the table distribution pages that Sybase uses to create query plans and is necessary to make your applications run well. These two steps are Sybase's "ounce of prevention." In addition to these tasks, you need to choose a transaction log archive strategy. Together, these tasks keep the database running smoothly and ready for proper backups.

17.4.1. dbcc: The Database Consistency Checker

Even though Sybase's dataserver products are very robust and much effort has gone into making them fault-tolerant, there is always the chance that a problem will occur. For very large tables, some of these problems might not show until very specific queries are run. This is one of the reasons for the database consistency checker, dbcc. This set of SQL commands can review all the database page allocations, linkages, and data pointers, finding problems and, in many cases, fixing them before they become insurmountable.

I strongly recommend running dbcc as part of a nightly preventive maintenance cycle. dbcc also should be run (if possible) before backing up a database, which adds a level of reliability to the backups. If a database is corrupt when it is backed up, the backup also contains the corruption, and it is possible that the backup will not be restorable. dbcc can, however, be a time-consuming process, taking an order of magnitude longer than a backup: a 200 GB database that takes 15 minutes to back up might take 3 hours to dbcc. Your system is not "down" while dbcc is running, but dbcc degrades overall system performance, and it can hold locks on tables that might impact application performance. Therefore, it is quite possible that your nightly maintenance window precludes running dbcc every night. One option is to run dbcc as part of a weekend maintenance cycle.

Since dbcc checks can be resource-intensive, consider adopting a strategy that takes advantage of object-level dbcc if you have a large database system (say, 500 GB or larger) with high availability requirements. On a given day, run dbcc checktable and dbcc tablealloc against a portion of the database; on subsequent days, check different tables. Over a period of days, you can accomplish a complete integrity check. For example, if your database has 200 tables in addition to the system tables, run dbcc on each system table on night one, run it against the first 50 user tables on night two, the next 50 the next night, and so on, until at the end of five nights you have checked every table in the database. On the sixth night, you can begin the cycle again.
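One way to script such a rotation is a small helper that slices a table list into nightly batches. Here is a sketch; the table-list file (one table name per line, perhaps produced by querying sysobjects through isql) and the batch size of 50 tables per night are assumptions you would adapt to your site:

```shell
# Sketch: print the dbcc commands for one night's slice of tables.
# Usage: dbcc_batch <night_number> <table_list_file>
dbcc_batch() {
    night=$1; list=$2; batch=50
    start=$(( (night - 1) * batch + 1 ))
    end=$(( night * batch ))
    # Select this night's slice of the table list and emit one
    # checktable/tablealloc pair per table.
    sed -n "${start},${end}p" "$list" | while read t; do
        printf 'dbcc checktable(%s)\ndbcc tablealloc(%s)\ngo\n' "$t" "$t"
    done
}
```

The output can be redirected to a file and fed to isql by your nightly maintenance job.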

Building dbcc checks into your regular backup/maintenance schedule can ensure that you have a consistent, accurate database available at all times. dbcc is not an optional step if you care about your data.

You should dbcc your database on some kind of routine basis. Simply running the command, however, is not sufficient. Search the dbcc output for error messages; usually searching for the strings corrupt and error with the Unix grep command or the Windows find command is sufficient.
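A minimal sketch of such a scan follows; the log filename is illustrative, and you may want to refine the patterns against your site's actual dbcc message catalog:

```shell
# Sketch: scan a dbcc output file for suspect messages.
# Exit status 0 means potential problems were found.
check_dbcc_log() {
    # -i because dbcc messages vary in case ("Table Corrupt", "error", ...)
    grep -i -e 'corrupt' -e 'error' "$1"
}
```

A nightly job might then do something like `check_dbcc_log /sybase/logs/dbcc_nightly.out && echo "dbcc found problems"` to alert an operator.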


17.4.1.1. Standard/nightly dbcc checks

There are different types of checks dbcc can run on a database, and they differ in terms of runtime length, locking levels used, and completeness. Some check only data pages while others check index consistency and sort order. Which check should be run depends on the database size, the database access requirements, and the thoroughness required. Table 17-1 lists several dbcc checks that are usually performed as part of normal nightly maintenance.

Table 17-1. Types of dbcc checks

dbcc checkalloc
    Purpose: Checks page allocations for all tables and indexes. Can fix some
    problems if allowed. Reports errors and the amount of space used. Very slow
    to run (lots of I/O) but requires very little locking.
    Access restrictions: Database owner only

dbcc checkstorage
    Purpose: Checks page storage.
    Access restrictions: Database owner only

dbcc checkcatalog
    Purpose: Checks the consistency of system tables in a database; a very
    quick check. Outputs a report on the segments defined for the database.
    Access restrictions: Database owner only

dbcc checkdb
    Purpose: Runs the same checks as checktable, but for all tables in a
    database.
    Access restrictions: Database owner only


If possible, you should run all these checks on each of your databases. The dbcc command is a Sybase-specific SQL command and is normally run using the isql command-line utility.

Here is the syntax for these dbcc SQL statements:

dbcc checkdb [( database_name )]
dbcc checkalloc [( database_name [, fix] )]
dbcc checkcatalog [( database_name )]

The fix option of dbcc checkalloc controls whether dbcc should repair the problems it encounters. The fix option requires that the database be placed in single-user mode and is therefore used only if prior runs of dbcc have detected problems (or if you have seen allocation errors in your Sybase error log). In fact, you should never run this command with the fix option unless you are instructed to do so by technical support or have looked up a previously discovered error message in the Sybase manuals. Before running dbcc checkalloc with fix, you must set the database option single user to true. Once that option is set, only one user can access the database.
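The full sequence might look like the following isql session. This is a sketch: the database name is illustrative, and you should revert the single user option to false the same way when the repair is done.

```
1> use master
2> go
1> sp_dboption mydb, "single user", true
2> go
1> use mydb
2> go
1> checkpoint
2> go
1> dbcc checkalloc(mydb, fix)
2> go
```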

dbcc checkstorage is a newer mechanism to check database consistency. This command requires some basic installation to create a database named dbccdb that is used to store consistency check results. dbcc checkstorage is similar to dbcc checkalloc and dbcc checkdb, but it runs this check without holding locks on the underlying tables.

17.4.2. Reorgs

Reorganizing table data within Sybase is an optional step, but one that can significantly enhance application performance. Sybase can leave extra whitespace when, for example, a variable-length column is resized with an update statement (the original row size must be preserved in case the statement is rolled back). Additionally, repeated row inserts and deletes on the same page can result in a nonoptimal distribution of rows. For example, if a data page can hold 10 rows, and you have deleted 9 of them, you could end up with a single row on the page; the page is not deallocated until the tenth row is deleted. This kind of allocation mismatch is not normally a big deal because rows are also inserted and pages combined, but if the size of your table changes too much, it can slow down the performance of selects. Because the data is stored in a b-tree, there is a point where the increased size of the data page chain causes an extra lookup by your queries. You can find an application taking 20 minutes to run one day and 35 minutes the next, simply because you added or modified some rows in a table.

The Sybase reorg command compacts pages to their optimal size. This is the equivalent of dropping and re-creating your indexes but is done much more efficiently. The reorg command optimizes your page layout by copying rows around in many small transactions, a procedure that is usually quite fast and that does not hold table-level locks on your data, so it can be done on running systems. Sybase provides four reorg commands:

reorg reclaim_space tablename [indexname]
    [with {resume, time = no_of_minutes}]
reorg forwarded_rows tablename
    [with {resume, time = no_of_minutes}]
reorg compact tablename
    [with {resume, time = no_of_minutes}]
reorg rebuild tablename [indexname]

The reorg compact command is a combined run of reorg reclaim_space and reorg forwarded_rows. With these reorg options, you can specify a maximum runtime so that they can be run within a defined maintenance window. The reorg rebuild command iterates through your data and optimizes everything; it cannot, however, be run within a specified time window. It is recommended that you choose either reorg rebuild or reorg compact and run it on all your tables at least once a month. Note that reorgs are needed only on tables that use the data-only locking (DOL) scheme.

17.4.3. Update Statistics

The update statistics tablename SQL command updates the distribution statistics on your tables. This distribution information optimizes queries by permitting Sybase's cost-based optimizer to correctly select the indexes to use in a query. Distribution information is kept on the table data (the clustered index) and on any nonclustered indexes for which you run the command. Once you have updated your statistics, future queries use the new statistical information to choose the correct indexes. Existing stored procedures, however, do not automatically use the updated information. To allow stored procedures to access this information, you need to run the sp_recompile tablename procedure on each of your tables. sp_recompile tells the system that the next time a stored procedure that uses your table is run, it should be recompiled using the latest table statistics. It is normal to run sp_recompile on all your tables directly after you run update statistics on them.

Commands like update statistics, reorg rebuild, and sp_recompile should be run on each table in your database and cannot be run at the whole database level. To find a listing of tables in your database, run the query select name from sysobjects where type='U'.
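This per-table loop is easy to script. The following sketch turns a table list (one name per line, such as the output of the sysobjects query above run through isql) into a maintenance script; the filenames are assumptions:

```shell
# Sketch: emit update statistics / sp_recompile SQL for each table
# read from standard input.
gen_stats_script() {
    while read t; do
        printf 'update statistics %s\ngo\nexec sp_recompile %s\ngo\n' "$t" "$t"
    done
}
```

Usage might look like `gen_stats_script < tables.txt > update_stats.sql`, with the resulting file then run through isql during the maintenance window.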


17.4.4. Configuration Audits

One important step to avoiding a disaster is to keep an up-to-date offline copy of your system configuration. A configuration audit can greatly decrease the time it takes you to restore in the event of a catastrophic system failure. You should have a copy of the commands that you used while setting up your dataserver, a copy of the results of the system stored procedures sp_helpdb and sp_helpdevice, and a bcp output of the most important system tables in the master database, including:

sysusages
syslogins
sysremotelogins
sysloginroles
syssrvroles
sysdatabases
sysdevices
syscharsets
sysconfigures
sysservers
sysresourcelimits
systimeranges
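Producing the bcp output for this list is easily scripted. Here is a sketch that prints one bcp command per table; the server name, login, and output directory are illustrative, and the password would be supplied at bcp's prompt:

```shell
# Sketch: print one bcp export command per master-database system table.
# Pipe the output to sh (or save it as a script) to actually run it.
gen_bcp_cmds() {
    for t in sysusages syslogins sysremotelogins sysloginroles \
             syssrvroles sysdatabases sysdevices syscharsets \
             sysconfigures sysservers sysresourcelimits systimeranges
    do
        # -c: character format, portable and human-readable
        echo "bcp master..$t out /sybase/audit/$t.bcp -c -Usa -SSYBSERVER"
    done
}
```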

In addition, maintain a hard copy by printing the output of the following queries:

$ cat backup_systemtables.sql
exec sp_helpdb
go
exec sp_helpdevice
go
select * from sysusages order by vstart
select * from sysusages order by dbid, lstart
select * from syslogins
select * from sysloginroles
select * from sysdatabases
select * from sysdevices
select * from syscharsets
select * from sysconfigures
select * from sysservers
select * from sysremotelogins
select * from sysresourcelimits
select * from systimeranges
go

These queries can be scheduled or run using a command such as:

$ XSQL backup_systemtables.sql

XSQL is explained later in this chapter. You should also ensure that you have an easily available backup of the Sybase software directory. This should include copies of your configuration file.

17.4.5. Implement Mirroring and Disk Striping

Mirroring, either at the server level or at the operating system level, can provide nonstop recovery in the event of media failure. I strongly recommend implementing mirroring at the operating system level. Sybase also supports software mirroring; if you do not have hardware mirroring, you should read "Mirroring Database Devices" in the Sybase System Administration Guide. When implementing a RAID solution for your databases, it is important to set up disk striping. Databases are often I/O-limited, and disk striping by nature optimizes I/O by spreading it across multiple disks.

17.4.6. How to Back Up Your Servers

Sybase backups are normally run while the database is up and active. The transaction log mechanism in Sybase guarantees system consistency. In Sybase, the SQL command to run a full database backup is dump database, and the command to run a transaction log dump is dump transaction. The dump command does a byte-by-byte copy of all populated pages in the database. Both the dump and load commands operate at the database level.

There are two normal ways to back up your databases. The first is to back up your databases every night and to set the database option truncate log on checkpoint. In this case, you do not run transaction log dumps. This is the normal scheme for development databases: databases in which your data can be rebuilt or where the loss of several days of data changes is acceptable. The other option is to run full database backups every night and incremental backups every 5 to 15 minutes. In this case, truncate log on checkpoint is not set. The incremental backup is performed by running the SQL command dump transaction to "/dat/myfile". This creates a file (/dat/myfile in this example) that contains your transaction dumps and deletes all transaction records that have been committed and are marked as having been flushed to disk.
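On Unix systems, such a schedule is typically driven by cron. A sketch of the crontab entries follows; dump_db.sh and dump_tlog.sh are hypothetical wrapper scripts around isql, not part of Sybase:

```
# Full database dump nightly at 02:00
0 2 * * *           /sybase/scripts/dump_db.sh mydb
# Transaction log dump every 15 minutes
0,15,30,45 * * * *  /sybase/scripts/dump_tlog.sh mydb
```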

You should name your dump files so that they sort alphabetically. For example, you can use the convention yyyymmdd.hhmmss. If you name them using such a format, it will be easier to work with them if they need to be restored.
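A helper that follows this convention might look like the following sketch; the backup directory and naming scheme are illustrative:

```shell
# Sketch: build a dump filename that sorts chronologically
# (databasename.yyyymmdd.hhmmss).
dump_file_name() {
    echo "/sybase/backups/$1.$(date +%Y%m%d.%H%M%S)"
}
```

A dump wrapper script can then pass the result to the dump database command.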


The dump and load SQL statements are not actually run by the dataserver. These commands are run by a separate process known as the backup server. You must therefore ensure that the backup server is running at all times.

17.4.6.1. Syntax of the dump statement

In its simplest form, a full database backup looks like this:

> dump database somedb to "file"

In its simplest form, a transaction log backup looks like this:

> dump transaction somedb to "file"

The file in these commands can be a tape device, a disk file, or a dump device defined in Sybase with the stored procedure sp_adddumpdevice. (Dumping to tape devices and logical dump devices has nowadays largely given way to backing up directly to disk files.) Example 17-1 shows sample dump commands.

Example 17-1. Sample dump commands

1> dump database mydb to '/sybase/backups/mydb.990312.bck'
2> go
Backup Server session id is:  20.  Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.41.1.1: Creating new disk file /sybase/backups/mydb.990312.bck.
Backup Server: 6.28.1.1: Dumpfile name 'mydb9909106EF4   ' section number 0001
mounted on disk file '/sybase/backups/mydb.990312.bck'
Backup Server: 4.58.1.1: Database mydb: 338 kilobytes DUMPed.
Backup Server: 4.58.1.1: Database mydb: 344 kilobytes DUMPed.
Backup Server: 3.43.1.1: Dump phase number 1 completed.
Backup Server: 3.43.1.1: Dump phase number 2 completed.
Backup Server: 3.43.1.1: Dump phase number 3 completed.
Backup Server: 4.58.1.1: Database mydb: 352 kilobytes DUMPed.
Backup Server: 3.42.1.1: DUMP is complete (database mydb).
1> dump transaction mydb to '/sybase/backups/mydb.tlogdmp'
2> go
Backup Server session id is:  23.  Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 6.28.1.1: Dumpfile name 'mydb9909106F49   ' section number 0001
mounted on disk file '/sybase/backups/mydb.tlogdmp'
Backup Server: 4.58.1.1: Database mydb: 16 kilobytes DUMPed.
Backup Server: 4.58.1.1: Database mydb: 20 kilobytes DUMPed.
Backup Server: 3.43.1.1: Dump phase number 3 completed.
Backup Server: 4.58.1.1: Database mydb: 24 kilobytes DUMPed.
Backup Server: 3.42.1.1: DUMP is complete (database mydb).

17.4.6.2. Backup striping and compression

The Sybase dump command can split a backup across several files, a process called striping. The dump command also supports integrated file compression. This native compression should always be used because compressed backups are faster and produce smaller files.

Here's the syntax for striping:

> dump database somedb to "file1"
> stripe on "file2"
> go

Here's the syntax to use compressed file backups:

dump transaction somedb to "compress::number::file2"
go

The number refers to the compression level and ranges from 0 to 9 (where 0 indicates no compression). I recommend using compression level 1 for your backups; this is the fastest level of compression. The performance and size gain made by compressing your dumps at level 1 is significant when compared to uncompressed backups. There is often a lot of whitespace in your raw database files, whitespace ideally suited to backup compression.

If you are backing up to a file using both striping and compression (recommended), the syntax of your command will look like this:

dump database mydb
    to "compress::1::/dumps/mydump-stripe1"
    stripe on "compress::1::/dumps/mydump-stripe2"
    stripe on "compress::1::/dumps/mydump-stripe3"
    stripe on "compress::1::/dumps/mydump-stripe4"
go

In all cases, if you are unsure about the specific use of a command, please refer to the Sybase system administration manuals for additional guidance.

17.4.7. Have a Run Book

Because most data centers have more than one dataserver and database, having clear documentation helps prevent mistakes. I strongly recommend that you keep a run book that contains configuration and batch job scheduling information on all your servers. A run book need not be a paper copy of your configuration; an electronic run book would be a better idea. Simply dump the output of common system procedures (sp_helpdevice, sp_helpdb) into text files and keep a few weeks' worth. Having information about your historical system layout can save time in the event of a problem (it gives you more data for analyzing how the problem started) and is critical in the event of a disaster.




Backup & Recovery: Inexpensive Backup Solutions for Open Systems
ISBN: 0596102461
Year: 2006
Pages: 237