With log shipping, whether you use the SQL Server 2000 Enterprise Edition functionality or code your own, there are some common points you must think about prior to configuring your log shipping solution.
More Info
Many of the concepts and rules that apply to configuring log shipping are similar or the same as those for upgrading or migrating to SQL Server 2000. For more information, see Chapter 13, Highly Available Microsoft SQL Server Upgrades.
Before you even think about implementing log shipping, you must determine the answers to certain questions. Without the answers, you cannot accurately plan and set up log shipping from both an availability and performance perspective.
Perhaps the most important questions to ask are not the technical ones but the ones related to why you are implementing log shipping and what the cost to the business will be.
How many transactions are generated per hour? Although transactions from an application or business standpoint are ultimately represented as one or many different SQL Server transactions, you need to understand the volume of business that is planned or occurring and translate that into SQL Server usage. For example, if each business transaction inserts data into three tables, that is a 1:3 ratio of business to SQL Server transactions. Multiply that number by the number of users over time, and you have your total transactions per hour. The eventual number of transactions should account for all work done, not just one type.
How much downtime is acceptable to end users? Nearly every business can tolerate some downtime; it is just a matter of how much. Although there is no set number, the measurement is usually in minutes or, at most, hours. This number guides how frequently transaction logs are backed up, copied, and applied. Remember to take any existing service level agreements (SLAs) into account when figuring out acceptable downtime, modify them as necessary, or put new ones in place that reflect the current state of the business.
How much data or how many transactions can be lost? As with the downtime question, most businesses are quick to respond, "We cannot lose any data or transactions." The reality, as with downtime, is that some loss can be tolerated; it is just a matter of how much. You will always be transactionally consistent as of the last transaction log applied to the secondary, but you might not be as current as the primary. Again, this number is usually measured in minutes or, at most, hours. The usual comfort level for lost data is anywhere from five minutes to about an hour for most businesses. Remember to take any SLA (new or existing) into account when figuring out acceptable loss of data.
How much money is allocated to the project? Money is rarely the linchpin here, because log shipping is conceptually an easy solution to understand. However, if, say, your network is too slow or your secondary is not at the same capacity as your primary, will you need to revamp things, and if so, is that possible? You do not want to attempt to implement anything, including something as simple as log shipping, if you cannot handle it financially.
After you understand the nontechnical issues and potential barriers, you can then consider the technical issues and barriers.
How large are the transaction log files generated? The size of the transaction logs generated is a crucial bit of information because it directly impacts your network as well as the time to copy and apply the transaction log. If you are consistently generating transaction logs in the 1 GB range (plus or minus), what will the effect be on a slower network? It certainly will not be good, especially if that network, which might only be 100 megabit, is shared with all other traffic and is already heavily used.
What is your network topology and speed? Your network can quickly become a barrier to availability in a log shipping solution if your network cannot handle the file sizes generated by the transaction log backup, as evidenced by the previous item.
How long does it take to copy the transaction log file? In conjunction with the previous two points, knowing how long it takes to copy the transaction log impacts how you configure your log shipping solution. This measurement is closely related to how fast the transaction logs can be made and then sent across the wire. For example, if it takes you two minutes to back up your transaction log, you should not be attempting copies every minute. That makes no sense. If it takes another two minutes to copy, that number influences how often you can then apply the transaction log.
How long does it take to apply each transaction log file? This number saves you in a disaster recovery scenario. If you know definitively it will take you, say, two minutes to apply your transaction logs (on average; some might be longer, some might be shorter) and you have 10 of them queued up, that means that, at the very earliest, your standby server could be brought online to the latest data available in about 20 minutes.
What is the capacity and configuration of your secondary server? Ideally, the capacity of your secondary server should be equal to that of the primary if you want things to run with the same horsepower.
Will you switch back to the primary if you perform a role change to a secondary server? As with failover clustering, once you have switched to another server, will you incur the availability hit again once the old primary is back in a state where it can assume its original role? If you have the proper capacity on the secondary, this should not be an issue.
Do you have domain connectivity? Log shipping only requires that all servers be able to connect to each other. However, some log shipping solutions might require domain connectivity. If you do not have domain connectivity and it is a requirement (such as to use Windows Authentication), this could be a potential barrier for some solutions involving log shipping.
Will any maintenance interfere with log shipping? Certain administrative tasks for SQL Server interfere with the ability for SQL Server to generate transaction logs. How often will you run such maintenance to ensure your secondary is as current as it needs to be? Or will you not run it at all?
How long will transaction logs be kept? Will they be archived? The transaction logs should be kept in accordance with an overall corporate archival scheme for your backups. Such a policy must be agreed on and set before implementing log shipping, as it will affect the planning stage.
Is replication configured on the primary database? If you are looking to combine log shipping and replication on a database, you need to take some considerations into account. For example, if you are looking to use log shipping to protect the publisher, you need to be able to rename the secondary SQL Server instance, something that is not possible with failover clustering.
More Info
For more information on log shipping and replication, see Log Shipping and Replication in Chapter 8, Replication.
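Several of the sizing questions above, such as how large the transaction logs are and how long they take to back up, can be answered empirically from the backup history SQL Server keeps in msdb. The following is a hedged sketch, not a definitive monitoring query; the database name is hypothetical, and it assumes backup history has not been purged from msdb:

```sql
-- Size and duration of recent transaction log backups for one database.
SELECT TOP 20
       backup_start_date,
       DATEDIFF(ss, backup_start_date, backup_finish_date) AS backup_seconds,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'MyDatabase'   -- hypothetical database name
  AND type = 'L'                     -- 'L' = transaction log backup
ORDER BY backup_start_date DESC
```

Trend these numbers over a full business cycle; a month-end batch run can generate far larger transaction logs than a typical hour.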
To briefly revisit a point mentioned in Chapter 3, log shipping inherently involves some amount of latency; it is not real time. Transactions are only applied to the secondary after they are backed up, copied, and restored from the transaction log. With log shipping, you are only as current as:
The last transaction completed on the primary
The last transaction log backed up on the primary
The last transaction log copied from the primary
The last transaction log applied to the secondary
The log shipping secondary is only as current as the last transaction log that is applied, so if the delta is five minutes off of the primary, your secondary should always remain five minutes behind if all is functioning properly. Analyzing these points is fairly straightforward.
If SQL Server experiences a failure before a transaction log is backed up, there is a chance that the server cannot recover and read that transaction log; those transactions are then lost, and you will only be as current as the last log available to the secondary.
If a transaction log is backed up but not copied over to the secondary and either the copy process fails or the hard disk that contains that transaction log is lost, again, you are only as current on the secondary as what is available to it.
If a transaction log is copied from the primary and is still accessible to the secondary, it can be applied, so your secondary will be as current as the last transaction in the transaction log.
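A quick way to see how current the secondary actually is at any moment is to query the restore history on the secondary server. This is a sketch, not the built-in log shipping monitor; the database name is hypothetical:

```sql
-- On the secondary: when was the last transaction log applied?
SELECT TOP 1
       destination_database_name,
       restore_date,
       DATEDIFF(mi, restore_date, GETDATE()) AS minutes_since_last_restore
FROM msdb.dbo.restorehistory
WHERE destination_database_name = 'MyDatabase'   -- hypothetical database name
  AND restore_type = 'L'                         -- 'L' = transaction log restore
ORDER BY restore_date DESC
```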
As mentioned briefly earlier, the capacity and configuration of your secondary is crucial for the success of any log shipping solution. If you underpower your secondary server, end users will notice the performance difference. The point of switching servers is to provide, at a minimum, the same quality or level of service you provided before.
One of the nice features of log shipping is that you can, as long as you have the capacity, use one instance as a standby for multiple databases. This takes two assumptions into account: that you have the capacity to handle the additional workload in the event that all databases would need to be servicing end users at once (not unlike planning failover clustering to ensure that multiple instances do not affect each other in a failover), and that there are no conflicts or security issues in doing so. Some common issues related to log shipping more than one database to a single secondary include the following:
Master and msdb databases Because you cannot have multiple master and msdb databases on one instance, it is not possible to restore the master or msdb databases from a different server with a different name. Any differences need to be manually reconciled.
Logins You cannot have duplicate login names at the SQL Server level. Also, configuring more than one database that genuinely uses the same login is not a wise choice if that user has different security privileges in each database, because it might compromise one of them.
Capacity Do you have enough horsepower to run all the databases should they all become active?
Objects Do you have any duplicate objects (such as jobs, alerts, and so on) that will conflict with those of other databases? If so, you need to resolve these types of issues prior to configuration.
More Info
Log shipping more than one database to a single standby is not unlike server consolidation. For more information on SQL Server consolidation and upgrades, consult Chapter 13.
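Some of these conflicts can be detected up front. The following sketch assumes a linked server named PRIMARYSRV (a hypothetical name) pointing at the primary, and is run on the proposed secondary:

```sql
-- Login names that already exist on both servers and would collide.
SELECT p.name
FROM PRIMARYSRV.master.dbo.syslogins AS p
JOIN master.dbo.syslogins AS s
    ON s.name = p.name

-- A similar join against msdb.dbo.sysjobs flags duplicate job names.
```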
It is very important to plan the disk capacity needed for implementing log shipping. You must know how large your transaction log file sizes are and how much disk capacity you have, and then coordinate those two factors with your corporate archival scheme. During the configuration of the built-in feature of SQL Server 2000, you can configure a retention period for the transaction log files, so it must be known from the start.
More Info
For more information on retention and archiving of backup files, see the topic Backup Retention in Chapter 10, Implementing Backup and Restore.
If the primary database utilizes full-text searching, you must consider how you will handle anything relating to full-text after the role change. The issues are pretty straightforward:
Full-text indexes are built as data is inserted into a database. In the event of a catastrophic failure, you cannot even consider moving the index from the primary server to the secondary because it is not available. Even if the primary server were available, what is the state of your secondary database? Is it in the same exact spot, and can you ensure that no changes were made prior to the role change? If the role change is planned and you know the two databases are in exactly the same consistent place, you might be able to copy the full-text index to the secondary. This scenario of a planned role change would also require the following:
The secondary database is loaded with the same database identifier (DBID) as the primary.
The catalog must be re-created on the secondary server at least once before the database is loaded in STANDBY or NORECOVERY states. This is required because the MSSearch service places catalog-specific information in the registry when the catalog is rebuilt and repopulated.
Catalog files must be copied only after the MSSearch service is stopped on both machines.
Catalog files must be copied each time the catalog is repopulated or updated and as often as possible when using Change Tracking.
Copying the registry entries mentioned earlier might make the MSSearch service unusable, so they should not be modified.
The steps to do this are listed below. The only steps that you would need to perform (once you establish the backup of the database and the DBID) when using Change Tracking are Steps 4, 5, and 6. These should be performed as often as possible because all the changes made to the full-text catalogs are in files in the FTDATA folder. To roll these changes to the secondary server in log shipping, you need to copy these files over to the secondary machine at regular intervals.
Stopping and starting MSSearch could cause an availability problem from an application perspective and cause perceived unavailability or downtime of your application. Do this with caution after making the proper risk assessment of your business.
1. Create and populate the full-text catalog on the primary machine.
2. Back up the primary database.
3. Restore the primary database on the secondary server. The DBID has to be the same. Follow the procedure in Knowledge Base article 240867, INF: How to Move, Copy, and Back Up Full-Text Catalog Folders and Files, to restore a database with the same DBID.
4. Stop the MSSearch service on both the primary and secondary servers.
5. Copy the SQL xxxxxyyyyy folder under FTDATA over to the location on the secondary server.
6. Restart the MSSearch service.
7. Re-create and repopulate the catalogs on the secondary database.
8. Restore the database backup using the STANDBY option and a .tuf file extension.
9. Run the wizard and select the existing database restored in Step 8.
10. Select the appropriate settings for log shipping.
11. Create a scheduled job to stop MSSearch, copy the catalog files from the FTDATA folder over to the secondary machine, and restart the MSSearch service.
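The scheduled job in the final step can be sketched as a single Transact-SQL job step using xp_cmdshell. The server, share, and catalog folder names below are hypothetical, and xp_cmdshell must be permitted in your environment; heed the earlier caution about the availability impact of stopping MSSearch.

```sql
-- Refresh the copied full-text catalog files on the secondary.
-- All paths and names here are hypothetical.
EXEC master.dbo.xp_cmdshell 'net stop mssearch'
EXEC master.dbo.xp_cmdshell
    'xcopy "\\PRIMARYSRV\FTDATA\SQL0000800005" "C:\Program Files\Microsoft SQL Server\MSSQL\FTDATA\SQL0000800005" /E /Y'
EXEC master.dbo.xp_cmdshell 'net start mssearch'
```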
You cannot take a copy of an older full-text index and apply it to a database that is out of sync with that copy. Although it might seem to work for a while, you will eventually encounter problems. This is not supported or recommended.
You cannot build a full-text index on a database that is in NORECOVERY or STANDBY mode, which is required for the transaction logs to be applied. You thus have to start building the index after the database is brought online after the role change. This means some features of the application will not be functioning (which might prove unacceptable) and, depending on the size of the index, it could take quite some time to generate.
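If you take the rebuild-after-role-change approach, the repopulation itself is a couple of system procedure calls once the database has been recovered and brought online. A sketch with a hypothetical database and catalog name:

```sql
-- Run on the secondary after the role change, once the database is online.
USE MyDatabase
EXEC sp_fulltext_database 'enable'                  -- ensure full-text is enabled
EXEC sp_fulltext_catalog 'MyCatalog', 'rebuild'     -- re-create the catalog
EXEC sp_fulltext_catalog 'MyCatalog', 'start_full'  -- kick off a full population
```

Population is asynchronous; depending on data volume it can take a long time, which is exactly the availability concern raised above.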
More Info
The Microsoft Support Knowledge Base article 240867, How to Move, Copy, and Back Up Full-Text Catalog Folders and Files, details the process of how to move, copy, and back up full-text files to another server. This article can be found at http://support.microsoft.com.
Log shipping requires that the databases participating in log shipping be in either Full or Bulk-Logged recovery models. By default, all databases are configured with Full recovery, as that is the default behavior of the model database. Recovery models were first introduced in SQL Server 2000, and there is no real equivalent of this feature in prior versions of SQL Server. Simple recovery does not allow you to make transaction log backups, and therefore does not let you use log shipping because it will break the log sequence number (LSN) chain. When you apply transaction logs, the process checks to see that the first LSN in the new backup file comes sequentially after the last LSN applied. A SQL Server 2000 error in LSNs during the transaction log applications should look similar to this:
Server: Msg 4305, Level 16, State 1, Line 2
The log in this backup set begins at LSN 6000000007200001, which is too late to apply to the database. An earlier log backup that includes LSN 6000000005000001 can be restored.
Server: Msg 3013, Level 16, State 1, Line 2
RESTORE LOG is terminating abnormally.
This is the SQL Server 7.0 error:
Server: Msg 4305, Level 16, State 1, Line 2
This backup set cannot be restored because the database has not been rolled forward far enough. You must first restore all earlier logs before restoring this log.
Server: Msg 3013, Level 16, State 1, Line 2
Backup or restore operation terminating abnormally.
With all versions of SQL Server prior to SQL Server 2000, if you did anything such as a Transact-SQL BULK INSERT or another nonlogged operation, or if you set the Truncate Log On Checkpoint option (which is no longer valid in SQL Server 2000), you would invalidate regular transaction log backups. First and foremost, Truncate Log On Checkpoint is exactly what it says: when a checkpoint happens, the log is truncated and not backed up. Database administrators (DBAs) frequently like to turn this on without thinking about the recoverability of SQL Server.
From a log shipping perspective, there is no apparent difference between Full and Bulk-Logged, but there are some differences that affect performance. If you are using Bulk-Logged, the transaction logs contain information added or modified since the last transaction log backup. In addition, the transaction log contains the data pages modified by any bulk operations (such as BULK INSERT or BCP) since the last backup. This means you will have potentially larger file sizes that need to be backed up, copied, and restored if you are doing bulk operations. This definitely impacts your copy time if you have limited network bandwidth. Consider the following example: if you do a bulk load of data on a single-processor server that runs at 3 MB per second and takes 10 minutes, you may have generated 1800 MB (or 1.8 GB) of changed data pages!
With Full, you still have the same large file sizes as with Bulk-Logged. However, one benefit of Full is that the data rate is reduced to nearly match your network. Even though you are technically moving a bit more information across the network in total (Bulk-Logged and Full produce about the same file size; Bulk-Logged gathers the data directly from the data file instead of indirectly from the transaction log as redo log records), the secondary server does not lag far behind, because the primary is being loaded much more slowly due to the logging of all inserted data. If you have a server with eight processors, it stands to reason that the load rate could be roughly eight times higher, which proportionally affects the size of the transaction logs and the time to process them.
If you configure log shipping and then switch the recovery model to Simple at any point after it is up and running, you invalidate log shipping. An example of when this could occur is if you were doing a bulk insert and wanted to minimize your log file size and not have a record of the page changes. The same would also apply if someone, even briefly, whether accidentally or on purpose, switched the recovery model to Simple and immediately changed it back.
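One safeguard is to verify the recovery model before configuring log shipping, and periodically afterward. A sketch with a hypothetical database name:

```sql
-- Check the current recovery model; log shipping requires FULL or BULK_LOGGED.
SELECT DATABASEPROPERTYEX('MyDatabase', 'Recovery') AS recovery_model

-- Set it explicitly if needed. Switching to SIMPLE at any point
-- afterward breaks the log backup chain, as described above.
ALTER DATABASE MyDatabase SET RECOVERY FULL
```

If the model was ever switched to Simple, a new full (or differential) backup is needed before transaction log backups can continue the chain.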
More Info
For more information on recovery models, see Chapter 10, Implementing Backup and Restore.
Network bandwidth is a potential barrier to successful log shipping implementation. The three main issues associated with network bandwidth were listed in the earlier section, Technical Questions. They are:
How large are the transaction log files generated?
What is your network topology and speed?
How long does it take to copy the transaction log file?
Although log shipping is arguably the easiest SQL Server-based high availability method to configure, it can crush a network if your transaction log backup files are huge. That will eventually affect the delta of time by which the primary and secondary are out of sync. Coupled with that, you also need reliable network connections between the participating servers. Intermittent network problems will not increase your availability when it comes to log shipping; they can only hurt it.
If you are sending large transaction logs on a regular basis, you should configure a separate and dedicated network card as well as a private network (such as a 10.x.x.x network) for sending transaction logs between the primary and secondary servers. This ensures that user traffic will not be affected. You might also want to consider a faster network (such as Gigabit Ethernet) to ensure that your recoverability happens much more quickly.
Log shipping only captures what is in the initial database backup (including users, but not their corresponding server-level logins) as well as any transactions captured in subsequent transaction logs. That leaves any other objects that reside outside the database, or are not captured as part of the transaction log, to be dealt with.
As far as objects go, outside of Data Transformation Services (DTS) packages, you should really be placing any objects related to the database in the database itself. Unless there is a logical reason to put the objects outside the database, you will create problems when you are trying to synchronize a secondary server to become a warm standby. If such objects exist, they should be recorded and placed in any document related to the solution. If possible, script the objects and have those scripts available.
As just noted, things such as stored procedures should be created in the database itself. Because the creation of a normal stored procedure is captured in the transaction log, it will automatically be created at the secondary. If you have anything residing outside the database, such as in msdb, you need to manually create all of these objects on the secondary server to ensure a successful role change.
There are two levels of logins that you need to take into account:
Server level These are the server-level logins referenced by the users in a database. They reside in the syslogins system table.
Database level These are the users that a connection uses to access the database.
If you do not take both types into account, you could end up with orphaned users in your database and application-related problems accessing the database after the role change process.
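After a role change, orphaned users can be reported and, where a login of the same name exists on the new server, remapped with sp_change_users_login. A sketch with hypothetical database and user names:

```sql
USE MyDatabase

-- List database users whose logins do not exist on this server.
EXEC sp_change_users_login 'Report'

-- Remap one orphaned user to the login with the same name.
EXEC sp_change_users_login 'Auto_Fix', 'AppUser'
```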
More Info
The section Step 3: Post-Wizard Configuration Tasks later in this chapter details one way to configure the transfer of your logins and users.
To move DTS packages to another SQL Server, the easiest method is probably to save the package to a file and manually copy and add it to the secondary. There is no automated way to perform this task.
Make sure that the DTS package is compatible with the version of SQL Server that is the destination. If, for example, functionality or the file format changed due to a fix in a service pack (as it did between SQL Server 7.0 Service Pack 1 and Service Pack 2) and the other server cannot handle that functionality, the package will be broken on the standby; you need to resolve those issues prior to bringing that secondary online as a primary. Similarly, you might need to fix settings (such as source or destination server names) within the DTS package to match the new server. Do not assume that it works unmodified.
One of the most important considerations for using log shipping is planning for how you will redirect end users and applications to the new server. Unlike failover clustering, where the switch to another physical server is abstracted to an end user or application, a change to another server in most cases is not abstracted. That means the systems and applications accessing SQL Server must tolerate such a change. Like failover clustering, there will be some downtime involved.
The following are some tips for application developers when coding for a solution that will eventually include log shipping:
Heed the earlier warning about object placement. If developers create objects outside of the database, the IT staff should be notified and it should be well documented so that this can be accounted for in disaster recovery planning.
Make all transactions as small, or atomic, as possible. Atomic queries might increase performance and reduce some resource contention (such as I/O), and during recovery and the application of transaction logs they help things go more quickly.
This is not to say that longer running transactions are never appropriate, and you should not take one logical transaction and break it up for the sake of making it smaller. Small transactions usually minimize lock contention; I/O will be close to the same as for a longer transaction because you are essentially doing the same work, just broken up into smaller chunks. A long running transaction, if it is interrupted (that is, canceled), could require a long undo operation, and it might also prevent log truncation.
Similarly, when queries are written, if they cross databases, how will that work after a role change? Can the queries tolerate two databases residing on different servers?
With SQL Server 2000, the worry of nonlogged operations does not exist because page changes and such are tracked with Bulk-Logged or Full recovery models (either is required for log shipping). However, those types of operations will impact the log size, so the use of those types of operations should still be taken into account to some degree.
Ensure all objects related to the log-shipped database are uniquely named. This prevents problems in the event that multiple databases are log shipped to a single secondary.
Do not code your application for a specific SQL Server service pack level, because if your secondary is not the same as your primary, you might encounter issues in a role change.
Do not hard-code server names and IP addresses into applications. Hard-coding means you could never name the server anything other than what it is in the application; for example, if you are using failover clustering for both your primary and secondary, you are unable to rename the server. Make applications flexible enough to handle a server name change.
Use fully qualified names for each object to ensure that you are always referencing the correct object. This not only helps your performance, it helps clarify things when you might have more than one database on a SQL Server with the same object name.
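As an illustration of the naming tip above, a fully qualified reference includes the database, owner, and object. The database, table, and column names here are hypothetical:

```sql
-- Fully qualified: database.owner.object. Unambiguous even when more
-- than one log-shipped database on the server has an Orders table.
SELECT o.OrderID, c.CompanyName
FROM Sales.dbo.Orders AS o
JOIN Sales.dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
```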
Code retry logic in your application to attempt a failed transaction again. If this is not possible, at a bare minimum you should post graceful error messages to give your end users a positive experience.
One way to mitigate lost transactions in the event of a failure is to use some form of middleware, such as Microsoft Transaction Server, Microsoft BizTalk, Microsoft Message Queue, or Microsoft Distributed Transaction Coordinator, to queue the transactions. This needs to be integrated into your application and overall solution should you choose to do this.
If you are using the SQL-DMO transfer operation in your code, it truncates the transaction log, thus breaking log shipping.
More Info
For updates or workarounds, see Knowledge Base article 316213, FIX: SQLDMO Transfer Operation Truncates Transaction Log of Destination Database.
Similarly, if you are using the Copy Objects task of DTS in any of your packages, it might break log shipping because it switches the recovery mode to Simple for the database in its operations.
More Info
For updates or workarounds, see Knowledge Base article 308267, FIX: DTS Copy Objects Task (DMO) Breaks Transaction Log Backup Chain By Switching Recovery Mode to Simple During Transfer.
Performing the role change from a SQL Server perspective is straightforward; redirecting clients so that the change has the least impact on applications and end users is not. As in failover clustering, not only is there an interruption in service, you are also switching to a completely different server. You have a few options available to you to assist in the role change:
As mentioned earlier, code a mechanism into the application to set the SQL Server.
Code your application to use Open Database Connectivity (ODBC); then all you need to do is change the server that the ODBC data source name (DSN) points to. With this solution, you would still have the interruption, but the connection string would always be the same.
You can also use Network Load Balancing to abstract the server switch during the role change. This is one of the few uses for which Network Load Balancing works well with SQL Server.
The goal is to pick a manageable solution that will incur the least downtime.
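The SQL Server side of the role change mentioned above boils down to applying any remaining copied transaction logs and then recovering the database. A hedged sketch with hypothetical database and file names:

```sql
-- On the secondary: apply each queued log in sequence, leaving the
-- database able to accept further restores.
RESTORE LOG MyDatabase
    FROM DISK = 'D:\LogShip\MyDatabase_tlog_200301011215.trn'
    WITH NORECOVERY
-- ...repeat for every remaining log file, oldest first...

-- Then bring the database online as the new primary.
RESTORE DATABASE MyDatabase WITH RECOVERY
```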
More Info
For more information on configuring Network Load Balancing for use with log shipping, see Chapter 5, Designing Highly Available Microsoft Windows Servers.
As with any technology, log shipping has security implications you must consider up front that influence how you plan and deploy a solution. Because all security changes (such as GRANT statements) are recorded in the transaction log, they are applied automatically to the secondary server once log shipping is configured and working properly.
More Info
For additional information or changes to what is written in this section, please reference Knowledge Base article 321247, HOW TO: Configure Security for Log Shipping, which can be found at http://support.microsoft.com.
If the server hosting your SQL Server 2000 instance is part of a domain, and not a workgroup, you should use a domain account to start the SQL Server services. A domain account is absolutely required for certain features, such as failover clustering. If this is a nonclustered instance, you can use a local network account or the Local System account.
A local network account is one that is created on the server, but it is not the Local System account. This would be used if you are using pass-through security. This means that all SQL Servers participating in log shipping must use the same network account, privileges, and passwords created locally on the server. You are bypassing traditional network security when a SQL Server process requests resources from another SQL Server.
Various processes on the server use the Local System account, also known as LocalSystem. SQL Server can also use it, but modifying it in any way could have consequences outside of SQL Server. LocalSystem cannot be used for network pass-through security because the LocalSystem account's password is different on each server.
More Info
For more information, see the topic Setting Up Windows Service Accounts in SQL Server Books Online.
Because the transfer of logins using the built-in functionality requires a bulk copy out of the syslogins table, this file should be contained in a secure directory. Files generated by the bcp utility are in plaintext and not encoded in any way.
You also need to secure the directories that contain your transaction log backups. The files created by SQL Server are not encrypted in any way. Secure the network share so that the servers participating in log shipping can access it, but no one else can.
You can log ship a database across domains. To do this, you should set up two-way trusts between the domains involved in log shipping. This is done through the Active Directory Domains and Trusts tool located in the Start menu in the Administrative Tools folder, as shown in Figure 7-1. If it is not there, it can be started from %systemroot%\System32\Domain.msc. If you cannot establish trusts, you can use pass-through security, which involves the use of the local network account. This cannot be done on a clustered instance of SQL Server.
Microsoft recommends you use Windows Authentication for the SQL Servers participating in log shipping. If you use mixed mode security, with the built-in functionality, a login with the name of log_shipping_monitor_probe will be created if it does not already exist.