Custom Log Shipping

The following article discusses the custom log shipping routines used for the

"MYDB" production database. The destination server is used for corporate

reporting functions whilst in standby mode.

Architecture

The architecture is simple enough. We take full and transaction log backups via a custom-written stored procedure; the routine dumps the files to a disk array on the source server and optionally compresses them with gzip. The stored procedure then copies each file to the remote server over a network share, using appropriate service user privileges for the instance. Any failures are reported to the DBA via email. Note that the copy to the remote server can easily be changed to an FTP transfer, and that the servers must be time-synchronised, otherwise you will get the error "There is a time difference between the client and the server".

At the destination server, we manually run the Initialise stored procedure. This routine searches the dump directory for files matching the prefix and postfix patterns passed in via the procedure's parameters. Using xp_cmdshell and the dir command, we build up a list of files in the directory, locate the last FULL backup (unzipping it if necessary), then restore it. We then search for a differential backup, apply that, and carry on with the subsequent transaction log files. All restore commands used for the database are logged to a user-defined table in the master database.
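As a minimal sketch of this file-list step (an illustration only, not the article's actual procedure; the directory path and file mask are example values):

    -- Capture a bare directory listing, sorted by name, into a temp table.
    create table #dirlist (fname varchar(255) null)

    insert into #dirlist (fname)
    exec master..xp_cmdshell 'dir /b /o:n e:\dbbackup\mydb\mydb_*.bak*'

    -- Strip null rows and the "File Not Found" message before parsing.
    delete from #dirlist
    where fname is null or fname = 'File Not Found'

    select fname from #dirlist order by fname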

Finally, we call a simple monitoring stored procedure that looks for errors in the log table every N minutes (as defined by the DBA), emailing errors via CDOSYS and your local SMTP server.

Advantages

  • No requirement for linked servers
  • Simple scripts with no complex lookups over the MSDB database; very easy to change and enhance
  • Easy to set up and configure
  • Will not, by default, force the overwriting of existing database files used by other databases
  • Will search for full, differential and log backups and apply them in the correct order, so long as the files copy OK

Disadvantages/Issues

  • Requires a network share to copy files (can be a security issue)
  • Cannot pre-detect missed or missing files (as you could if you utilised the MSDB)
  • Cannot pre-detect invalid file sizes
  • Does not do a quick header check of each backup file and compare it with the DBA's passed-in parameters
  • Relies on the DBA to supply MOVE commands for the restore as a parameter (see later); does not dynamically pick up database files/filegroups from the backup files themselves
  • User sessions must be killed on the log shipped database before attempting the restore command
    • The only way I can see around this is via a physical log reader/parser program, with the DBA running SQL scripts rather than applying the log itself

Configuration

All scripts are custom written and are typically stored in the master

database of the instance.

Installation

Source Server (server-1)

The source server utilises the following database objects:

  • DBBackup_sp - master database - dumps full, log or differential backups to a specified directory on the source server, optionally zips the files, emails on error, copies (duplexes) backups to the destination server via a UNC path, and deletes files older than N days.
  • SendMail_sp - master database - utilises simplecdo.dll (a custom-written VB COM component that uses CDOSYS to send emails; JMail can be used instead) to email the administrator on backup errors.
  • dtdelete.exe - c:\scripts\ - command line executable that removes files N days old from the backup destination directory (recursively if so desired).
  • gzip.exe - c:\scripts\ - command line file compression utility for the backup files (see the sketch after this list).
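The compression step presumably amounts to shelling out to gzip, along these lines (the path and file name are example values, not the procedure's actual logic):

    -- Compress a backup file in place; produces mydb_full.bak.gz.
    exec master..xp_cmdshell 'c:\scripts\gzip.exe d:\dbbackup\mydb\mydb_full.bak'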

Destination Server (server-2)

The destination server utilises the following database objects:

  • usp_LogShipping_Init - master database - run manually (first time only, or on serious error); searches the incoming backup directory, applies the most recent FULL backup, then the last differential (if any), then the subsequent transaction log files, leaving the database in norecovery or standby mode. Logs all recoveries to its audit table.
  • usp_LogShipping_Continue - master database - as above, but searches for differentials and transaction logs only. If a full backup is found, then Init is recalled to reapply the full backup. Logs all recoveries to its audit table.
  • usp_LogShipping_Finish - master database - manually called by the DBA; finalises the recovery of a database and makes it available for read/write. IMPORTANT - the DBA must turn off the log shipping jobs on the destination server before attempting this command.
  • usp_LogShipping_Monitor - master database - reads the audit table below and emails the DBA any errors found.
  • LogShipping_Audit - master database - table that logs all recovery attempts made by the log shipping routines.
  • SendMail_sp - master database - utilises simplecdo.dll (a custom-written VB COM component that uses CDOSYS to send emails; JMail can be used instead) to email the administrator on errors.
  • gzip.exe - c:\scripts\ - command line de-compression utility for the backup files.
  • usp_KillUsers - master database - kills all users connected to a given database on the instance (a minimal sketch of the approach follows this list).
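As an illustration only (the real usp_KillUsers ships with the download), the kill-users approach boils down to something like this:

    -- Sketch: kill all sessions connected to a given database.
    declare @spid smallint, @sql varchar(100)

    declare spids cursor local fast_forward for
        select spid from master..sysprocesses
        where dbid = db_id('mydb') and spid <> @@spid

    open spids
    fetch next from spids into @spid
    while @@fetch_status = 0
    begin
        set @sql = 'kill ' + convert(varchar(10), @spid)
        exec (@sql)
        fetch next from spids into @spid
    end
    close spids
    deallocate spids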

IMPORTANT - Consider using the command alter database xxx set restricted_user with rollback immediate rather than utilising the usp_KillUsers stored procedure. The routines themselves use alter database; switch as required.
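For example:

    -- Force remaining sessions off the standby database before restoring.
    alter database mydb set restricted_user with rollback immediate

    -- Once the restore work is complete, reopen the database.
    alter database mydb set multi_user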

Log Shipping Example 1 - Setup and Running

Server 1 (source)

Here we assume the server has been configured, with instances and databases all running nicely, and the DBA is now ready to sort out the backup recovery path for the database MYDB. This database has four filegroups (one data file per group) and a single log file. The database itself is approximately 3.6GB in size, runs in full recovery mode, and requires point-in-time recovery in 10 minute cycles.

The DBA creates the following directory on the server to hold the backups: d:\dbbackup\mydb\

There is ample space for four days' worth of backups. LiteSpeed or other third-party backup products are not being used.

The DBA wants the full backup files zipped, and files older than 4 days

automatically removed.

On the remote server, the DBA creates the duplex directory e:\dbbackup\mydb\. A share called standbydest (for want of a better name) is created on the dbbackup directory, and NT security is configured accordingly.

The DBA configures the following stored procedure to run twice daily for FULL backups via a DTS job:

exec DBBackup_sp 'full', 'mydb', 'c:\scripts', 4, 'c:\scripts', 1,
    'd:\dbbackup\mydb\', '\\server2\standbydest\mydb\',
    'support@chriskempster.com', 'Y'

We are running full backups at 6am and 8pm to cover ourselves nicely in terms of recovery (scheduling not shown here; it is done in DTS). We chose not to run differentials and are happy with recovery times in general. The script above and its parameters tell the backup where our gzip and dtdelete.exe files are (c:\scripts), the backup destination, and the duplex destination on server-2. We are retaining files no more than 4 days old, and the value one (1) tells the routine to zip the file created.
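To make the parameter positions explicit, here is the same call annotated; the labels are my reading of each argument, not necessarily the procedure's formal parameter names:

    exec DBBackup_sp
        'full',                         -- backup type: full, log or differential
        'mydb',                         -- database to back up
        'c:\scripts',                   -- location of gzip.exe
        4,                              -- retain files for N days
        'c:\scripts',                   -- location of dtdelete.exe
        1,                              -- 1 = gzip the backup file, 0 = do not
        'd:\dbbackup\mydb\',            -- backup destination on the source server
        '\\server2\standbydest\mydb\',  -- duplex destination on server-2
        'support@chriskempster.com',    -- DBA email address
        'Y'                             -- Y = notify on success as well as failure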

Next we schedule the transaction log file backups:

exec DBBackup_sp 'log', 'mydb', 'c:\scripts', 4, 'c:\scripts', 0,
    'd:\dbbackup\mydb\', '\\server2\standbydest\mydb\',
    'support@chriskempster.com', 'N'

The script above follows the same pattern as the full backup: it tells the routine where our gzip and dtdelete.exe files are (c:\scripts), the backup destination, and the duplex destination on server-2, and retains files no more than 4 days old. This time the value zero (0) tells the routine not to zip the file created, and the 'N' means the DBA is only notified on backup failure, not success.

The DBA should actively monitor the backups for a good two or three days, ensuring that full and log backups are copied successfully, that they can be manually restored on server-2, and that the deletion of files older than N days is working fine.
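A quick sanity check of a copied file on server-2 can be as simple as the following (the file name is an example):

    -- Confirm a copied backup file is readable and inspect its header.
    restore verifyonly from disk = 'e:\dbbackup\mydb\mydb_full.bak'
    restore headeronly from disk = 'e:\dbbackup\mydb\mydb_full.bak'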

Server 2 (destination)

As mentioned in the server 1 (source) setup, the DBA has already created the duplex directory e:\dbbackup\mydb\ and configured a share on the \dbbackup directory called standbydest using NT security. For server-2, we schedule three jobs that execute stored procedure routines to initialise, continue and monitor log shipping.

Initialise

The main stored procedure is the log shipping initialise routine. We supply it with a number of parameters: the name of the database to be restored, the location of the backups (remember - the files were copied from server-1), the standby redo file, the prefix and postfix file patterns so the routine can build a list of files on disk to restore from, and finally, the MOVE commands for each database filegroup.

Here is an example:

exec usp_LogShipping_Init
    'mydb'
    ,'e:\dbbackup\mydb\'
    ,'e:\dbbackup\mydb_standby.rdo'
    ,'mydb_'
    ,'.bak*'
    ,'_full.bak'
    ,'_dif.bak'
    ,'_trn.bak'
    ,'
    MOVE ''MYDB_SYSTEM'' TO ''c:\dbdata\mydb\mydbstandby_system01.mdf'',
    MOVE ''MYDB_DATA'' TO ''c:\dbdata\mydb\mydbstandby_data01.mdf'',
    MOVE ''MYDB_INDEX'' TO ''c:\dbdata\mydb\mydbstandby_index01.mdf'',
    MOVE ''MYDB_AUDIT'' TO ''c:\dbdata\mydb\mydbstandby_audit01.mdf'',
    MOVE ''MYDB_LOG'' TO ''c:\dbdata\mydb\mydbstandby_log01.ldf''
    '
    ,'c:\scripts'

The DBA may consider further customisation of this script, namely building the MOVE statement by reading the backup file header. You can, but I work on the KISS principle, and in this scenario we can't go wrong.
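If you did want to build the MOVE clauses dynamically, a sketch along these lines would do it (the backup file name is an example; the six columns are what RESTORE FILELISTONLY returns under SQL Server 2000):

    -- Read logical file names from a backup header to drive MOVE clauses.
    create table #filelist (
        LogicalName   varchar(128),
        PhysicalName  varchar(260),
        Type          char(1),          -- D = data file, L = log file
        FileGroupName varchar(128),
        Size          numeric(20,0),
        MaxSize       numeric(20,0)
    )

    insert into #filelist
    exec ('restore filelistonly from disk = ''e:\dbbackup\mydb\mydb_full.bak''')

    select LogicalName, Type from #filelist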

The initialise routine is NOT SCHEDULED and is only run manually if we need to force the re-initialisation of log shipping from the last full backup. The master..LogShipping_Audit table is updated accordingly with the files applied by the routine, or with any failure/error information.

NOTE - This routine will locate the last full backup and apply it, then the

last differential (if any) and all subsequent transaction logs.
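In effect the routine generates a restore sequence along these lines (the file names are illustrative, not actual output):

    -- 1. Last full backup, relocating files to the standby locations.
    restore database mydb
        from disk = 'e:\dbbackup\mydb\mydb_full.bak'
        with norecovery,
        move 'MYDB_SYSTEM' to 'c:\dbdata\mydb\mydbstandby_system01.mdf',
        move 'MYDB_DATA' to 'c:\dbdata\mydb\mydbstandby_data01.mdf',
        move 'MYDB_INDEX' to 'c:\dbdata\mydb\mydbstandby_index01.mdf',
        move 'MYDB_AUDIT' to 'c:\dbdata\mydb\mydbstandby_audit01.mdf',
        move 'MYDB_LOG' to 'c:\dbdata\mydb\mydbstandby_log01.ldf'

    -- 2. Last differential, if one exists.
    restore database mydb
        from disk = 'e:\dbbackup\mydb\mydb_dif.bak'
        with norecovery

    -- 3. Each subsequent transaction log, leaving the database readable.
    restore log mydb
        from disk = 'e:\dbbackup\mydb\mydb_trn.bak'
        with standby = 'e:\dbbackup\mydb_standby.rdo'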

Continue

This is the crux of the log shipping routines and is scheduled to run every two hours from 7am to 10pm. The routine takes identical parameters to the initialise procedure. When run, it determines whether Initialise must be called (the standby database is missing), or whether a new full backup file has been found and we need to start from scratch with the full and any differentials. This routine is basically the driver for log shipping, both initialising it and continuing to apply transaction logs as they arrive in the backup directory from server-1.

exec usp_LogShipping_Continue
    'mydb'
    ,'e:\dbbackup\mydb\'
    ,'e:\dbbackup\mydb_standby.rdo'
    ,'mydb_'
    ,'.bak*'
    ,'_full.bak'
    ,'_dif.bak'
    ,'_trn.bak'
    ,'
    MOVE ''MYDB_SYSTEM'' TO ''c:\dbdata\mydb\mydbstandby_system01.mdf'',
    MOVE ''MYDB_DATA'' TO ''c:\dbdata\mydb\mydbstandby_data01.mdf'',
    MOVE ''MYDB_INDEX'' TO ''c:\dbdata\mydb\mydbstandby_index01.mdf'',
    MOVE ''MYDB_AUDIT'' TO ''c:\dbdata\mydb\mydbstandby_audit01.mdf'',
    MOVE ''MYDB_LOG'' TO ''c:\dbdata\mydb\mydbstandby_log01.ldf''
    '
    ,'c:\scripts'

Monitor

Using the simple-cdo custom DLL, a job scheduled on the same cycle as the continue job calls the monitor routine:

    exec usp_LogShipping_Monitor 'support@chriskempster.com', 'LOGSHIP', 15

The DBA is emailed any error rows from the master..LogShipping_Audit table.
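The audit table itself ships with the download; purely as a rough guide, a minimal version might look like this (the column names are illustrative assumptions, not the actual schema):

    -- Illustrative sketch only of a log shipping audit table.
    create table master.dbo.LogShipping_Audit (
        audit_id     int identity(1,1) primary key,
        dbname       sysname not null,
        fname        varchar(260) null,     -- backup file applied
        command      varchar(2000) null,    -- restore command executed
        is_error     bit not null default 0,
        message      varchar(2000) null,    -- error text, if any
        run_datetime datetime not null default getdate()
    )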

Log Shipping Example 2 - Finalising Recovery / Failover

The DBA will attempt the following in order. I say "attempt" as the first steps may fail and must be carefully monitored.

1)  Re-evaluate the need to cut over, and double-check that the situation genuinely requires it

2)  Attempt to run a backup on server-1 using your scheduled DBBackup_sp command or via Query Analyser

3)  Verify the backup and the file copy; manually copy the files if required

4)  Run usp_LogShipping_Continue (or its scheduled job) on server-2

5)  Disable the above job

6)  Manually run exec master..usp_LogShipping_Finish 'db-name-here'

Conclusion

The method presented is simple, easy to implement, and does not rely on linked servers or numerous system table lookups. One of the big disadvantages with log shipping is more to do with the recovery process, namely that user sessions must be killed before attempting to recover the database. If users need to be kicked from the standby database to apply further logs, then it's tough setting a recovery time when the database is also used for corporate reporting. Hopefully we will see changes in SQL Server Yukon.

In closing - as a DBA, do not feel that you need to lock yourself into maintenance plans or the EM wizards; take time to explore the alternatives with T-SQL scripts and external utilities. Drill through on each command and leverage the options to your advantage to really expand on the functionality and richness of SQL Server 2000.

See http://www.chriskempster.com/scripts/custom_logshipping_v1.zip for all scripts and external routines.
