• Just thought I'd add my ten cents' worth: a case for log shipping as a reporting solution.

    Replication is obviously the preferred method of keeping data in sync, but here is an example of where log shipping works as the reporting solution. We are currently using a particular MS application, a supposed enterprise financial application whose name would seem to engender a sage-like quality. Unfortunately it runs like a 'hairy goat'. Normal operations such as posting invoices or entering timesheets would grind to a halt whenever someone ran a report, with user satisfaction of zip. I had to separate the reporting from the running system. As the application has never been 'Certified for Replication', management were wary of me offering replication as a solution. Hence this solution instead.

    I set up a log ship sequence to another server that runs every six hours, which the business has for now deemed an acceptable lag in the data. If anybody was running a report, the restoration job would fail - normal. This is where a little magic comes in. I set the retry to once every 1 minute for 1 hour. In the restoration job I check whether any connections have a last batch time more recent than the average duration of the longest-running reports. If there are, I force the restore job to fail (select 1 from XXX) and wait 1 minute before trying again. As soon as every connection is older than that expected average, the SPIDs are killed. If connections are still active after one hour, all are killed anyway and the restore is done. Log shipping complete.

    I know this is not optimal, but the refresh times are gazetted and I have found most people have accepted the process. No, not all are satisfied, and those are managed by exception; the rest are happy campers.
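    In case it helps anyone, here is a minimal Python sketch of that gating loop, just to show the shape of the logic. Everything in it is an assumption for illustration: on the real server the connection check queries last-batch times from the system process table, the kill step issues KILL on each SPID, and the "failure" is the select from a non-existent table; here those are stand-in callables, and the thresholds are made-up numbers.

```python
import time

# Assumed thresholds (illustrative only, not from the original post):
REPORT_AVG_SECS = 600   # average duration of the longest-running reports
RETRY_SECS = 60         # retry once every minute
MAX_RETRIES = 60        # give up waiting after one hour

def restore_gate(get_connections, kill_spid, run_restore, sleep=time.sleep):
    """Retry until no connection looks like an active report, or the
    one-hour window expires; then kill remaining SPIDs and restore.

    get_connections() -> list of (spid, seconds_since_last_batch);
    kill_spid(spid) and run_restore() stand in for KILL and RESTORE LOG.
    """
    for _attempt in range(MAX_RETRIES):
        conns = get_connections()
        # A connection whose last batch is more recent than the average
        # report duration may still be running a report.
        busy = [spid for spid, idle in conns if idle < REPORT_AVG_SECS]
        if not busy:
            # Every connection has been idle longer than the expected
            # report time: safe to kill the stragglers and restore.
            for spid, _idle in conns:
                kill_spid(spid)
            return run_restore()
        sleep(RETRY_SECS)   # "fail" the job and try again in a minute
    # One hour is up: kill all connections regardless and restore anyway.
    for spid, _idle in get_connections():
        kill_spid(spid)
    return run_restore()
```

    The point of passing the checks in as callables is only to make the sketch self-contained; the real thing lives inside the restore job itself.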

    Code On 😛