
Do You Need A Log Recovery Tool?


A couple of weeks ago I wrote a post dispelling the myth that the simple recovery model disables the transaction log on a SQL Server database. In that post, I touched briefly on the topic of a “point-in-time” restore, which is used to restore a database back to the way it looked at a specific point in time. The purpose of such a restore is to recover from an accidental data change or deletion, or possibly even database corruption, with minimal data loss. The key word in that last sentence is “minimal”.

Consider a long-running process that deletes data over a 5-minute period of time. Other activity is taking place in the database during that same 5-minute period – sales, financial transactions, appointment scheduling, all sorts of activity. At some point it is discovered that the deletion process was flawed and needs to be reversed. Doing a point-in-time restore back to the point where the deletion process began is going to result in the loss of ALL activity that occurred after that point. The deleted data will be recovered, but all of those sales, financial transactions and appointments will be lost. Is that an acceptable loss for your business?

To be honest, I’d never really thought about alternatives to this approach. I’ve always put my faith in the point-in-time restore. After writing that earlier blog post, I started thinking more about other options for reading, or even reversing, the contents of the SQL Server transaction log. That’s the key to “undoing” a loss of data.

Much to my surprise, there aren’t a lot of options out there for reading directly from the transaction log. My searching turned up a few third-party offerings, plus some tools that are built right into SQL Server:

Lumigent Log Explorer

No longer available; Lumigent was acquired by BeyondTrust.

BeyondTrust PowerBroker Auditor for SQL Server

From the vendor’s web site: “Audits and report changes made to SQL Server data sets, configurations, permissions, and other objects”. Doesn’t appear to offer any recovery capabilities.

RedGate Log Rescue

No support for SQL Server 2005 or later. From the vendor’s web site: “SQL Log Rescue is compatible with SQL Server 2000 only”.

ApexSQL Log

From the vendor’s web site: “Read the transaction log, Audit changes, Recover data“. Recover data? Sounds like exactly what we need.

Built-in commands

SQL Server offers a few built-in, largely undocumented commands and functions for reading raw data from the transaction log – fn_dblog, fn_dump_dblog, DBCC PAGE, and DBCC LOG. They can be difficult to use and require some serious DBA expertise.
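To give you a taste of what that looks like, here’s a minimal sketch using fn_dblog to hunt for delete activity – hedged accordingly, since the function is undocumented and its output can change between versions:

-- Minimal sketch: list DELETE activity in the active portion of the log.
-- fn_dblog is undocumented; treat columns and behavior as version-specific.
SELECT [Current LSN],
       [Transaction ID],
       Operation,
       AllocUnitName          -- which table/index was touched
FROM fn_dblog(NULL, NULL)     -- NULL, NULL = no start/end LSN filter
WHERE Operation = 'LOP_DELETE_ROWS';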

As you can see, the options are limited. One no longer exists, one only works with really old versions of SQL Server, one doesn’t offer recovery capabilities, and one requires some heavy-duty DBA skills – leaving us with only one viable choice. Does this remaining choice help us solve our deleted data problem? Let’s find out!


Preparing The Logs

Before we can do anything, we’re going to need a database.

USE master;
GO
-- Create full mode database
IF DB_ID('FullRecoveryDemo') IS NOT NULL DROP DATABASE FullRecoveryDemo;
CREATE DATABASE FullRecoveryDemo;
GO

That database needs to be using the Full or Bulk-Logged recovery model. Why not Simple mode? Go read the earlier blog post.

-- Set to the FULL recovery model
ALTER DATABASE FullRecoveryDemo SET RECOVERY FULL;

Finally, we need to do a full backup of the database. Why? Go read the earlier blog post.

-- Backup the database
BACKUP DATABASE FullRecoveryDemo TO DISK = 'C:\SQLBackups\FullRecoveryDemo.BAK';

Assume that we’re doing transaction log backups every 5 minutes.
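For reference, each of those scheduled backups is just a BACKUP LOG command under the covers. The first one after our full backup might look like this (the file name matches the one we’ll use in the restore later on):

-- Example log backup, as the 5-minute schedule might run it
BACKUP LOG FullRecoveryDemo
TO DISK = 'C:\SQLBackups\FullRecoveryDemo_FirstLogBackup.TRN';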

A database without any tables is pretty useless, so let’s create a table.

-- Create a table to hold some data
USE FullRecoveryDemo;
CREATE TABLE SomeTable
 (
  RowID int IDENTITY(1,1), 
  RowUID uniqueidentifier, 
  RowDate datetime CONSTRAINT DF_RowDate DEFAULT GETDATE()
 );
GO

Now let’s simulate some activity, starting with some writes to the table. Pretend that these writes represent incoming sales transactions.

-- Insert 10 rows of data
WHILE COALESCE((SELECT MAX(RowID) FROM SomeTable), 0) < 10
    INSERT INTO SomeTable(RowUID) VALUES (NEWID());
-- Are they all there?
SELECT * FROM SomeTable;

We now have a table containing ten sales records. Suppose we delete one of them. For the sake of example, let’s say that this delete occurred at 11:46am (your own actual time will be different).

-- Delete one of them
DELETE FROM SomeTable WHERE RowID = 5;
-- Confirm that it's gone
SELECT * FROM SomeTable;

Business continues, and more sales transactions come in to the table.

-- Insert 10 more rows of data
WHILE COALESCE((SELECT MAX(RowID) FROM SomeTable), 0) < 20
    INSERT INTO SomeTable(RowUID) VALUES (NEWID());

Another delete occurs at 11:51am:

-- Delete another one
DELETE FROM SomeTable WHERE RowID = 6;

And even more sales transactions come in:

-- Insert 10 more rows of data
WHILE COALESCE((SELECT MAX(RowID) FROM SomeTable), 0) < 30
    INSERT INTO SomeTable(RowUID) VALUES (NEWID());

There are now (or should be) 28 sales transactions in the table. Business is booming.

-- There should be 28 rows in our table
SELECT * FROM SomeTable;

[Screenshot: query results showing all 28 rows in SomeTable]

At 1:00pm, it is discovered that those deletes that occurred at 11:46am and 11:51am were in error, and need to be reversed. The business is looking at you, the DBA, to make this happen. What do you do?

Here’s what we know. Transaction log backups are occurring every 5 minutes. At 11:45am, the log was backed up. At 11:46am, a row (RowID = 5) was deleted by mistake. More data was added. At 11:50am, another log backup occurred. At 11:51am, another delete (RowID = 6) occurred, also a mistake. It’s now 1:00pm, and you need to undo those two accidental deletes.

One option is to do a full restore of the database from our last full backup, followed by each subsequent log backup, using the STOPAT clause to restore all transactions up to a point in time. But to what point in time? The first delete occurred at 11:46am – if we stop there, we lose everything that happened between 11:46am and 1:00pm.

RESTORE DATABASE FullRecoveryDemo_restore
FROM DISK = 'C:\SQLBackups\FullRecoveryDemo.BAK'
WITH 
 MOVE 'FullRecoveryDemo' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\FullRecoveryDemo_restore.mdf',
 MOVE 'FullRecoveryDemo_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\FullRecoveryDemo_restore_log.ldf',
 NORECOVERY;
-- Apply the log backup, stopping just before 11:50am
RESTORE LOG FullRecoveryDemo_restore
FROM DISK = 'C:\SQLBackups\FullRecoveryDemo_FirstLogBackup.TRN'
WITH STOPAT = '2013-11-03 11:49:59', NORECOVERY;
-- Bring the restored copy online
RESTORE DATABASE FullRecoveryDemo_restore WITH RECOVERY;
GO
USE FullRecoveryDemo_restore;
SELECT * FROM SomeTable ORDER BY RowID;

[Screenshot: the restored table, missing everything that happened after the STOPAT time]

Acceptable? Maybe, maybe not – it depends on your specific business needs. Stopping at 11:51am, the time of the second delete, still causes us to lose data, AND fails to recover the first deleted record. That seems like an even worse option to me. A point-in-time restore isn’t going to solve our problem.

We could manually re-insert the missing rows into the original database from our restored copy. This is a perfectly viable option, one that I’ve used myself many times, though a little labor-intensive. But what if the database we’re working with is hundreds of gigabytes in size, or even terabytes? Do we have enough space to restore a second copy? Do we have time to restore a second copy?
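If you go that route, the copy itself is simple enough. Here’s a rough sketch, assuming the restored copy (FullRecoveryDemo_restore) sits on the same instance as the original:

-- Sketch: copy the missing rows back from the restored copy.
-- (Assumes FullRecoveryDemo_restore lives on the same instance.)
USE FullRecoveryDemo;
GO
SET IDENTITY_INSERT dbo.SomeTable ON;
INSERT INTO dbo.SomeTable (RowID, RowUID, RowDate)
SELECT r.RowID, r.RowUID, r.RowDate
FROM FullRecoveryDemo_restore.dbo.SomeTable AS r
WHERE NOT EXISTS (SELECT 1 FROM dbo.SomeTable AS o WHERE o.RowID = r.RowID);
SET IDENTITY_INSERT dbo.SomeTable OFF;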


A Seriously Powerful Log Tool

Let’s see what ApexSQL Log can do to help us out. Remember folks, I’m typically not a big fan of third-party tools, but sometimes they just make sense. Is this one of those times?

One “heads up” to my fellow DBAs out there – if you install ApexSQL Log on your workstation and intend to use it to connect to a remote server, it will require you to install some server-side components first. Personally, I don’t like changes being made to my servers without fully understanding what those changes are. I’m still digging in to see exactly what these server-side components are, but at first glance they appear to simply be some extended stored procedures that allow for reading of the transaction log files. I’ll do a follow-up post once I’ve completely dissected these components.
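If you want to poke around yourself, one place to look – assuming the components register themselves as extended stored procedures in master, which I haven’t verified – is:

-- Sketch: list extended stored procedures in master, newest first,
-- to spot anything a third-party installer may have added.
-- (Assumes the components register here - unverified.)
USE master;
SELECT name, create_date
FROM sys.objects
WHERE type = 'X'          -- X = extended stored procedure
ORDER BY create_date DESC;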

When I launch ApexSQL Log, it prompts me to connect to my SQL Server instance.

[Screenshot: connecting ApexSQL Log to the SQL Server instance]

Once connected, I’m presented with some options for what to use as the source of my transaction log analysis. I was surprised to see that the tool can read transactions from backup files, in addition to the transaction log itself.

[Screenshot: choosing a source for the transaction log analysis]

Let’s start by looking directly at the transaction log. I’m going to choose to look at the previous 24 hours of activity.

[Screenshot: selecting the online transaction log and the previous 24 hours of activity]

One final prompt asks how we want to view the results. I’m choosing to view them in the grid view.

[Screenshot: choosing the grid view for the results]

ApexSQL Log goes off and does its thing, presumably digging through the transaction log to find the last 24 hours of activity. After a few seconds, it displays this:

[Screenshot: the last 24 hours of activity, displayed in the grid view]

OK, that’s just cool. I can see all of the inserts that were done (those 30 fake sales transactions that were created) and, more importantly, I can see the two “accidental” deletes. I can see when they were done, and I can see who did them – looks like I need to go have a discussion with “realsqlguy”.
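(As an aside, the “who” lives in the log itself, so even the built-in fn_dblog can surface it if you’re willing to dig. A rough sketch:)

-- Sketch: map each transaction back to the login that ran it.
-- [Begin Time] and [Transaction SID] are only populated on the
-- LOP_BEGIN_XACT record of each transaction.
SELECT [Transaction ID],
       [Begin Time],
       SUSER_SNAME([Transaction SID]) AS LoginName
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_BEGIN_XACT';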

As cool as that is, what can I actually DO with this now that I have it? Can I somehow reverse those deletes? Look, up there on the ApexSQL Log toolbar…

[Screenshot: an intriguing button on the ApexSQL Log toolbar]

There’s only one way to find out. I select my two “delete” rows, then click the button.

[Screenshot: selecting the two DELETE rows and clicking the undo button]

Again, way cool – ApexSQL Log generated the T-SQL for me to re-insert those two deleted records:

[Screenshot: the generated T-SQL undo script]
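I won’t reproduce the tool’s exact output here, but the generated script boils down to re-inserting the deleted rows with their original values – something along these lines (the GUIDs and dates below are made up for illustration):

-- Illustrative only - not ApexSQL Log's actual output.
-- The idea: re-insert the deleted rows with their original values.
SET IDENTITY_INSERT dbo.SomeTable ON;
INSERT INTO dbo.SomeTable (RowID, RowUID, RowDate)
VALUES (5, 'A1B2C3D4-E5F6-47A8-9B0C-1D2E3F405060', '2013-11-03 11:32:17.340'),
       (6, 'B2C3D4E5-F6A7-48B9-AC0D-2E3F40516270', '2013-11-03 11:33:02.123');
SET IDENTITY_INSERT dbo.SomeTable OFF;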

I can even run the script from right there inside the tool.

[Screenshot: executing the undo script from within ApexSQL Log]

I run the script (from inside ApexSQL Log), then query my table.

[Screenshot: all 30 rows back in SomeTable, including RowID 5 and RowID 6]

All 30 of my fake sales transactions are there, including RowID 5 and RowID 6, the two “accidentally” deleted rows. With no more effort than it took to click a few buttons. Very impressive.

Obviously this was a very simple example. There were no foreign keys involved, no replication or mirroring, and no data corruption to clean up. I’ve just scratched the surface of the real-world scenarios a tool like this might come up against, but then again, I’ve also just scratched the surface of what the tool can do. I plan to explore this product further, and will probably write more about it as I do, including some more advanced recovery scenarios. There appears to be a lot of power hidden away in this utility.

Is there another log recovery tool out there that I possibly overlooked? Do you have other methods that you use to recover deleted data? Share them with me in the comments section below.

 
