SQLServerCentral Article

Oops, I deleted that data


We all have these days as a DBA; hopefully they are few and far between for most of us.

This morning I walked into the office and found a stack of emails: a developer had responded to a support ticket, modified the values in one of our tables, and got it wrong. OK, not a problem. I have all of the Redgate tools, so I can do a data compare and get the values back that I need ...

It was not quite that simple!

Our full backups run at midnight, and we have 15-minute t-log backups. The data change was at 5.30pm, so last night's backup was useless. It's time to go to the night before and start applying t-log restores against it.

Unfortunately, my tools don't allow me to run a data compare against a backup plus 68 log files, so I know I have to restore the database somewhere and hand it over to the dev team so they can work out what data changes to make, since the table is still in use on the live server.
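For anyone following along, the point-in-time restore involved looks roughly like the sketch below. The database name, file paths, and timestamps are placeholders rather than our real environment.

-- Restore the previous night's full backup without recovering it
RESTORE DATABASE Account_Copy
FROM DISK = N'\\backupshare\Account\Account_full.bak'
WITH MOVE 'Account' TO 'D:\Data\Account_Copy.mdf',
     MOVE 'Account_log' TO 'L:\Logs\Account_Copy.ldf',
     NORECOVERY, STATS = 5

-- Apply each 15-minute log backup in order, still without recovering
RESTORE LOG Account_Copy
FROM DISK = N'\\backupshare\Account\Account_log_1715.trn'
WITH NORECOVERY

-- Stop the final log restore just before the bad update and bring the database online
RESTORE LOG Account_Copy
FROM DISK = N'\\backupshare\Account\Account_log_1730.trn'
WITH STOPAT = '2014-06-02T17:29:00', RECOVERY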

Thankfully, management blessed me with a DR test server that picks up the most recent backup of a few hundred databases and restores them at the start of the month. That data is too old for the user, but I can drop the stale copy and restore the required database to our DR test server without touching a live server. Perfect. This is what a DBA is here for.

Except the backup file is nearly half a terabyte, even compressed. That means sitting and waiting four hours for the database to come online while handling all the other stuff we have to do.

I've had some mixed responses to the amount of time it took us to get the data back. In particular, I heard, "we paid a lot of money for all these tools, why can't we just get it back instantly" and "this should be easy just get it done".

Yes, it is easy, but that still doesn't change the fact that 500GB does not restore quickly. So, I sent out an email asking developers to follow a certain process when performing "manual" deletes or updates. Many of them were not following this process, and I have to admit that I only do it myself when I'm not sure what will happen.

I'm interested to see if the people here can improve on this code.

-- Check what you are about to update and note how many rows come back
SELECT * FROM dbo.account WHERE account_code = '4c'
SELECT @@ROWCOUNT

-- Make the change inside an open transaction and check the rowcount again
BEGIN TRAN
UPDATE dbo.account SET active_yn = 0 WHERE account_code = '4c'
SELECT @@ROWCOUNT

-- Run whichever of these applies, by hand, once you have compared the counts
--ROLLBACK TRAN
--COMMIT TRAN

My thinking is that the first bit should be a SELECT statement to check what you are updating. You then check the rowcount to see if it is what you expected.

Next, take the result set and save it in a CSV or Excel format for a nice easy rollback (why do developers never think about data rollback?).
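If you would rather not rely on a spreadsheet, an alternative that keeps everything in the database is to copy the rows into a scratch table first. The backup table name below is just an example; the source table and predicate come from the code above.

-- Save the "before" image of the rows you are about to touch
SELECT *
INTO dbo.account_before_change   -- hypothetical holding table, one per change
FROM dbo.account
WHERE account_code = '4c'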

Then open a transaction, run the UPDATE, and check the rowcount. If the rowcount differs from the previous step, manually run the rollback; otherwise, run the commit.
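If you want that compare-and-decide step to be less manual, the same idea can be scripted. This is only a sketch; the variable names are mine, and the table and predicate come from the example above.

DECLARE @expected INT, @updated INT

-- How many rows do we think we are changing?
SELECT @expected = COUNT(*)
FROM dbo.account
WHERE account_code = '4c'

BEGIN TRAN

UPDATE dbo.account
SET active_yn = 0
WHERE account_code = '4c'

SET @updated = @@ROWCOUNT

-- Commit only if the update touched exactly the rows we counted
IF @updated = @expected
    COMMIT TRAN
ELSE
BEGIN
    ROLLBACK TRAN
    RAISERROR('Expected %d rows, update touched %d - rolled back.', 16, 1, @expected, @updated)
END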

Finally, attach the old data to your support ticket. At that point I should never need to restore a huge database because we got the data wrong internally. If a customer makes a data mistake, I still have to go through the restore process, but at least we have protected ourselves.

I'm looking at putting Change Data Capture on some of the tables that are really vital, but with one legacy database having over 7,000 objects, it's almost pointless. For now, educating support engineers to think like a DBA feels like the best way. Feel free to show me a better way.
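If anyone does go the Change Data Capture route, the setup for a single critical table is short; it needs SQL Server Agent running and, on older versions of SQL Server, Enterprise Edition. The database name and parameter choices below are just illustrative.

USE LegacyDb   -- placeholder database name
GO
-- Enable CDC at the database level
EXEC sys.sp_cdc_enable_db
GO
-- Track one vital table; @supports_net_changes = 1 requires a primary key
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'account',
    @role_name = NULL,
    @supports_net_changes = 1
GO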
