Have you ever tried to read a transaction log? I mean run a query against fn_dblog() to read the data and try to reconstruct what happened with a transaction or a series of transactions? It's a cumbersome process that takes a lot of knowledge, practice, and most importantly, patience. It's not something I'd wish on anyone. There are a few products to help, but few of us do this often, and it's almost easier to just change some data by applying your own manual fixes.
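To give a flavor of what that process looks like, here is a minimal sketch of reading the active log with fn_dblog(). Note that fn_dblog() is undocumented, so the columns and their contents can vary between SQL Server versions; treat this as an exploratory starting point, not a supported API.

```sql
-- Read the active portion of the current database's transaction log.
-- The two NULL parameters mean "no starting LSN, no ending LSN".
SELECT [Current LSN],
       [Transaction ID],
       Operation,          -- e.g. LOP_INSERT_ROWS, LOP_MODIFY_ROW
       Context,
       AllocUnitName       -- which table/index the record touches
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation IN (N'LOP_INSERT_ROWS', N'LOP_MODIFY_ROW', N'LOP_DELETE_ROWS')
ORDER BY [Current LSN];
```

Even with output like this, turning raw log records back into "who changed which column to what" requires decoding binary row images, which is why the products that do it are worth the money.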
There are a lot of potential issues that we could discuss here. I've been a part of a failed rollout and I have sympathy for the IT staff dealing with this. The thing that I wonder about is the data. With the magnitude of customers (millions), the seemingly long list of places where things failed (notifications, scheduled payments, inquiries, etc.), and the rate at which people can bang on a system from their phones and various applications, how much data has been mangled and altered?
I'd guess a lot, in which case we aren't just talking about updating rows on the basis of someone's authority. Whoever is tracking through the data needs to essentially read transaction logs, unwind the actions where data was converted incorrectly and then (potentially) subsequently changed, and then work out the reversing entries. Fixing the database needs help from DBAs, developers, and probably financial staff to understand why things are in this state: why closed accounts are open, why payments are scheduled years in the future, why balances are wrong, and more. With the possible cross-contamination of data between accounts, this is an area where TSB needs to be thorough and careful.
Data is important in today's complex, interconnected world. There are certain areas where data problems are highly disruptive and can have lasting repercussions if mistakes are made by the data processors. The financial and medical areas certainly fit in these categories, and it's sad that people are going to go through pain and problems that may affect them for years. Hopefully TSB will get things working soon and the data issues corrected. If there's one thing I've learned from this, it's that for certain issues, I need to ensure I have my own paperwork to prove my side of the story.
NEW SQL Provision: Create, protect, & manage SQL Server database copies for compliant DevOps
Create and manage database copies effortlessly while keeping compliance central to the process. With SQL Provision's virtual cloning technology, databases can be created in seconds using just MBs of storage, enabling businesses to move faster. Sensitive data can be anonymized or replaced with realistic data to ensure it is protected as it moves between environments. Download your free trial
With SQL Server 2017, Microsoft announced the exciting news that SQL Server would now run in Docker containers. Laerte Junior provides a guide to get started creating SQL Server instances in Docker. More »
The GDPR is almost here and, just one week before its launch, Redgate is hosting the first SQL Privacy Summit in London. The schedule of presentations, panel discussions and workshops has been created to help SQL Server professionals ensure their business meets the new data privacy and protection regulations. More »
Quickly find solutions to dozens of common problems encountered while using XML and JSON features that are built into SQL Server. Content is presented in the popular problem-solution format. Look up the problem that you want to solve. Read the solution. Apply the solution directly in your own code. Problem solved! Get your copy from Amazon today.
Yesterday's Question of the Day
(by Steve Jones):
I have built an event session in Extended Events (XE) called MyEventSession. I set this to start up when the SQL Server instance starts. It's currently running with both file and ring buffer targets. I decide to drop this session, but don't want to lose data from the file target. What do I need to do?
Answer: Just run DROP EVENT SESSION
Sessions do not need to be stopped to be dropped. Data in the file target will persist in the file. Data in the ring buffer target will be lost.
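As a sketch, assuming the session name from the question, the drop and a subsequent read of the persisted file data look like this (the file path pattern is illustrative; your .xel files will be wherever the file target was configured to write):

```sql
-- Drop the running session; no ALTER ... STATE = STOP is required first.
-- The ring buffer's in-memory data is discarded, but the file target's
-- .xel files remain on disk.
DROP EVENT SESSION MyEventSession ON SERVER;

-- The persisted events can still be read afterwards from the files:
SELECT CAST(event_data AS xml) AS event_xml
FROM sys.fn_xe_file_target_read_file(N'MyEventSession*.xel', NULL, NULL, NULL);
```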
Updating datetime2 column not working
Trying to update some datetime2 columns.
UPDATE T_MYTABLE SET dateTransferred = '2018-03-26 05.00.00' where id = 1223
Causes an error. So, I tried casting it.
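The likely culprit in the statement above is the time portion: '05.00.00' uses periods as separators, which SQL Server can't convert to datetime2 (colons are required between hours, minutes, and seconds). A sketch of the fix, keeping the table and column names from the question:

```sql
-- Periods in the time part ('05.00.00') cause a conversion error;
-- use colons, or better, the unambiguous ISO 8601 form with a 'T'.
UPDATE T_MYTABLE
SET dateTransferred = '2018-03-26T05:00:00'
WHERE id = 1223;
```

No CAST is needed once the literal is in a convertible format; the string is implicitly converted to datetime2.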
This newsletter was sent to you because you signed up at SQLServerCentral.com.
Feel free to forward this to any colleagues that you think might be interested.
If you have received this email from a colleague, you can register to receive it here.