How often does human error cause problems? Recently we had a rocket crash in Russia, there have been numerous incidents of drone crashes as more and more unmanned aircraft take to the skies, and a few years ago we had an Air France disaster that might have been caused by humans making poor decisions or engaging the wrong controls. Those are incidents where the wrong button press has large consequences, whether physical damage or the loss of life.
Many of us make mistakes constantly as we work in the various tools and environments we need throughout our day. We click the wrong button in SSMS, we connect to the wrong server and run a script, or we fail to test a change. All of these are mistakes made by humans, and often they are mistakes that could be prevented if we did a better job of sticking to routines and processes. That can be hard, but perhaps checklists can help here, along with some double checks by coworkers.
That's why I think using scripts in T-SQL, and using the Script button in SSMS to generate the code that you can run (and save), is the best way to work with your servers. As much as I think PowerShell (PoSh) can be a pain to write and debug, there's a good argument to be made for using it when performing complex administrative tasks, especially across servers. Using code rather than forms and buttons is simply a better way to make changes accurately, consistently, and in a controlled manner.
This is an area where everyone could work more efficiently. Developers are usually accustomed to working with version control systems and tracking all their changes. However, they often make configuration changes to their machines, SQL Server, IIS, or some other software, and forget to track those changes. Making these changes in code, through T-SQL or PoSh, and tracking these items in a VCS would help smooth software deployments. DBAs and other operations staff should learn to use version control systems as well, which helps in tracking down the root causes of issues.
Ultimately, humans are often the weakest link in most systems. We should understand that, accept it, and compensate as best we can.
When using SQL Server, you frequently need to work with data that represents intervals of time. For example, consider intervals representing sessions, contracts, projects, and so on. Tasks related to interval manipulation are typically quite intriguing, especially because coming up with efficient solutions isn't easy. Itzik Ben-Gan explains. More »
Question of the Day
Today's Question (by Naseer Ahmad):
Which option would we use to encrypt the definition of the view and prevent users of the database from seeing it?
Think you know the answer? Click here, and find out if you are right.
We keep track of your score to give you bragging rights against your peers.
This question is worth 1 point in this category: Views.
Did you miss yesterday's question, DBCC CHECKDB? Answer it now or check out the answer.
We'd love to give you credit for your own question and answer.
To submit a QOD, simply log in to the
Expert Performance Indexing for SQL Server 2012
Expert Performance Indexing for SQL Server 2012 is a deep dive into perhaps the single most important facet of good performance: indexes, and how best to use them. The book begins in the shallow waters with explanations of the types of indexes and how they are stored in databases. Moving deeper into the topic, and further into the book, you will look at the statistics that are accumulated both by indexes and on indexes. All of this will help you progress toward properly achieving your database performance goals.
One of the ways to find affected columns is to run a SELECT with a WHERE clause to return out-of-range data. But this doesn't always work. It is possible for data to be within a valid range but still fail the data purity check. It is also possible for the column to be totally corrupt, resulting in an arithmetic overflow error when you attempt to SELECT it.
The method I showed in my article for when a SELECT doesn't help was to use DBCC PAGE. But if you have more than a few columns with invalid data, it can be quite time-consuming. This was the case for me recently when I migrated a database from SQL Server 2000 to 2008 R2 and CHECKDB found 540,000 data purity errors!
A SELECT for values outside the range for the datatype (decimal(23, 8) in this case) didn't return any data and so DBCC PAGE was the only option - but obviously there was no way I could run it manually 540,000 times!
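As a sketch of the kind of range check described above (the table and column names here are hypothetical), a decimal(23, 8) column allows 15 integer digits, so the valid range runs from -999999999999999.99999999 to 999999999999999.99999999:

```sql
-- Hypothetical table and column; decimal(23, 8) allows 15 integer digits.
-- Note: a data purity error can exist even when this query returns nothing,
-- and a badly corrupted value may itself raise an arithmetic overflow here.
SELECT OrderID, Amount
FROM dbo.Orders
WHERE Amount NOT BETWEEN -999999999999999.99999999
                     AND  999999999999999.99999999;
```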
The script attached to this article was my solution to the problem.
1. It runs DBCC CHECKDB(dbname) WITH DATA_PURITY, NO_INFOMSGS, TABLERESULTS and captures the results in a temporary table
2. It extracts the page, slot, object id, column name and data type for each row returned
3. It loops through the results, performing a DBCC PAGE for each one to get the primary key values of the rows containing the invalid data.
Within this loop, there is also code to derive the condition for the primary key (allowing for multi-column keys). The key value is then used to query the table to get the current value of the affected column. The results are stored in a table, tmp_final_results. This can be dropped once all investigation is complete.
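Step 1 can be sketched as follows. This is an illustration rather than the script itself: the temp table's columns must exactly match the TABLERESULTS output, and that column list varies between SQL Server versions, so verify it against your build before relying on it.

```sql
-- Capture DBCC CHECKDB ... WITH TABLERESULTS output in a temp table.
-- The column list below approximates what SQL Server 2008 R2 returns;
-- check your version's actual output before using it.
CREATE TABLE #purity_errors
(
    Error INT, [Level] INT, [State] INT,
    MessageText NVARCHAR(4000), RepairLevel NVARCHAR(4000), [Status] INT,
    DbId INT, ObjectId BIGINT, IndexId INT,
    PartitionId BIGINT, AllocUnitId BIGINT,
    [File] INT, Page INT, Slot INT,
    RefFile INT, RefPage INT, RefSlot INT, Allocation INT
);

INSERT INTO #purity_errors
EXEC ('DBCC CHECKDB (MyDatabase) WITH DATA_PURITY, NO_INFOMSGS, TABLERESULTS');
```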
There is further code, commented out, that I will explain shortly.
How to use the script
This is a four-stage process.
1. First run the script within the context of the database to check.
It may take several hours to run, especially if you have a few hundred thousand data purity errors.
When it has finished, the table tmp_final_results will contain a row for each out-of-range column.
2. Use the first commented out section of the script to view the results.
Note: the conversion of [Value] to varchar is necessary, as attempting to retrieve some out-of-range data may result in arithmetic overflow errors. By converting them to varchar, these are displayed as -1.#IND instead.
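Assuming tmp_final_results exposes the affected table, column, key, and a [Value] column (the column names here are illustrative, not necessarily what the script produces), the review query looks something like this:

```sql
-- CONVERT sidesteps the arithmetic overflow raised when retrieving
-- out-of-range values directly; such values show up as -1.#IND instead.
SELECT TableName, ColumnName, KeyValue,
       CONVERT(VARCHAR(50), [Value]) AS CurrentValue
FROM tmp_final_results
ORDER BY TableName, ColumnName;
```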
3. Fix the data
Now you have to decide what to set each one to.
If you are lucky you may find that all the values are within a valid range for their datatypes, so a straightforward UPDATE to their existing value will fix the problem. The third section of my script (also commented out) generates an UPDATE statement for each column.
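The generated statements take roughly this shape (table, column, key, and value here are placeholders):

```sql
-- Rewriting the value in place stores it with a valid internal
-- representation, which clears the data purity error for that column.
UPDATE dbo.Orders
SET Amount = 123.45000000
WHERE OrderID = 42;
```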
However, you may find out-of-range values, and some may be displayed as -1.#IND. For these you have to decide what they should be set to, and this means speaking with someone who knows the application well and getting them to look at your data.
4. Tidy up
The final commented out section needs to be run to drop the tables generated by the script.
Please remember that this script is a tool to aid in identifying columns that have failed a data purity check. You should not blindly update the columns; make sure you are 100% confident of the correct values. However, the code in section 3 of the script is available if you decide the suggested values can be used. And of course, back up the database before making any data changes.
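A full backup before running any of the generated updates can be as simple as the following (the database name and path are placeholders):

```sql
-- WITH CHECKSUM verifies page checksums as the backup is written;
-- INIT overwrites any existing backup sets in the target file.
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_PrePurityFix.bak'
WITH CHECKSUM, INIT;
```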
nested IIF for font color
I must have something missing unless I can't use IIF for font color?
iif(ReportItems!Textbox26.Value >0, "Red",iif(ReportItems!Textbox26.Value < 0, "Green","Black")
Thanks In Advance
Help! MDX > 8.000 characters
- Hi all,
I have a complex MDX that was build within the front-end-tool our users use. Now I want to include...
Need MS BI Developer SSIS SSAS SSRS
- Please send me your resume firstname.lastname@example.org
Exp: 3-5 yrs
Location: Chennai ( Hexaware)
1. In-depth knowledge of Microsoft Business Intelligence product stack (SQL Server 2008/2012,...
C# Export Excel
- I think this is a great library for exporting to Excel. It is cheap too.
This newsletter was sent to you because you signed up at SQLServerCentral.com.
Feel free to forward this to any colleagues that you think might be interested.
If you have received this email from a colleague, you can register to receive it here.