But, but, but, my data is clean!

  • Comments posted to this topic are about the item But, but, but, my data is clean!

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail, I don't know if I would call this example of dirty data "weird" or "worrying" but it was certainly odd. We called it "The Curse of IVAN YEARS".

In the early years of online trading, I was Test Manager on a project which built a website for a company that sold records, CDs, etc. To test the online title search we imported the titles catalogue from the production system. (By the way, in the old production data all the titles were UPPER CASE.) The testers noticed that every so often the system would find titles which looked like this: "ALBUM TITLE IVAN YEARS". The album title was correct, but sometimes at the very end it would have the text "IVAN YEARS"! We investigated and found that the "IVAN YEARS" bit was in the original production data, so it wasn't a bug, but it puzzled everyone, including the customer. There were hundreds of these records randomly scattered over the database, all ending "...IVAN YEARS". We all wondered who or what IVAN YEARS was.

In odd moments I investigated the problem and eventually found the cause. Let's say the album title column was char(80). In one (but only one!) of the maintenance screens in a green-screen system, the album title field was (say) char(60). It turned out that for a time the data-input people had been in the habit of not creating new records, but copying an old one AND BLANKING OUT THE TITLE! (It saved them quite a lot of keying.) Unfortunately, what they didn't know was that the screen they were using had the 60-character field, and their favourite record was titled "I can't remember...SULLIVAN YEARS", and character 60 fell on the second "L" of SULLIVAN! Every time they copied one of these records they were creating a title with an "IVAN YEARS", invisible to them, at the end. The customer wasn't Amazon, but if you go there you can still find CDs which (correctly) end "...Sullivan Years", because that was a popular series of records.

The root cause of the problem was a mismatch in field length between the database and the screen, combined with a slightly dubious but innocent practice in data entry. The solution was a _carefully tested_ data-fix update done at about the same time as we converted the text in the database from UPPER CASE to Mixed Case. As dirty data goes, it was harmless, but it had me puzzled for quite a while!
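    The truncation is easy to reproduce. A minimal sketch (the field widths come from the story above; the 70-character title and the function are invented for illustration):

```python
# Hypothetical field widths from the story: the database column was char(80),
# but one mismatched maintenance screen showed only char(60).
DB_LEN = 80
SCREEN_LEN = 60

# An invented 70-character title ending "SULLIVAN YEARS"; character 60
# falls on the second "L" of SULLIVAN, so 10 characters sit off-screen.
favourite = "HITS OF THE LIGHT OPERA STAGE VOLUME 10 THE GILBERT AND SULLIVAN YEARS"

def copy_and_retitle(source: str, new_title: str) -> str:
    """Copy a record via the mismatched screen: the operator blanks and
    retypes only the visible 60 characters; the tail survives untouched."""
    hidden_tail = source[SCREEN_LEN:DB_LEN]
    return (new_title.ljust(SCREEN_LEN) + hidden_tail).ljust(DB_LEN)

copied = copy_and_retitle(favourite, "SOME OTHER ALBUM TITLE")
print(repr(copied.rstrip()))  # ends with the mysterious "IVAN YEARS"
```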

Tom Gillies | LinkedIn Profile | www.DuhallowGreyGeek.com

  • On a less flippant note, Chris Date (among others) has argued that database systems should explicitly prohibit anything which is not allowed. That increases work in the short term, but decreases the possibility of putting rubbish into the database.

Of course, it's quite hard to devise something which would prevent nonsense like "IVAN YEARS" getting into a "description" field. Unless you can parse the field for meaning (and some legitimate album and book titles are not standard English), then I'm not sure how you can do it.

Tom Gillies | LinkedIn Profile | www.DuhallowGreyGeek.com

My SW company was bought by another SW company about 18 months ago, so we had to build a system to extract data from the old DB into the new one. It's software to track residents in nursing homes. As part of it we have converted over 600 clients so far. We found a large problem in how the developers created the front end: they allowed the end user to add new vendors, relative contact types, education levels, religions, etc. on the fly. There is an option to limit who can do that, but it is turned off by default. :crying:

So we have gotten DBs that have 175 relative contact types, with 25 of them being different ways to spell or abbreviate "daughter". The race codes have countries of origin mixed in. One DB had 125 education levels. The same vendor or doctor will be repeated 5 times. :w00t:

Whereas if the users had to go to a central place to add or modify the setups, they would probably have looked at the preset lists before making a change.
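    A foreign key to a lookup table is enough to enforce that central list. A minimal sketch, using Python with SQLite as a stand-in (the table and column names are invented; any RDBMS enforces this the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

# Central lookup table: the preset list of relationship types.
conn.execute("CREATE TABLE relationship_type (code TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO relationship_type VALUES (?)",
                 [("DAUGHTER",), ("SON",), ("SPOUSE",)])

# Contacts must reference a preset type instead of free-typing one.
conn.execute("""
    CREATE TABLE contact (
        name         TEXT NOT NULL,
        relationship TEXT NOT NULL REFERENCES relationship_type(code)
    )""")

conn.execute("INSERT INTO contact VALUES (?, ?)", ("Jane", "DAUGHTER"))  # accepted

try:
    # One of the 25 creative spellings of "daughter": rejected outright.
    conn.execute("INSERT INTO contact VALUES (?, ?)", ("Janet", "DGHTR"))
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```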



    ----------------
    Jim P.

    A little bit of this and a little byte of that can cause bloatware.

  • I would agree with Chris Date on that. Yes, it's up-front work, but in my opinion everything that can be validated in a DB should be.

I love arguing this one with devs, so much 'agile' nonsense sometimes. 😀 "Creating validation in the database is a violation of the DRY principle"

One dev (a few years ago) with that initial attitude came back a couple of weeks later: "I checked the DB and found that I had a bug in the app which was letting incorrect values through. I've fixed the bug and added some DB validations."

Description fields, like comment fields, are hard; they are free text. I suspect your problem was more a data-type mismatch than a validation issue per se.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • @Gail, I think you have the right classification there. If anything, the cause was a "type mismatch". You can't really say that "IVAN YEARS" was invalid, only that it was nonsense in the context it was found.

@Jim, yes, I've come across situations like that too. Data migrations are a time when dirty data gets found; they are also a time when dirty data gets created. I agree with your suggestion that lookup tables are a good thing. Anywhere a screen has a field which says or implies "Type of..." and the entry is not pre-determined or constrained makes me suspicious. Of course, thinking that way is one thing; convincing other people is quite another. ;-)

Tom Gillies | LinkedIn Profile | www.DuhallowGreyGeek.com

I have been a consultant/developer/database analyst for many years, and the thing that gets me most riled up is a user putting data into fields like *do not use*. However, as much 'power' as we think we have as developers and DBAs, the real power lies with our clients, who say they don't want us to require fields or restrict entries, etc. Since they are the ones paying the bills, if they insist they do not want this, I cannot add the things I would need to restrict this type of thing. And no matter what I added, I could not prevent them from using someone's middle name as a 'do not use' indicator. They look at me like I am a little strange when I get so upset at seeing these things. :w00t:

I agree with jscott, the customer creates a lot of these problems. How often do you deliver a quote to add some additional fields and the customer says, "We have these four fields that aren't in use, so we'll just repurpose those"? But there is some legacy data already there that they don't want cleansed, nor do they want to pay for some renames. Over here in Australia the role of the data analyst is almost non-existent, as developers can do that job during the build. And with ORM taking over ... I used to specialise in data migration projects, but there just aren't the jobs, so I have had to change track. Such a shame, as we all know you only capture data to use it, and if it isn't fit for purpose, how can it be used?

  • GilaMonster (6/21/2014)


I love arguing this one with devs, so much 'agile' nonsense sometimes. 😀 "Creating validation in the database is a violation of the DRY principle"

    The concept's not nonsense - their interpretation is.

    My response to that is usually along the lines of;

1) Microsoft has already written this. Why are you repeating their work? They've spent tens of millions, if not hundreds of millions, on sorting this out, so why are you spending resources trying to write functionality that's already there? Don't repeat work that's already done.

2) Where in the Agile Manifesto does it suggest data quality and accuracy don't matter? I missed that bit.

    3) For anything else that we add later you're going to have to re-implement this, meaning you're going to have to repeat yourself, and increase technical debt. Unless you've ignored YAGNI, of course.

Plus, if applicable: "Remember system X?"

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • andrew gothard (6/23/2014)


    GilaMonster (6/21/2014)


I love arguing this one with devs, so much 'agile' nonsense sometimes. 😀 "Creating validation in the database is a violation of the DRY principle"

    The concept's not nonsense - their interpretation is.

    True, I should have been clearer.

    My problem isn't with agile, it's with people who incorrectly use agile to justify poor coding practices.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
I would like to add my two cents here too, as I happen to work a lot both as a .NET developer and as a SQL Server DBA. At one of our clients we built a data store that combines data from several systems into a single unified model. We started out with one golden rule: assume nothing! We check uniqueness of keys (logging duplicates, of course) and referential integrity on the staged data before it enters the data store. We found so many unexpected instances of 'dirty' data that it has become hard to surprise us. We notify the application administrators or key users to correct these errors at the source and avoid similar mistakes in the future. But habits do change over time, and sometimes faults resurface after being away for quite a while. Making this 'litter' visible was not a simple task, but it was essential to ensure both data quality and transparency; nothing is ever excluded in this pipeline without being logged.

DRY is a nice principle, but only feasible if the ORM software creates all the constraints along with the tables and columns. In a web application, client-side validation is never sufficient for any user input; it must always be accompanied by server-side validation at least. In my opinion, every application-level constraint should have a database-level companion, because a developer might make assumptions about the integrity of the data in his code. Having validation encoded at three different locations in your application makes maintenance harder, but after tracing bugs caused by 'dirty' data somewhere in a multi-gigabyte database you know it is worth the effort. However, most developers are not DBAs and often see their databases only as object persistence providers, not as data integrity guards. Old habits die hard ...
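    The "database-level companion" idea fits in a few lines. A minimal sketch, using Python with SQLite as a stand-in (the resident table and the age rule are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Database-level companion to an application-level rule:
# a resident's age must fall in a sane range.
conn.execute("""
    CREATE TABLE resident (
        name TEXT    NOT NULL,
        age  INTEGER NOT NULL CHECK (age BETWEEN 0 AND 130)
    )""")

def add_resident(name: str, age: int) -> None:
    # Application-level validation: what the web tier would also check client-side.
    if not 0 <= age <= 130:
        raise ValueError(f"invalid age: {age}")
    conn.execute("INSERT INTO resident VALUES (?, ?)", (name, age))

add_resident("Alice", 82)  # passes both layers

try:
    # A bug that bypasses the app check and writes directly...
    conn.execute("INSERT INTO resident VALUES (?, ?)", ("Bob", -5))
except sqlite3.IntegrityError as err:
    # ...is still stopped by the database, the last line of defence.
    print("caught by the database:", err)
```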

  • We started out with one golden rule: assume nothing!

    We have a saying here: "All data is bad, some is worse than the rest."

  • This one was YEARS ago and I'm still not sure how funny I consider this one, but I'll certainly never forget it! Our devs were building a web site that would serve many different customers. This site hooked up to one of our databases. Some of this information was web address information. The president of our company decided that he wanted to give the site a "test drive" and asked to be able to log in. So the devs loaded some test data and turned it over to the president for review. Even though they loaded test data, they neglected to get rid of their "dev" data. So there were a bunch of entries that were garbage. My favorite company was "XYZ Corporation".

    They were located at "123 Fake Street".

    Their city was "xxxxx"

    Their country was "xx" (I know, I know...shouldn't have even been allowed, but still not the funny part)

Their phone number was "xxx-xxx-xxxx" (alpha characters were allowed in case the customer had a number that spelled something)

    and their web address was "xxx.com"

    So when our president clicked on "Go to Website"...he got a few pop-ups he wasn't expecting. Fortunately, it led to a policy of periodic data review so good came from it. But I sure did (and still do) get a few snickers out of remembering this.

    -G

Oftentimes developers are reluctant to hard-code domain or relational constraints because there are no written specifications formally defining what should be considered valid. How data is conformed and constrained, and how exceptions are handled, is definitely something that needs to be documented upfront.

One common issue I see is varchar "date" columns. What I sometimes do (when refactoring the column to a proper date type isn't an option) is place a check constraint on the varchar column. First it casts the value to datetime (which throws an exception if the value isn't a real date), and then it compares the value to a YYYYMMDD-formatted conversion of itself (which verifies the value conforms to that standard string format).

    For example:

    create table foo (
        foo_date varchar(30) not null
            constraint ck_foo_date_yyyymmdd
            check (foo_date = convert(char(8), cast(foo_date as datetime), 112))
    );

    insert into foo (foo_date) values ('2011/02/28');
    -- Msg 547, Level 16, State 0, Line 1
    -- The INSERT statement conflicted with the CHECK constraint "ck_foo_date_yyyymmdd".

    insert into foo (foo_date) values ('20110229');
    -- Msg 241, Level 16, State 1, Line 1
    -- Conversion failed when converting date and/or time from character string.

    insert into foo (foo_date) values ('20120229');
    -- (1 row(s) affected)

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

The trade-off is a little more complicated than a purely technical one. You have to consider how the data gets into the database, and why no one is validating the data after it gets in.

That being said, I appreciate it when the data is foreign-keyed.
