
How Good is Good Enough?


I was running some tests on purchased data this morning, and I was getting a little perturbed because I found that "John A Smith DO", "John S Smith MD", "John B Smith", and a few others were all associated with the same identifier.

To me, these are not the same people, and I wondered how much checking for this was done by the vendor who provided the data.  After all, this was my first query against the data, and I was finding glaring errors fairly quickly.  I believe the company pays a good amount of money for this data, and I kept thinking: doesn't the vendor have people dedicated to scrubbing it?  Providing high-quality data is, after all, the business they are in.

But as I looked at what was going on, I realized that the mistakes I was seeing -- close, but not exact, associations -- were similar to the mistakes I was finding in my own work when I used the fuzzy lookup transformations in SSIS to clean up some internal data.  I'm not saying the fuzzy lookup transformations are bad.  In fact, they are designed to make these kinds of loose associations, and it is up to later processes to take the data to the next stage of accuracy.  And with reams of data to process, a few mistakes are going to make it through to the customer in any case.
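
To make that concrete, here is a rough analogue of that kind of loose matching in plain T-SQL, using the built-in SOUNDEX/DIFFERENCE functions.  This is not how the SSIS Fuzzy Lookup actually works (it has its own token-based similarity scoring), and the table and column names below are made up for the example; the point is just to show how close-but-not-identical names end up paired together and left for a later pass to verify.

-- A minimal sketch of loose name matching, assuming a hypothetical table
-- dbo.PurchasedProviders (ProviderId, ProviderName).
-- DIFFERENCE compares the SOUNDEX codes of two strings and returns 0-4,
-- where 4 means the codes match exactly.
SELECT  a.ProviderName AS NameA,
        b.ProviderName AS NameB,
        DIFFERENCE(a.ProviderName, b.ProviderName) AS SimilarityRank
FROM    dbo.PurchasedProviders AS a
JOIN    dbo.PurchasedProviders AS b
        ON a.ProviderId < b.ProviderId   -- compare each pair once
WHERE   DIFFERENCE(a.ProviderName, b.ProviderName) >= 3;
-- With a loose threshold like this, variants such as "John A Smith DO" and
-- "John S Smith MD" tend to pair up -- exactly the kind of association a
-- stricter downstream process has to confirm or reject.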

So I had to feel for this vendor; I could understand their process and its limitations.  But providing high-quality data is the business they are in, so how high a standard should I hold them to?  A little inspection showed that about 5% of the data had multiple names per code, and a few of those names might actually be the same person with a misspelling, so the actual error rate might be lower than that.  Was this acceptable?  (I should note that the reason I was running checks on this data in the first place is that the vendor hasn't had a good reputation with us for data quality in the past.)
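
For what it's worth, the inspection itself doesn't take much.  A grouping query along these lines (again, the table and column names are hypothetical) lists the codes that carry more than one distinct name and gives a rough percentage of codes affected:

-- Hypothetical table: dbo.PurchasedProviders (ProviderCode, ProviderName)
-- Codes with more than one distinct name attached
SELECT  ProviderCode,
        COUNT(DISTINCT ProviderName) AS NameCount
FROM    dbo.PurchasedProviders
GROUP BY ProviderCode
HAVING  COUNT(DISTINCT ProviderName) > 1;

-- Rough percentage of codes carrying multiple names
SELECT  100.0 * SUM(CASE WHEN NameCount > 1 THEN 1 ELSE 0 END) / COUNT(*) AS PctMultiName
FROM   (SELECT ProviderCode, COUNT(DISTINCT ProviderName) AS NameCount
        FROM   dbo.PurchasedProviders
        GROUP BY ProviderCode) AS codes;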

As a customer of this data, relying on it to augment our own, I think an error rate under 1% would be acceptable.  (Well, depending on how much the data cost to purchase, of course.  A very high cost should imply much less work on my part, and an error rate well under 1%.)  And to ensure that the errors in that 1% don't show up in the data used by the customers who provide the top 1% (or 10%) of your revenue, a process should be in place internally to thoroughly scrub the data associated with your primary customers -- if you can identify them.  We have some wonderful tools at our disposal, but when dealing with data, it seems to me that people are always going to have to get involved to ensure accuracy.  As in software development, a good QA department is essential to producing accurate data.  I wonder how my vendor, or other data vendors, handle this.  Anyone have any experience with that?
