Using FULL JOINs to Compare Datasets

  • Comments posted to this topic are about the item Using FULL JOINs to Compare Datasets

  • The FULL OUTER JOIN method shown in the article has one major flaw compared to the EXCEPT method: identical rows are not excluded from the result set. With the example data provided it doesn't make a difference, since the update ensures that there are differences in all rows, but in a real-world example there will often be many identical rows and only a rather small number of differences to check. (The last statement, which filters out differences under a given tolerance, only helps if it is possible to provide such a tolerance value, and it doesn't address the issue that a difference can occur in any number of fields.)

    The OUTER JOIN is still a good method for comparing source and destination tables, but I would combine it with an EXCEPT statement in order to look only at rows with actual differences.
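
    A minimal sketch of that combination (the table and column names dbo.SourceTable, dbo.DestinationTable, ID, and RandomNumber are my assumptions, not from the article): EXCEPT run in both directions flags the keys that actually differ, and the FULL OUTER JOIN then shows both sides of each differing row.

    -- Keys that differ in either direction (missing rows or changed values).
    ;WITH DiffKeys AS (
        SELECT ID FROM (
            SELECT ID, RandomNumber FROM dbo.SourceTable
            EXCEPT
            SELECT ID, RandomNumber FROM dbo.DestinationTable
        ) AS SrcOnly
        UNION
        SELECT ID FROM (
            SELECT ID, RandomNumber FROM dbo.DestinationTable
            EXCEPT
            SELECT ID, RandomNumber FROM dbo.SourceTable
        ) AS DstOnly
    )
    -- FULL OUTER JOIN restricted to the differing keys only.
    SELECT s.ID, s.RandomNumber AS SourceValue,
           d.ID AS DestID, d.RandomNumber AS DestValue
    FROM dbo.SourceTable AS s
    FULL OUTER JOIN dbo.DestinationTable AS d ON d.ID = s.ID
    WHERE COALESCE(s.ID, d.ID) IN (SELECT ID FROM DiffKeys);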

  • But you can of course add a WHERE clause to exclude the identical rows (as in the final example, where small differences were excluded). Change the requirement to an exact match, and you get all differences but no matching rows.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Agreed. However, if the tables have a lot of fields, the WHERE clause will be long and tedious to maintain.

  • It's not so bad to maintain. You just have to extend the WHERE clause:

    WHERE ABS(RandomNumberDiff) > @Tolerance

    to the following:

    WHERE (
            ABS(RandomNumberDiff)  > @Tolerance
         OR ABS(RandomNumberDiff2) > @Tolerance
         OR ABS(RandomNumberDiff3) > @Tolerance
         ...
    )

    to get the desired result. If you had 50 columns to compare, then it would make sense to generate this code using a query off INFORMATION_SCHEMA.COLUMNS.
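
    As a sketch of that code-generation idea (it assumes SQL Server 2017+ for STRING_AGG, and the table name dbo.ComparisonResults is hypothetical; adjust the filter to match your comparison columns):

    -- Builds the tolerance WHERE clause from column metadata,
    -- one predicate per numeric column.
    SELECT 'WHERE ' + STRING_AGG(
               'ABS(' + QUOTENAME(COLUMN_NAME) + ') > @Tolerance',
               CHAR(13) + CHAR(10) + '   OR ') AS GeneratedWhereClause
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA = 'dbo'
      AND TABLE_NAME   = 'ComparisonResults'
      AND DATA_TYPE IN ('int', 'bigint', 'decimal', 'numeric', 'float', 'real', 'money');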

    This is definitely a real-world technique. We're using it at my company to compare tens of thousands of rows across more than 20 columns to reconcile datasets from disparate systems.

    In the next chapter of this article I'll explain why it's useful to have the exact differences like this, and not just the identification of rows that are different from source to destination.

  • Nicely written and detailed article. I like the mechanism too.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Nice solution that has great utility if extended to other data types with tolerances for dates, times, text, and binary values as well.
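
    A sketch of what such extended tolerances might look like (the column names and the DATEDIFF granularity are my assumptions, not from the article):

    WHERE ABS(RandomNumberDiff) > @Tolerance
       -- dates/times: tolerate small clock drift between systems
       OR ABS(DATEDIFF(SECOND, s.LoadDate, d.LoadDate)) > @SecondsTolerance
       -- text and binary: usually an exact match is required
       OR s.Description <> d.Description
       OR s.Payload     <> d.Payload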

  • Thanks for the article!

    ---------------------------------------------------------------------
    Use Full Links:
    KB Article from Microsoft on how to ask a question on a Forum

  • Good article, though the EXCEPT part at the beginning seems superfluous. If you wanted to compare data sets to see variation, I don't know why an EXCEPT query would even come to mind.

    This seems like the natural next step for comparing data sets after you have run an EXCEPT query and determined there are records in your data set that do not match.

    My nitpickyness (is that a word?) aside, this is a great article 😀

    Link to my blog http://notyelf.com/

  • ShannonJk,

    Thanks for the positive feedback, much appreciated!

    I was just reviewing the EXCEPT query syntax at the top as it's the classic way to test for dataset deviations. I needed to set the context for situations where EXCEPT is appropriate before discussing where it's not.

    In practice, if I were to run an EXCEPT query in some of the situations where I use the FULL JOIN, it would return many thousands of rows with minuscule deviations. I can safely assume that the two datasets in those situations will never match exactly, which is why I go directly to the FULL JOIN technique to find the exact deviations.
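
    For context, the classic EXCEPT check referred to above looks like this (table and column names are hypothetical):

    -- Rows in the source that have no exact match in the destination.
    SELECT ID, RandomNumber FROM dbo.SourceTable
    EXCEPT
    SELECT ID, RandomNumber FROM dbo.DestinationTable;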

    -Mark

  • Good article, but are you actually creating random numbers in your example using NEWID()? True random and pseudo-random numbers have statistical properties that this technique may not reproduce.

  • I'm just using the NEWID() function as a way to generate a varying pseudo-random number for each row. When you use RAND() in a query like this, SQL Server evaluates it once and generates the same number for every row.

    This is just for illustration in this example. In real situations, the numeric measures come from the source and destination queries, and aren't random numbers.
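
    A minimal sketch of that behavior (ABS(CHECKSUM(NEWID())) is a common T-SQL idiom for deriving a per-row number from the GUID; it is my assumption, not necessarily what the article uses):

    -- RAND() is evaluated once per query, so every row gets the same value;
    -- NEWID() is evaluated per row, so values derived from it vary per row.
    SELECT name,
           RAND()                         AS SameEveryRow,
           ABS(CHECKSUM(NEWID())) % 1000  AS VariesPerRow
    FROM sys.objects;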

  • Thank you for the great article. I can see I have my work cut out for me improving my previously written query 😀

    Billy

  • Nice article and good examples. Thanks


  • I love the articles with examples which work "out of the box". This is one of those. These very helpful techniques are going into my tool box. Thank you!
