Generic Data Comparison

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/lPey

  • As you said:

    "... The only danger with this solution, even though the chances are very slim, is that the different original values might produce the same hash and result will be wrong .."

    Then why NOT just use this?

     CHECKSUM_AGG(BINARY_CHECKSUM(*))

    That is going to give you a real performance boost.

    ex:

    SELECT (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM processed_customer)
         - (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM customer)  /* 0 means OK */
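
    A quick self-contained demonstration of the idea (temp tables #a and #b are illustrative, not from the article):

     CREATE TABLE #a (id int, name varchar(50))
     CREATE TABLE #b (id int, name varchar(50))
     INSERT INTO #a VALUES (1, 'alpha')
     INSERT INTO #a VALUES (2, 'beta')
     INSERT INTO #b VALUES (1, 'alpha')
     INSERT INTO #b VALUES (2, 'beta')

     -- identical contents: the difference is 0
     SELECT (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM #a)
          - (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM #b)

     -- change one row: the difference is (almost always) nonzero
     UPDATE #b SET name = 'gamma' WHERE id = 2
     SELECT (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM #a)
          - (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM #b)

     DROP TABLE #a
     DROP TABLE #b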

    Just my $0.02


    * Noel

  • Same problem. Chances are very slim.

  • "Same problem. Chances are very slim"

     

    Yes but,

     1. the simplicity of the query

     2. the speed and

    3. the generality  

    are a lot better that the proposed solution

     


    * Noel

  • With the binary conversion, the chances of a wrong result are smaller for tables with 4+ columns. I was trying your method before the other one was proposed to me, and I was actually trying to calculate the probability of error for the two methods.

    Your method has a higher chance of giving a bad result (even though the chances for both are very small for tables with 4+ columns).

    The more columns there are, the cleaner the method with the binary column conversion becomes, and the more chances your method has of producing an error.

    Yes, you get a performance boost, but not as much as between solutions 1-2-3 and 4. And in most cases that is not the point: when you are doing such a comparison, the time (usually) is irrelevant.
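
    If the collision risk matters, a per-row variant is possible: comparing row-level checksums joined on the key reports exactly which rows differ instead of relying on one aggregate value. A sketch only (the id/name/balance column list is hypothetical and assumes both tables share an id key):

     SELECT COALESCE(c.id, p.id) AS id
     FROM customer AS c
     FULL OUTER JOIN processed_customer AS p
          ON c.id = p.id
     WHERE c.id IS NULL                                -- row missing from customer
        OR p.id IS NULL                                -- row missing from processed_customer
        OR BINARY_CHECKSUM(c.id, c.name, c.balance)    -- hypothetical column lists
        <> BINARY_CHECKSUM(p.id, p.name, p.balance)

    An empty result means the tables match row for row; a wrong answer now requires two different rows with the same key to collide, which is the same small per-row risk discussed above.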

  • The solution that converts data to varbinary and then to bigint appears to have a major limitation: any time you run a value of more than 8 characters through this conversion, you risk ending up with the same bigint, because data is being truncated by the conversion.

    For example these queries both return the same result:

    select convert(bigint,convert(varbinary(255),'VALUE001'))

    select convert(bigint,convert(varbinary(255),'This is a very long string that will end up being the same as the previous one. VALUE001'))
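
    As far as I can tell, the cause is that converting a varbinary longer than 8 bytes to bigint keeps only the rightmost 8 bytes, so any two strings ending in the same 8 characters map to the same bigint. A quick sketch (the hex literals spell out 'AAAAAAAAVALUE001' and 'VALUE001'):

    select convert(bigint, 0x414141414141414156414C5545303031)  -- leading 8 bytes dropped

    select convert(bigint, 0x56414C5545303031)                  -- same bigint as above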

    Thanks,

    Dan Wilson

  • Yes, you're right: the solution has some limitations, and as I pointed out, there is a probability of getting a wrong result. But in many cases this probability is small.

  • I tried the third method today, just on a lark, for a new SP I created to import data. The query runs on tables of about 35k records with 8 columns in about 2 seconds. It also helped me identify a problem with our legacy data (I originally thought my new code was bad). Thanks for sharing!
