Joining Tables with the RIGHT Function

  • ChrisM@Work (1/18/2016)


    Jeff Moden (1/18/2016)

    Can we agree on "SARGable in one direction only" as it's often sufficient to permit tweaking a decent plan?

    With the battles I'm currently facing at work on this identical subject on some rather large tables and heavily used similar code, my most politically correct answer would have to be "Oh hell no!". 😀

    Now, how the heck did that happen? It looks like you posted that reply when it was actually me! Things have really been getting wonky on this site lately.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • ChrisM@Work (1/18/2016)


    Jeff Moden (1/18/2016)

    Can we agree on "SARGable in one direction only" as it's often sufficient to permit tweaking a decent plan?

    With the battles I'm currently facing at work on this identical subject on some rather large tables and heavily used similar code, my most politically correct answer would have to be "Oh hell no!". 😀

    WTF?

    β€œWrite the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Jeff Moden (1/18/2016)


    ChrisM@Work (1/18/2016)


    Jeff Moden (1/18/2016)

    Can we agree on "SARGable in one direction only" as it's often sufficient to permit tweaking a decent plan?

    With the battles I'm currently facing at work on this identical subject on some rather large tables and heavily used similar code, my most politically correct answer would have to be "Oh hell no!". 😀

    Now, how the heck did that happen? It looks like you posted that reply when it was actually me! Things have really been getting wonky on this site lately.

    Ignoring the SSC weirdness for the moment, some of the best tuning gains I've had at this site recently have been from knowing that LIKE can be SARGable and tweaking the query to fit the data. How about 5 hours down to 20 seconds (with the help of a loop hint)? It's similar to an implicit conversion screwing up a join: if you can put an explicit conversion on the key column from the smaller table instead of relying on an implicit conversion of the matching column in the bigger table, then you've won the battle.

    Change the column types and you've won the war 🙂
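The "SARGable in one direction only" point can be sketched with a small hypothetical example (the table, index, and column names below are invented for illustration, not from the thread). A LIKE with a leading literal can be satisfied by an index range seek; a leading wildcard, or wrapping the column in a function such as RIGHT(), cannot.

```sql
-- Hypothetical table and index, purely for illustration
CREATE TABLE dbo.Account
(
    AccountID     int IDENTITY PRIMARY KEY,
    AccountNumber varchar(20) NOT NULL
);
CREATE INDEX IX_Account_Number ON dbo.Account (AccountNumber);

-- SARGable: the optimizer rewrites a leading-literal LIKE as a
-- range seek on IX_Account_Number (roughly >= 'AB12' AND < 'AB13')
SELECT AccountNumber
FROM dbo.Account
WHERE AccountNumber LIKE 'AB12%';

-- Not SARGable: a leading wildcard (or a function wrapped around the
-- column, e.g. RIGHT(AccountNumber, 4) = 'AB12') leaves nothing to
-- seek on, so the whole index is scanned
SELECT AccountNumber
FROM dbo.Account
WHERE AccountNumber LIKE '%AB12';
```

Hence "one direction only": the same operator seeks or scans depending on which end of the pattern the wildcard sits.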

    β€œWrite the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • ChrisM@Work (1/18/2016)


    Jeff Moden (1/18/2016)

    [...] If you can put an explicit conversion on the key column from the smaller table instead of relying on an implicit conversion of the matching column in the bigger table, then you've won the battle.

    Change the column types and you've won the war 🙂

    Doing an explicit conversion on the lookup table to avoid an implicit conversion is a technique I've used to tune queries at work when the data types don't match. Converting on the small table keeps data type precedence, and the implicit conversion it forces onto the big table's column, out of the picture entirely. I have to admit that it feels a little dirty; I'd much prefer the data types were designed properly in the tables to begin with, but once a system is in production, the choices are more limited.
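A minimal sketch of the technique being described (table and column names are invented): because int outranks varchar in data type precedence, a mismatched join implicitly converts the varchar side, and if that side is the big table's indexed key, the index can't be seeked. Putting an explicit CONVERT on the small table's key instead leaves the big table's column untouched.

```sql
-- Hypothetical mismatch: BigTable.RefCode is varchar(10) and indexed;
-- SmallLookup.RefID is int.

-- Before: data type precedence makes this effectively
-- CONVERT_IMPLICIT(int, b.RefCode) = s.RefID, so BigTable's index
-- on RefCode cannot be used for a seek
SELECT b.SomeValue
FROM dbo.BigTable    AS b
JOIN dbo.SmallLookup AS s ON b.RefCode = s.RefID;

-- After: the explicit conversion sits on the small table's column,
-- the big table's key is compared as-is, and its index stays seekable
SELECT b.SomeValue
FROM dbo.BigTable    AS b
JOIN dbo.SmallLookup AS s ON b.RefCode = CONVERT(varchar(10), s.RefID);
```

The conversion work is then paid per lookup row rather than per big-table row, which is why it pays off even though it is the same conversion.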

  • ChrisM@Work (1/18/2016)


    Jeff Moden (1/18/2016)

    [...] Some of the best tuning gains I've had at this site recently have been from knowing that LIKE can be SARGable and tweaking the query to fit the data. How about 5 hours down to 20 seconds (with the help of a loop hint)? [...] If you can put an explicit conversion on the key column from the smaller table instead of relying on an implicit conversion of the matching column in the bigger table, then you've won the battle.

    Change the column types and you've won the war 🙂

    I'd love to see the code you did for that incredible gain, because it's driving me nuts at work. I've already instituted the "narrow index" trick and have gotten times down from nearly a minute per lookup to sub-second times, so I'm not sure that anything else obvious can be done, but I wouldn't mind seeing your code to try to get a fresh idea.

    Ah... and you might ask why I'm moaning about sub-second times. Part of it is that, even with the narrow index, it's doing a scan on nearly 3 million rows 3 times per request, there are 3 different lookups per request, and there are 40,000 requests per 8-hour period. It turns out to be about 8.3 hours of CPU time per 8 hours, with total memory I/O measured in the tens of terabytes. The 3 lookups per run are my top 3 worst statements on the entire system for CPU and logical reads, outstripping even some of the very long-winded and inefficient night jobs that I'm also working on fixing. (Except for these 3 lookups, I've finally fixed all of the GUI-related code that was causing timeouts and screen delays, not to mention beating the hell out of the system.)
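The "narrow index" trick mentioned above might look something like this (all object names are hypothetical): a nonclustered index keyed on just the lookup column, with the handful of returned columns in the INCLUDE list, so each index page holds far more rows than the wide base table and the lookup never touches the clustered index at all.

```sql
-- Hypothetical wide table with a hot, frequently-run lookup
CREATE NONCLUSTERED INDEX IX_TranDetail_Lookup
    ON dbo.TranDetail (AccountNumber)   -- the seek/scan key
    INCLUDE (TranDate, Amount);         -- covers the SELECT list

-- The lookup is answered entirely from the narrow index
-- (@AccountNumber is supplied by the caller)
SELECT TranDate, Amount
FROM dbo.TranDetail
WHERE AccountNumber = @AccountNumber;
```

Even when the predicate still forces a scan rather than a seek, scanning a narrow covering index reads a small fraction of the pages a clustered index scan would, which fits the "minute down to sub-second" gain described.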


  • Jeff Moden (1/18/2016)


    ChrisM@Work (1/18/2016)

    [...] Even with the narrow index, it's doing a scan on nearly 3 million rows 3 times per request, there are 3 different lookups per request, and there are 40,000 requests per 8-hour period. It turns out to be about 8.3 hours of CPU time per 8 hours, with total memory I/O measured in the tens of terabytes. The 3 lookups per run are my top 3 worst statements on the entire system for CPU and logical reads. [...]

    I'd love to see the plan for that. I'm assuming you can't post it up here or you would have done. PM me an email address and we're in business.

    β€œWrite the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • ChrisM@Work (1/18/2016)


    Jeff Moden (1/18/2016)

    [...] I'd love to see the plan for that. I'm assuming you can't post it up here or you would have done. PM me an email address and we're in business.

    If you're still interested in this, Jeff, I have before-and-after plans saved off for a lunchtime session at work next week. The original query takes 10 seconds; the amended one, with the sole change being an explicit conversion (plus MAXDOP 1 to make them both serial), takes 3 milliseconds. All extraneous stuff has been chopped away to sharpen the point for the talk, so they're really simple. And because the three tables are around 20 million rows, folks would easily think that 10 seconds is perfectly okay :w00t:
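The demo pair described above presumably has roughly this shape (table and column names are invented here; only the timings and the single-change setup come from the post). MAXDOP 1 pins both queries to serial plans so the comparison is like for like.

```sql
-- "Before" (the 10-second version in the demo): the join relies on an
-- implicit conversion of the large table's varchar key, so the key's
-- index is scanned rather than seeked
SELECT COUNT(*)
FROM dbo.BigSales AS b
JOIN dbo.RefCodes AS r ON b.RefCode = r.RefID   -- varchar vs int
OPTION (MAXDOP 1);

-- "After" (the 3-millisecond version): the sole change is an explicit
-- CONVERT on the small table's side, leaving b.RefCode seekable
SELECT COUNT(*)
FROM dbo.BigSales AS b
JOIN dbo.RefCodes AS r ON b.RefCode = CONVERT(varchar(10), r.RefID)
OPTION (MAXDOP 1);
```

Forcing both plans serial is a nice touch for a talk: it stops parallelism from masking the cost of the scan and makes the plan difference the only variable.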

    β€œWrite the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
