Looking to update old code

  • I've got a bunch of old code developed against SQL 2000 and 2005 that I am looking to update. A lot of it involves multi-table joins, and I'm wondering whether SQL 2012 has any new T-SQL constructs that might replace things like a SELECT that joins 5-13 tables and inserts into staging tables.

    Which T-SQL constructs new to SQL 2012 have made your life easier? What's your favorite, and which ones would you not recommend for us?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • You can't really reduce the number of tables joined unless you refactor your data model. However, Common Table Expressions can reduce the complexity, and sometimes improve the performance, of queries that join multiple times to the same table.
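    A minimal sketch of the pattern (the table and column names here are hypothetical): the filtered lookup is defined once in the CTE and then referenced by both joins.

    ;WITH ActiveCodes AS
    (
        SELECT CodeID, CodeDesc
        FROM dbo.Codes
        WHERE IsActive = 1
    )
    SELECT t.TranID,
           sc.CodeDesc AS StatusDesc,
           tc.CodeDesc AS TypeDesc
    FROM dbo.Transactions t
    JOIN ActiveCodes sc ON sc.CodeID = t.StatusCode
    JOIN ActiveCodes tc ON tc.CodeID = t.TypeCode;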

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • If some of your joins are to tables that simply provide descriptions, I'm a big fan of converting those joins to inline table-valued functions. I inherited one table that held the descriptions for codes across about two dozen columns. So it was a classic "join on the code for this column and the description contained in the column." Twenty joins to the same table.

    I wrote a script that read the table and generated ITVFs for each column, then replaced the joins with CROSS APPLYs of the functions. It made a limping query run like a scalded dog.
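    For anyone who wants the shape of it, here's a minimal sketch with hypothetical names (the real script generated one such function per code column):

    -- One ITVF per code column; CROSS APPLY then replaces the join
    CREATE FUNCTION dbo.tfn_StatusDesc (@Code int)
    RETURNS TABLE
    AS
    RETURN
        SELECT CASE @Code
                   WHEN 1 THEN 'Open'
                   WHEN 2 THEN 'Closed'
                   ELSE 'Unknown'
               END AS StatusDesc;
    GO

    SELECT t.TranID, sd.StatusDesc
    FROM dbo.Transactions t
    CROSS APPLY dbo.tfn_StatusDesc(t.StatusCode) AS sd;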

    __________________________________________________

    Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
    Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills

  • Also, get rid of scalar functions in queries wherever possible. Inline table-valued functions perform wayyy better.
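    The rewrite usually looks something like this (the names and the tax rate are made up for illustration); a scalar UDF executes once per row, while an inline TVF is expanded into the calling query's plan:

    -- Scalar version, called per row:
    --     SELECT o.OrderID, dbo.fn_TaxAmount(o.Amount) FROM dbo.Orders o;
    -- Inline TVF equivalent:
    CREATE FUNCTION dbo.tfn_TaxAmount (@Amount money)
    RETURNS TABLE
    AS
    RETURN
        SELECT @Amount * 0.07 AS TaxAmount;
    GO

    SELECT o.OrderID, tx.TaxAmount
    FROM dbo.Orders o
    CROSS APPLY dbo.tfn_TaxAmount(o.Amount) AS tx;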

    __________________________________________________

    Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
    Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills

  • If this is something like a data warehouse, where you have very large tables that are sourced for reporting purposes from other transactional databases, then consider more of a Kimball style Star Schema data model rather than a third normal form model.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • The Dixie Flatline (9/15/2016)


    If some of your joins are to tables that simply provide descriptions, I'm a big fan of converting those joins to inline table-valued functions. I inherited one table that held the descriptions for codes across about two dozen columns. So it was a classic "join on the code for this column and the description contained in the column." Twenty joins to the same table.

    I wrote a script that read the table and generated ITVFs for each column, then replaced the joins with CROSS APPLYs of the functions. It made a limping query run like a scalded dog.

    Oh, now that sounds very interesting. I have to look into that. Because some of these joins are just to get one column of information.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • The Dixie Flatline (9/15/2016)


    Also, get rid of scalar functions in queries wherever possible. Inline table-valued functions perform wayyy better.

    Hmmm. Not really using scalar functions. Using a lot of system aggregate functions, though, as well as that notorious "other," ISNULL(), and a few date/time functions.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Eric M Russell (9/16/2016)


    If this is something like a data warehouse, where you have very large tables that are sourced for reporting purposes from other transactional databases, then consider more of a Kimball style Star Schema data model rather than a third normal form model.

    Tis a vended OLTP database that I'm pulling data out of, unfortunately. It's part of ETL to a SAP-type system.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin (9/22/2016)


    The Dixie Flatline (9/15/2016)


    If some of your joins are to tables that simply provide descriptions, I'm a big fan of converting those joins to inline table-valued functions. I inherited one table that held the descriptions for codes across about two dozen columns. So it was a classic "join on the code for this column and the description contained in the column." Twenty joins to the same table.

    I wrote a script that read the table and generated ITVFs for each column, then replaced the joins with CROSS APPLYs of the functions. It made a limping query run like a scalded dog.

    Oh, now that sounds very interesting. I have to look into that. Because some of these joins are just to get one column of information.

    It worked well for us. I'm attaching an example of how to use dynamic SQL to speed up the coding effort. The translate functions are just SELECT CASE statements, so the logic is pretty simple.
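    The attachment itself isn't reproduced here, but a minimal sketch of the idea, with hypothetical table and column names, looks like this: read the code/description pairs for one column and emit an ITVF whose body is a SELECT CASE.

    DECLARE @sql nvarchar(max);

    SELECT @sql =
          N'CREATE FUNCTION dbo.tfn_StatusDesc (@Code int)
            RETURNS TABLE
            AS
            RETURN
                SELECT CASE @Code'
        + (SELECT N' WHEN ' + CAST(c.CodeID AS nvarchar(10))
                 + N' THEN ''' + REPLACE(c.CodeDesc, '''', '''''') + N''''
           FROM dbo.Codes AS c
           WHERE c.CodeColumn = 'Status'   -- one function per code column
           FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)')
        + N' ELSE ''Unknown'' END AS StatusDesc;';

    EXEC (@sql);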

    __________________________________________________

    Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
    Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills

  • The Dixie Flatline (9/22/2016)


    Brandie Tarvin (9/22/2016)


    The Dixie Flatline (9/15/2016)


    If some of your joins are to tables that simply provide descriptions, I'm a big fan of converting those joins to inline table-valued functions. I inherited one table that held the descriptions for codes across about two dozen columns. So it was a classic "join on the code for this column and the description contained in the column." Twenty joins to the same table.

    I wrote a script that read the table and generated ITVFs for each column, then replaced the joins with CROSS APPLYs of the functions. It made a limping query run like a scalded dog.

    Oh, now that sounds very interesting. I have to look into that. Because some of these joins are just to get one column of information.

    It worked well for us. I'm attaching an example of how to use dynamic SQL to speed up the coding effort. The translate functions are just SELECT CASE statements, so the logic is pretty simple.

    Thanks! I appreciate the sample.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • If you're making the jump from a version prior to 2012 to 2012 or later...

    The newly added window functions and the new support for window frames (ROWS/RANGE) are the first things I'd look at.
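    For example, a running total per account, which used to need a self-join or correlated subquery, becomes a single pass (table and column names are hypothetical):

    SELECT t.AccountID,
           t.TranDate,
           t.Amount,
           SUM(t.Amount) OVER (PARTITION BY t.AccountID
                               ORDER BY t.TranDate
                               ROWS BETWEEN UNBOUNDED PRECEDING
                                        AND CURRENT ROW) AS RunningTotal
    FROM dbo.Transactions t;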

  • Brandie Tarvin (9/22/2016)


    Eric M Russell (9/16/2016)


    If this is something like a data warehouse, where you have very large tables that are sourced for reporting purposes from other transactional databases, then consider more of a Kimball style Star Schema data model rather than a third normal form model.

    Tis a vended OLTP database that I'm pulling data out of, unfortunately. It's part of ETL to a SAP-type system.

    In that case, consider re-architecting your ETL process as SSIS packages. SSIS has built-in tasks for parallel workflow, slowly changing dimensions, error row redirection, auditing, etc.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (9/23/2016)


    Brandie Tarvin (9/22/2016)


    Eric M Russell (9/16/2016)


    If this is something like a data warehouse, where you have very large tables that are sourced for reporting purposes from other transactional databases, then consider more of a Kimball style Star Schema data model rather than a third normal form model.

    Tis a vended OLTP database that I'm pulling data out of, unfortunately. It's part of ETL to a SAP-type system.

    In that case, consider re-architecting your ETL process as SSIS packages. SSIS has built-in tasks for parallel workflow, slowly changing dimensions, error row redirection, auditing, etc.

    It actually is in an SSIS package. When SSIS first came out, I found I got the best performance by doing data pulls with Execute SQL tasks calling stored procedures. We use other tasks in the package too, but it's the procs / T-SQL code I'm most interested in cleaning up. I'm using the most basic SELECT / INSERT / UPDATE code with temp tables, and two of the procs are making some use of CTEs. That's about it. Mega table joins to get the data, though.

    I'm not sure if MERGE will help me or not. I know in some instances using CTEs caused worse performance than what I was already doing, so I don't want to go CTE heavy just to use "the latest and greatest."
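    For reference, MERGE (available since SQL Server 2008) can collapse the usual UPDATE-then-INSERT pair against a staging table into one statement; a minimal sketch with hypothetical names:

    MERGE dbo.TargetTable AS tgt
    USING dbo.StagingTable AS src
        ON tgt.BusinessKey = src.BusinessKey
    WHEN MATCHED AND tgt.SomeValue <> src.SomeValue THEN
        UPDATE SET tgt.SomeValue = src.SomeValue
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (BusinessKey, SomeValue)
        VALUES (src.BusinessKey, src.SomeValue);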

    Basically, I have some time to tinker and check performance with new code features. If I can clean up pages of code into more compact modules using more recent T-SQL functionality, then great! But if that new functionality is going to add complexity to my monster code without really giving a performance boost, I'll probably leave it as is. So, again, if anyone has any thoughts about their favorite 2k8 and up T-SQL features, please let me know so I can do some research while I have time to focus on this project.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin (9/23/2016)


    Eric M Russell (9/23/2016)


    Brandie Tarvin (9/22/2016)


    Eric M Russell (9/16/2016)


    If this is something like a data warehouse, where you have very large tables that are sourced for reporting purposes from other transactional databases, then consider more of a Kimball style Star Schema data model rather than a third normal form model.

    Tis a vended OLTP database that I'm pulling data out of, unfortunately. It's part of ETL to a SAP-type system.

    In that case, consider re-architecting your ETL process as SSIS packages. SSIS has built-in tasks for parallel workflow, slowly changing dimensions, error row redirection, auditing, etc.

    It actually is in an SSIS package. When SSIS first came out, I found I got the best performance by doing data pulls with Execute SQL tasks calling stored procedures. We use other tasks in the package too, but it's the procs / T-SQL code I'm most interested in cleaning up. I'm using the most basic SELECT / INSERT / UPDATE code with temp tables, and two of the procs are making some use of CTEs. That's about it. Mega table joins to get the data, though.

    I'm not sure if MERGE will help me or not. I know in some instances using CTEs caused worse performance than what I was already doing, so I don't want to go CTE heavy just to use "the latest and greatest."

    Basically, I have some time to tinker and check performance with new code features. If I can clean up pages of code into more compact modules using more recent T-SQL functionality, then great! But if that new functionality is going to add complexity to my monster code without really giving a performance boost, I'll probably leave it as is. So, again, if anyone has any thoughts about their favorite 2k8 and up T-SQL features, please let me know so I can do some research while I have time to focus on this project.

    Regardless of SQL Server version, performance optimization should start by looking at execution plans.
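    A simple starting point when testing a proc in SSMS, alongside the actual execution plan (the proc name here is hypothetical):

    SET STATISTICS IO ON;    -- logical reads per statement
    SET STATISTICS TIME ON;  -- CPU and elapsed time per statement

    EXEC dbo.usp_LoadStaging;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;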

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (9/23/2016)

    Regardless of SQL Server version, performance optimization should start by looking at execution plans.

    Not necessarily. If the best clustered indexes are not already in place, you should start by looking at all index stats; you might need to look at "heavy" queries as a part of that, but not always.
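    A sketch of that first look, using the standard index usage DMV (run in the target database):

    SELECT OBJECT_NAME(s.object_id) AS TableName,
           i.name                   AS IndexName,
           s.user_seeks,
           s.user_scans,
           s.user_lookups,
           s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id
     AND i.index_id  = s.index_id
    WHERE s.database_id = DB_ID()
    ORDER BY s.user_seeks + s.user_scans + s.user_lookups DESC;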

    SQL DBA, SQL Server MVP (07, 08, 09)
    A socialist is someone who will give you the shirt off *someone else's* back.

