Temp Table vs Permanent Table

  • Hello,

    I have a stored procedure (SP) that uses 4 big temp tables, inserts the data into a real SQL table, and then drops the temp tables at the end of the SP. It takes a very long time: more than 25 minutes for only 5K rows.

    Now, I've read that if you use temp tables in an SP, SQL Server won't create or store an execution plan for that SP, and that this affects performance.

    When more than one user tries to run this SP, does it return incorrect data?

    So I created 4 regular tables instead of these 4 temp tables, dropping them at the end of the SP, and I see it runs in less than 2 minutes.

    Now, with this approach, if more than one user runs the SP at the same time, it won't return correct data.

    So what's the best way to optimize an SP with 4 big temp tables?

  • Post your code. Reduce the pointless guesses. Thanks.


    [font="Arial"]Low-hanging fruit picker and defender of the moggies[/font]

    For better assistance in answering your questions, please read this.


    Understanding and using APPLY, (I) and (II) - Paul White

    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop - Jeff Moden

  • ChrisM@home (7/22/2016)


    Post your code. Reduce the pointless guesses. Thanks.

    +1

    Luis C.
    General Disclaimer:
    Are you seriously taking advice and code from someone on the internet without testing it? Do you at least understand it? Or could it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • Please see attached file for the code. Thanks.

  • How long do all those individual queries you're using to populate those temp tables take?

  • 3 to 4 minutes.

  • There are certainly potential concurrency issues when using persistent tables to hold values that are only relevant to a given run. I didn't dig through all 17 pages of your Word document, but it looks like a bunch of tables with inserts. I think it would be safer to continue using temp tables instead of persistent tables.
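    For what it's worth, local temp tables are already scoped to the session that created them, so concurrent executions can't see each other's rows. A minimal sketch (table, column, and source names are made up for illustration):

```sql
-- Each session running the procedure gets its own private copy of #Work,
-- so two users executing the SP at the same time cannot collide.
CREATE TABLE #Work
(
    AccountID int           NOT NULL,
    Amount    decimal(18,2) NOT NULL
);

INSERT INTO #Work (AccountID, Amount)
SELECT AccountID, Amount
FROM dbo.SourceTable;   -- hypothetical source table

-- ... use #Work in the rest of the procedure ...

DROP TABLE #Work;       -- optional; it's dropped automatically when the proc ends
```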

    I would also caution you against littering your queries with NOLOCK. On the one hand you say you're concerned with accuracy, but many of your queries use NOLOCK. When you use that hint, you're saying you're fine with mostly accurate data most of the time. You can and will get missing and/or duplicate rows...along with a few other strange behaviors. I'm not saying you should never use it, but all too often it's used without REALLY understanding what it does. http://blogs.sqlsentry.com/aaronbertrand/bad-habits-nolock-everywhere/
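    To illustrate (the table name here is hypothetical), the difference is just the hint:

```sql
-- Reads uncommitted data: can return rows that are later rolled back,
-- and can miss or double-count rows while pages are being split.
SELECT OrderID, Amount
FROM dbo.Orders WITH (NOLOCK);

-- Default READ COMMITTED: returns only committed data,
-- at the cost of taking shared locks while reading.
SELECT OrderID, Amount
FROM dbo.Orders;
```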

    I would also HIGHLY recommend you use aliases in your queries. It would reduce the amount of typing by a LOT in some of these queries. The added benefit is that your queries are also instantly easier to maintain.
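    A quick before/after sketch with made-up table names:

```sql
-- Without aliases, every column reference repeats the full table name:
SELECT SomeVeryLongTableName.OrderID, AnotherLongTableName.CustomerName
FROM dbo.SomeVeryLongTableName
JOIN dbo.AnotherLongTableName
  ON AnotherLongTableName.CustomerID = SomeVeryLongTableName.CustomerID;

-- With aliases, the same query is shorter and instantly easier to maintain:
SELECT o.OrderID, c.CustomerName
FROM dbo.SomeVeryLongTableName AS o
JOIN dbo.AnotherLongTableName  AS c
  ON c.CustomerID = o.CustomerID;
```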

    _______________________________________________________________

    Need help? Help us help you.

    Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

    Need to split a string? Try Jeff Modens splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
    Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
    Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
    Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/

  • This is a generalization that may or may not be useful, but a temp table will not initially have any statistics that could speed up your query. Although SQL Server may create statistics for the temp tables while your procedure runs, statistics on permanent tables stick around and can reduce subsequent execution times. This is not an argument for using permanent tables for temp data in stored procedures, just an observation that might explain your execution-time difference.

    What I think you should do is add some keys and indexes to the temp tables to speed up the queries. Well-designed indexes should compensate for missing statistics.
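    As a sketch of what that might look like (the table, key, and column names here are invented, since I don't know your actual schema):

```sql
-- A clustered index (here via an inline primary key) gives the optimizer
-- a sorted, unique structure to work with on the temp table.
CREATE TABLE #BigTemp
(
    JournalID int           NOT NULL PRIMARY KEY CLUSTERED,  -- hypothetical key
    PostDate  date          NOT NULL,
    Amount    decimal(18,2) NOT NULL
);

-- Additional nonclustered indexes can support the joins and filters that follow.
CREATE NONCLUSTERED INDEX IX_BigTemp_PostDate ON #BigTemp (PostDate);
```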

    If you can copy all the required tables to a test database, you could run the permanent-table version of your procedure without worrying about concurrent users. This would let you test the impact of various indexes, or even use the missing index statistics views to guide you.

  • Temp Tables or not, there are a whole lot of problems with this code and I wouldn't complicate it any further by building in the concurrency problems of using permanent "scratch" tables.

    There are a ton of non-SARGABLE predicates where you couldn't use an index if you wanted to.

    There are totally unnecessary predicate calculations such as ISNULL(somecolumn,0) <> 0. Since null can't be compared to even another null (unless you've done something horrible to the default settings of the server), the equivalent code would simply be somecolumn <> 0 because NULL will never be equal to zero.
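    In snippet form (somecolumn stands in for whatever column is being tested):

```sql
-- Non-SARGable: wrapping the column in a function blocks index seeks.
WHERE ISNULL(somecolumn, 0) <> 0

-- Equivalent and SARGable: NULL <> 0 evaluates to UNKNOWN,
-- so NULL rows are filtered out either way.
WHERE somecolumn <> 0
```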

    There are a ton of things that are used in the joins that might be better in the WHERE clause. Yeah, I get that the outer joins sometimes need such limits in the joins themselves, but not normally.
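    A sketch of the difference, using made-up tables:

```sql
-- INNER JOIN: both placements return the same rows,
-- but the intent is usually clearer with the filter in WHERE.
SELECT o.OrderID
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
WHERE c.Region = 'EMEA';

-- OUTER JOIN: placement changes the result. In the ON clause, the filter only
-- restricts which Customers rows match; unmatched Orders still appear with NULLs.
-- Moving it to WHERE would silently turn this back into an inner join.
SELECT o.OrderID, c.Region
FROM dbo.Orders o
LEFT JOIN dbo.Customers c
  ON  c.CustomerID = o.CustomerID
  AND c.Region = 'EMEA';
```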

    The biggest problem of them all seems to be given away by the rather persistent use of DISTINCT on the SELECTs. DISTINCT is usually used to overcome either a poor database design or a lack of proper join criteria, which causes many "micro" and some "macro" occurrences of many-to-many joins. Calling a rose a rose, many-to-many joins are also known as CROSS JOINs and Cartesian products.

    The "best" way for a long term and healthy solution to this all is to examine the code and rewrite it so that it doesn't have the problems I just mentioned. If you can't actually change the design of some of the tables it uses, then a bit more "Divide'n'Conquer" to get rid of the accidental CROSS JOINs caused by the many-to-many criteria might be the ticket.
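    A tiny made-up example of the DISTINCT symptom and one "Divide'n'Conquer" fix:

```sql
-- Symptom: DISTINCT papering over a many-to-many (accidental CROSS JOIN) result.
SELECT DISTINCT o.OrderID, o.Amount
FROM dbo.Orders o
JOIN dbo.OrderNotes n ON n.OrderID = o.OrderID;   -- many notes per order

-- Fix: resolve the one-to-many side first, so no duplicates are ever produced.
SELECT o.OrderID, o.Amount
FROM dbo.Orders o
WHERE EXISTS (SELECT 1 FROM dbo.OrderNotes n WHERE n.OrderID = o.OrderID);
```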

    (edit was a spelling correction)

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • delete from fct_iie_test_Ps_report_NEW

    Why not TRUNCATE TABLE? The reduced logging might be some help.

    Maybe a permissions issue with the account? TRUNCATE does require at least ALTER permission on the table (the db_ddladmin role covers it).
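    Side by side (assuming no foreign keys reference the table, since TRUNCATE is not allowed in that case):

```sql
-- DELETE logs every removed row and scans the whole table:
DELETE FROM fct_iie_test_Ps_report_NEW;

-- TRUNCATE deallocates the pages with minimal logging, but requires at least
-- ALTER permission on the table and can't take a WHERE clause:
TRUNCATE TABLE fct_iie_test_Ps_report_NEW;
```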

  • Piling on

    😎

    Start by fixing these issues:

    LN: 27 #test_Tmp_JRNL is Missing Clustered Index

    LN: 71 #test_SAP_Tmp is Missing Clustered Index

    LN:158 Wide distinct selection and group by for a single aggregate

    LN:316 Wide distinct selection with non-sargable predicates

    LN:491 Wide distinct selection from heap temp table

    LN:664 Wide distinct selection

    LN:758 Wide distinct selection

    LN:898 Unnecessary dropping of temp tables

    BTW, you really like squeezing your tempdb, don't you? 😉 All of the issues listed above will, one way or another, put pressure on it.

  • Thanks a lot, experts. I will definitely use all this advice and try one more time.
