20GB temp file and they say "it's ok"

  • Being a junior analyst, I have usually taken the external consultancy's advice as gospel. They do know a lot!

    However, they have recently created a few new reports through stored procedures. The end result is that, given the database they report from is only 4GB in total, one report drives tempdb up to 8GB and another all the way to 24GB. I raised this as a big issue, and in response they created a package that shrinks tempdb back to normal after the report is run....

    Am I being over-dramatic (disk usage/usable space/processor time), or are they just being lazy in not investigating the root of the problem?

  • What the hell type of query are they running?

    Generating a tempdb six times the size of the database is not normal practice and must take forever to run. I would definitely recommend revisiting the query and looking at explain plans to identify the offending code.


    ------------------------------
    The Users are always right - when I'm not wrong!

  • I agree. One of our DBs is a couple of TB with around 30,000 users and heavy temp table usage; our tempdb is set to 20GB (4 × 5GB files) and, as far as I can see, has never actually got above 50% of the total.

     

  • Unfortunately the business users aren't very advanced in anything other than moaning.... So we are stuck with a report that is literally a data dump so they can put everything into Excel. We are addressing this, but that is another story!

    By explain plans you mean execution plans? Done it, and 100% of the cost is in the "Sort" icon; the estimated subtree cost is 1.5 million. There is a huge GROUP BY clause at the bottom of the query.
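For anyone following along, the estimated subtree cost can be read without actually running the offending report. A sketch of that check (table and column names are hypothetical, standing in for the real report query):

```sql
-- Show the estimated plan, including the subtree cost, WITHOUT executing.
SET SHOWPLAN_ALL ON;
GO

-- A dump-style report with a wide GROUP BY like this typically produces
-- one huge Sort operator carrying almost all of the plan's cost.
SELECT CustomerID, Region, ProductCode, OrderDate,   -- hypothetical columns
       SUM(Quantity) AS TotalQty
FROM   dbo.OrderHistory                              -- hypothetical table
GROUP  BY CustomerID, Region, ProductCode, OrderDate;
GO

SET SHOWPLAN_ALL OFF;
GO
```

The `TotalSubtreeCost` column in the output is the figure quoted above; a value in the millions usually means the sort is spilling heavily into tempdb.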

  • Taking into account your user education/situation, I'd keep tempdb at 24GB if you have the space, and forget the shrinking part as well (autogrow is a bad thing). It's not the most elegant solution, but it seems to fit based on the post.

    Regards,
    Rudy Komacsar
    Senior Database Administrator
    "Ave Caesar! - Morituri te salutamus."

  • I would definitely look at the query/stored procedure.

    At a minimum, run the query through the Index Tuning Wizard and see if it suggests some better indexes.  Also check out indexed views, as that might reduce the sorting.
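A sketch of the indexed-view idea (view, table and column names are hypothetical): the aggregate is materialized and maintained as rows change, so the report no longer has to sort and group the base table at run time.

```sql
-- SCHEMABINDING and COUNT_BIG(*) are required before an indexed view
-- with GROUP BY can be created; aggregated columns must be NOT NULL.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       SUM(Quantity) AS TotalQty,
       COUNT_BIG(*)  AS RowCnt
FROM   dbo.OrderItem
GROUP  BY CustomerID;
GO

-- The unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
    ON dbo.vOrderTotals (CustomerID);
GO
```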

    You may need to have the process create additional indexes just before it is run and drop them after, but that should be OK.  It sounds like they are doing some kind of cross product, and probably a join or two is not correct.

    One thing I have learned: if you have multi-column keys, for example an Order table keyed on CustomerID, OrderID and an OrderItem table keyed on CustomerID, OrderID, LineItem, then when you join them and specify Order.CustomerID = N AND OrderItem.CustomerID = Order.CustomerID, ALSO include OrderItem.CustomerID = N, because sometimes the query optimizer misses the fact that since A = 5 and A = B, B must equal 5.
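In query form, using the Order/OrderItem example above (the schema and the value of @N are hypothetical):

```sql
DECLARE @N int;
SET @N = 5;  -- hypothetical customer key

SELECT o.OrderID, oi.LineItem
FROM   dbo.[Order]   AS o
JOIN   dbo.OrderItem AS oi
       ON  oi.CustomerID = o.CustomerID
       AND oi.OrderID    = o.OrderID
       -- Logically redundant (implied by the WHERE clause plus the join),
       -- but it lets the optimizer seek on OrderItem directly instead of
       -- scanning it for every matching Order row.
       AND oi.CustomerID = @N
WHERE  o.CustomerID = @N;
```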

    Good luck, you were right to complain.

    Brian

  • As Brian says, predicates in the join rather than the WHERE clause, plus additional indexes (particularly a clustered index on the column being sorted), will help. The other two points I can think of are:

    1. Large nested derived tables (or views) several layers deep - it can be quicker to create a temp table containing just the primary keys, index it, and join it back to the source tables to get the 'data columns'. Grouping on 10 columns is much slower than grouping on 2.

    2. Check whether any temporary tables are used, what type they are and where they are located. Some mid-process tables could be cleared up earlier in the report process to release space.

    With regard to tempdb, I try to avoid autogrow as it's nasty. You just need to be aware that when SQL Server restarts it will recreate tempdb at 24GB, which may look like it has hung for a few minutes.
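Point 1 above can be sketched like this (hypothetical names again): do the expensive grouping on the narrow key columns only, index the result, then join back for the wide columns, and drop the temp table as soon as possible to release the space (point 2).

```sql
-- 1. Expensive grouping, but only over the narrow key columns.
SELECT CustomerID, OrderID
INTO   #Keys
FROM   dbo.OrderHistory
GROUP  BY CustomerID, OrderID;

-- 2. Index the temp table so the join back is cheap.
CREATE CLUSTERED INDEX IX_Keys ON #Keys (CustomerID, OrderID);

-- 3. Join back to the source for the 'data columns'.
SELECT k.CustomerID, k.OrderID, h.ProductCode, h.Quantity
FROM   #Keys AS k
JOIN   dbo.OrderHistory AS h
       ON  h.CustomerID = k.CustomerID
       AND h.OrderID    = k.OrderID;

-- 4. Release tempdb space as soon as the table is no longer needed.
DROP TABLE #Keys;
```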

  • I agree with jb_lbm. Two extra points:

    One little item brought up at the end: when SQL Server restarts, tempdb will be the size set in model, as SQL Server copies model to become tempdb.

    One trick for handling a runaway tempdb is to restart the server.  I do recommend setting a max size that prevents it from taking up all of the disk space.  (We have had that happen more than once with runaway queries.)
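Capping tempdb is a one-line change per file. The sizes below are illustrative, and 'tempdev' is the default logical name of the first tempdb data file - confirm the real names with `EXEC sp_helpfile` run in tempdb before altering anything:

```sql
-- Fixed starting size plus a hard ceiling: a runaway query can no
-- longer fill the disk, it just fails when tempdb hits MAXSIZE.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev,
             SIZE = 8192MB,       -- size tempdb takes on every restart
             FILEGROWTH = 1024MB,
             MAXSIZE = 24576MB);  -- 24GB ceiling
```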

    Brian

  • Sorry for the misleading info about tempdb. If you remove autogrow on tempdb, it will be created according to the model db, as Brian says. I increased the model db on my server to 3GB and that makes a noticeable delay in startup time; 24GB would be very slow, and I wouldn't recommend altering the model db to 24GB.

  • The 'noticeable' delay on startup is probably hardware (disk) related. I've got 25 SQL Servers with tempdbs ranging in size from 1GB to 16GB. All tempdbs are set with a primary allocation, a growth factor of 50% of the primary allocation, and a growth limit of one time only (we do a lot of monitoring to fine-tune this).  At present the startup time difference is negligible (5-15 seconds).

    Regards,
    Rudy Komacsar
    Senior Database Administrator
    "Ave Caesar! - Morituri te salutamus."

