TempDB

  • Hello.
    I have a question about TempDB file size and autogrowth.
    I read that it's good to initialize the files so that their combined size is close to 90% of the disk space, with autogrowth disabled.
    I always thought the good practice was to initialize the files to a size close to 50% of the disk space, with autogrowth per file set to 25% of the file size.
    Which is correct?
    Thanks for all.

  • msimone - Monday, June 25, 2018 8:36 AM

    Hello.
    I have a question about TempDB file size and autogrowth.
    I read that it's good to initialize the files so that their combined size is close to 90% of the disk space, with autogrowth disabled.
    I always thought the good practice was to initialize the files to a size close to 50% of the disk space, with autogrowth per file set to 25% of the file size.
    Which is correct?
    Thanks for all.

    Sizing it so that it doesn't need to grow is the best thing to do, although it's sometimes hard to know how much it will need. I wouldn't recommend setting autogrowth as a percentage; use a fixed figure instead, e.g. 512MB.
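
    Just to illustrate (the logical name 'tempdev' is the SQL Server default for the first tempdb data file; check sys.master_files for the actual names on your instance), switching from percentage growth to a fixed 512MB increment is a one-liner per file:

    -- List tempdb files with their current size and growth settings.
    -- is_percent_growth = 1 means the file grows by a percentage.
    SELECT name, size * 8 / 1024 AS size_mb, growth, is_percent_growth
    FROM sys.master_files
    WHERE database_id = DB_ID('tempdb');

    -- Change the growth increment to a fixed 512MB.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILEGROWTH = 512MB);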

    Thanks

  • msimone - Monday, June 25, 2018 8:36 AM

    Hello.
    I have a question about TempDB file size and autogrowth.
    I read that it's good to initialize the files so that their combined size is close to 90% of the disk space, with autogrowth disabled.
    I always thought the good practice was to initialize the files to a size close to 50% of the disk space, with autogrowth per file set to 25% of the file size.
    Which is correct?
    Thanks for all.

    I don't think anyone can guess the size the files need to be initialized to - it really depends on how tempdb is used, the applications using the instance, etc.
    The autogrowth on-or-off debate... I think that comes down more to preference, how you want to handle sizing, and how you keep tempdb from growing out of control. There have been debates on both sides of that one for a long time, and both sides have their merits.
    In either case, if you size it so it's not growing all the time and then set an alert on space usage or percent free, you will get forewarning of space issues.
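
    For what it's worth, a minimal sketch of the kind of check such an alert could be built on (the 10% threshold is only an example; the DMV reports sizes in 8KB pages):

    -- Total vs. free space across all tempdb data files.
    SELECT SUM(total_page_count) * 8 / 1024 AS total_mb,
           SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb
    FROM tempdb.sys.dm_db_file_space_usage;
    -- An Agent job could raise an alert when free_mb drops below,
    -- say, 10% of total_mb.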

    Sue

  • NorthernSoul, Sue_H, thanks for your answers.
    So I suppose the best option is to monitor the engine, the applications, etc. to determine the best size for tempdb, and then to decide whether to use autogrowth in MB or no autogrowth at all.
    Hard work.

  • Just to add my 2 cents...

    If you need a relatively large TEMPDB (regardless of the number of files it may be configured as), you may have a serious problem with the queries being executed on the box.  Accommodating such inefficiencies simply by sizing TEMPDB to meet their requirements is one of the worst things you can do because you're simply putting a band-aid on a stab wound.

    To put things in perspective, I have eight 2GB files and one 2GB log file for TEMPDB on my primary production box, which contains multiple databases in excess of 1TB each, handles OLTP and some pretty large and frequent batch-file ETL processes, and the files don't grow.  In high contrast, I've seen 250GB databases that "need" more than 100GB of TEMPDB space, and it has always been due to really poor code that usually has a fairly easy fix, "Divide'n'Conquer" being the most frequent one.

    The server I'm talking about does have 100GB of drive space reserved just for TempDB, but I never allocate more than the 16GB I previously described.  If something needs more than that, then it represents a problem that needs to be fixed, but I don't want the process to fail early... I want time to troubleshoot what it is, and allowing it to grow after a warning appears usually gives me the time to do that and to decide whether or not I'm going to kill the process.
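
    For anyone who wants to see what a fixed-size layout like that looks like, here's a rough sketch (the 'T:' path and the tempdev2...tempdev8 names are assumptions; tempdev/templog are the default logical names):

    -- Resize the two default tempdb files to 2GB each.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2048MB, FILEGROWTH = 512MB);

    -- Add a second 2GB data file; repeat for tempdev3 through tempdev8
    -- to get the eight equally sized files described above.
    ALTER DATABASE tempdb ADD FILE
        (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
         SIZE = 2048MB, FILEGROWTH = 512MB);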

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff, thank you very much.

  • msimone - Wednesday, June 27, 2018 8:25 AM

    Jeff, thank you very much.

    You're welcome.  Thank you for the feedback.

    Sue is also correct in that you frequently can't estimate what will be needed, especially initially.  My rule of thumb for systems that will potentially have 1 or 2TB databases is to allocate a dedicated "drive letter" with an allocation of 100GB and then to be rather stingy with the actual size of the TempDB files, but that's based only on my personal experience with the types of systems I've had to work with.  It's quite a bit different from what some people claim to be a "best practice".  She's also correct in suggesting that you leave autogrowth for TempDB enabled.

    Shifting gears a bit, I also NEVER use a percentage for growth for any database and, especially, not for TempDB and log files.

    On the subject of log file growth, I consider the following to be some of the definitive articles on the subject.  Some are older and some parts don't necessarily apply to SSDs, but most of the information is still spot on.

    https://www.sqlskills.com/blogs/paul/important-change-vlf-creation-algorithm-sql-server-2014/
    https://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/
    https://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/
    https://sqlserverperformance.wordpress.com/2010/06/22/detecting-and-reducing-vlfs-in-sql-server-20082008-r2-transaction-log-files/

    I'll also add a bit of personal experience to that and suggest that the initial size of a log file should always be about 100MB; you can then manually grow it to the desired size from there.  The reason is the eventuality of needing to shrink the log file after some crazy "accident" in code, or when I need to change the VLF pattern because of how a database is used over time.  Having that first "fragment" be 100MB has made it easier for me to shrink to almost zero and then rebuild the log file with the correct settings.

    I proved this to myself again just last night.  I was trying to increase the performance of a set of table transfers to an archive database that actually needs no log file because it will become READ_ONLY.  I preset the log file on the new database to 20GB (about the size I expected for the transfer; I couldn't use minimal logging on INSERT/SELECT because it's an old 2005 box, and I could not use SELECT/INTO for reasons too long to go into here).  The performance didn't actually improve and, when I went to shrink the log file, it wouldn't go any smaller than 2.3GB no matter what I tried (the DB is in the SIMPLE recovery model, and I did the drop-clean-buffers thing, did a CHECKPOINT, etc.).

    Having the first log file fragment be 100MB solved that problem for me.  I don't know if it's necessary on any of the later versions, but it's a habit I developed; I simply got lazy last night, and that reaffirmed the merits of the 100MB initial fragment, at least to myself.
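
    In case it helps, a sketch of that shrink-and-rebuild cycle (database and logical log file names are placeholders; DBCC LOGINFO shows the VLF layout on older versions, sys.dm_db_log_info on 2016 SP2 and later):

    USE MyDatabase;  -- placeholder name

    -- Shrink the log as far as it will go; a small 100MB initial
    -- fragment is what lets this get close to zero.
    CHECKPOINT;
    DBCC SHRINKFILE (MyDatabase_log, 100);  -- target size in MB

    -- Regrow in fixed steps to lay down a sane VLF pattern.
    ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB);
    ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 16000MB);

    -- Inspect the resulting VLFs.
    DBCC LOGINFO;  -- or, on 2016 SP2+: SELECT * FROM sys.dm_db_log_info(DB_ID());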

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
