TempDB question: help and best practices to implement

  • Good day Forum,

    Have a question on TEMPDB config/settings and an issue currently.

    We are using SQL Server 2016, with TempDB on T:\ (disk size 199 GB).

    Settings: the MDF has an initial size of 1024 MB, growth of 100 MB, and a max size of 140 GB (it was initially set to unlimited, but it filled the disk in a matter of minutes). The LDF has an initial size of 1024 MB, growth of 100 MB, and a max size of 10 GB.

    Our developers are running a huge SELECT query pulling millions of records for a data warehouse, and when we examined the query, the execution plan showed no issues. In this situation, what is the best way to resolve the problem? I can restart the service, but I would like to know what other options are available. I know their code might be messed up, but is there anything we can do on the TempDB side, and are the initial size and growth settings fine? What best practices can I implement here? The query runs for 45 minutes and then fails with:

    "Could not allocate a new page for database 'TEMPDB' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup"

    I have only one TempDB data file. Thanks in advance.
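
    For reference, this is how I've been checking the current file layout, and the rough kind of change I was wondering about. The added file's logical name and path below are only placeholders, not something we have actually run:

    -- Current TempDB file layout. Sizes are stored in 8 KB pages, converted to MB here;
    -- the growth figure assumes MB-based growth rather than percent growth.
    SELECT name,
           physical_name,
           size * 8 / 1024 AS size_mb,
           CASE max_size WHEN -1 THEN NULL ELSE max_size * 8 / 1024 END AS max_size_mb,
           growth * 8 / 1024 AS growth_mb
    FROM tempdb.sys.database_files;

    -- Placeholder example of adding a second, equally sized data file on T:\
    -- (logical name and path are made up; sizes would have to fit the 199 GB disk).
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
                  SIZE = 8192MB, FILEGROWTH = 1024MB);

    Would spreading TempDB across several equally sized data files like that even help here, or is this purely a code problem?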


  • There was a post not long ago regarding the same issue.

    https://www.sqlservercentral.com/forums/topic/tempdb-full-14#post-3743539

    It's not the most accurate post regarding TempDB issues, but it might help a little.

    Check it out 🙂

    I'm not a data warehouse guy (I'm a newbie in this area), but I think a 200 GB TempDB is not that big for a data warehouse.

    The query might be running fine and the execution plan might be fine; it just means you have to take these queries apart and try to divide them into smaller sets unless you can get more TempDB space.
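
    If you can't get more space, it also helps to see which sessions are actually chewing through TempDB while that load runs. Something along these lines should show it (standard DMVs; page counts are in 8 KB pages, converted to MB here):

    -- TempDB allocation per session, in MB. "Internal" objects cover sorts, hashes and
    -- work tables; "user" objects cover temp tables and table variables.
    -- Run this while the big query is executing.
    SELECT s.session_id,
           s.login_name,
           u.internal_objects_alloc_page_count * 8 / 1024 AS internal_alloc_mb,
           u.user_objects_alloc_page_count * 8 / 1024 AS user_alloc_mb
    FROM sys.dm_db_session_space_usage AS u
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = u.session_id
    ORDER BY u.internal_objects_alloc_page_count DESC;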

    regards,

  • Have a look at this as well: https://www.sqlserverblogforum.com/dba/tempdb-database-is-full/

    Muthukkumaran Kaliyamoorthy
    https://www.sqlserverblogforum.com/

  • The post you shared answers a lot of questions, thanks man!!

    Sadly, we can't expand the disks, so I was checking for any other ideas.

  • Thanks for sharing

  • JSB_89 wrote:

    Our developers are running a huge SELECT query pulling millions of records for a data warehouse, and when we examined the query, the execution plan showed no issues.

    Heh... you've correctly identified the source of the problem and, yet, you still want to blame TempDB. 🙁

    You also say you've found no issues in the execution plan, but are you sure?  Do you see any "spools", or any arrows that are bigger than the source tables themselves?

    "Pulling millions of records for a datawarehouse" isn't a problem in itself... it's HOW that's being done in CODE that's the issue.

    Stop focusing on TempDB... it's a symptom, not a cause.  The problem is with the code...  you need to do a deeper dive there.
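
    On the spool question: if you want a rough way to check what's already in the plan cache for spools, something like this will fish them out. It scans the whole cache, so it can be slow on a busy box:

    -- Rough sketch: find cached plans that contain any flavour of Spool operator.
    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT TOP (20)
           cp.usecounts,
           st.text AS query_text,
           qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE qp.query_plan.exist('//RelOp[contains(@PhysicalOp, "Spool")]') = 1
    ORDER BY cp.usecounts DESC;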

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • @Jeff

    Developers think it's always the DBA's fault because the query is running slow. Initially they were pulling data into temp tables, which we changed to physical tables that we drop when no longer needed, but this was a totally different situation. And we cannot keep expanding the disks, as we have a cap on the storage we can use.

    Agreed, we need to dig deeper into what is going on with their code. Thanks a lot once again, Jeff, for the different perspective on tackling problems.

  • You really need to look at the code. It could be, for example, that large temporary tables are being created for the processing. You could get around that by using permanent tables in the data warehouse instead and just truncating them before use (a rough sketch of that pattern is below). But it could be anything; I can't see how you can even think about fixing it without looking at the code.
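
    To illustrate the permanent-table idea: instead of dumping everything into a big #temp table, something roughly like this keeps the intermediate rows in the warehouse database itself. All object and column names here are made up; the point is only the truncate-and-reload pattern:

    -- Hypothetical permanent staging table, created once in the data warehouse database.
    IF OBJECT_ID('dbo.Staging_Extract', 'U') IS NULL
        CREATE TABLE dbo.Staging_Extract
        (
            SourceKey bigint NOT NULL,
            LoadDate  datetime2 NOT NULL,
            Payload   varchar(200) NULL
        );

    -- Each run: empty it instead of creating and dropping a temp table in TempDB...
    TRUNCATE TABLE dbo.Staging_Extract;

    -- ...then load it with the big extract (placeholder source table and columns).
    INSERT INTO dbo.Staging_Extract (SourceKey, LoadDate, Payload)
    SELECT src.SourceKey, src.LoadDate, src.Payload
    FROM dbo.SomeLargeSource AS src;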

  • JSB_89 wrote:

    @Jeff

    Developers think it's always the DBA's fault because the query is running slow. Initially they were pulling data into temp tables, which we changed to physical tables that we drop when no longer needed, but this was a totally different situation. And we cannot keep expanding the disks, as we have a cap on the storage we can use.

    Agreed, we need to dig deeper into what is going on with their code. Thanks a lot once again, Jeff, for the different perspective on tackling problems.

    A lot of developers blame the DBA and/or the server, etc, etc, ad infinitum.  Let that roll off you.  😀

    You say that you looked at the execution plan.  If it's the ACTUAL execution plan, right-click on it, save it as a file, and attach the file to this thread.  There's a lot we can tell from an Actual Execution Plan.

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
