Huge tempdb log file

  • Thanks for the question and the comment thread.

    Does anyone know why it is so difficult (it seems) to know exactly which queries or SPIDs are the culprits for these cases of huge increases in tempdb log space being used?

    I know there are many variations of checking for likely causes, such as the diagnostics suggested on the StackExchange page here:

    http://dba.stackexchange.com/questions/19870/how-to-identify-which-query-is-filling-up-the-tempdb-transaction-log

    One caveat noted there: you wouldn't necessarily be able to determine the actual query that caused the problem, since if it's not running now it won't be captured in a query for active requests. You may be able to reactively check the most recent query using DBCC INPUTBUFFER, but it may not tell you what you want to hear. You can also outer join to the active requests to capture queries that are still running.
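    The kind of "outer join to active requests" query alluded to there might look like this - just a sketch using standard DMVs, with the tempdb filter and ordering as illustrative choices:

    ```sql
    -- Sessions with tempdb transaction log usage, outer-joined to any
    -- currently running request so its SQL text shows up if it is
    -- still executing.
    SELECT
        s.session_id,
        tdt.database_transaction_log_bytes_used,
        r.status,
        t.text AS running_sql
    FROM sys.dm_tran_database_transactions AS tdt
    JOIN sys.dm_tran_session_transactions AS st
        ON st.transaction_id = tdt.transaction_id
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = st.session_id
    LEFT JOIN sys.dm_exec_requests AS r
        ON r.session_id = s.session_id
    OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE tdt.database_id = 2  -- tempdb
    ORDER BY tdt.database_transaction_log_bytes_used DESC;
    ```

    Sessions with no active request come back with NULL status and NULL SQL text, which is exactly the "no longer running" case the caveat is about.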

    Most of the caveats that Aaron Bertrand provides in his comments seem to have to do with trying to capture details for a query that may no longer be running. But is there any way to capture such a query as its tempdb log space usage crosses certain thresholds (such as 50% of the available tempdb log space) - so at least that info could be logged somewhere before the query stops, log fills up, or the SQL service needs to be restarted (heaven forbid)?
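    For illustration, a threshold-triggered capture along those lines could be a polled check. This is only a sketch: the 50% threshold is arbitrary, dbo.TempdbLogOffenders is a hypothetical table you would create first, and sys.dm_exec_input_buffer requires a reasonably recent build (older versions would need DBCC INPUTBUFFER instead):

    ```sql
    -- Poll tempdb log usage; when it crosses the threshold, snapshot the
    -- sessions holding tempdb log space along with their input buffers.
    DECLARE @pct_used decimal(5, 2);

    SELECT @pct_used = used_log_space_in_percent
    FROM tempdb.sys.dm_db_log_space_usage;

    IF @pct_used > 50.0
        INSERT INTO dbo.TempdbLogOffenders
            (capture_time, session_id, log_bytes_used, last_sql)
        SELECT SYSDATETIME(),
               st.session_id,
               tdt.database_transaction_log_bytes_used,
               ib.event_info
        FROM sys.dm_tran_database_transactions AS tdt
        JOIN sys.dm_tran_session_transactions AS st
            ON st.transaction_id = tdt.transaction_id
        CROSS APPLY sys.dm_exec_input_buffer(st.session_id, NULL) AS ib
        WHERE tdt.database_id = 2;  -- tempdb
    ```

    Scheduled as an Agent job every minute or so, something like this would at least leave a record of the offending session before the log fills.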

    The reason I am very interested in this is that in the recent past I faced a similar issue with tempdb log space getting used up - apparently by a runaway query - but I was never able to find the root cause. So I am constantly apprehensive that it could happen again because I don't know what caused it.

    Thanks for any help.

    - webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • webrunner (11/1/2016)


    But is there any way to capture such a query as its tempdb log space usage crosses certain thresholds (such as 50% of the available tempdb log space) - so at least that info could be logged somewhere before the query stops, log fills up, or the SQL service needs to be restarted (heaven forbid)?

    The reason I am very interested in this is that in the recent past I faced a similar issue with tempdb log space getting used up - apparently by a runaway query - but I was never able to find the root cause. So I am constantly apprehensive that it could happen again because I don't know what caused it.

    Thanks for any help.

    - webrunner

    Actually, there is a built-in threshold - 70%.

    Tempdb is in simple recovery mode, so when its log file reaches 70% full an automatic checkpoint fires and the inactive VLFs are immediately truncated.

    UNLESS they are still pinned by an unfinished transaction.

    Look for the processes with open transactions - that's where you'll find your offender(s).
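    To illustrate, finding those open transactions could be as simple as DBCC OPENTRAN ('tempdb'), or a DMV query along these lines (a sketch; DMV and column names per SQL Server's documented views):

    ```sql
    -- Open transactions in tempdb, with how much log each one is pinning.
    SELECT
        st.session_id,
        at.transaction_begin_time,
        tdt.database_transaction_log_bytes_reserved,
        tdt.database_transaction_log_bytes_used
    FROM sys.dm_tran_database_transactions AS tdt
    JOIN sys.dm_tran_session_transactions AS st
        ON st.transaction_id = tdt.transaction_id
    JOIN sys.dm_tran_active_transactions AS at
        ON at.transaction_id = tdt.transaction_id
    WHERE tdt.database_id = 2  -- tempdb
    ORDER BY tdt.database_transaction_log_bytes_used DESC;
    ```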

    _____________
    Code for TallyGenerator

  • Sergiy (11/1/2016)


    webrunner (11/1/2016)


    But is there any way to capture such a query as its tempdb log space usage crosses certain thresholds (such as 50% of the available tempdb log space) - so at least that info could be logged somewhere before the query stops, log fills up, or the SQL service needs to be restarted (heaven forbid)?

    The reason I am very interested in this is that in the recent past I faced a similar issue with tempdb log space getting used up - apparently by a runaway query - but I was never able to find the root cause. So I am constantly apprehensive that it could happen again because I don't know what caused it.

    Thanks for any help.

    - webrunner

    Actually, there is a built-in threshold - 70%.

    Tempdb is in simple recovery mode, so when its log file reaches 70% full an automatic checkpoint fires and the inactive VLFs are immediately truncated.

    UNLESS they are still pinned by an unfinished transaction.

    Look for the processes with open transactions - that's where you'll find your offender(s).

    This is great! Thank you so much for that information.

    - webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

    To shrink it properly, so that it does not grow out of control: create a full backup of tempdb, then create a transaction log backup, say every 30 minutes, with truncate. This will clear tempdb and truncate the log file. The last step is to delete the tempdb.bak and the *.trn files created for tempdb; run a job to perform the cleanup daily.

    I would also set your tempdb to a reasonable MAX size of, say, 10GB with a growth of 256MB, not a percentage.

    Or gain more performance and set up 4 tempdb files of equal size, 2GB each. The rule of thumb is one for each core in the CPU.

  • Timmorehead 14652 (11/2/2016)


    To shrink it properly, so that it does not grow out of control: create a full backup of tempdb, then create a transaction log backup, say every 30 minutes, with truncate. This will clear tempdb and truncate the log file. The last step is to delete the tempdb.bak and the *.trn files created for tempdb; run a job to perform the cleanup daily.

    I would also set your tempdb to a reasonable MAX size of, say, 10GB with a growth of 256MB, not a percentage.

    Or gain more performance and set up 4 tempdb files of equal size, 2GB each. The rule of thumb is one for each core in the CPU.

    It'll be real interesting to see how you take a backup of TempDB... especially the log file. 😉 Please post your tested code. Thanks.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Timmorehead 14652 (11/2/2016)


    To shrink it properly, so that it does not grow out of control. Create a Full backup of temp, then create a transaction backup say every 30 minutes with truncate. This will clear the TempDB, Truncate the Log file. The last step is to delete the tempDB.bak and *.trn's created for Tempdb, run the job to perform the cleanup daily.

    I would also set your TempDB to a reasonable MAX size of say 10GB with a growth of 256MB not percentage.

    Or gain more performance and setup 4 TempBD's of equal size 2GB each. rule of thumb is on for each core in the CPU.

    Jeff addressed the backup tempdb issues. I will take on a few others.

    One file per core is bad advice too. The majority of systems out there can get by with just one file without any issues whatsoever. One file per core (even physical core, which should be pointed out) gets you WAY WAY too many files which induces RANDOM IO for most/all tempdb hits and likely for all other hits on the same spindles due to head thrashing. Too many files on too few spindles probably ranks in the top 20 worst things I see in my 20+ years of SQL Server performance tuning as a consultant.

    I believe the complete "official" advice is now 1 file per 4 physical cores to a limit of 8. I do NOT do that with my clients until or unless I KNOW they have either PFS/SGAM allocation issues or IO capabilities where multiple files won't make things slower.

    I also don't understand the recommendation of a 10GB single file or 4 x 2GB files with a growth of 256MB. The OP said the tempdb data size is currently 80GB. Your sizes are thus way too small, as is a 256MB growth increment for an 80GB tempdb.
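    For what it's worth, the explicit-size-plus-fixed-growth idea itself is sound; applied to the OP's situation it might look like this (a sketch - tempdev and templog are the default logical file names, and the sizes are illustrative, not a recommendation):

    ```sql
    -- Fixed GB growth instead of percentage growth, with files pre-grown
    -- closer to the observed working size so autogrowth rarely fires.
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, SIZE = 80GB, FILEGROWTH = 1GB);
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, SIZE = 20GB, FILEGROWTH = 1GB);
    ```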

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

Viewing 6 posts - 16 through 20 (of 20 total)
