Reduce size of tempdb mdf files

  • caz100

    SSCommitted

    Points: 1601

    Hi

In order to improve performance on a SQL Server instance, we have been instructed to increase the size of some tempdb data files, reduce the size of others, and add some more files. In total we should have 6 mdf files, all the same size: 1600 MB.

So I have added the 2 extra mdf files at 1600 MB and increased 2 existing files to 1600 MB, but I cannot seem to shrink the two existing files that are quite large back down to 1600 MB.

I was under the impression this would be possible after a server restart, but the server was restarted recently as part of Windows patching and the files stayed the same size, despite my running a shrink command on them before the restart.

This is a production system, so getting access to restart it is very difficult. I'm looking for advice on how to resize the larger tempdb files so all will be the same size.
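(For reference, the standard resize/shrink approach is roughly the following sketch. The logical file names `temp2` and `temp3` are placeholders; the actual names come from tempdb.sys.database_files.)

```sql
USE tempdb;
GO
-- List the current logical names and sizes (size is in 8 KB pages)
SELECT name, size * 8 / 1024 AS size_mb, physical_name
FROM tempdb.sys.database_files;
GO
-- Attempt to shrink an oversized data file back to 1600 MB
-- ('temp2' is a placeholder logical name)
DBCC SHRINKFILE (N'temp2', 1600);
GO
-- Files that only need to grow can be altered directly
-- ('temp3' is a placeholder logical name)
ALTER DATABASE tempdb
MODIFY FILE (NAME = N'temp3', SIZE = 1600MB);
GO
```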

     

    thanks in advance


  • Jeff Moden

    SSC Guru

    Points: 994239

    To be totally honest, this will likely be a futile exercise.

    First, unless you can guarantee that the MDF files will be located on separate physical disks, there will be little to be had in the area of performance gains, if any at all.

    Second, even if such a thing IS guaranteed, if there's no planning as to how the data is to be split across the separate MDF files according to how the data is used, there may still be little or nothing to gain.

    Third, if you bring other physical disks into play and SQL Server isn't the only thing using those disks, you could actually end up slowing SQL Server down, A LOT!

Last but not least, if you have SSDs, it's not going to matter so much.  It'll turn out that most of what you've done is just add complexity.

    There are reasons, such as "Online Piecemeal Restores", to have multiple MDF files but performance usually isn't one of them.

    But, don't listen to me... in fact, don't listen to anyone on this subject.  Instead, do a test because "One good test result is worth a thousand expert opinions".  You might end up saving yourself a whole lot of work that has little or no ROI.  If it DOES result in some sort of wild increase in performance, you should write an article about what you did so others can test it and then, maybe, implement what you've done.

Shifting gears a bit, something that you SHOULD listen to is my favorite bit of advice.  It's my mantra, in fact.  "Performance is in the code... or not".  If you're having performance issues, you should determine which code is actually causing the problem and fix it.  It takes more time than the sometimes silly hardware and file "fixes" that people propose, but it's usually MUCH more effective and has a MUCH higher ROI.  Fixing your top 10 code problems will typically cause your server to operate much more efficiently.

    And, remember... it's not usually your longest running queries that are the performance issue.  It's usually the cumulative effect of queries that run tens of thousands of times per hour.

An example of the above: we had a query that ran in "only" 100 ms (the "only" is in quotes because, for what it did, I thought that was horrible).  The problem was that it would recompile each and every time it ran, and it did run tens of thousands of times per hour.  The recompile time varied from 2 seconds to 22 SECONDS (that's NOT a typo... SECONDS!!!) EVERY TIME IT RAN!

    When we fixed the query so that it wouldn't recompile every time it was executed, it also started running in just several milliseconds.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
    "If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
    "Change is inevitable... change for the better is not."
    When you put the right degree of spin on it, the number 3|8 is also a glyph that describes the nature of a DBAs job. 😉

    Helpful Links:
    How to post code problems
    Create a Tally Function (fnTally)

  • caz100

    SSCommitted

    Points: 1601

Yes, agreed, it is most likely a futile exercise, but it's still a requirement from the vendor, so the question is how to do it, not why.

  • rVadim

    Hall of Fame

    Points: 3890

Did you try running DBCC FREEPROCCACHE before shrinking?

    Also, check this: https://sqlsunday.com/2013/08/11/shrinking-tempdb-without-restarting-sql-server/
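Something along these lines; clearing the plan cache first can release cached objects pinned in tempdb that otherwise block the shrink. (The logical file name `temp2` below is a placeholder.)

```sql
USE tempdb;
GO
-- Clearing the plan cache can release cached objects holding tempdb pages.
-- Note: this forces plan recompiles server-wide, so time it carefully on production.
DBCC FREEPROCCACHE;
GO
-- Then retry the shrink ('temp2' is a placeholder logical name)
DBCC SHRINKFILE (N'temp2', 1600);
GO
```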

    --Vadim R.

  • caz100

    SSCommitted

    Points: 1601

CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO

Is that something that can be done on a production system? My understanding was that it could be a bit risky.

  • Leo.Miller

    SSChampion

    Points: 12821

    Not sure if people actually read the question.

Best practice for TEMPDB is still to have multiple files. The general rule is one file per physical CPU up to 8 (not vCPU in hyper-threaded environments). This is because of the way SQL Server manages TempDB (it "helps you avoid the GAM/SGAM/PFS page contention issues") and has nothing to do with disk IO. See Brent Ozar's article here: https://www.brentozar.com/archive/2016/01/cheat-sheet-how-to-configure-tempdb-for-microsoft-sql-server/

The reason you can't shrink the old files is that you can't shrink ANY database file to less than its "initial create size". I feel this is an issue in SQL Server and have always wondered why Microsoft has never resolved it. You would need to drop and recreate the old files, but you won't be able to drop the first MDF (tempdev) as this contains the system tables.

    Without knowing the size of the old files, my recommendations would be a bit general, and you will need to adjust them for your situation as practical.

    • Recommended best practice is to have one data file per physical CPU Core up to 8, and these should be the same size.
• Depending on your version of SQL Server, trace flags 1117 & 1118 should be used. Their behaviour is the default in SQL Server 2016 and above (look them up to see what they do).
• I can't see any reason for the files to be as small as 1600 MB if there is already a file larger than this. I would use the size of tempdev to determine the initial size of all the files, if possible. If not, you may need to live with tempdev being larger than the rest, or plan a rebuild or migration.
    • Test to determine the number of files that is best for your environment. I have a client that on a 16 core server found 7 TempDB files gives them the best performance.
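A rough sketch of the drop-and-recreate route for the old files (a file can only be dropped once it is empty; EMPTYFILE moves its contents to the other files first, and may fail with an error if internal objects still live there, in which case retry when activity dies down). The logical name `temp_old` and the path are placeholders.

```sql
USE tempdb;
GO
-- Move any allocated pages out of the old file
-- ('temp_old' is a placeholder logical name)
DBCC SHRINKFILE (N'temp_old', EMPTYFILE);
GO
-- Drop the now-empty file (not possible for tempdev, the primary file)
ALTER DATABASE tempdb REMOVE FILE temp_old;
GO
-- Recreate it at the target size so all files match
-- (the FILENAME path below is a placeholder)
ALTER DATABASE tempdb
ADD FILE (NAME = N'temp_old',
          FILENAME = N'T:\tempdb\temp_old.ndf',
          SIZE = 1600MB);
GO
```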

    I hope this gives you something you can work with.

     

    Cheers

    Leo

    Nothing in life is ever so complicated that with a little work it can't be made more complicated.


  • Jeff Moden

    SSC Guru

    Points: 994239

    Not entirely true, Leo... I've shrunk the files for TempDB just like any other file (although the last time I did so was way back in SS 2012).  Contrary to popular previous beliefs, you don't need to reboot the server to do so and it isn't a sin to do so.

Why would you want to shrink TempDB, especially if it's on its own drive and nothing else can be allocated there?  The answer is simple... to catch bad code that inappropriately uses way too much of TempDB.  You set up a "growth" alert and have it capture what's currently going on in the server.  After that, it's pretty easy to isolate the bad code.
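To capture "what's currently going on" when such an alert fires, a query along these lines against the tempdb space-usage DMVs usually identifies the offending sessions (this is a sketch of the general technique, not the exact alert response Jeff used):

```sql
-- Sessions currently holding the most tempdb space, with the running statement
-- (page counts are 8 KB pages, converted to MB)
SELECT s.session_id,
       t.text AS running_sql,
       u.internal_objects_alloc_page_count * 8 / 1024 AS internal_mb,
       u.user_objects_alloc_page_count * 8 / 1024 AS user_mb
FROM sys.dm_db_session_space_usage AS u
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = u.session_id
LEFT JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE u.internal_objects_alloc_page_count
      + u.user_objects_alloc_page_count > 0
ORDER BY (u.internal_objects_alloc_page_count
          + u.user_objects_alloc_page_count) DESC;
```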

And TF 1117 isn't just the default for TempDB... you can't actually turn it off, even when you need to do so temporarily.  Why would you need to?  If you want to copy a large amount of data with IDENTITY INSERT turned on, ALL of the data goes through TempDB and only to a single file.  But since all the files grow at the same time, you may not actually have enough space allocated to the drive TempDB lives on to handle the growth of all 8 files instead of just the one.  I know this is true for two reasons... there is a fault listed on the new "connect" site and <drumroll please> it happened to me.  It destroyed the final action of a massive table migration and totally negated all of the work done for the migration for that weekend release.

    Microsoft has to stop protecting idiots by doing things like a permanent TF 1117 on TempDB.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
    "If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
    "Change is inevitable... change for the better is not."
    When you put the right degree of spin on it, the number 3|8 is also a glyph that describes the nature of a DBAs job. 😉

    Helpful Links:
    How to post code problems
    Create a Tally Function (fnTally)

  • Sue_H

    SSC Guru

    Points: 89970

    Jeff Moden wrote:

    Not entirely true, Leo... I've shrunk the files for TempDB just like any other file (although the last time I did so was way back in SS 2012).  Contrary to popular previous beliefs, you don't need to reboot the server to do so and it isn't a sin to do so.

And if you do, you can shrink files to less than the "initial size", since there really isn't such a thing maintained as "initial size" - it's just a label used (incorrectly) in SSMS. The documentation correctly mentions minimum size, how it's reset, and how files can be shrunk to less than their size at creation.

    Sue
