
Adding an extra file to TEMPDB
Posted Thursday, February 28, 2013 12:42 PM
Robert Davis (2/28/2013)
Exactly. If the drive space is there, why not allocate it? I'm not going to use it for anything else.

The reason for allocating multiple files is to avoid tempdb contention, and overloading the drive with IO is rarely a concern. On those rare occasions where IO becomes a problem on the drive, I will separate the files to multiple drives. Simply writing the same amount of IO to more files does not greatly increase the amount of IO going to the drive. There is some increase because more files affect the size of the IO block you can write. If writing to one file, it will write a larger block (when it's writing enough to need to write the larger block) than when writing to multiple files. That IO block size decreases a little each time you add another file, but it flattens out at 8 files. So the IO block size will be the same for 8 data files as it is for 16 files or 24 files or more.

The SQLCat team measured this in benchmarking tests (specifically driving a workload that would take advantage of large writes):

1 file = 256 KB max IO block size
2 files = 196 KB max IO block size
4 files = 132 KB max IO block size
8 files = 64 KB max IO block size
16 files = 64 KB max IO block size
32 files = 64 KB max IO block size



That IO block size decreases a little each time you add another file


Yes, that's the problem. You're scattering (small) I/Os all over the drive, causing lots of drive head movement, which is brutal on performance.

Allocating "one drive" is very nebulous. Does this mean 2TB? 3TB? 5TB? No matter what SQL actually needs. How do you know ahead of time for a new instance what a massively-oversize, will absolutely never fill up drive size is? [Btw, how do you get the storage people to let you waste TBs of unused disk space on half-filled "tempdb drives"?]


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1425257
Posted Thursday, February 28, 2013 12:48 PM
Int'l Paper also was buying lots of companies for a while, some of them other paper companies.

During a merger, there are legal restrictions on what can and can't be viewed, in stages of nn-days before the official merger date (and other legal milestone dates as well).

Sometimes we kept old support personnel from the original company, other times not.

For one acquisition, one day I simply had 12 more prod SQL instances to support, which I had not been logged into before. Those kinds of situations force you to develop more flexibility in how you handle certain things.

I guess I should wish I had the luxury of the ultra-controlled environments some people have, but it's a mixed blessing/curse. I know several people at FedEx and they are in NO-ERROR, PERIOD production environments. But it takes them 3 days to change a title on a screen because of all the code and app checks everything must go through.




SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1425260
Posted Monday, March 4, 2013 9:27 AM
GilaMonster (2/28/2013)
stehoban (2/27/2013)
Regarding point number 3 above. How do I reduce the size from 1.5GB to 800MB or 1GB? Would this be by using the ALTER DATABASE command with the SHRINKFILE statement?


Not shrink. Alter database and set the initial size of the original TempDB file. That size takes effect on a restart of SQL, resulting in 2 files of the defined size.

Regarding point number 4 above. Is a reboot necessary - could i just restart the sql server agent ?


No reboot necessary. Not SQL agent though, restart SQL Server itself



For SQL 2008, that works great.

For SQL 2005, which is what this forum section covers, you can't do that.

I need the changes to automatically take effect on the next start of SQL 2005. I can't be available to do multiple stops and starts of SQL, and many (paper) mills wouldn't allow that to occur anyway. Some equipment is multi-multi-million dollar equipment and never stops except for scheduled machine maintenance. It's nice when they can afford the time to reboot/restart SQL even once while up.

I don't know of any other automatic SQL 2005 method than the one I've outlined here. I've never had an issue shrinking log files at start up. And I wouldn't expect a 1.5G tempdb, as in the case we're discussing, to be so used it couldn't be shrunk to 1GB at start up.

If you're in the ideal situation of being able to be present for every server as it comes up, with the time available and allowed to restart as needed, etc., then, yeah, you can do everything strictly by the book.

But for someone without SQL expertise, I've found the automatic method works better. My experience has been that even Windows admins sometimes aren't ideal for starting SQL in special mode, issuing unfamiliar SQL-based commands, etc.
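
The actual script isn't shown in the thread, but the kind of "automatic method" being described is typically a startup stored procedure in master that shrinks the tempdb data file each time SQL Server starts. A rough sketch with a hypothetical procedure name (and note the warnings later in this thread about shrinking tempdb data files):

USE master;
GO
CREATE PROCEDURE dbo.ShrinkTempdbAtStartup   -- hypothetical name
AS
BEGIN
    -- Shrink the primary tempdb data file to roughly 1 GB (target is in MB).
    EXEC ('USE tempdb; DBCC SHRINKFILE (tempdev, 1024);');
END
GO
-- Flag the procedure to run automatically every time the instance starts.
EXEC sp_procoption @ProcName = N'dbo.ShrinkTempdbAtStartup',
    @OptionName = 'startup',
    @OptionValue = 'on';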


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1426319
Posted Monday, March 4, 2013 9:49 AM


ScottPletcher (2/28/2013)

Yes, that's the problem. You're scattering (small) I/Os all over the drive, causing lots of drive head movement, which is brutal on performance.

Allocating "one drive" is very nebulous. Does this mean 2TB? 3TB? 5TB? No matter what SQL actually needs. How do you know ahead of time for a new instance what a massively-oversize, will absolutely never fill up drive size is? [Btw, how do you get the storage people to let you waste TBs of unused disk space on half-filled "tempdb drives"?]


We weren't talking size. We were talking whether or not additional files should be separated to a different drive. How is saying "on a single drive" nebulous?

There is no way to know what size could never possibly be filled up. That's true whether you pre-size the files out or not. Whether you pre-size or not, it doesn't change how you estimate how big the drive should be. You're really grasping at straws in an effort to continue arguing.

I don't waste TBs of space. You pulled that out of your backside, I never advocated having multi-TB drives for tempdb. You're so intent on arguing that you're making stuff up now. I also don't leave tempdb drives half-filled. I have stated many times in this thread that I pre-size the files out to consume at least 90% of the drive space. In what world is at least 90% allocated the same as half-filled?




My blog: SQL Soldier
Twitter: @SQLSoldier
My book: Pro SQL Server 2008 Mirroring
Microsoft Certified Master: SQL Server 2008
Principal DBA: Outerwall, Inc.
Also available for consulting: SQL DBA Master
Post #1426335
Posted Monday, March 4, 2013 10:10 AM


ScottPletcher (3/4/2013)
GilaMonster (2/28/2013)
stehoban (2/27/2013)
Regarding point number 3 above. How do I reduce the size from 1.5GB to 800MB or 1GB? Would this be by using the ALTER DATABASE command with the SHRINKFILE statement?


Not shrink. Alter database and set the initial size of the original TempDB file. That size takes effect on a restart of SQL, resulting in 2 files of the defined size.

Regarding point number 4 above. Is a reboot necessary - could i just restart the sql server agent ?


No reboot necessary. Not SQL agent though, restart SQL Server itself


For SQL 2008, that works great.

For SQL 2005, which is what this forum section covers, you can't do that.


SQL 2000, 2005, 2008, 2008 R2, 2012. Probably 7, but I never worked with it.
At restart, TempDB is reset back to the size defined in the system catalog and cleared. I've depended on that behaviour in SQL 2000 (several years ago) and SQL 2005 (just last month).

I've never had an issue shrinking log files at start up.


No one said anything about the log. It's the data files that are the ones that could get corrupted if shrinking TempDB. There's a KB article that describes the potential problems and discusses the ways to shrink TempDB, mostly being restart SQL and let it go back to default. Want to shrink below default, start SQL in single user mode and then shrink.

And I wouldn't expect a 1.5G tempdb, as in the case we're discussing, to be so used it couldn't be shrunk to 1GB at start up.


How heavily used TempDB is before the restart is completely irrelevant, since it's cleared upon restart. So unless you have TempDB defined at 1 GB and model with 1.5 GB of stuff in it, there's no problem (and if you do, there are lots of other potential problems).

If you're in the ideal situation of being able to be present for every server as it comes up, with the time available and allowed to restart as needed, etc., then, yeah, you can do everything strictly by the book.


I used to work at an investment bank; the servers restarted at most once a month (patches), usually somewhere around 1 AM on a Sunday morning. I certainly wouldn't plan on being present when the restart happens, if I even knew about it beforehand. Restarting as needed, not a chance in hell; it took 5 documents and a week's notice to schedule a reboot.

I don't know why you're so fixated on shrinking TempDB anyway, if it's regularly growing, it needs a larger default size (and better monitoring so that it can be manually grown before an autogrow would be necessary). Only time when it's grown and you don't want a larger default size is if something unusual happened (runaway query, data import, etc). In that case, just check all files are still the same size, grow any that aren't and let the next restart (whenever it is) handle getting the files back to default size.
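
One way to do that check is to compare the size recorded in master (what tempdb goes back to at restart) with the current size of each file; a minimal sketch:

-- Defined size (system catalog) vs. current size for each tempdb file.
-- size is stored in 8 KB pages, so divide by 128 to get MB.
SELECT  mf.name,
        mf.size / 128 AS defined_size_mb,
        df.size / 128 AS current_size_mb
FROM    master.sys.master_files AS mf
JOIN    tempdb.sys.database_files AS df
        ON df.file_id = mf.file_id
WHERE   mf.database_id = DB_ID('tempdb');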



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1426348
Posted Monday, March 4, 2013 10:12 AM
Robert Davis (3/4/2013)
ScottPletcher (2/28/2013)

Yes, that's the problem. You're scattering (small) I/Os all over the drive, causing lots of drive head movement, which is brutal on performance.

Allocating "one drive" is very nebulous. Does this mean 2TB? 3TB? 5TB? No matter what SQL actually needs. How do you know ahead of time for a new instance what a massively-oversize, will absolutely never fill up drive size is? [Btw, how do you get the storage people to let you waste TBs of unused disk space on half-filled "tempdb drives"?]


We weren't talking size. We were talking whether or not additional files should be separated to a different drive. How is saying "on a single drive" nebulous?

There is no way to know what size could never possibly be filled up. That's true whether you pre-size the files out or not. Whether you pre-size or not, it doesn't change how you estimate how big the drive should be. You're really grasping at straws in an effort to continue arguing.

I don't waste TBs of space. You pulled that out of your backside, I never advocated having multi-TB drives for tempdb. You're so intent on arguing that you're making stuff up now. I also don't leave tempdb drives half-filled. I have stated many times in this thread that I pre-size the files out to consume at least 90% of the drive space. In what world is at least 90% allocated the same as half-filled?




"How is saying "on a single drive" nebulous?":
Drives aren't all the same size; saying you allocate "a whole drive" to tempdb says nothing about the amount of space.


"I don't waste TBs of space."
This person needs 1.5G of space for tempdb. What's the smallest drive size you can realistically buy now? Add a few of those together and you're easily wasting TBs of space.


"In what world is at least 90% allocated the same as half-filled?"
Allocated is not the same as used. In this case a 400GB allocation, say, would be 99% unfilled/unused.


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1426350
Posted Monday, March 4, 2013 10:19 AM
GilaMonster (3/4/2013)
ScottPletcher (3/4/2013)
GilaMonster (2/28/2013)
stehoban (2/27/2013)
Regarding point number 3 above. How do I reduce the size from 1.5GB to 800MB or 1GB? Would this be by using the ALTER DATABASE command with the SHRINKFILE statement?


Not shrink. Alter database and set the initial size of the original TempDB file. That size takes effect on a restart of SQL, resulting in 2 files of the defined size.

Regarding point number 4 above. Is a reboot necessary - could i just restart the sql server agent ?


No reboot necessary. Not SQL agent though, restart SQL Server itself


For SQL 2008, that works great.

For SQL 2005, which is what this forum section covers, you can't do that.


SQL 2000, 2005, 2008, 2008 R2, 2012. Probably 7, but I never worked with it.
At restart, TempDB is reset back to the size defined in the system catalog and cleared. I've depended on that behaviour in SQL 2000 (several years ago) and SQL 2005 (just last month).

I've never had an issue shrinking log files at start up.


No one said anything about the log. It's the data files that are the ones that could get corrupted if shrinking TempDB. There's a KB article that describes the potential problems and discusses the ways to shrink TempDB, mostly being restart SQL and let it go back to default. Want to shrink below default, start SQL in single user mode and then shrink.

And I wouldn't expect a 1.5G tempdb, as in the case we're discussing, to be so used it couldn't be shrunk to 1GB at start up.


How heavily used TempDB is before the restart is completely irrelevant, since it's cleared upon restart. So unless you have TempDB defined at 1 GB and model with 1.5 GB of stuff in it, there's no problem (and if you do, there are lots of other potential problems).

If you're in the ideal situation of being able to be present for every server as it comes up, with the time available and allowed to restart as needed, etc., then, yeah, you can do everything strictly by the book.


I used to work at an investment bank; the servers restarted at most once a month (patches), usually somewhere around 1 AM on a Sunday morning. I certainly wouldn't plan on being present when the restart happens, if I even knew about it beforehand. Restarting as needed, not a chance in hell; it took 5 documents and a week's notice to schedule a reboot.

I don't know why you're so fixated on shrinking TempDB anyway, if it's regularly growing, it needs a larger default size (and better monitoring so that it can be manually grown before an autogrow would be necessary). Only time when it's grown and you don't want a larger default size is if something unusual happened (runaway query, data import, etc). In that case, just check all files are still the same size, grow any that aren't and let the next restart (whenever it is) handle getting the files back to default size.




"I've never had an issue shrinking log files at start up."

Typo: should have been data files.

The ALTER command to try to change an existing file to a smaller size gives you an error on SQL 2005.

Yes, if the pure method worked in SQL 2005, as it does in SQL 2008, that would be great, but it doesn't (at least for SP2 and SP3; I don't have others easily available right now to confirm).


"I don't know why you're so fixated on shrinking TempDB anyway"
Because that was one of the key questions here:
How do I reduce the size from 1.5GB to 800MB or 1GB?


and I've had to do it many times on many servers. I don't have time to do them all by hand, and there's not a DBA on site when it's done, so it needs to be automatic.


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1426357
Posted Monday, March 4, 2013 3:36 PM


ScottPletcher (3/4/2013)
The ALTER command to try to change an existing file to a smaller size gives you an error on SQL 2005.


That's when you're trying to reduce the initial size: when the file's size has been defined via ALTER DATABASE at 1.5 GB, you cannot use ALTER DATABASE to take it down to 1 GB. That requires DBCC SHRINKFILE.
The case in the original post of this question was when the DB has grown too large via autogrow: the initial size is defined in the system catalogs as 8 MB and you want to make the file 1 GB in size. With that, you can definitely use ALTER DATABASE; with TempDB you might (and I repeat might) need to restart first and then specify the larger file size. Just one restart though, not multiple. The restart puts TempDB back to the default size, then you specify the ALTER with the larger file size.

If someone has explicitly specified too large a size for TempDB and you want to change the initial defined size that's in the system catalogs to a lower value (say someone mistakenly defined TempDB as 10 GB, not 1 GB), that's when you need DBCC SHRINKFILE, and for TempDB that should be done with the server in single-user mode. That said, unless you have people often altering TempDB and specifying the wrong size, that certainly shouldn't be something done often, if it ever needs doing at all.
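
In T-SQL, the two cases above look roughly like this (logical file name and sizes are just examples):

-- Case 1: the defined size is too small (e.g. 8 MB) and the file has autogrown.
-- If the file is currently larger than the target, restart first (tempdb goes
-- back to the defined size), then set the larger size.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1GB);

-- Case 2: the defined size itself is too large and must come down.
-- ALTER DATABASE won't accept a smaller size; use DBCC SHRINKFILE instead,
-- ideally with SQL Server started in single-user mode as described above.
USE tempdb;
DBCC SHRINKFILE (tempdev, 1024);             -- target size in MB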



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1426492
Posted Monday, March 4, 2013 4:04 PM
GilaMonster (3/4/2013)
ScottPletcher (3/4/2013)
The ALTER command to try to change an existing file to a smaller size gives you an error on SQL 2005.


That's when you're trying to reduce the initial size: when the file's size has been defined via ALTER DATABASE at 1.5 GB, you cannot use ALTER DATABASE to take it down to 1 GB. That requires DBCC SHRINKFILE.
The case in the original post of this question was when the DB has grown too large via autogrow: the initial size is defined in the system catalogs as 8 MB and you want to make the file 1 GB in size. With that, you can definitely use ALTER DATABASE; with TempDB you might (and I repeat might) need to restart first and then specify the larger file size. Just one restart though, not multiple. The restart puts TempDB back to the default size, then you specify the ALTER with the larger file size.

If someone has explicitly specified too large a size for TempDB and you want to change the initial defined size that's in the system catalogs to a lower value (say someone mistakenly defined TempDB as 10 GB, not 1 GB), that's when you need DBCC SHRINKFILE, and for TempDB that should be done with the server in single-user mode. That said, unless you have people often altering TempDB and specifying the wrong size, that certainly shouldn't be something done often, if it ever needs doing at all.



You live in a placid and sheltered environment ... what a joy it must be!

As we consolidate SQL instances, inevitably there are laggards who can't or won't move their stuff quickly enough to the more-shared instances. However, once most dbs and other things are moved, the instance doesn't need the same disk allocations for tempdb it had when it was fully loaded. Thus the need to shrink existing tempdbs. "There are more things in heaven and earth ... Than are dreamt of in your philosophy."


SQL DBA,SQL Server MVP('07, '08, '09)

Carl Sagan said: "There is no such thing as a dumb question." Sagan obviously never watched a congressional hearing!
Post #1426505