SQLServerCentral is supported by Red Gate Software Ltd.

Suggestions on TEMPDB Setup
Posted Friday, March 15, 2013 12:46 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, September 24, 2014 11:14 AM
Points: 177, Visits: 411
I have an environment with a database of around 3 TB. The server has around 40 processor cores; Microsoft suggested using only 32 of the 40 because of a bug. To build tempdb I am starting with 8 data files and 1 log file. I have two solid-state drives, each 290 GB in size, for a total of 580 GB across the two drives. I am dividing my files in the following way and need your suggestions on the split. My environment uses tempdb heavily, so please comment on this plan:
T1: 70 GB
T2: 70 GB
T3: 70 GB
T4: 70 GB
T5: 70 GB
T6: 70 GB
T7: 70 GB
T8: 70 GB
Tlog: 10 GB

By dividing the files this way, will I run into any problems?
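For reference, this is how I'm checking the current tempdb file layout before making changes (sizes in sys.master_files are stored in 8 KB pages, hence the conversion):

```sql
-- Current tempdb files, sizes, and growth settings.
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb,
       growth,
       is_percent_growth
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```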

Post #1431683
Posted Friday, March 15, 2013 1:03 PM


Hall of Fame


Group: General Forum Members
Last Login: Yesterday @ 7:56 AM
Points: 3,941, Visits: 7,163
I'd recommend you place your TEMPDB Log file on a different drive...

Seems like nice hardware for the server regarding CPUs, but what about RAM and the disk subsystem? I am going to assume that there's nothing else on these 2 SSDs except tempdb, and would suggest purchasing two additional drives so you can put them into a RAID 10 configuration (ideal for tempdb performance and reliability).

There's really no need to have 8 tempdb data files unless you're experiencing tempdb contention, and even then I would suggest adding only 1 new file at a time to see if things improve.

If you split out all your space across smaller data files you may run into issues with DBCC operations like CHECKDB not having enough space available in your tempdb data files to perform CHECKTABLE on your larger tables (unlikely, but still possible).

If you're stuck with 2 disks, I would recommend:
Disk 1
T1: 140 GB (set size, NO AUTOGROWTH)
T2: 140 GB (set size, NO AUTOGROWTH)

Disk 2
Tlog: 260 GB, leaving a few GB to grow
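Something like this would set it up; E: and F: are placeholders for your two drives, and tempdev/templog are the default logical file names (check yours in sys.master_files first). FILEGROWTH = 0 disables autogrowth, and a restart is needed before moved files take effect since tempdb is recreated at startup:

```sql
-- Disk 1 (E:): two fixed-size data files, autogrowth disabled.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = 'E:\tempdb\tempdb.mdf', SIZE = 140GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'E:\tempdb\tempdb2.ndf', SIZE = 140GB, FILEGROWTH = 0);

-- Disk 2 (F:): the log, leaving a few GB free on the drive.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = templog, FILENAME = 'F:\tempdb\templog.ldf', SIZE = 260GB);
```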


______________________________________________________________________________
"Never argue with an idiot; They'll drag you down to their level and beat you with experience"
Post #1431696
Posted Friday, March 15, 2013 1:09 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Today @ 6:44 AM
Points: 39,983, Visits: 36,351
MyDoggieJessie (3/15/2013)
There's really no need to have 8 tempdb data files until you're experiencing tempdb contention and then I would suggest only adding 1 new file at a time to see if things improve.


I would usually recommend doubling the files when there's contention identified (allocation contention, that is), so 1->2->4->8 and so on.

If you split out all your space across smaller data files you may run into issues with DBCC operations like CHECKDB not having enough space available in your tempdb data files to perform CHECKTABLE on your larger tables


Why? TempDB files, like user database files, are used with a proportional fill algorithm, so there's no difference between 2 files of 140GB and 4 files of 70GB, it's still 280GB total space.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1431701
Posted Friday, March 15, 2013 1:38 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, September 24, 2014 11:14 AM
Points: 177, Visits: 411
Gila, thanks for the quick explanation. A Microsoft vendor who came in the past recommended using 8 data files of equal size, as we have a lot of tempdb contention issues. I am also looking at the following article, "http://support.microsoft.com/kb/328551", and it recommends the same. We no longer have the budget for new hardware, so the thought is to divide the data files across the two drives, place the .ldf on one of the drives at around 5 GB, and make each data file around 70 GB with autogrowth disabled.
Post #1431722
Posted Friday, March 15, 2013 1:57 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Today @ 6:44 AM
Points: 39,983, Visits: 36,351
If it's allocation contention, it doesn't matter how many drives the data files are split over, just the number of files. If you have IO contention, that's when you need separate IO channels.
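You can check which kind you're seeing while the system is busy. A sketch (database ID 2 is tempdb):

```sql
-- PAGELATCH_xx waits on tempdb's allocation pages (PFS/GAM/SGAM, e.g.
-- 2:1:1, 2:1:2, 2:1:3) indicate allocation contention; PAGEIOLATCH_xx
-- waits indicate an IO bottleneck instead.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH%'
  AND resource_description LIKE '2:%';
```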


Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1431730
Posted Saturday, March 16, 2013 8:01 AM


SSCertifiable


Group: General Forum Members
Last Login: Today @ 6:36 AM
Points: 6,423, Visits: 13,818
GilaMonster (3/15/2013)
...so there's no difference between 2 files of 140GB and 4 files of 70GB, it's still 280GB total space.

To add to the above, remember that when adding multiple files to a single filegroup you are forcing SQL Server to stripe across the files. The overhead of maintaining the proportional fill and the constant file switching may actually have an adverse effect on the system.
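If you want to see whether the data files are actually filling evenly, something like this shows per-file allocation inside tempdb (page counts are 8 KB pages):

```sql
-- One row per tempdb data file; a heavy skew between files defeats
-- the proportional-fill striping described above.
SELECT file_id,
       allocated_extent_page_count * 8 / 1024 AS allocated_mb,
       unallocated_extent_page_count * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;
```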


-----------------------------------------------------------------------------------------------------------

"Ya can't make an omelette without breaking just a few eggs"
Post #1431900
Posted Saturday, March 16, 2013 10:07 AM


Hall of Fame


Group: General Forum Members
Last Login: Yesterday @ 7:56 AM
Points: 3,941, Visits: 7,163
GilaMonster (3/15/2013)

Why? TempDB files, like user database files, are used with a proportional fill algorithm, so there's no difference between 2 files of 140GB and 4 files of 70GB, it's still 280GB total space.

I said this because I've encountered issues with CHECKDB on database servers that have multiple smaller tempdb data files for particularly large databases. Due to space restrictions on the disk subsystem, I couldn't just increase all 8 tempdb data files to a "happy" size; instead I opted for fewer files (4), spreading the space from the removed files evenly across the remaining ones, making them larger. That fixed the issue.

While I can't remember the exact error, it was something like:
"Could not allocate space for object 'dbo.SORT temporary run storage: 140846049001472' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by..."


______________________________________________________________________________
"Never argue with an idiot; They'll drag you down to their level and beat you with experience"
Post #1431909
Posted Sunday, March 17, 2013 9:12 AM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, September 24, 2014 11:14 AM
Points: 177, Visits: 411
Jessie,


Thanks for your detailed explanation. I was referring to the following article, "http://sqlserverpedia.com/wiki/TempDB", which says: "in SQL Server 2005/2008 it is now recommended to have 1/2 or 1/4 as many files as CPU cores. This is only a guide and TempDB should be monitored to see if PAGELATCH waits increase or decrease with each change." Even Microsoft recommends the same. I have 32 processors, but my drive space is only around 560 GB (2 drives). My RAM is 64 GB. Can you provide a recommendation for this? Thanks.
Post #1431992
Posted Sunday, March 17, 2013 12:27 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Today @ 6:44 AM
Points: 39,983, Visits: 36,351
TempDB files = 1/2-1/4 the number of cores, start with no more than 8, increase if you're still seeing allocation contention (and only split if you're seeing allocation contention to start with)
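As a rough starting point, that guideline can be read straight off the server (cpu_count in sys.dm_os_sys_info is the logical CPU count):

```sql
-- 1/4 of the logical CPU count, capped at 8, as a starting file count.
SELECT cpu_count,
       CASE WHEN cpu_count / 4 > 8 THEN 8
            ELSE cpu_count / 4
       END AS starting_tempdb_files
FROM sys.dm_os_sys_info;
```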


Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1432009
Posted Sunday, March 17, 2013 1:03 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, September 24, 2014 11:14 AM
Points: 177, Visits: 411
Yes Gila, I want to do the same, but as per Jessie's experience it seems we might run into the CHECKDB constraint, so I think I need to experiment with this as well. So, Gila, this is my plan: for the 2 drives, each 290 GB in size, I will divide tempdb into 8 data files of equal size across the two drives and place the 1 log file separately.
Post #1432015