
Analyzing Disk Capacity for SQL Server
Posted Wednesday, July 5, 2006 8:59 AM
Grasshopper


Hi Laue,

This article was only intended to cover disk capacity management. It discusses some of the best practices to take care of before you plan your server and offers guidelines. Database hogging can happen for many reasons and, as I said, each problem comes with its own individual flavour. That's exactly what we are employed to take care of.

In SQL Server 2000, there were some limitations, and it was painful to figure out what exactly caused a problem; all you could do was check the trace files and event logs. In SQL Server 2005, a lot of handy and nifty things are available to narrow down the problem quickly. I find the DMVs most handy, and the added XML support in Profiler has enhanced the probing job for us.
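
For example, here is a minimal sketch of that kind of probing (assuming SQL Server 2005 and VIEW SERVER STATE permission), ranking database files by cumulative I/O stalls to spot a disk hotspot:

    -- Rank database files by cumulative I/O stall time (SQL Server 2005+).
    -- Counters accumulate since the instance last started, so compare
    -- snapshots taken over time rather than trusting a single reading.
    SELECT DB_NAME(vfs.database_id)  AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN   sys.master_files AS mf
           ON  mf.database_id = vfs.database_id
           AND mf.file_id     = vfs.file_id
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;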

The network plays a very big role in query performance. Not choosing appropriate options and not having a plan can result in very slow response: your query may perform superbly, but due to network clogging you may not get adequate performance.

All of these call for separate articles, and they will be written. My next article, on memory, is already submitted; articles on CPU and the network are coming. Please do check those out; you may find something useful.

Thanks for your time and the valuable input.

 With Regards

~Arindam.

Post #292227
Posted Wednesday, July 5, 2006 9:12 AM
SSC-Enthusiastic


...and in the correct order of importance, even.   What a treat. 

(That smiley is called "woot"!!one!!)

It's a good start, I'll be looking for the rest. 




Post #292231
Posted Wednesday, July 5, 2006 11:06 AM
Hall of Fame


Hi,

I would like to add that the database size should be multiplied by 5:

First, some people tend to create an extra copy of the database on the same machine to QA something before an upgrade, so that the QA test runs not just in a similar environment (this is a must) but in the real environment.

The other 3 times the database size is for 3 days' worth of maintenance plan backups. It is true that we need 2 days at most before the backups go to tape or SAN, but I have had multiple cases where developers or sysadmins would back up the database in between planned backups during an upgrade or update, and then the next planned backup finds itself short on space.
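
A rough sketch of that rule of thumb (assuming the multiplier of 5 above: 1x the live data, 1x a QA copy, 3x for backups; sys.master_files reports size in 8 KB pages):

    -- Estimate capacity per database using the 5x rule of thumb
    -- from this post (1x live + 1x QA copy + 3 days of backups).
    SELECT DB_NAME(database_id)     AS database_name,
           SUM(size) * 8 / 1024     AS current_size_mb,
           SUM(size) * 8 / 1024 * 5 AS suggested_capacity_mb
    FROM   sys.master_files
    GROUP BY database_id
    ORDER BY suggested_capacity_mb DESC;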




Regards,
Yelena Varshal

Post #292276
Posted Thursday, July 6, 2006 5:47 AM
Grasshopper


Hi Yelena,

That is a very good approach to proactively identify the risk areas, but if it is a disk, we do not need much testing before an upgrade. I would certainly ask for a lab test machine from the vendor if I were buying CPU or memory, bind it to a cluster, fail over to that machine for a couple of business days, and check the Profiler logs. On a production box's disk, I would not let anyone do ad hoc testing.

In the case of a backup plan, it depends on how much data I accumulate over a full business day. Depending upon the volume of data and the maintenance plan, I would decide on the extra space I might need. In a normal scenario, if I keep the default 15-minute continuous backup plan, I do not need much space to hold the data, as log shipping is set up.
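
For example, a minimal sketch of measuring that daily accumulation from the backup history kept in msdb (assuming the history has not been purged):

    -- Measure the daily backup footprint from msdb backup history.
    -- backup_size is in bytes; type 'D' = full, 'I' = differential, 'L' = log.
    SELECT CONVERT(varchar(10), backup_start_date, 120) AS backup_day,
           type                                         AS backup_type,
           SUM(backup_size) / 1048576                   AS total_mb
    FROM   msdb.dbo.backupset
    WHERE  backup_start_date >= DATEADD(day, -7, GETDATE())
    GROUP BY CONVERT(varchar(10), backup_start_date, 120), type
    ORDER BY backup_day, backup_type;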

We have to remember that to decide on a backup plan, we also need to check the network bandwidth and the business requirements.

This article talks about general disk capacity management and gives some formulae to estimate database growth. It is up to individual requirements to decide how much buffer to keep for safety.

Nevertheless, a very good thought from you. This should also be taken care of when planning disk capacity. Thanks.

~Arindam.

Post #292473
Posted Wednesday, July 19, 2006 5:54 PM
SSC Rookie


Something slightly off topic, but you may have dealt with it when concentrating on the I/O side of disk management: the idea of formatting the hard disk with an 8 KB (or 16 KB, 32 KB) allocation unit size to match SQL Server's page size and so reduce the number of reads.

 

Have you tried this, and do you know of any cases where it has helped? I’ve read little bits about it but have never been able to test it and find out whether it is really worthwhile.
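
In case it helps the discussion, a rough sketch of checking what a volume is currently formatted with (E: is just an illustrative placeholder, and xp_cmdshell must be enabled to run it from inside SQL Server):

    -- Check the NTFS allocation unit (cluster) size of drive E:
    -- via fsutil; look for "Bytes Per Cluster" in the output.
    -- Assumes xp_cmdshell is enabled and the service account has rights.
    EXEC master..xp_cmdshell 'fsutil fsinfo ntfsinfo E:';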

 

Cheers

Fred

Post #295731
Posted Monday, July 24, 2006 9:59 PM
SSC-Enthusiastic


Maybe my question is a bit outside your discussion, but I keep asking it myself.

The article mentions that for disk throughput capacity we should monitor the PhysicalDisk counters Disk Read Bytes/sec and Disk Write Bytes/sec. My question is: should we take the PhysicalDisk counters or the LogicalDisk counters, and why take the PhysicalDisk counters instead of the LogicalDisk ones?
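
As a minimal sketch of watching both counter sets side by side (typeperf ships with Windows; xp_cmdshell must be enabled if run from inside SQL Server):

    -- Sample both PhysicalDisk and LogicalDisk read throughput
    -- side by side (12 samples, 5 seconds apart) for comparison.
    EXEC master..xp_cmdshell
      'typeperf "\PhysicalDisk(_Total)\Disk Read Bytes/sec" "\LogicalDisk(_Total)\Disk Read Bytes/sec" -si 5 -sc 12';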

--soonyu

 

Post #296965
Posted Wednesday, July 4, 2007 6:56 AM
Forum Newbie

This article is great; now I just need to convince my IT team to be more conscious....


Post #378943
Posted Sunday, January 6, 2008 10:59 PM
Grasshopper

Just some ambiguity there. Your example of linear growth is incorrect. You state that the database starts at 100 GB and grows at 10% per year for 2 years. That gives it a size of 100 * 1.1 = 110 GB at the end of the first year and 110 * 1.1 = 121 GB at the end of the second year. Percentage growth plans are always exponential, never linear. If your target growth is a certain number of MB (or GB) per year, then the growth plan should be specifying MB (or GB) and not a percentage.
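
A minimal sketch of the difference, projecting both plans year by year (the 100 GB start and the 10% rate come from your example; the 10 GB/year fixed increment is just illustrative):

    -- Compare percentage (exponential) growth with fixed-increment
    -- (linear) growth over 5 years, starting from 100 GB.
    WITH years AS (
        SELECT 0 AS yr
        UNION ALL
        SELECT yr + 1 FROM years WHERE yr < 5
    )
    SELECT yr,
           CAST(100 * POWER(1.1e0, yr) AS decimal(10, 2)) AS pct_growth_gb,   -- 10% per year
           100 + 10 * yr                                  AS linear_growth_gb -- 10 GB per year
    FROM years;
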
Post #439436