Analyzing Disk Capacity for SQL Server

  • Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/aBanerjee/analyzingdiskcapacityforsqlserver.asp

  • It's one of the most helpful articles I've ever come across. Thanks, Arindam

  • Thanks Angshu. I will be posting more on capacity management. Please look around for more on this

  • Not too bad at all, it definitely covered the normal user requirements

    Unfortunately our system is growing at geometric rates (it was 10GB a week last year, now 20GB, with occasional blips up to 40-50GB).

    It's a little worrying when you buy an extra 4TB of storage and expect to have to get more by Christmas next year, but at the same time it's also exciting, which is why I still love this job so many years into it.

  • What would be interesting would be how you would best partition your filesystem - how do you marry the best practice of separate disks and channels for tempdb, data files, transaction logs, OS, backups and pagefiles when you typically only have 2 channels and rarely space for more than 9 physical disks per server?
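    For context, the textbook first step looks something like the sketch below (only a sketch - the drive letter and paths are my own placeholders), but you run out of spindles long before you run out of best practices:

      -- A minimal sketch, assuming a dedicated T: drive exists for tempdb.
      -- Moving tempdb onto its own spindle is the usual first separation;
      -- the new file locations only take effect after the instance restarts.
      ALTER DATABASE tempdb
      MODIFY FILE (NAME = tempdev, FILENAME = 'T:\SQLTempDB\tempdb.mdf');
      ALTER DATABASE tempdb
      MODIFY FILE (NAME = templog, FILENAME = 'T:\SQLTempDB\templog.ldf');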

    Thanks for the response, firthr. I would love to research the topic you mention and see whether I could get something to write on. Thanks for your input

    ~Arindam.

  • Thanks for the response! You are exactly right in your opinion. Certainly we could come up with new rules and formulae, but every problem turns out to be a new one

    I love this job too. Hope you can cope with the geometric growth!

  • Yep, we're coping quite well; our SAN is more than up to the job so long as we keep adding extra storage and partitioning the data properly.

    I might get round to writing an article on what we did in the hopes that it helps others, but it might be a while as I'm currently way too busy

  • I will be eagerly waiting to see something on that

    Write your experience and how you came up with a solution. That will be interesting.

    ~Arindam.

  • I'd call this "Disk Management 101", in that it covers the basics. I'd say the majority of the people in this forum already have implementation plans for storage capacity planning. I was personally really hoping to see more about External Capacity... can we expect one of those in the future, too?

    Given the above, SQL Profiler, and a good trace template, I have found it quite easy to point out internal (SQL-object based) performance problems. But what about those problems not associated with I/O? Is it a network problem? A client problem? I have theories about the problems that I have encountered over the years (like if MS Access is running a query and the CPU is pegged, then leave me alone), but what if it isn't Access? How do you work with your network team to find the problem?

    I am a firm believer that network gnomes don't exist.

  • Hi Laue,

    This article was only intended to cover disk capacity management: it talks about some of the best practices that should be taken care of before you plan your server and gives guidelines to assist. Database hogging can happen for many reasons and, as I said, each problem comes with its own individual flavour. That's exactly what we are employed to take care of.

    In SQL Server 2000, there were limitations, and it was a pain to figure out what exactly caused a problem. All you could do was check the trace file and event logs to figure out the exact issue. In SQL Server 2005, there are a lot of handy and nifty things available to readily narrow down the problem. I find the DMVs handiest, and the added XML support in Profiler has enhanced the probing job for us.
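    For example, a quick look at sys.dm_io_virtual_file_stats (a minimal sketch; this DMV ships with SQL Server 2005) shows straight away which database files are suffering I/O stalls:

      -- I/O stall per database file, worst offenders first (SQL Server 2005+).
      -- io_stall is the total time, in ms, users waited on I/O for the file.
      SELECT  DB_NAME(database_id) AS database_name,
              file_id,
              num_of_reads,
              num_of_writes,
              io_stall AS total_io_stall_ms
      FROM    sys.dm_io_virtual_file_stats(NULL, NULL)
      ORDER BY io_stall DESC;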

    The network plays a very big role in query performance: not choosing appropriate options and not having a plan may result in a very slow response, where your query performs superbly but, due to network clogging, you do not get adequate performance.
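    As a rough illustration (again only a sketch, using the SQL Server 2005 wait-stats DMV), comparing network waits against disk waits tells you which side is clogged:

      -- If ASYNC_NETWORK_IO dominates, the client/network side is the
      -- bottleneck; if PAGEIOLATCH_* waits dominate, look at the disks.
      SELECT  wait_type,
              waiting_tasks_count,
              wait_time_ms
      FROM    sys.dm_os_wait_stats
      WHERE   wait_type = 'ASYNC_NETWORK_IO'
         OR   wait_type LIKE 'PAGEIOLATCH%'
      ORDER BY wait_time_ms DESC;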

    Now, all of these call for different articles, and it will be done. My next article will be on memory and has already been submitted. Coming up are articles on CPU and network. Please do check those; you may find something useful

    Thanks for your time and the valuable input.

     With Regards

    ~Arindam.

  • ...and in the correct order of importance, even. What a treat.

    (That smiley is called "woot"!!one!!)

    It's a good start, I'll be looking for the rest. 

  • Hi,

    I would like to add that the database size should be multiplied by 5:

    First, some people tend to create an extra database on the same machine to QA something before the upgrade, so that the QA test runs not just in a similar environment (which is a must) but in the real environment.

    And the other 3 times the database size are for the 3 days' worth of maintenance plan backups. It is true that we need 2 days at most before the backups go to tape or the SAN, but I have had multiple cases where developers or sysadmins, during an upgrade or update, would back up the database in between planned backups, and then the next planned backup finds itself short on space.
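    As a worked example (the 100 GB figure is only an assumption to make the arithmetic concrete):

      -- Sizing sketch: 1 production copy + 1 QA copy + 3 days of backups = 5x.
      DECLARE @db_size_gb decimal(10,2);
      SET @db_size_gb = 100.0;                          -- assumed database size
      SELECT @db_size_gb * (1 + 1 + 3) AS required_gb;  -- returns 500.00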

    Regards, Yelena Varsha

  • Hi Yelena,

    That is a very good approach to proactively identify the risk area, but if it is a disk, then we do not need much testing on it before an upgrade. I would certainly ask for a lab test machine from the vendor if I were buying CPU or memory, bind it to a cluster, fail over to that machine for a couple of business days, and check the Profiler logs. On a production box's disk, I would not let anyone do ad hoc testing.

    In the case of the backup plan, it depends upon how much data I am accumulating in a full business day. Depending upon the volume of data and the maintenance plan, I would decide upon the extra space that I might need. In a normal scenario, if I keep the default 15-minute continuous backup plan, I do not need much space to hold the data, as log shipping is set up.
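    If you want to ground that estimate in real numbers, the backup history that msdb already keeps is handy (a minimal sketch; it assumes the history has not been purged):

      -- Average full-backup size per database over the last 30 days, in GB.
      SELECT  database_name,
              COUNT(*) AS full_backups,
              AVG(backup_size) / 1073741824.0 AS avg_full_backup_gb
      FROM    msdb.dbo.backupset
      WHERE   type = 'D'   -- full backups only
        AND   backup_start_date >= DATEADD(day, -30, GETDATE())
      GROUP BY database_name
      ORDER BY avg_full_backup_gb DESC;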

    We have to remember that to decide on a backup plan, we need to check the network bandwidth and the business requirements as well.

    This article talks about general disk capacity management and gives some formulae to identify database growth. It is up to individual requirements to decide how much buffer to keep for safety.

    Nevertheless, a very good thought from you. This should also be taken care of when planning disk capacity. Thanks

    ~Arindam.

  • Something slightly off the topic, but you may have dealt with it when concentrating on the I/O side of disk management: the idea of having the hard disk formatted with an 8 KB (or 16 or 32 KB) allocation unit size to match SQL Server's page size, to reduce the number of reads.

    Have you tried this, and do you know of any cases where it has helped? I've read little bits about it but have never been able to test it to know if it really is worthwhile.

    Cheers

    Fred
