increase in database size

  • Hi All,

    We have a 2 TB disk that stores the SQL database files. The database has 3 data files and 1 log file; the log file is stored on a different drive.

    The issue is with the data-file drive. Current disk usage on the E:\ drive has reached 1.20 TB out of 2 TB.

    After monitoring for the last 5 days, we are seeing one of the indexes grow as below:

    Monday - 100 GB

    Tuesday - 20 GB

    Wednesday - 150 GB

    Thursday - 50 GB

    Friday - 60 GB

    So I am a little worried the disk will be full very soon.

    Questions

    1. At this rate of growth, how much more disk should I add?

    2. How many days will this drive last before it is completely full? I want to know the calculation behind it.

    The problem is, we have reached the disk-count limit for the Azure VM. The only options are to move to the next VM size, or to create a new VM and move some of the databases to it.

    Before that, we need to show the management and app teams data proving that the databases are growing and that we may soon run out of disk space and hit database downtime.

    Alternatively, we are looking at archiving some of the unwanted backup tables and old data in the large tables within the databases. We are also trying to identify unused indexes and disable or drop them.

    Regards,

    Sam

  • You can't realistically expect an answer to this from a forum post; we have no way of knowing what's driving the size changes.

    Check index create dates and find out if any have been created or even changed recently, for example a filter removed from an index, etc.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

    Are you getting more and more data? If so, that's the challenge (note, not a problem or an issue, just something you have to deal with). And yeah, it may entail adding additional storage media on Azure, moving this index and/or table to a separate filegroup, or maybe (big MAYBE) looking at partitioning to deal with data management.

    Are you seeing outrageous index growth, but no additional data? Then, guessing, cause I'm not there and can't see anything, you're doing something through your code that is resulting in massive amounts of page splits or lots of sparsely populated pages (lots of inserts followed by lots of deletes, something like that).

    You need to dig into the behaviors on the system, not just the size of the index. What is causing it to grow?

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I understand my ask is too vague at this point. I totally get it.

    All I am asking is, at a very high level: if we have to build a new server, what are the key things to consider, in terms of capacity planning, to size the disks for at least 5 years plus some buffer? What data points should we collect, and what formulas should we apply, to do basic disk-space capacity planning? Any online resource links would also be helpful. Also, are there any third-party tools or free SQL Server tools that can do database size predictions or forecasting?

     

  • The only sure way to do forecasting is to measure today, measure next week (or have measured last week), and do some calculations. Redgate Monitor (yes, full disclosure, my employer) has just such a mechanism built into the tool. It's not magic. It measures the changes over time and does a linear projection. You can do the same.
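    That linear projection is easy to sketch. The following is a hypothetical illustration (not Redgate's implementation): the daily growth figures and disk sizes come from the original post, and decimal units (1 TB = 1000 GB) are assumed.

```python
# Sketch of a linear disk-growth projection (hypothetical numbers:
# daily growth figures and disk sizes are taken from the original post,
# assuming decimal units, 1 TB = 1000 GB).

daily_growth_gb = [100, 20, 150, 50, 60]   # Mon..Fri observations
free_space_gb = 2000 - 1200                # 2 TB disk, 1.20 TB used

# Average the observed growth and project it forward linearly.
avg_growth_per_day = sum(daily_growth_gb) / len(daily_growth_gb)
days_until_full = free_space_gb / avg_growth_per_day

print(f"average growth: {avg_growth_per_day:.0f} GB/day")            # 76 GB/day
print(f"projected days until drive is full: {days_until_full:.1f}")  # ~10.5
```

    A linear projection like this is only as good as the sample: measure over a longer window, and re-measure regularly, so one-off events (index rebuilds, bulk loads) don't skew the trend.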


  • vsamantha35 wrote:

    After monitoring for the last 5 days, we are seeing one of the indexes grow as below: Monday - 100 GB, Tuesday - 20 GB, Wednesday - 150 GB, Thursday - 50 GB, Friday - 60 GB. So I am a little worried the disk will be full very soon.

    Post the CREATE TABLE statement for the table the index is a part of, including the CREATE statements for the index(es) in question. Then post the results of a run of sys.dm_db_index_physical_stats using the "SAMPLED" mode for the whole table (all indexes). If you dump that to a spreadsheet and attach it, I'll open the spreadsheet.

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Grant Fritchey wrote:

    The only sure way to do forecasting is to measure today, measure next week (or have measured last week), and do some calculations. Redgate Monitor (yes, full disclosure, my employer) has just such a mechanism built into the tool. It's not magic. It measures the changes over time and does a linear projection. You can do the same.

    In SQL Monitor, where can I find the projection information? Could you please provide a link to check this growth info or graph?

  • Jeff Moden wrote:

    Post the CREATE TABLE statement for the table the index is a part of, including the CREATE statements for the index(es) in question. Then post the results of a run of sys.dm_db_index_physical_stats using the "SAMPLED" mode for the whole table (all indexes). If you dump that to a spreadsheet and attach it, I'll open the spreadsheet.

    Sir, I didn't get it. Could you please elaborate the steps? Is it for one specific table or all tables?

  • At a high level, assuming it's a new project, what questions would you ask the business? And let's say I have collected the database size per file for 6 months to 1 year; would you take the average? And how much buffer would you keep while calculating disk space for, say, a 2 TB database? I am just looking for a simple calculation to get rough estimates.

  • Before worrying about how much you are going to need, you should be looking at which tables grew, and why. Is that expected business data being added, or just temporary growth, for example an index rebuild, which can "duplicate" the size of the table while it is being rebuilt?

    And do the tables have compression? If not, ask why, and see if they can/should have it. Depending on table type and usage, this can be page/row or even columnstore compression.

    Other than that, give a buffer of 20% over the measured growth. Doing that in Excel is easy (and no, I'm not going to explain such an easy operation).

  • vsamantha35 wrote:

    Sir, I didn't get it. Could you please elaborate the steps? Is it for one specific table or all tables?

    It does say "for all indexes". That means you give the function DB_ID(), 3 NULLs, and 'SAMPLED' for the last parameter. Please look up the function to see how to use it; it's something you need to learn more about.

    You also said "After monitoring for the last 5 days, we are seeing one of the indexes grow", so we need the CREATE TABLE and the CREATE statements for all the indexes on THAT table, whatever it is.

     

    --Jeff Moden



  • vsamantha35 wrote:

    In SQL Monitor, where can I find the projection information? Could you please provide a link to check this growth info or graph?

    It's on the Enterprise set of menus. Click on Disk under Enterprise. You'll see the disks and growth projections.


  • vsamantha35 wrote:

    At a high level, assuming it's a new project, what questions would you ask the business? And let's say I have collected the database size per file for 6 months to 1 year; would you take the average? And how much buffer would you keep while calculating disk space for, say, a 2 TB database? I am just looking for a simple calculation to get rough estimates.

    You've got the basics. On a brand new project you ask how much data they think they'll be loading initially and how much it's likely to grow. They'll be wrong. It'll be over or under estimated, but you ask and then respond accordingly.

    For an existing project, you have the numbers. You can make projections and plan based on those. This one is pretty straightforward.

    As to buffer, there is no single formula I'd follow. I do suggest you grow the files yourself rather than relying on autogrow; in my mind, autogrow is a backup, not the driver for file growth. Personally, depending on the database, activity, etc., I'd keep 3-6 months of space. Keep it monitored, and add another 3-6 months as needed so you're only growing files four times a year or less. But honestly, those are just ballpark, finger-in-the-air-to-test-the-wind measures. It completely depends on the systems, their activity, how many you're managing (as that number goes up, you do less bespoke work and just standardize and automate), SLAs, recovery needs, all that.
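    A minimal sketch of that kind of pre-sizing, with made-up numbers (the function name and figures here are illustrative, not a standard formula):

```python
# Hypothetical capacity-planning sketch: pre-size data files to cover the
# next few months of measured growth, so files only need growing a few
# times a year. All numbers are made up for illustration.

def presize_gb(current_size_gb, avg_monthly_growth_gb, buffer_months=6):
    """Target file size = current size + N months of projected growth."""
    return current_size_gb + avg_monthly_growth_gb * buffer_months

# e.g. a 1200 GB database growing ~80 GB/month:
target = presize_gb(1200, 80, buffer_months=6)
print(f"grow the files to about {target} GB now")   # 1680 GB

# sanity-check the remaining headroom against a 2000 GB disk:
headroom_gb = 2000 - target
months_of_headroom = headroom_gb / 80
print(f"~{months_of_headroom:.1f} months of headroom after that")  # ~4.0
```

    The useful part is less the arithmetic than the habit: re-measure the monthly growth on a schedule and re-run the calculation, since a one-time projection goes stale.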


  • I am grateful for your valuable suggestions. Thank you very much, sir.

     
