Database Growth and File Growth

  • We have a database whose files grow by about 1 GB every 3 days, and the file growth increment is set to 512 MB. I am planning to change the growth increment to 1 GB in order to avoid the frequent auto-growth events, or do I need to make it a higher value? If I am not wrong, this frequent growth can cause page splits and fragmentation.
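
    For reference, a minimal sketch of checking and changing the growth increment (the database and logical file names below are placeholders):

        -- run in the context of the database in question
        SELECT name,
               size * 8 / 1024   AS size_mb,    -- size is stored in 8 KB pages
               growth * 8 / 1024 AS growth_mb,  -- valid when is_percent_growth = 0
               is_percent_growth
        FROM   sys.database_files;

        -- set the data file's growth increment to 1 GB
        ALTER DATABASE [MyDatabase]
            MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 1GB);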

  • If you know that your file grows about 1 GB every 3 days, then why not just give it about 150 GB so you won't have to worry about it for over a year?  The database is going to need that space anyway, so why not give it to the file right now?  Don't forget that when auto grow kicks in, at least one process is waiting until the file has finished acquiring the space it needs.  Also, growing the file by a small chunk each time increases the chance that the file will become fragmented (note that this is fragmentation at the file system level), while giving it one big chunk once decreases that chance.  Regarding your question - this has no influence on page splits and logical fragmentation.
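
    A one-time manual grow along these lines might look like the following sketch (database and file names are placeholders; 150 GB is the figure from above, not a prescription):

        -- grow the data file once, instead of relying on many 512 MB auto growths;
        -- SIZE must be larger than the file's current size
        ALTER DATABASE [MyDatabase]
            MODIFY FILE (NAME = N'MyDatabase_Data', SIZE = 150GB);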

    Adi

    --------------------------------------------------------------
    To know how to ask questions and increase the chances of getting answers:
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • If you're going to employ large file growths, ensure you have granted the local security policy "Perform volume maintenance tasks" to the SQL Server service account.
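
    As a sketch, on SQL Server 2016 SP1 or later you can confirm the privilege actually took effect (it is read at service startup, so a restart is needed after granting it):

        -- 'Y' for the database engine service means instant file initialization is on
        SELECT servicename, instant_file_initialization_enabled
        FROM   sys.dm_server_services;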

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Adi Cohn-120898 - Thursday, March 16, 2017 1:59 AM

    Thanks Adi,

    But when we increase the file size to 150 GB at once, then I have to find the same space in my non-production environments if I restore the same database there, right?

  • Perry Whittle - Thursday, March 16, 2017 5:52 AM

    Thanks Perry,

    Yes, the service account has already been granted the "Perform volume maintenance tasks" privilege.

  • Rechana Rajan - Sunday, March 19, 2017 12:45 AM

    But when we increase the file size to 150 GB at once, then I have to find the same space in my non-production environments if I restore the same database there, right?

    Yes, that's right.
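
    If the concern is knowing in advance how much space a restore will need in non-production, the backup's file list shows the size each file will be restored to (the backup path here is a placeholder):

        -- the Size column is in bytes: the space each file will occupy after restore
        RESTORE FILELISTONLY
        FROM DISK = N'D:\Backups\MyDatabase.bak';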

    Adi

    --------------------------------------------------------------
    To know how to ask questions and increase the chances of getting answers:
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • If you have instant file initialization enabled (the "Perform volume maintenance tasks" policy) - and a robust IO subsystem - then a larger auto growth won't have a large impact on the users.  At your current settings, the file is going to grow as the users add data - and as you rebuild/reorganize indexes.  Each time the file grows by 512 MB you are potentially introducing file fragmentation, which can be just as bad as index fragmentation.

    The better way to manage this is to manually grow the file in larger increments, so the database has enough space for growth and enough space for your index rebuild operations.  Auto growth should only be a safety net for cases where you cannot get to the manual growth operation in time.
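
    To know when a manual grow is due, something like this sketch shows allocated versus used space per file (run it in the database itself):

        -- FILEPROPERTY returns used pages; size is allocated pages (both 8 KB)
        SELECT name,
               size * 8 / 1024                                     AS allocated_mb,
               FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024          AS used_mb,
               (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
        FROM   sys.database_files;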

    And finally - if you are more worried about the space requirements for your test systems than about making sure your production environment is performing at the best possible level, then you are not doing your company any favors.  There are several ways you can 'restore' copies of production into a test environment using less space.  SQL Clone from Redgate is just one option that I am aware of; I believe they also have a product that can mount a backup file as a read/write database.  If this is important enough then you should be able to get the funding, which could be a lot less expensive than expanding your SAN and network to allow for full copies.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Adi Cohn-120898 - Sunday, March 19, 2017 4:37 AM

    Thanks Adi.

  • Jeffrey Williams 3188 - Sunday, March 19, 2017 10:41 AM

    Thanks Jeffrey for the post.

    I partially agree with your final points, but since I have to manage all the environments and at the same time deliver optimum performance, I have to make some sacrifices. Delphix is a tool which virtualises the database, but we don't have any plan to purchase new software as of now.

    We have enough space in Production to grow the database, but restoring multiple copies for various projects in non-production is the bottleneck.
