Basic Question: Do you have to restart the instance if you change database or log file growth?

  • Hi All, I have changed the default autogrowth to 300000MB. Do I need to restart the server instance in order for the new setting to take effect?

    Also, is there a list of changes that would require an instance restart?

    Thank You,

  • bdkdavid - Wednesday, November 21, 2018 1:30 PM

    Hi All, I have changed the default autogrowth to 300000MB. Do I need to restart the server instance in order for the new setting to take effect?

    Also, is there a list of changes that would require an instance restart?

    Thank You,

    https://www.sqlservergeeks.com/sql-server-what-all-changes-require-a-restart-of-sql-service/

  • bdkdavid - Wednesday, November 21, 2018 1:30 PM

    Hi All, I have changed the default autogrowth to 300000MB. Do I need to restart the server instance in order for the new setting to take effect?

    Also, is there a list of changes that would require an instance restart?

    Thank You,

    A restart is not required when changing the database file growth increments. Are you really sure you want it to grow almost 300 GB every time it needs to grow?
    The only list I'm aware of for changes requiring an instance restart (there could be others) would be the configuration settings changed with sp_configure. They are listed in this documentation:
    Server Configuration Options (SQL Server)
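
    To illustrate - the growth change takes effect as soon as the ALTER completes, no restart needed. A minimal sketch, with MyDb and MyDb_Data as placeholder names and an example increment:

    -- check the current setting (growth is in 8 KB pages unless is_percent_growth = 1)
    select name, growth, is_percent_growth
    from sys.master_files
    where database_id = db_id(N'MyDb');

    -- change the growth increment; effective immediately, no restart
    alter database [MyDb]
    modify file (name = N'MyDb_Data', filegrowth = 4096MB);

    On the sp_configure side, the options that do need a restart are the non-dynamic ones:

    -- is_dynamic = 0 means the new value only takes effect after a service restart
    select name, value, value_in_use, is_dynamic
    from sys.configurations
    where is_dynamic = 0
    order by name;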

    Sue

  • Thank You, Sue and Ryan
    The database is 13 TB in size already. They do a lot of batch loading! I know that I am going to have to add an NDF file in the not-so-distant future. I am coming up with a short-term solution, since the MDF is approaching the max size of 16 TB. Also, the NetApp LUN is 16 TB in size. The database has lots of issues. This was a quick, simple fix??? to maybe improve performance.

    Ryan, the above link is not working for me!

    Appreciate the info Sue!
    Thank you,
    David

  • bdkdavid - Wednesday, November 21, 2018 1:30 PM

    Hi All, I have changed the default autogrowth to 300000MB. Do I need to restart the server instance in order for the new setting to take effect?

    Also, is there a list of changes that would require an instance restart?

    Thank You,

    And are you sure you typed that right? That's 300 GB of growth - won't it be a bit too much?

  • bdkdavid - Wednesday, November 21, 2018 3:04 PM

    Thank You, Sue and Ryan
    The database is 13 TB in size already. They do a lot of batch loading! I know that I am going to have to add an NDF file in the not-so-distant future. I am coming up with a short-term solution, since the MDF is approaching the max size of 16 TB. Also, the NetApp LUN is 16 TB in size. The database has lots of issues. This was a quick, simple fix??? to maybe improve performance.

    Ryan, the above link is not working for me!

    Appreciate the info Sue!
    Thank you,
    David

    The database's size isn't necessarily correlated with the growth increments needed for the files. The activity could be a factor, such as you mentioning a lot of batch loading, but that is still a matter of how much growth, how often - which isn't related to the database size.
    You need to know how much the database grows for those batches - growth for an average week/month or whatever time frame.
    Another consideration with growth increments is the time it takes to grow the file by that amount, as you don't want to introduce issues with waits on the growth.
    I may be missing something, but I'm not sure what setting the growth to 300 GB is going to help with, or how it's a short-term solution to anything.
    It sounds like you might have some concerns about growth? Maybe not, but LUNs and aggregates can be expanded. You can also create additional database files on other LUNs. But it sounds like you already know about adding files. So is this 13 TB database all in one file? And what is the problem you are really trying to solve? You may have some other options, but it isn't clear what the issues really are. Space? Performance?
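
    If it helps, the default trace records the autogrow events, so you can see how often the files grow, by how much, and how long each growth takes. A sketch, assuming the default trace is still enabled (it is unless someone turned it off):

    -- data and log file autogrow events captured by the default trace
    select te.name as event_name
      , tr.DatabaseName
      , tr.FileName
      , tr.StartTime
      , tr.Duration / 1000 as duration_ms         -- Duration is reported in microseconds
      , (tr.IntegerData * 8) / 1024 as growth_mb  -- IntegerData is the growth in 8 KB pages
    from sys.traces st
    cross apply sys.fn_trace_gettable(st.path, default) tr
    inner join sys.trace_events te
      on tr.EventClass = te.trace_event_id
    where st.is_default = 1
      and te.name in ('Data File Auto Grow', 'Log File Auto Grow')
    order by tr.StartTime desc;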

    Sue

  • bdkdavid - Wednesday, November 21, 2018 3:04 PM

    Thank You, Sue and Ryan
    The database is 13 TB in size already. They do a lot of batch loading! I know that I am going to have to add an NDF file in the not-so-distant future. I am coming up with a short-term solution, since the MDF is approaching the max size of 16 TB. Also, the NetApp LUN is 16 TB in size. The database has lots of issues. This was a quick, simple fix??? to maybe improve performance.

    Ryan, the above link is not working for me!

    Appreciate the info Sue!
    Thank you,
    David

    If you have instant file initialization enabled, the large growth increment won't help with performance.  If you don't have instant file initialization enabled, the file might not be available for a longer time than you might expect while 300 GB of growth completes, and it will always happen at the worst time, which will be because something needs the file to grow in order to continue.
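
    A quick way to check whether it's enabled (a sketch - the instant_file_initialization_enabled column only exists on newer builds such as 2014 SP3 / 2016 SP1 and later; on older builds, check whether the service account holds the "Perform volume maintenance tasks" privilege instead):

    -- 'Y' means instant file initialization is in effect for the engine service account
    select servicename
      , service_account
      , instant_file_initialization_enabled
    from sys.dm_server_services;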

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Hi Jeff and Sue,
    Yes, the 13 TB file is the MDF file. I just started working with the database yesterday. The last batch grew the database from 11.5 TB to 13 TB, which was on Monday. I have not had time to go into further analysis yet. I am planning on adding a few NDF files to the system, migrating certain tables to them, and then shrinking the database file. Maybe you guys can evaluate my plan. First I have to talk with the programmers for the application. They will help me pick out certain large tables, and I will create a separate NDF file for each table in question.

    I will move those tables into those selected files and shrink the MDF database file.
    I am toying with either creating the new files in the same filegroup (PRIMARY) or separating them into their own individual filegroups.
    I only want those tables in that particular file, and I believe only separate filegroups would guarantee that. This is a rough draft. Do you have any particular recommendations?
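
    Roughly what I have in mind for each of those tables - just a sketch, where the database, filegroup, file, path, table, and index names are placeholders and the sizes are only examples:

    -- new filegroup with its own file, ideally on another LUN
    alter database [MyDb] add filegroup [FG_BigTable1];
    alter database [MyDb] add file (
        name = N'MyDb_BigTable1_01'
      , filename = N'F:\SQLData\MyDb_BigTable1_01.ndf'
      , size = 500GB
      , filegrowth = 10240MB
    ) to filegroup [FG_BigTable1];

    -- rebuilding the clustered index onto the new filegroup moves the table's data there
    -- (assumes the existing clustered index is unique on BigTable1Id; match the real definition)
    create unique clustered index [PK_BigTable1]
        on dbo.BigTable1 (BigTable1Id)
        with (drop_existing = on)
        on [FG_BigTable1];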

    Did I mention this is SQL Server 2014 running in 2012 compatibility mode?
    Instant file initialization is not enabled.

    There are several problems:
    1. Approaching the max file size on both the NetApp LUN and the database
    2. Database file backups are taking longer than expected - usually 15 hours, now 24 hours (FULL)
    3. Batches that run while a backup is in progress run slowly
    4. Log file backups fail several times throughout the day (see the check sketched below)
    5. The log file maxes out at 2 TB several times, which stops the database
    6. Ran a log file shrink - had to put the database in SIMPLE recovery first, ran the DBCC command, then returned it to FULL
    7. It is not ideal, but it was a quick fix.
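
    For the log backup failures, the first thing I plan to check is why the log can't clear - a minimal sketch, with MyDb as a placeholder name:

    -- log_reuse_wait_desc shows what is preventing log truncation
    -- (e.g. LOG_BACKUP, ACTIVE_BACKUP_OR_RESTORE, ACTIVE_TRANSACTION)
    select name
      , recovery_model_desc
      , log_reuse_wait_desc
    from sys.databases
    where name = N'MyDb';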

  • bdkdavid - Wednesday, November 21, 2018 5:37 PM

    Hi Jeff and Sue,
    Yes, the 13 TB file is the MDF file. I just started working with the database yesterday. The last batch grew the database from 11.5 TB to 13 TB, which was on Monday. I have not had time to go into further analysis yet. I am planning on adding a few NDF files to the system, migrating certain tables to them, and then shrinking the database file. Maybe you guys can evaluate my plan. First I have to talk with the programmers for the application. They will help me pick out certain large tables, and I will create a separate NDF file for each table in question.

    I will move those tables into those selected files and shrink the MDF database file.
    I am toying with either creating the new files in the same filegroup (PRIMARY) or separating them into their own individual filegroups.
    I only want those tables in that particular file, and I believe only separate filegroups would guarantee that. This is a rough draft. Do you have any particular recommendations?

    Did I mention this is SQL Server 2014 running in 2012 compatibility mode?
    Instant file initialization is not enabled.

    There are several problems:
    1. Approaching the max file size on both the NetApp LUN and the database
    2. Database file backups are taking longer than expected - usually 15 hours, now 24 hours (FULL)
    3. Batches that run while a backup is in progress run slowly
    4. Log file backups fail several times throughout the day
    5. The log file maxes out at 2 TB several times, which stops the database
    6. Ran a log file shrink - had to put the database in SIMPLE recovery first, ran the DBCC command, then returned it to FULL
    7. It is not ideal, but it was a quick fix.

    Before you get into all that, let's find out a very important point. 

    You say the last batch caused the file to grow so let's talk about the nature of your batches vs the rest of your file.  Is most of the rest of your file static?  In other words, once you load a batch and maybe make an update or two, do the rows for that batch become totally static after a month or two?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Bit curious - there may be underlying problems not mentioned.

    Re: instant file initialization not being enabled - there are very few cases where this should not be enabled - do you know why it wasn't done?

    And would it be possible for you to supply us with the output of the following two queries?
    select top 50
       s.Name as schemaname
      , t.NAME as tablename
      , i.type_desc
      , p.data_compression_desc
      , p.rows as rowcounts
      , sum(a.total_pages) * 8 as totalspacekb
      , cast(round(((sum(a.total_pages) * 8) / 1024.00), 2) as numeric(36, 2)) as totalspacemb
      , sum(a.used_pages) * 8 as usedspacekb
      , cast(round(((sum(a.used_pages) * 8) / 1024.00), 2) as numeric(36, 2)) as usedspacemb
      , (sum(a.total_pages) - sum(a.used_pages)) * 8 as unusedspacekb
      , cast(round(((sum(a.total_pages) - sum(a.used_pages)) * 8) / 1024.00, 2) as numeric(36, 2)) as unusedspacemb
    from sys.tables t
    inner join sys.indexes i
      on t.object_id = i.object_id
    inner join sys.partitions p
      on i.object_id = p.object_id
      and i.index_id = p.index_id
    inner join sys.allocation_units a
      on p.partition_id = a.container_id
    left outer join sys.schemas s
      on t.schema_id = s.schema_id
    where t.NAME not like 'dt%'
      and t.is_ms_shipped = 0
      and i.object_id > 255
    group by t.Name
       , s.Name
       , p.Rows
       , i.type_desc
       , p.data_compression_desc
    order by totalspacekb desc

    select top 50
       s.Name as schemaname
      , t.NAME as tablename
      , i.type_desc
      , p.data_compression_desc
      , p.rows as rowcounts
      , sum(a.total_pages) * 8 as totalspacekb
      , cast(round(((sum(a.total_pages) * 8) / 1024.00), 2) as numeric(36, 2)) as totalspacemb
      , sum(a.used_pages) * 8 as usedspacekb
      , cast(round(((sum(a.used_pages) * 8) / 1024.00), 2) as numeric(36, 2)) as usedspacemb
      , (sum(a.total_pages) - sum(a.used_pages)) * 8 as unusedspacekb
      , cast(round(((sum(a.total_pages) - sum(a.used_pages)) * 8) / 1024.00, 2) as numeric(36, 2)) as unusedspacemb
    from sys.tables t
    inner join sys.indexes i
      on t.object_id = i.object_id
    inner join sys.partitions p
      on i.object_id = p.object_id
      and i.index_id = p.index_id
    inner join sys.allocation_units a
      on p.partition_id = a.container_id
    left outer join sys.schemas s
      on t.schema_id = s.schema_id
    where t.NAME not like 'dt%'
      and t.is_ms_shipped = 0
      and i.object_id > 255
    group by t.Name
       , s.Name
       , p.Rows
       , i.type_desc
       , p.data_compression_desc
    order by unusedspacekb desc

  • schemaname | tablename | type_desc | data_compression_desc | rowcounts | totalspacekb | totalspacemb | usedspacekb | usedspacemb | unusedspacekb | unusedspacemb
    dbo | table1 | CLUSTERED | NONE | 806313384 | 174494344 | 170404.63 | 169368600 | 165399.02 | 5125744 | 21646393.98
    dbo | table2 | CLUSTERED | NONE | 467457873 | 419467224 | 409635.96 | 417114624 | 407338.5 | 2352600 | 52026064.5
    dbo | table3 | CLUSTERED | NONE | 117229400 | 486287968 | 474890.59 | 484038224 | 472693.58 | 2249744 | 60313302.42
    dbo | table4 | CLUSTERED | NONE | 92892280 | 381154216 | 372220.91 | 379228720 | 370340.55 | 1925496 | 47273936.45
    dbo | table5 | CLUSTERED | NONE | 113451550 | 467851336 | 456886.07 | 466475024 | 455542.02 | 1376312 | 58025874.98
  • The above table is 5 results from the above query. I have just taken the top record for each numeric field. The database file is 13 TB+.

  • There were 2 queries - this is the output of which one?
    And do give the 50 rows of each, please.

    And are you on Enterprise or Standard edition? And what is the server spec for this?
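
    You can pull those straight off the instance - a quick sketch:

    -- edition and patch level
    select serverproperty('Edition') as edition
      , serverproperty('ProductVersion') as product_version
      , serverproperty('ProductLevel') as product_level;

    -- CPUs and memory visible to SQL Server
    select cpu_count
      , hyperthread_ratio
      , physical_memory_kb / 1024 as physical_memory_mb
    from sys.dm_os_sys_info;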

  • The two queries look like one query. It is a dedicated Standard edition; the SQL Server 2014 database is running in 2012 compatibility. 2 processors with 16 cores each,
    512 GB RAM. Storage is on NetApp.

  • They are different in that the ORDER BY is different - one gets the biggest tables (order by totalspacekb desc), the other the ones with the most unused space (order by unusedspacekb desc).

    I was asking about the edition because, for tables this big, being on Enterprise (or 2016 SP1 Standard) would mean you could/should have compression on them, and I was curious why it wasn't turned on.

    One of the issues I was trying to check for is related to heaps - but as the output above seems to be from the "unusedspacekb desc" query, that does not seem to apply here.

    Still curious about the other query - and also about the DDL for tables 3, 4 and 5 - assuming my query is correct, it looks like they have a 4 KB average record size.
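
    If you do end up on an edition that supports it, the potential savings can be estimated per table - a sketch with a placeholder table name (note the procedure will refuse to run on editions where compression isn't available, which includes 2014 Standard):

    exec sys.sp_estimate_data_compression_savings
        @schema_name      = N'dbo'
      , @object_name      = N'table3'
      , @index_id         = null
      , @partition_number = null
      , @data_compression = N'PAGE';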
