November 21, 2018 at 1:30 pm
Hi all, I have changed the default autogrowth to 300000 MB. Do I need to restart the server instance in order for it to take effect?
Also, is there a list of changes that would require an instance restart?
Thank You,
November 21, 2018 at 2:37 pm
bdkdavid - Wednesday, November 21, 2018 1:30 PM
Hi all, I have changed the default autogrowth to 300000 MB. Do I need to restart the server instance in order for it to take effect? Also, is there a list of changes that would require an instance restart?
Thank You,
https://www.sqlservergeeks.com/sql-server-what-all-changes-require-a-restart-of-sql-service/
November 21, 2018 at 2:37 pm
bdkdavid - Wednesday, November 21, 2018 1:30 PM
Hi all, I have changed the default autogrowth to 300000 MB. Do I need to restart the server instance in order for it to take effect? Also, is there a list of changes that would require an instance restart?
Thank You,
A restart is not required when changing the database file growth increments. Are you really sure you want it to grow almost 300 GB anytime it needs to grow?
The only list I'm aware of for changes requiring an instance restart (there could be others) is the set of configuration settings changed with sp_configure. They are listed in this documentation:
Server Configuration Options (SQL Server)
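If it helps, the dynamic vs. restart-required distinction is also exposed in sys.configurations - a quick sketch against the standard catalog view, nothing instance-specific:

-- Options with is_dynamic = 0 only take effect after the SQL Server service restarts;
-- is_dynamic = 1 means RECONFIGURE is enough.
SELECT name,
       value,
       value_in_use,
       is_dynamic,
       is_advanced
FROM sys.configurations
ORDER BY is_dynamic, name;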
Sue
November 21, 2018 at 3:04 pm
Thank You, Sue and Ryan
The database is 13 TB in size already. They do a lot of batch loading! I know that I am going to have to add an NDF file in the not-so-far future; I am coming up with a short-term solution, since the MDF is approaching the max size of 16 TB. Also, the NetApp LUN is 16 TB in size. The database has lots of issues. This was a quick, simple fix??? to maybe improve performance.
Ryan, the above link is not working for me!
Appreciate the info, Sue!
Thank you,
David
November 21, 2018 at 3:05 pm
bdkdavid - Wednesday, November 21, 2018 1:30 PM
Hi all, I have changed the default autogrowth to 300000 MB. Do I need to restart the server instance in order for it to take effect? Also, is there a list of changes that would require an instance restart?
Thank You,
And are you sure you typed that right? That's 300 GB of growth - won't it be a bit too much?
November 21, 2018 at 3:43 pm
bdkdavid - Wednesday, November 21, 2018 3:04 PM
Thank You, Sue and Ryan
The database is 13 TB in size already. They do a lot of batch loading! I know that I am going to have to add an NDF file in the not-so-far future; I am coming up with a short-term solution, since the MDF is approaching the max size of 16 TB. Also, the NetApp LUN is 16 TB in size. The database has lots of issues. This was a quick, simple fix??? to maybe improve performance. Ryan, the above link is not working for me!
Appreciate the info, Sue!
Thank you,
David
The database's size isn't necessarily correlated with the growth increments the files need. The activity could be a factor, such as the heavy batch loading you mentioned, but that is still a matter of how much growth, how often, which isn't related to the database size.
You need to know how much the database grows for those batches - growth for an average week or month, or whatever time frame fits.
Another consideration with growth increments is the time it takes to grow the file by that amount, as you don't want to introduce issues with waits on the growth.
I may be missing something, but I'm not sure what setting the growth to 300 GB is going to do to help with anything, or how it's a short-term solution to something.
It sounds like you might have some concerns about growth? Maybe not, but LUNs and aggregates can be expanded. You can also create additional database files on other LUNs. But it sounds like you already know about adding files. So is this 13 TB database all in one file? And what really is the problem you are trying to solve? You may have some other options, but it's not clear what the issues really are - space? Performance?
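As a starting point for the "how much growth, how often" question, something like this shows the current size and growth setting for each file - just a sketch against the standard sys.master_files catalog view, with a placeholder database name:

-- size and growth are stored in 8 KB pages; is_percent_growth tells you how to read growth.
SELECT DB_NAME(database_id)            AS database_name,
       name                            AS logical_name,
       type_desc,
       size * 8 / 1024                 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END                             AS growth_setting,
       max_size
FROM sys.master_files
WHERE database_id = DB_ID(N'YourDatabase');   -- hypothetical database name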
Sue
November 21, 2018 at 3:44 pm
bdkdavid - Wednesday, November 21, 2018 3:04 PM
Thank You, Sue and Ryan
The database is 13 TB in size already. They do a lot of batch loading! I know that I am going to have to add an NDF file in the not-so-far future; I am coming up with a short-term solution, since the MDF is approaching the max size of 16 TB. Also, the NetApp LUN is 16 TB in size. The database has lots of issues. This was a quick, simple fix??? to maybe improve performance. Ryan, the above link is not working for me!
Appreciate the info, Sue!
Thank you,
David
If you have instant file initialization enabled, it won't help with performance. If you don't have instant file initialization enabled, the file might not be available for longer than you might expect for 300 GB of growth, and it will always happen at the worst time, which will be exactly when something needs the file to grow in order to continue.
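For what it's worth, one way to verify whether instant file initialization is actually in effect on a 2014 instance (the instant_file_initialization_enabled column in sys.dm_server_services only appeared in later builds) is the old trace-flag check - a sketch, to be run against a throwaway test database:

-- With these trace flags on, zero-initialization of files is written to the error log.
DBCC TRACEON(3004, 3605, -1);

-- Grow a data file on a scratch database (hypothetical names), then check the log.
ALTER DATABASE ScratchDB
    MODIFY FILE (NAME = ScratchDB_data, SIZE = 2048MB);

-- If IFI is working, you should only see "Zeroing" messages for the log file, not the data file.
EXEC xp_readerrorlog 0, 1, N'Zeroing';

DBCC TRACEOFF(3004, 3605, -1);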
--Jeff Moden
Change is inevitable... Change for the better is not.
November 21, 2018 at 5:37 pm
Hi Jeff and Sue,
Yes, the 13 TB file is the MDF file. I just started working with the database yesterday. The last batch had grown the database from 11.5 TB to 13 TB, which was on Monday. I have not had time to go into further analysis at this time. I am planning on adding a few NDF files to the system and migrating certain tables to them, then shrinking the database file. Maybe you guys can evaluate my plan. First I have to talk with the programmers of the application. They will help me pick out certain large tables, and I will create a separate NDF file for each table in question.
I will move those tables into those selected files and shrink the MDF database file.
I am toying with either creating them in the same filegroup (PRIMARY) or separating them into their own individual filegroups.
I only want those tables in that particular file in question. I believe only separate filegroups would be an answer to that. This is a rough draft, sketched below. Do you have any particular recommendations?
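Roughly, for each table the move would look something like this - only a sketch, where the database, filegroup, file, path, and table/index names are made up, and it assumes the table's clustered index is a unique index so it can be rebuilt with DROP_EXISTING onto the new filegroup:

-- Add a dedicated filegroup and file for one large table (hypothetical names/paths).
ALTER DATABASE BigDB ADD FILEGROUP FG_Table1;

ALTER DATABASE BigDB ADD FILE
(
    NAME = N'BigDB_Table1',
    FILENAME = N'X:\Data\BigDB_Table1.ndf',
    SIZE = 500GB,
    FILEGROWTH = 50GB
) TO FILEGROUP FG_Table1;

-- Rebuild the existing clustered index onto the new filegroup to move the table's data.
-- The index name and key column must match the index that is already there.
CREATE UNIQUE CLUSTERED INDEX PK_Table1
    ON dbo.Table1 (Table1ID)
    WITH (DROP_EXISTING = ON)
    ON FG_Table1;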
Did I mention this is SQL Server 2014 running in 2012 compatibility mode?
Instant file initialization is not enabled.
There are several problems:
1. Approaching max file size on both the NetApp LUN and the database
2. Database full backups are taking longer than expected - usually 15 hours, now 24 hours
3. Batches that run while a backup is in progress run slowly
4. Log file backup fails several times throughout the day
5. Log file maxes out at 2 TB several times, which stops the database
6. Ran a log file shrink - had to place the database in SIMPLE first, ran the DBCC command, then returned to FULL (roughly the sequence sketched below)
7. It is not ideal, but it was a quick fix.
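For reference, the sequence in item 6 presumably looked something like this - a sketch with made-up names, and note that switching to SIMPLE breaks the log backup chain, so a full or differential backup is needed afterwards before log backups will work again:

-- Break the log chain and release the active log so it can be shrunk.
ALTER DATABASE BigDB SET RECOVERY SIMPLE;

-- Shrink the log file (logical file name is hypothetical) to a target size in MB.
DBCC SHRINKFILE (N'BigDB_log', 10240);

-- Switch back to FULL and take a full (or differential) backup so log backups work again.
ALTER DATABASE BigDB SET RECOVERY FULL;
BACKUP DATABASE BigDB TO DISK = N'X:\Backups\BigDB_after_shrink.bak';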
November 22, 2018 at 1:56 pm
bdkdavid - Wednesday, November 21, 2018 5:37 PM
Hi Jeff and Sue,
Yes, the 13 TB file is the MDF file. I just started working with the database yesterday. The last batch had grown the database from 11.5 TB to 13 TB, which was on Monday. I have not had time to go into further analysis at this time. I am planning on adding a few NDF files to the system and migrating certain tables to them, then shrinking the database file. Maybe you guys can evaluate my plan. First I have to talk with the programmers of the application. They will help me pick out certain large tables, and I will create a separate NDF file for each table in question. I will move those tables into those selected files and shrink the MDF database file.
I am toying with either creating them in the same filegroup (PRIMARY) or separating them into their own individual filegroups.
I only want those tables in that particular file in question. I believe only separate filegroups would be an answer to that. This is a rough draft. Do you have any particular recommendations? Did I mention this is SQL Server 2014 running in 2012 compatibility mode?
Instant file initialization is not enabled. There are several problems:
1. Approaching max file size on both the NetApp LUN and the database
2. Database full backups are taking longer than expected - usually 15 hours, now 24 hours
3. Batches that run while a backup is in progress run slowly
4. Log file backup fails several times throughout the day
5. Log file maxes out at 2 TB several times, which stops the database
6. Ran a log file shrink - had to place the database in SIMPLE first, ran the DBCC command, then returned to FULL
7. It is not ideal, but it was a quick fix.
Before you get into all that, let's nail down a very important point.
You say the last batch caused the file to grow, so let's talk about the nature of your batches versus the rest of your file. Is most of the rest of your file static? In other words, once you load a batch and maybe make an update or two, do the rows for that batch become totally static after a month or two?
--Jeff Moden
Change is inevitable... Change for the better is not.
November 22, 2018 at 2:52 pm
Bit curious - there may be underlying problems not mentioned.
Re: instant file initialization not being enabled - there are very few cases where this should not be enabled. Do you know why it wasn't done?
And would it be possible for you to supply us with the output of the following 2 queries?
-- Query 1: top 50 tables ordered by total space used
select top 50
s.Name as schemaname
, t.NAME as tablename
, i.type_desc
, p.data_compression_desc
, p.rows as rowcounts
, sum(a.total_pages) * 8 as totalspacekb
, cast(round(((sum(a.total_pages) * 8) / 1024.00), 2) as numeric(36, 2)) as totalspacemb
, sum(a.used_pages) * 8 as usedspacekb
, cast(round(((sum(a.used_pages) * 8) / 1024.00), 2) as numeric(36, 2)) as usedspacemb
, (sum(a.total_pages) - sum(a.used_pages)) * 8 as unusedspacekb
, cast(round(((sum(a.total_pages) - sum(a.used_pages)) * 8) / 1024.00, 2) as numeric(36, 2)) as unusedspacemb
from sys.tables t
inner join sys.indexes i
on t.object_id = i.object_id
inner join sys.partitions p
on i.object_id = p.object_id
and i.index_id = p.index_id
inner join sys.allocation_units a
on p.partition_id = a.container_id
left outer join sys.schemas s
on t.schema_id = s.schema_id
where t.NAME not like 'dt%'
and t.is_ms_shipped = 0
and i.object_id > 255
group by t.Name
, s.Name
, p.Rows
, i.type_desc
, p.data_compression_desc
order by totalspacekb desc
-- Query 2: same as query 1, but ordered by unused space
select top 50
s.Name as schemaname
, t.NAME as tablename
, i.type_desc
, p.data_compression_desc
, p.rows as rowcounts
, sum(a.total_pages) * 8 as totalspacekb
, cast(round(((sum(a.total_pages) * 8) / 1024.00), 2) as numeric(36, 2)) as totalspacemb
, sum(a.used_pages) * 8 as usedspacekb
, cast(round(((sum(a.used_pages) * 8) / 1024.00), 2) as numeric(36, 2)) as usedspacemb
, (sum(a.total_pages) - sum(a.used_pages)) * 8 as unusedspacekb
, cast(round(((sum(a.total_pages) - sum(a.used_pages)) * 8) / 1024.00, 2) as numeric(36, 2)) as unusedspacemb
from sys.tables t
inner join sys.indexes i
on t.object_id = i.object_id
inner join sys.partitions p
on i.object_id = p.object_id
and i.index_id = p.index_id
inner join sys.allocation_units a
on p.partition_id = a.container_id
left outer join sys.schemas s
on t.schema_id = s.schema_id
where t.NAME not like 'dt%'
and t.is_ms_shipped = 0
and i.object_id > 255
group by t.Name
, s.Name
, p.Rows
, i.type_desc
, p.data_compression_desc
order by unusedspacekb desc
December 11, 2018 at 9:41 am
schemaname | tablename | type_desc | data_compression_desc | rowcounts | totalspacekb | totalspacemb | usedspacekb | usedspacemb | unusedspacekb | unusedspacemb |
dbo | table1 | CLUSTERED | NONE | 806313384 | 174494344 | 170404.63 | 169368600 | 165399.02 | 5125744 | 21646393.98 |
dbo | table2 | CLUSTERED | NONE | 467457873 | 419467224 | 409635.96 | 417114624 | 407338.5 | 2352600 | 52026064.5 |
dbo | table3 | CLUSTERED | NONE | 117229400 | 486287968 | 474890.59 | 484038224 | 472693.58 | 2249744 | 60313302.42 |
dbo | table4 | CLUSTERED | NONE | 92892280 | 381154216 | 372220.91 | 379228720 | 370340.55 | 1925496 | 47273936.45 |
dbo | table5 | CLUSTERED | NONE | 113451550 | 467851336 | 456886.07 | 466475024 | 455542.02 | 1376312 | 58025874.98 |
December 11, 2018 at 9:44 am
The above table is 5 rows of results from the above query. I have just taken the top record for each numeric field; the database file is 13 TB+.
December 11, 2018 at 10:08 am
There were 2 queries - this is the output of which one?
And do give the 50 rows of each, please.
And are you on Enterprise or Standard edition? And what is the server spec for this?
December 11, 2018 at 2:18 pm
The two queries look like one query. It is a dedicated Standard edition instance; the 2014 database is running in 2012 compatibility. 2 processors with 16 cores each,
512 GB of RAM. Storage is on NetApp.
December 11, 2018 at 3:02 pm
They are different in that the ORDER BY is different - one gets the biggest tables (order by totalspacekb desc), the other the ones with the most unused space (order by unusedspacekb desc).
I was asking about the version because, with big tables like these, being on Enterprise (or 2016 SP1+ Standard) would mean you could/should have compression on them, and I was curious why it wasn't turned on.
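If the instance were Enterprise (or Standard on 2016 SP1+), turning on compression for one of those big tables is just a rebuild - a sketch, using one of the anonymized table names from the output above, and it's worth estimating the savings first:

-- Estimate how much space page compression would save (hypothetical table name).
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'table2',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'PAGE';

-- Rebuild the table (its clustered index) with page compression.
ALTER TABLE dbo.table2 REBUILD WITH (DATA_COMPRESSION = PAGE);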
One of the issues I was trying to check for is related to heaps - but since the output you have above appears to be from the "unusedspacekb desc" query, it seems that does not apply here.
Still curious about the other query - and also about the DDL for tables 3, 4, and 5. Assuming my query is correct, it looks like they have an average record size of around 4 KB.