db file size

  • I've recently been getting quite a few pages about "Less Than 10% Database File Space" for one of our databases.

    What does it mean, and what should I do about it?

    It is set up to autogrow by 100 MB; the initial size is 4200 MB and the max size is 5000 MB.

    Thanks

  • Grow by whatever it takes to last for 6-24 months. I wouldn't set a max size unless this db could interfere with something mission critical.

    Find the job that pages you about low free space to understand how it works.

    Also, I'd set the growth to something more like 1 GB; 100 MB is nothing these days, really.
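    If it helps, here's a minimal sketch of changing both settings. The database and logical file names below are placeholders; look up the real ones in sys.master_files first.

    -- Sketch only: 'MyDb' and 'MyDb_Data' are hypothetical names.
    -- SELECT name , physical_name FROM sys.master_files WHERE database_id = DB_ID('MyDb');
    ALTER DATABASE MyDb
        MODIFY FILE ( NAME = MyDb_Data , FILEGROWTH = 1GB );     -- bigger growth increment
    ALTER DATABASE MyDb
        MODIFY FILE ( NAME = MyDb_Data , MAXSIZE = UNLIMITED );  -- remove the 5000 MB cap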

  • Ninja's_RGR'us (6/1/2011)


    Find the job that pages you about low free space to understand how it works.

    Great advice.

    Also, I'd set the growth to something more like 1 GB; 100 MB is nothing these days, really.

    I MIGHT disagree on this one. In a very active system with lots of concurrent users and/or a slow disk subsystem, allocating 1 GB during mid-day primetime activity could delay things noticeably. At least we are not talking about the tlog, but still. That could be a rough checkpoint!

    I typically go for a 500 MB or smaller auto growth setting, just in case, but, like your earlier advice, I pre-grow my files during a maintenance window, large enough to last me one year. And I reevaluate weekly/monthly (with tools/reports) to ensure we are on track.
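    For what it's worth, the pre-grow itself is just a one-time bump of the file size during the window; a sketch with placeholder names:

    -- Sketch only: 'MyDb' / 'MyDb_Data' are hypothetical, and 20GB is a made-up
    -- target sized to last about a year. SIZE can only raise a file, never shrink it.
    ALTER DATABASE MyDb
        MODIFY FILE ( NAME = MyDb_Data , SIZE = 20GB );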

    Jim Murphy
    http://www.sqlwatchmen.com
    @SQLMurph

  • Great advice, Jim. Our SAN creates 1 GB in no time flat; > 1 GB/sec IIRC.

    But I agree, it depends on the actual subsystem.

    I'll post a script that I use to see db growth. I'll surely turn this into a real report at some point, but right now SSMS is enough for me...

    All you have to do here is take backups and NOT clear the msdb backup history.

    IF OBJECT_ID('tempdb..#dbs') > 0
        DROP TABLE #dbs

    IF OBJECT_ID('tempdb..#Drives') > 0
        DROP TABLE #Drives

    IF OBJECT_ID('tempdb..#Results') > 0
        DROP TABLE #Results

    -- Data-file size and free space for every database, tagged with its drive letter.
    CREATE TABLE #dbs
        (
          DBNAME sysname
        , DBID INT
        , [Total Size in MB] INT
        , [Available Space In MB] INT
        , DriveLetter CHAR(1)
        )

    INSERT INTO #dbs
        ( DBNAME , DBID , [Total Size in MB] , [Available Space In MB] , DriveLetter )
        EXEC sp_MSforeachdb '
        USE [?];
        SELECT
            DB_NAME() AS DBNAME
            , DB_ID() AS DBID
            , SUM(size / 128) AS ''Total Size in MB''
            , SUM(size / 128 - CAST(FILEPROPERTY(name , ''SpaceUsed'') AS INT) / 128) AS ''Available Space In MB''
            , LEFT(physical_name , 1) AS DriveLetter
        FROM
            [?].sys.database_files
        WHERE
            type_desc = ''ROWS''
        GROUP BY LEFT(physical_name , 1) '

    -- Free space per drive letter.
    CREATE TABLE #Drives
        (
          DriveLetter CHAR(1) PRIMARY KEY CLUSTERED
        , FreeMBs INT NOT NULL
        , FreeGBs AS CONVERT(DECIMAL(18 , 2) , FreeMBs / 1024.0)
        )

    INSERT INTO #Drives ( DriveLetter , FreeMBs )
        EXEC xp_fixeddrives

    -- Single-db version of the size query, for reference:
    --SELECT
    --    DB_NAME() AS DBNAME
    --    , DB_ID() AS DBID
    --    , SUM(size / 128) AS 'Total Size in MB'
    --    , SUM(size / 128 - CAST(FILEPROPERTY(name , 'SpaceUsed') AS INT) / 128) AS 'Available Space In MB'
    --FROM
    --    sys.database_files
    --WHERE
    --    type_desc = 'ROWS'

    -- Rémi: I deleted 4 logging tables I had built on March 25th, hence the ±350 MB drop.
    ;
    -- First and last full backup per database, straight from msdb history.
    WITH CTE_Backups ( database_name , BackupDate , MinutesForBackup , GB_backup_size , seqFirst , seqLast )
      AS (
          SELECT
              bs.database_name
              , DATEADD(D , 0 , DATEDIFF(D , 0 , bs.backup_start_date)) AS BackupDate
              , CONVERT(DECIMAL(18 , 1) , DATEDIFF(s , bs.backup_start_date , bs.backup_finish_date) / 60.0) AS MinutesForBackup
              , CONVERT(DECIMAL(18 , 3) , bs.backup_size / 1024 / 1024 / 1024) AS GB_backup_size
              , ROW_NUMBER() OVER ( PARTITION BY bs.database_name ORDER BY bs.backup_start_date ) AS seqFirst
              , ROW_NUMBER() OVER ( PARTITION BY bs.database_name ORDER BY bs.backup_start_date DESC ) AS seqLast
          FROM
              msdb.dbo.backupset bs
          WHERE
              name IS NULL        -- skip named (ad hoc) backup sets
              AND bs.[type] = 'D' -- full backups only
         )
    -- Project daily/annual growth and the days left before the next autogrowth.
    SELECT
        CONVERT(INT , dtBackups.[Available Space In GB]
                / CASE WHEN dtBackups.GB_ExpectedDailyGrowth <> 0
                       THEN dtBackups.GB_ExpectedDailyGrowth
                       ELSE 0.0001
                  END) AS DaysUntillDBGrowth
        , *
    -- INTO
    --    #Results
    FROM
        (
          SELECT
              a.database_name
              , dbs.DriveLetter
              , drv.FreeGBs AS FreeGBs_Drive
              , CONVERT(DECIMAL(18 , 1) , ( drv.FreeGBs * 0.85 ) -- 15% buffer on drive free space
                        / CONVERT(DECIMAL(18 , 3) , ( b.GB_backup_size - a.GB_backup_size )
                                  / DATEDIFF(dd , a.BackupDate , b.BackupDate) * 30.468)) AS FreeGBs_Drive_InMonths_WithExpected_DB_Growth
              , a.BackupDate AS BackupDate_First
              , b.BackupDate AS BackupDate_Last
              , DATEDIFF(dd , a.BackupDate , b.BackupDate) AS DaysPeriod
              , a.MinutesForBackup AS MinutesForBackup_First
              , b.MinutesForBackup AS MinutesForBackup_Last
              , b.MinutesForBackup - a.MinutesForBackup AS MinutesForBackup_Delta
              , a.GB_backup_size AS GB_backup_size_First
              , b.GB_backup_size AS GB_backup_size_Last
              , b.GB_backup_size - a.GB_backup_size AS GB_BackupGrowth
              --, a.seqLast - a.seqFirst AS QtyofBackups
              , CONVERT(DECIMAL(18 , 3) , ( b.GB_backup_size - a.GB_backup_size )
                        / DATEDIFF(dd , a.BackupDate , b.BackupDate)) AS GB_ExpectedDailyGrowth
              , CONVERT(DECIMAL(18 , 3) , ( b.GB_backup_size - a.GB_backup_size )
                        / DATEDIFF(dd , a.BackupDate , b.BackupDate) * 365.256) AS GB_ExpectedAnnualGrowth
              , CONVERT(DECIMAL(18 , 3) , dbs.[Total Size in MB] / 1024.0) AS [Total Size in GB]
              , CONVERT(DECIMAL(18 , 3) , dbs.[Available Space In MB] / 1024.0) AS [Available Space In GB]
          FROM
              CTE_Backups a
              INNER JOIN CTE_Backups b  -- pair each db's first backup (a) with its last (b)
                  ON a.seqFirst = b.seqLast
                     AND a.seqLast = b.seqFirst
                     AND a.database_name = b.database_name
              INNER JOIN #dbs dbs
                  ON b.database_name = dbs.DBNAME
              INNER JOIN #Drives drv
                  ON dbs.DriveLetter = drv.DriveLetter
          WHERE
              a.seqFirst = 1
              AND a.seqFirst < a.seqLast -- need at least 2 backups to measure growth
        ) dtBackups
    ORDER BY
        database_name

    -- Optional: persist warnings to a permanent report table (uncomment INTO #Results above first).
    --IF EXISTS ( SELECT *
    --            FROM #Results R
    --            WHERE R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth >= 0
    --                  AND R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1
    --                  OR R.DaysUntillDBGrowth >= 0 AND R.DaysUntillDBGrowth <= 30 )
    --    BEGIN
    --        INSERT INTO dbo.RPT_Space_Warnings
    --            ( [DaysUntillDBGrowth] , [Warning_Description] , [database_name] , [DriveLetter]
    --            , [FreeGBs_Drive] , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
    --            , [BackupDate_First] , [BackupDate_Last] , [DaysPeriod]
    --            , [MinutesForBackup_First] , [MinutesForBackup_Last] , [MinutesForBackup_Delta]
    --            , [GB_backup_size_First] , [GB_backup_size_Last] , [GB_BackupGrowth]
    --            , [GB_ExpectedDailyGrowth] , [GB_ExpectedAnnualGrowth]
    --            , [Total Size in GB] , [Available Space In GB]
    --            )
    --        SELECT
    --            [DaysUntillDBGrowth]
    --            , CASE WHEN R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1
    --                   THEN 'HD IS FULL' ELSE 'AUTOGROWTH WARNING' END AS Warning_Description
    --            , [database_name] , [DriveLetter] , [FreeGBs_Drive]
    --            , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
    --            , [BackupDate_First] , [BackupDate_Last] , [DaysPeriod]
    --            , [MinutesForBackup_First] , [MinutesForBackup_Last] , [MinutesForBackup_Delta]
    --            , [GB_backup_size_First] , [GB_backup_size_Last] , [GB_BackupGrowth]
    --            , [GB_ExpectedDailyGrowth] , [GB_ExpectedAnnualGrowth]
    --            , [Total Size in GB] , [Available Space In GB]
    --        FROM #Results R
    --        WHERE R.DaysUntillDBGrowth >= 0 AND R.DaysUntillDBGrowth <= 30
    --        UNION ALL -- I want to see 2 warnings in the same day when it's the case... those are the really critical events.
    --        SELECT
    --            [DaysUntillDBGrowth]
    --            , CASE WHEN R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1
    --                   THEN 'HD IS FULL_' ELSE 'AUTOGROWTH WARNING_' END AS Warning_Description
    --            , [database_name] , [DriveLetter] , [FreeGBs_Drive]
    --            , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
    --            , [BackupDate_First] , [BackupDate_Last] , [DaysPeriod]
    --            , [MinutesForBackup_First] , [MinutesForBackup_Last] , [MinutesForBackup_Delta]
    --            , [GB_backup_size_First] , [GB_backup_size_Last] , [GB_BackupGrowth]
    --            , [GB_ExpectedDailyGrowth] , [GB_ExpectedAnnualGrowth]
    --            , [Total Size in GB] , [Available Space In GB]
    --        FROM #Results R
    --        WHERE R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth >= 0
    --              AND R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth <= 1
    --    END

    --IF OBJECT_ID('tempdb..#dbs') > 0
    --    DROP TABLE #dbs
    --IF OBJECT_ID('tempdb..#Drives') > 0
    --    DROP TABLE #Drives
    --IF OBJECT_ID('tempdb..#Results') > 0
    --    DROP TABLE #Results

  • Thanks.

    I do need a way to figure out how fast the database grows.

    I will give the script a try.

    Also, is the full database backup file pretty much the same size as the database file, or bigger or smaller?

    Can I calculate the database growth rate just by monitoring the backup file size?

    Thanks

  • sqlfriends (6/1/2011)


    Thanks.

    I do need a way to figure out how fast the database grows.

    I will give the script a try.

    Also, is the full database backup file pretty much the same size as the database file, or bigger or smaller?

    Can I calculate the database growth rate just by monitoring the backup file size?

    Thanks

    The backup file has to be smaller than the db: it contains only the pages with data on them, so the worst-case scenario is a backup exactly the size of the db.

    Since the backup size tracks only the "used" pages in the db, it tells you how much space is actually used in the db.

    Now it's just a matter of comparing that with the free space on the drive and the free space in the data files. That's why I have one column for the autogrowth warning and another for the HD-full warning (with a 15% buffer).

    Now keep in mind that I do the math on a db-by-db basis only. So if you have 1000 dbs, my guess for the drive fill will be way off...
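    If you just want the core number without the drive math, here's a minimal sketch that reads growth straight from the msdb full-backup history. 'YourDb' is a placeholder, and it assumes the db only grows between its first and last backup.

    -- Sketch: estimate daily growth from full-backup sizes in msdb.
    SELECT
        bs.database_name
        , CONVERT(DECIMAL(18 , 3) , ( MAX(bs.backup_size) - MIN(bs.backup_size) ) / 1024 / 1024 / 1024
                  / NULLIF(DATEDIFF(dd , MIN(bs.backup_start_date) , MAX(bs.backup_start_date)) , 0)) AS GB_ExpectedDailyGrowth
    FROM
        msdb.dbo.backupset bs
    WHERE
        bs.database_name = 'YourDb' -- placeholder name
        AND bs.[type] = 'D'         -- full backups only
    GROUP BY
        bs.database_name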

  • Jim Murphy (6/1/2011)


    Ninja's_RGR'us (6/1/2011)


    Find the job that pages you about low free space to understand how it works.

    Great advice.

    Also, I'd set the growth to something more like 1 GB; 100 MB is nothing these days, really.

    I MIGHT disagree on this one. In a very active system with lots of concurrent users and/or a slow disk subsystem, allocating 1 GB during mid-day primetime activity could delay things noticeably. At least we are not talking about the tlog, but still. That could be a rough checkpoint!

    I typically go for a 500 MB or smaller auto growth setting, just in case, but, like your earlier advice, I pre-grow my files during a maintenance window, large enough to last me one year. And I reevaluate weekly/monthly (with tools/reports) to ensure we are on track.

    This can be mitigated by enabling instant file initialization. Also, the auto growth size should be large enough to give the DBA time to address the issue before it hits again.
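    (If you're not sure whether instant file initialization is on, one well-known check is to trace the file-zeroing messages on a test instance; a sketch, using the undocumented sp_readerrorlog:)

    -- Sketch: TF 3004 logs file-zeroing work, TF 3605 routes it to the error log.
    -- 'IFI_Probe' is a throwaway database name. Run on a test instance only.
    DBCC TRACEON(3004 , 3605 , -1);
    CREATE DATABASE IFI_Probe;
    EXEC sp_readerrorlog 0 , 1 , 'zeroing';  -- if only the LOG file shows zeroing, IFI is on
    DROP DATABASE IFI_Probe;
    DBCC TRACEOFF(3004 , 3605 , -1);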

    On some of my systems we are pushing greater than 600 MB of growth per day. Having an auto growth setting of 500 MB just isn't going to cut it if I am on vacation and can't get time to manually grow the file.

    It does depend on the IO subsystem and how long it takes to actually grow the file, but if your increments are too small, then you are going to introduce file fragmentation, which can also cause performance issues.
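    A quick way to see how often autogrowth actually fires, and how long each event takes, is to read the auto-grow events out of the default trace (a sketch; assumes the default trace is still enabled, which it is out of the box):

    -- Sketch: recent data/log file auto-grow events from the default trace.
    SELECT
        te.name AS EventName
        , t.DatabaseName
        , t.[FileName]
        , t.StartTime
        , t.Duration / 1000 AS DurationMs   -- trace Duration is in microseconds
    FROM
        sys.traces st
        CROSS APPLY sys.fn_trace_gettable(st.[path] , DEFAULT) t
        INNER JOIN sys.trace_events te
            ON t.EventClass = te.trace_event_id
    WHERE
        st.is_default = 1
        AND te.name IN ( 'Data File Auto Grow' , 'Log File Auto Grow' )
    ORDER BY
        t.StartTime DESC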

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeffrey Williams-493691 (6/1/2011)


    Jim Murphy (6/1/2011)


    Ninja's_RGR'us (6/1/2011)


    Find the job that pages you about low free space to understand how it works.

    Great advice.

    Also, I'd set the growth to something more like 1 GB; 100 MB is nothing these days, really.

    I MIGHT disagree on this one. In a very active system with lots of concurrent users and/or a slow disk subsystem, allocating 1 GB during mid-day primetime activity could delay things noticeably. At least we are not talking about the tlog, but still. That could be a rough checkpoint!

    I typically go for a 500 MB or smaller auto growth setting, just in case, but, like your earlier advice, I pre-grow my files during a maintenance window, large enough to last me one year. And I reevaluate weekly/monthly (with tools/reports) to ensure we are on track.

    This can be mitigated by enabling instant file initialization. Also, the auto growth size should be large enough to give the DBA time to address the issue before it hits again.

    On some of my systems we are pushing greater than 600 MB of growth per day. Having an auto growth setting of 500 MB just isn't going to cut it if I am on vacation and can't get time to manually grow the file.

    It does depend on the IO subsystem and how long it takes to actually grow the file, but if your increments are too small, then you are going to introduce file fragmentation, which can also cause performance issues.

    Yep. Exactly why it depends on the speed of the IO subsystem and the practice of mass-extending once a year. Instant initialization is an option, but still a 'band-aid': reacting to our negligence rather than a proper solution. I've got one system that adds about 1.5 GB/day, so I know what you are saying there, but their IO subsystem is fast enough, so I autogrow it by larger than 500 MB. Other systems I autogrow by 100 MB since the IO system sucks (cheap client with a cheap server), but again, I never let an autogrowth happen [regularly].

    File fragmentation is a concern, yes, however an autogrowth should NEVER happen. Ever. Unless we, as DBAs, are not doing our job. I have also seen IO subsystems that actually time out after trying to extend a file by too large a chunk. In fact, I fixed one of those 2 hours ago for one of my clients. SQL Server gives up and throws an entry in the error log, waits for another convenient time to retry, and may eventually hit a hard error if the growth isn't done in time. I'd rather have a touch of fragmentation until the weekend, when I can fix that RAID fragmentation, than a hard failure/down system due to a busy IO system mid-day. Keeping the system online is way more important than performance.

    And with proper monitoring and reporting (like the tool/report we've written or the script above), we should proactively see the calamity before it strikes and pre-grow by a large chunk at night or over the weekend, before the autogrowth occurs.

    Jim Murphy
    http://www.sqlwatchmen.com
    @SQLMurph
