T-SQL MDF & LDF size for spreadsheet

• Hello - I need a script that will return the MDF & LDF file sizes for multiple databases.

    I am currently running...

    sp_helpdb 'TestDataname'

    ...and copying the sizes of the MDF and LDF into an Excel spreadsheet.

    How can I get the MDF AND LDF file sizes for all of the databases in an instance? I need the MDF and LDF separated, and I want the actual size of each file as it appears on the file system.

    Thanks in advance

    Dave

• I built this proc from some examples I found here on SQLServerCentral:

    IF OBJECT_ID('[dbo].[sp_dbspaceused]') IS NOT NULL
        DROP PROCEDURE [dbo].[sp_dbspaceused]
    GO
    --#################################################################################################
    --developer utility procedure added by Lowell, used in SQL Server Management Studio
    --Purpose: Get the file size and available space for every database file
    --#################################################################################################
    CREATE PROCEDURE sp_dbspaceused
    AS
    BEGIN
        SET NOCOUNT ON

        DECLARE @CurrentDB NVARCHAR(128)

        -- The table #tblServerDatabases holds the names of databases on the server.
        -- This table is used here to allow us to loop through each database,
        -- rather than running undocumented procedures such as sp_msforeachdb
        -- (which would eliminate the need for a loop).
        IF OBJECT_ID('tempdb..#tblServerDatabases', 'U') IS NOT NULL
            DROP TABLE #tblServerDatabases
        CREATE TABLE #tblServerDatabases (DBName NVARCHAR(128))

        -- The table #tblDBFilesExtendedInfo holds the data and log files info
        -- (name, size, used, free space, etc.).
        IF OBJECT_ID('tempdb..#tblDBFilesExtendedInfo', 'U') IS NOT NULL
            DROP TABLE #tblDBFilesExtendedInfo
        CREATE TABLE #tblDBFilesExtendedInfo (
            Idx INT IDENTITY(1, 1),
            FileID INT,
            FileGroupID INT,
            TotalExtents BIGINT,
            UsedExtents BIGINT,
            DBFileName NVARCHAR(128),
            LogicalFileName NVARCHAR(128),
            DBFilePath NVARCHAR(1024),
            DBFileType VARCHAR(16),
            DBName NVARCHAR(128),
            [TotalFileSize(MB)] MONEY,
            [TotalUsed(MB)] MONEY,
            [TotalFree(MB)] MONEY,
            [SpaceUsed(%)] MONEY,
            Status INT)

        -- This table will hold the output of sp_helpfile, for each database.
        -- This is needed in order to get the log file path and file name.
        IF OBJECT_ID('tempdb..#tblDBFilesBasicInfo', 'U') IS NOT NULL
            DROP TABLE #tblDBFilesBasicInfo
        CREATE TABLE #tblDBFilesBasicInfo (
            DBName NVARCHAR(128),
            DBFileName NVARCHAR(128),
            FileID INT,
            FilePath NVARCHAR(1024),
            FileGroupDesc NVARCHAR(128),
            FileSizeKB NVARCHAR(64),
            MaxSizeDesc NVARCHAR(64),
            Growth NVARCHAR(64),
            Usage NVARCHAR(64))

        -- First, the data files are handled through the
        -- DBCC SHOWFILESTATS command.
        INSERT INTO #tblServerDatabases (DBName)
        SELECT [name]
        FROM master.dbo.sysdatabases

        SELECT @CurrentDB = MIN(DBName)
        FROM #tblServerDatabases

        WHILE @CurrentDB IS NOT NULL
        BEGIN
            INSERT INTO #tblDBFilesExtendedInfo(
                FileID,
                FileGroupID,
                TotalExtents,
                UsedExtents,
                DBFileName,
                DBFilePath)
            EXEC ('USE [' + @CurrentDB + '] DBCC SHOWFILESTATS')

            UPDATE #tblDBFilesExtendedInfo
            SET DBName = @CurrentDB,
                DBFileType = 'Data File'
            WHERE DBName IS NULL

            -- Run sp_helpfile in order to get the log file data.
            INSERT INTO #tblDBFilesBasicInfo(
                DBFileName,
                FileID,
                FilePath,
                FileGroupDesc,
                FileSizeKB,
                MaxSizeDesc,
                Growth,
                Usage)
            EXEC ('USE [' + @CurrentDB + '] EXEC sp_helpfile ')

            UPDATE #tblDBFilesBasicInfo
            SET DBName = @CurrentDB
            WHERE DBName IS NULL

            SELECT @CurrentDB = MIN(DBName)
            FROM #tblServerDatabases WITH (NOLOCK)
            WHERE DBName > @CurrentDB
        END

        -- Update the total file size, used and free space, based on the
        -- extents information returned from DBCC SHOWFILESTATS.
        UPDATE #tblDBFilesExtendedInfo
        SET [DBFileName] = RIGHT(DBFilePath, CHARINDEX('\', REVERSE(DBFilePath)) - 1),
            [TotalFileSize(MB)] = CAST(((TotalExtents*64) / 1024.00) AS MONEY),
            [TotalUsed(MB)] = CAST(((UsedExtents*64) / 1024.00) AS MONEY),
            [TotalFree(MB)] = CAST(((TotalExtents*64) / 1024.00) AS MONEY)
                              - CAST(((UsedExtents*64) / 1024.00) AS MONEY),
            [SpaceUsed(%)] = CASE
                                 WHEN CAST(((TotalExtents*64) / 1024.00) AS MONEY) = 0.0 THEN 0.0
                                 ELSE (CAST(((UsedExtents*64) / 1024.00) AS MONEY)*100)
                                      / CAST(((TotalExtents*64) / 1024.00) AS MONEY)
                             END

        -- We are now done with the data file statuses, and we shall move
        -- on to get the log files info, by using DBCC SQLPERF(LOGSPACE).
        INSERT INTO #tblDBFilesExtendedInfo (DBName, [TotalFileSize(MB)], [SpaceUsed(%)], Status)
        EXEC('DBCC SQLPERF(LOGSPACE)')

        UPDATE a
        SET [TotalUsed(MB)] = (a.[SpaceUsed(%)]/100.00)*a.[TotalFileSize(MB)],
            [TotalFree(MB)] = (1.0 - (a.[SpaceUsed(%)]/100.00))*a.[TotalFileSize(MB)],
            DBFileType = 'Log file',
            DBFilePath = b.FilePath,
            DBFileName = RIGHT(b.FilePath, CHARINDEX('\', REVERSE(b.FilePath)) - 1)
        FROM #tblDBFilesExtendedInfo a
        INNER JOIN #tblDBFilesBasicInfo b
            ON a.DBName = b.DBName
        WHERE a.DBFileType IS NULL
          AND b.Usage = 'log only'

        -- We want the logical file name as well, in case we need it for other purposes,
        -- like getting everything in a single spot to move files.
        UPDATE MyTarget
        SET MyTarget.LogicalFileName = mff.name
        FROM #tblDBFilesExtendedInfo MyTarget
        INNER JOIN sys.master_files mff
            ON MyTarget.DBFilePath = mff.physical_name

        SET NOCOUNT OFF

        -- That's it. We now need to return a readable recordset.
        SELECT DBName,
               DBFileType,
               LogicalFileName,
               DBFileName,
               DBFilePath,
               [TotalFileSize(MB)],
               [TotalUsed(MB)],
               [SpaceUsed(%)],
               [TotalFree(MB)]
        FROM #tblDBFilesExtendedInfo WITH (NOLOCK)
        ORDER BY DBName ASC, DBFileType ASC, FileGroupID ASC, FileID ASC

        -- Cleanup
        IF OBJECT_ID('tempdb..#tblServerDatabases', 'U') IS NOT NULL
            DROP TABLE #tblServerDatabases
        IF OBJECT_ID('tempdb..#tblDBFilesExtendedInfo', 'U') IS NOT NULL
            DROP TABLE #tblDBFilesExtendedInfo
        IF OBJECT_ID('tempdb..#tblDBFilesBasicInfo', 'U') IS NOT NULL
            DROP TABLE #tblDBFilesBasicInfo
    END -- PROC
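
    A quick usage note (this call is mine, not part of the original post): once the procedure has been created, a single call returns one row per data and log file for every database on the instance, and the grid can be copied straight into Excel.

    EXEC dbo.sp_dbspaceused;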

    Lowell



• IF OBJECT_ID('tempdb.dbo.#db_sizes') IS NOT NULL
      DROP TABLE #db_sizes

  CREATE TABLE #db_sizes (
      db_name varchar(100) NOT NULL,
      data_size_kb bigint NULL,
      log_size_kb bigint NULL,
      data_file_count smallint NULL,
      log_file_count smallint NULL
      )
  CREATE CLUSTERED INDEX db_sizes__CL ON #db_sizes ( db_name );

  DECLARE @db_name varchar(100);
  DECLARE @source_database_id smallint;
  DECLARE @sql_template varchar(8000);
  DECLARE @sql varchar(8000);
  DECLARE @print_sql bit;
  DECLARE @exec_sql bit;

  SET @print_sql = 0
  SET @exec_sql = 1

  SET @sql_template = '
  INSERT INTO #db_sizes (
      db_name,
      data_size_kb,
      log_size_kb,
      data_file_count,
      log_file_count )
  SELECT ''$db$'',
      SUM(CASE WHEN df.type_desc = ''LOG'' THEN 0 ELSE size * 8 END) AS data_size_kb,
      SUM(CASE WHEN df.type_desc = ''LOG'' THEN size * 8 ELSE 0 END) AS log_size_kb,
      SUM(CASE WHEN df.type_desc = ''LOG'' THEN 0 ELSE 1 END) AS data_file_count,
      SUM(CASE WHEN df.type_desc = ''LOG'' THEN 1 ELSE 0 END) AS log_file_count
  FROM [$db$].sys.database_files df
  '

  DECLARE cursor_dbs CURSOR LOCAL STATIC FOR
  SELECT d.name, d.source_database_id
  FROM sys.databases d
  WHERE d.state_desc = 'ONLINE'
  ORDER BY d.name;

  OPEN cursor_dbs;

  WHILE 1 = 1
  BEGIN
      FETCH NEXT FROM cursor_dbs INTO @db_name, @source_database_id;
      IF @@FETCH_STATUS <> 0
          BREAK;
      SET @sql = REPLACE(@sql_template, '$db$', @db_name)
      IF @print_sql = 1
          PRINT @sql
      IF @exec_sql = 1
          EXEC(@sql)
  END --WHILE

  CLOSE cursor_dbs;
  DEALLOCATE cursor_dbs;

  SELECT *
  FROM #db_sizes
  ORDER BY db_name
  --COMPUTE not allowed on SQL2012+
  --COMPUTE SUM(data_size_kb), SUM(log_size_kb), SUM(data_file_count), SUM(log_file_count)
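
  If the grand totals that COMPUTE used to provide are still wanted, one possible substitute (a sketch over the same temp table, not part of the original script) is GROUP BY ROLLUP:

  SELECT ISNULL(db_name, 'TOTAL') AS db_name,
         SUM(data_size_kb)        AS data_size_kb,
         SUM(log_size_kb)         AS log_size_kb,
         SUM(data_file_count)     AS data_file_count,
         SUM(log_file_count)      AS log_file_count
  FROM #db_sizes
  GROUP BY ROLLUP(db_name)
  ORDER BY GROUPING(db_name), db_name;  -- grand total row sorts last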


  • Excellent - thank you for the help - much appreciated!:-)

• Why not just do a SELECT SizeMB = Size/128.0, * FROM sys.Master_Files and call it a day?
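
    Since the original question asked for MDF and LDF sizes separated per database, a minimal variation on that query (my sketch, same catalog view) pivots on file type:

    SELECT DB_NAME(database_id) AS DatabaseName,
           SUM(CASE WHEN type_desc = 'ROWS' THEN size END) / 128.0 AS DataSizeMB,
           SUM(CASE WHEN type_desc = 'LOG'  THEN size END) / 128.0 AS LogSizeMB
    FROM sys.master_files
    GROUP BY database_id
    ORDER BY DatabaseName;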

    --Jeff Moden



  • Jeff Moden (8/28/2015)


    Why not just do a SELECT SizeMB = Size/128.0, * FROM sys.Master_Files and call it a day?

    I thought sys.master_files was not always kept current, particularly for tempdb.

    As to size, some people may care about those less than 1MB. If they don't, it's very easy to round up/down after the fact, but obviously impossible to add detail size data that's already been rounded away.
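
    One way to check how far off it is on a live instance is to compare master's copy of the metadata with the database's own catalog, e.g. for tempdb (illustrative sketch only):

    SELECT mf.name,
           mf.size / 128.0 AS size_mb_per_master_files,
           df.size / 128.0 AS size_mb_per_database_files
    FROM sys.master_files mf
    INNER JOIN tempdb.sys.database_files df
            ON df.file_id = mf.file_id
    WHERE mf.database_id = DB_ID('tempdb');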


  • ScottPletcher (8/28/2015)


    Jeff Moden (8/28/2015)


    Why not just do a SELECT SizeMB = Size/128.0, * FROM sys.Master_Files and call it a day?

    I thought sys.master_files was not always kept current, particularly for tempdb.

    As to size, some people may care about those less than 1MB. If they don't, it's very easy to round up/down after the fact, but obviously impossible to add detail size data that's already been rounded away.

    Dividing by 128.0 does give 6 decimal places of resolution, with the understanding that the answer is in binary MB (1024*1024 bytes). For those that don't know, there are 128 pages in a binary MB, each of which is a binary 8K, or 8,192 bytes.

    SELECT 1/128.0;

    If you want actual bytes, cast the size as BIGINT and multiply by 8192 (the number of bytes in a page). If you want decimal MB (1,000*1,000 bytes), multiply the size by 0.008192.
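
    Put together, the same conversions can be shown side by side in one query (aliases and layout are mine):

    SELECT DB_NAME(database_id)        AS DatabaseName,
           name                        AS LogicalName,
           size / 128.0                AS SizeBinaryMB,  -- 128 x 8 KB pages per binary MB
           CAST(size AS BIGINT) * 8192 AS SizeBytes,     -- 8,192 bytes per page
           size * 0.008192             AS SizeDecimalMB  -- bytes / 1,000,000
    FROM sys.master_files;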

    As for the claim that sys.master_files isn't always kept up to date, I suppose you're referring to the note on the sys.master_files page in BOL, which states...

    [font="Arial Black"]Note:[/font]

    When you drop or rebuild large indexes, or drop or truncate large tables, the Database Engine defers the actual page deallocations, and their associated locks, until after the transaction commits. Deferred drop operations do not release allocated space immediately. Therefore, the values returned by sys.master_files immediately after dropping or truncating a large object may not reflect the actual disk space available. For more information about deferred allocations, see Dropping and Rebuilding Large Objects.

    I'm not sure what it is that they're actually talking about because we all know that even if you drop every object from the database, it will not change the size of the MDF or LDF files. Only "shrink database" or "shrink file" operations will decrease the size of the underlying files. I can see it getting behind if growth is in process but I've never seen it stay far behind.

    There's also another "problem" associated with sys.master_files but I'm not sure that it's accurately reported anywhere else, either (mostly because I don't use them). SIZE for a snapshot does not indicate the actual size of the snapshot. It indicates the maximum size that the snapshot can ever use.

    The only way that I know of to get truly up to date file sizes is to make a call to the operating system with the understanding that the answer is only valid for that very instant and could change in the very next instant.
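
    For completeness, one such call to the operating system from T-SQL is xp_cmdshell (disabled by default; the folder below is only a placeholder, so adjust it to your data directory):

    -- rough sketch only: requires xp_cmdshell to be enabled on the instance
    EXEC master.sys.xp_cmdshell 'dir /-C "D:\SQLData\*.mdf" "D:\SQLData\*.ldf"';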

    --Jeff Moden



• Jeff Moden (8/28/2015)


    ...The only way that I know of to get truly up to date file sizes is to make a call to the operating system with the understanding that the answer is only valid for that very instant and could change in the very next instant.

    I've seen it be off after autogrowth, esp. for tempdb. sys.master_files probably isn't intended to be up to date, so that's not necessarily unexpected.

