Right Size for LDF physical disk

  • Dear friends,
    Please help me decide what action to take. The facts are:
    A big database: 1.5 terabytes in size, recovery model SIMPLE. The MDF drive has 12% free space, but the LDF drive has 0% free; only 9 MB remain on that physical disk. DBCC SQLPERF(LOGSPACE) reports only 0.1333386% of the log space used. The next autogrowth is enabled at 262.144 MB, the same as the MDF.
    Should I add more physical space to the disk where the LDF is located, in spite of the very low internal use?
    Am I risking something or forgetting something?
    Your recommendations, please.

  • luismarinaray - Friday, August 25, 2017 1:44 PM

    Probably both!

    The metric you need to capture is growth of the files.  
    If the data file is growing, then when will it run out of space based upon the growth history?
    The log file will grow and shrink depending upon usage. There are any number of things that cause the log to expand.  

    The key takeaway is this: if you are certain that a growth will not occur, then you are safe.  If you cannot say with certainty that a growth will not occur, then you probably need to add space.
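    As a rough sketch of capturing that metric (dbo.FileSizeHistory is just a made-up name for illustration; create the table once and schedule the INSERT from an Agent job):

    -- Sketch: snapshot file sizes so you can trend growth over time.
    IF OBJECT_ID('dbo.FileSizeHistory') IS NULL
        CREATE TABLE dbo.FileSizeHistory
        (
             SampleTime    DATETIME2(0)  NOT NULL
            ,DatabaseName  SYSNAME       NOT NULL
            ,FileName      SYSNAME       NOT NULL
            ,TypeDesc      NVARCHAR(60)  NOT NULL
            ,SizeMB        DECIMAL(18,2) NOT NULL
        );

    INSERT dbo.FileSizeHistory (SampleTime, DatabaseName, FileName, TypeDesc, SizeMB)
    SELECT SYSDATETIME(), DB_NAME(database_id), name, type_desc, size / 128.   -- size is in 8KB pages
      FROM sys.master_files;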

    Are these on separate drives?  If they are not, it may be easier to create a new drive for the logs and run an ALTER DATABASE statement to move the log file:
    ALTER DATABASE YourDB MODIFY FILE( NAME = YourLogFile, FILENAME = 'L:\Folder\YourLogFile.ldf')
    Then stop SQL Server, copy the log file to the new location, and start SQL Server again.

    Also, though it may not make sense long term, you can shrink the log file.  It will likely just grow again, but that will buy you some breathing room until you get a better handle on what is happening.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Thanks, Michael, for your response.
    Yes, they are on different disks. So I will not have nightmares?
    Thanks again

  • I would suggest monitoring the log growth (you can use Alerts together with Database Mail) to get an idea of how much growth to expect at peak business hours. I won't advise shrinking unless you are sure the space won't be used again for a long time, since SQL Server reuses the VLFs.
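    If it helps, here is a minimal sketch of a check you could schedule (for example in an Agent job) to build a history of log usage over time. It assumes SQL Server 2012 or later for sys.dm_db_log_space_usage; on older versions DBCC SQLPERF(LOGSPACE) reports the same numbers.

    -- Sketch: capture the current log size and usage for the current database.
    -- Run it in (USE) the 1.5TB database and schedule it to build a history.
    SELECT  SYSDATETIME()                       AS SampleTime
           ,DB_NAME()                           AS DatabaseName
           ,total_log_size_in_bytes / 1048576.  AS LogSizeMB
           ,used_log_space_in_bytes / 1048576.  AS LogUsedMB
           ,used_log_space_in_percent           AS LogUsedPct
      FROM sys.dm_db_log_space_usage;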

  • luismarinaray - Friday, August 25, 2017 1:44 PM

    What size is the actual LDF file and what size is the drive it's on?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff,
    LDF file 507.965 MB
    Drive 507.974 MB
    free 9 MB
    Thanks for your suggestions

  • luismarinaray - Saturday, August 26, 2017 5:49 PM

    If I'm reading that correctly, that's only half a gigabyte, and it seems that it has been slammed into the stops by the similar size of the drive that it's on.  For a 1.5TB database, I'm thinking that's way too small.

    My recommendation would be to put that particular LDF on a 100GB drive and see what it grows to with a growth setting of 8GB.  I'd also make sure that I had some code in place to capture what's going on when it does grow (something like the sketch at the end of this post) so that you can check the code responsible for the growth for accidental many-to-many joins.

    If it turns out that I'm reading the numbers incorrectly (if the decimal point is actually being used as a thousands separator), then that's a half-terabyte file and you have a real problem, especially for the length of time it would take to do a restore.
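    One rough way to capture that (just a sketch, and it assumes the default trace is still enabled, which it is out of the box) is to read the autogrow events that the default trace already records:

    -- Sketch: read autogrow events from the default trace.
    DECLARE @TracePath NVARCHAR(260);
    SELECT @TracePath = path FROM sys.traces WHERE is_default = 1;

    SELECT  te.name                   AS EventName   -- "Log File Auto Grow" / "Data File Auto Grow"
           ,t.DatabaseName
           ,t.FileName
           ,t.StartTime
           ,t.Duration / 1000         AS DurationMS  -- trace Duration is in microseconds
           ,t.IntegerData * 8 / 1024  AS GrowthMB    -- IntegerData = growth in 8KB pages
      FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
      JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
     WHERE t.EventClass IN (92, 93)                  -- 92 = data file, 93 = log file autogrow
     ORDER BY t.StartTime DESC;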

    --Jeff Moden

  • Hi,
    Yes, dear friend, 500 GB LDF file and 500 GB physical drive.
    Thanks again

  • luismarinaray - Monday, August 28, 2017 12:11 PM

    Ugh!  Just to do a quick litmus strip test, how big is your largest index (almost always a clustered index)?
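    If you need a quick way to check, here's a rough sketch that lists the largest indexes by reserved size (run it in the 1.5TB database; the sizes come from sys.dm_db_partition_stats):

    -- Sketch: top 10 indexes by reserved size in the current database.
    SELECT TOP (10)
            OBJECT_SCHEMA_NAME(ps.object_id) + '.' + OBJECT_NAME(ps.object_id) AS TableName
           ,i.name                                   AS IndexName
           ,i.type_desc                              AS IndexType   -- CLUSTERED, NONCLUSTERED, HEAP
           ,SUM(ps.reserved_page_count) * 8 / 1024.  AS ReservedMB  -- 8KB pages -> MB
      FROM sys.dm_db_partition_stats AS ps
      JOIN sys.indexes AS i
        ON  i.object_id = ps.object_id
        AND i.index_id  = ps.index_id
     GROUP BY ps.object_id, i.name, i.type_desc
     ORDER BY ReservedMB DESC;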

    --Jeff Moden

  • Jeff,
    Yes, a primary, unique, clustered index of 146 MB
    Thanks

  • luismarinaray - Tuesday, August 29, 2017 9:41 AM

    You sure that's not GB instead of MB?  I ask because it seems odd that a 1.5 Terabyte DB wouldn't have at least one CI in the tens of GB.  Could explain a lot, though.  What's the largest HEAP in the database?
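    Same idea as the index check, just a sketch filtered to the heaps (index_id = 0):

    -- Sketch: top 10 heaps by reserved size in the current database.
    SELECT TOP (10)
            OBJECT_SCHEMA_NAME(ps.object_id) + '.' + OBJECT_NAME(ps.object_id) AS HeapName
           ,SUM(ps.reserved_page_count) * 8 / 1024. AS ReservedMB  -- 8KB pages -> MB
      FROM sys.dm_db_partition_stats AS ps
     WHERE ps.index_id = 0                                         -- 0 = heap
     GROUP BY ps.object_id
     ORDER BY ReservedMB DESC;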

    --Jeff Moden

  • Hey,
    Just checked again, 147 GB :crying:
    Largest heap table 1 GB
    :doze:
    Thanks for your suggestions

  • luismarinaray - Tuesday, August 29, 2017 11:57 AM

    A 147GB CI isn't a problem.  I've got a couple myself that are well over 200GB and I know some folks that have 1TB+ CIs.

    A couple more questions to try to discover what may be going on.

    1. Do you have index maintenance routines running?
    2. Have you done a large number of deletes, inserts, or updates in a single batch since the inception of this database?
    3. Have you added any columns to the tables in the database?
    4. Would you run the following code (make sure you change the database name to the name of your 1.5TB database) and post the results, please?


     SELECT type_desc
            ,SizeMB = size/128.0
            ,max_size
            ,growth
            ,is_percent_growth
            ,state_desc
       FROM sys.master_files
      WHERE database_id = DB_ID('putyourdatabasenamehere')
    ;

    5. How long do the top 5 queries take to run and do any of them have a DISTINCT or GROUP BY in them?
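    For #5, one rough way to pull those numbers from the plan cache (just a sketch; the times in sys.dm_exec_query_stats are in microseconds and only cover statements still in cache):

    -- Sketch: top 5 cached statements by total elapsed time.
    SELECT TOP (5)
            qs.execution_count
           ,qs.total_elapsed_time / 1000 AS TotalElapsedMS
           ,qs.last_elapsed_time  / 1000 AS LastElapsedMS
           ,SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
               (CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2 + 1) AS StatementText
      FROM sys.dm_exec_query_stats AS qs
     CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
     ORDER BY qs.total_elapsed_time DESC;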

    --Jeff Moden

  • Jeff,

    1. Yes, weekly DBCC checks plus index reorganize/rebuild (if the maintenance window allows it).
    2. Yes, that is the normal behavior of the app.
    3. No
    4.  type_desc    SizeMB             max_size    growth    is_percent_growth    state_desc
        ROWS         1971849.500000     -1          262144    0                    ONLINE
        LOG          507790.187500      -1          262144    0                    ONLINE
    5. The top queries do not include DISTINCT or GROUP BY; they are of this form:

    SELECT with 164,576,067 executions for top query #1, total elapsed time 13,498, last elapsed time 0
    INSERT with 5,482,022 executions for top query #2, total elapsed time 66,520, last elapsed time 0
    UPDATE with 4,066,625 executions for top query #3, total elapsed time 5,930, last elapsed time 0
    SELECT with 656,036 executions for top query #4, total elapsed time 506, last elapsed time 0
    INSERT with 1,494,117 executions for top query #5, total elapsed time 22,006, last elapsed time 0

    Any ideas?
    Thanks

    I could certainly be wrong, but from what I'm seeing, you don't actually have a problem with your log file drive.  There's no percentage growth; it's set to grow by a fixed 262144 pages, which works out to a nice 2GB.  You don't have a super monster index that would beat up on the log file during reorganizes (rebuilds will be minimally logged because you're in the SIMPLE recovery model).

    It looks like someone set up the drive and then allocated all of it to the log file.  Except for the time it takes to format the VLFs in such a large log file when trying to restore the database, that's not such a bad idea.

    I'd say either leave the drive alone, because it's ready to serve just about any request made of it, or shrink the log file down to nearly nothing, regrow it to 2GB (a sketch of that is at the end of this post), and let mother nature grow it back to a "normal" size so you finally see what the system will really use.

    To answer your original question... no... I don't believe that you need to add any more space to the drive.  I think someone just did a full drive allocation/assignment.
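    A minimal sketch of that shrink-and-regrow, assuming the log's logical file name is YourLogFile (check sys.database_files in your database for the real name):

    USE YourDB;   -- your 1.5TB database
    GO
    -- Shrink the log file as far as it will go (the target size is in MB)
    DBCC SHRINKFILE (YourLogFile, 1);
    GO
    -- Regrow it to 2GB and keep a fixed 2GB growth increment
    ALTER DATABASE YourDB
        MODIFY FILE (NAME = YourLogFile, SIZE = 2048MB, FILEGROWTH = 2048MB);
    GO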

    --Jeff Moden
