Transaction Log: VLF Fragmentation

  • dmbsseller - Thursday, November 9, 2017 1:08 AM:

    Hi guys!

    During testing I determined the true size of my transaction log file - the size it reaches after the maintenance plan runs, including index rebuilds and statistics updates: 88 GB.
    I calculated what I thought were the best VLF size and autogrowth settings for my transaction log file.
    I decided to set the initial size of the transaction log file to 30 GB and the autogrowth increment to 1875 MB:

    -- Set the initial log size to 30 GB and the autogrowth increment to 1875 MB.
    ALTER DATABASE test
    MODIFY FILE
        (
        NAME = test_log,
        SIZE = 30000MB,
        FILEGROWTH = 1875MB
        );
    GO
    -- Inspect the resulting VLF layout.
    DBCC LOGINFO;

    After I modified the transaction log file I got 16 VLFs of 1875 MB each, and the physical file was 30 GB - all fine.
    I naively assumed that the next growth would add a second batch of 16 VLFs of 1875 MB each, giving 32 VLFs and a 60 GB physical file, and that the third batch would give 48 VLFs of 1875 MB each and a 90 GB physical file.
    But something went wrong! After the second growth I had 32 VLFs, but each new VLF was 117 MB rather than 1875 MB, so the physical file was 31.8 GB instead of 60 GB.
    I have no idea why the second batch of VLFs came out at 117 MB instead of 1875 MB.

    Does anyone know how to achieve this:
    16 VLFs + 16 VLFs + 16 VLFs = 48 VLFs, where
    each VLF in each batch is 1875 MB
    and the physical file is 90 GB?
    PS: I think this is the best configuration for my database, and especially for my transaction log file.
    Thanks for any help!

  • anthony.green - Thursday, November 9, 2017 1:19 AM, replying to dmbsseller:

    As the file growth is set to 1875 MB, the file will only grow in 1875 MB chunks.

    So you start at 30 GB, then 31.875 GB, then 33.75 GB, then 35.625 GB, and so on. FILEGROWTH controls how much the physical file grows each time, not how big each VLF is - and each 1875 MB growth is itself carved into 16 new VLFs of about 117 MB (1875 / 16), which is exactly what you saw.

    If you want to minimize the VLF count, the best growth increment is 8000 MB (not 8192 MB).

    So shrink it down to 0 MB, then grow to 8000 MB, then 16000 MB, then 24000 MB, and so on; that way you keep the VLF count down to a minimum. A sketch of the full sequence follows the link below.

    https://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/
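
    A minimal sketch of that shrink-and-regrow sequence, reusing the database and file names from the post above (exact results depend on where the active portion of the log sits when you shrink, so a log backup - or a CHECKPOINT in SIMPLE recovery - may be needed first):

    USE test;
    GO
    -- Shrink the log as far as the active VLFs allow.
    DBCC SHRINKFILE (test_log, 0);
    GO
    -- Regrow in 8000 MB steps (not 8192 MB, which hits a known VLF sizing bug),
    -- setting the file to each new total size explicitly rather than relying
    -- on autogrowth.
    ALTER DATABASE test MODIFY FILE (NAME = test_log, SIZE = 8000MB);
    ALTER DATABASE test MODIFY FILE (NAME = test_log, SIZE = 16000MB);
    ALTER DATABASE test MODIFY FILE (NAME = test_log, SIZE = 24000MB);
    GO
    -- Inspect the resulting VLF layout.
    DBCC LOGINFO;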

  • dmbsseller, replying to anthony.green:

    Thanks for the reply - yes, minimizing the VLF count is exactly what I'm after.
    I'll restore the backup and try your recommendations.
    Can the log really be shrunk to 0 MB? I thought the minimum size was 1 MB.
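
    For what it's worth, DBCC SHRINKFILE accepts 0 as a target size, but the engine only shrinks as far as the active log allows - a transaction log always keeps at least two VLFs, so in practice it bottoms out at a few MB rather than a literal 0 MB. A minimal sketch:

    -- Ask for the smallest possible file; SQL Server stops at the minimum
    -- the active log permits, not literally 0 MB.
    DBCC SHRINKFILE (test_log, 0);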

  • dmbsseller, replying to anthony.green:

    All fine, thanks! Though I did it a little differently:
    I shrank the log file to 1 MB, then grew it three times by 32000 MB (to 32000 MB, then 64000 MB, then 96000 MB). Eventually I got 16 VLFs + 16 VLFs + 16 VLFs = 48 VLFs, each VLF 2 GB, and a 96 GB physical transaction log file.
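
    One way to verify the resulting layout without eyeballing DBCC LOGINFO output is the sys.dm_db_log_info DMV; note that this assumes SQL Server 2016 SP2 or later - on older builds, DBCC LOGINFO and a manual count is the way:

    -- Summarize the VLF layout: count, size range, and total log size.
    SELECT COUNT(*)                  AS vlf_count,
           MIN(vlf_size_mb)          AS min_vlf_mb,
           MAX(vlf_size_mb)          AS max_vlf_mb,
           SUM(vlf_size_mb) / 1024.0 AS log_size_gb
    FROM sys.dm_db_log_info(DB_ID(N'test'));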

  • Jeff Moden - Thursday, November 9, 2017 8:45 AM:

    If you actually have a need for a 90 GB log file, you might be doing something wrong, even for large systems.

    --Jeff Moden

    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • dmbsseller - Thursday, November 9, 2017 10:58 PM, replying to Jeff Moden:

    Maybe you're right! But...
    At the moment my database has a 236 GB data file and an 88 GB transaction log file.
    I had set the transaction log to 2 GB and all was fine, because I also take a transaction log backup every 30 minutes.
    But on Sundays a maintenance plan reorganizes/rebuilds indexes and updates statistics; it runs for 2 hours and writes 88 GB to the transaction log.
    PS: I don't take transaction log backups during Sunday - maybe that's the problem.

  • John Mitchell-245523 - Friday, November 10, 2017 1:10 AM, replying to dmbsseller:

    PS: I don't take transaction log backups during Sunday - maybe that's the problem.

    Definitely - that's the problem. Why would you not have your log backups scheduled day and night, seven days a week, especially if you're undertaking log-heavy work such as index maintenance?

    John
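
    A minimal sketch of such a log backup - the database name comes from the thread, while the path and options are illustrative; in practice this would be an Agent job firing every 30 minutes, every day of the week:

    -- Back up the transaction log so inactive VLFs can be reused
    -- instead of forcing the file to grow.
    BACKUP LOG test
    TO DISK = N'D:\Backups\test_log.trn'
    WITH COMPRESSION, CHECKSUM;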

  • Jeff Moden, replying to John Mitchell:

    Heh... on that note, I'll state that doing index maintenance is counterproductive and the actual source of many performance problems. I've not done any index maintenance on my production boxes since 17 Jan 2016 (coming up on 2 years in a couple of months); performance got better in the first 3 months of doing nothing and hasn't degraded since. It also got rid of the "Monday Morning Blocking Episodes".

    --Jeff Moden

  • Jeff Moden - Friday, November 10, 2017 11:58 AM, replying to dmbsseller:

    That's the reason why you have such a large transaction log. You need to be taking log backups on a regular schedule every day. And, as I indicated above, you're actually wasting time and resources by doing index maintenance. The kind of fragmentation (logical) that you're fixing doesn't matter, and the cure is worse than the disease.

    --Jeff Moden

  • dmbsseller - Friday, November 10, 2017 11:06 PM, replying to Jeff Moden:

    Thanks for the advice, guys!
    So, if I've understood you correctly:
    1) Right now I have 908 VLFs and an 88 GB physical log file. I'll shrink the physical file down to 1 MB, set autogrowth to 1000 MB, and end up with 8 new VLFs of 1000 MB each (excluding the initial VLF of under 1 MB), for an 8 GB physical file.
    2) Take transaction log backups every day, once per hour or once per half-hour.
    3) Disable the maintenance plan that rebuilds/reorganizes indexes and updates statistics, and monitor the situation (see the sketch below). (Actually I use Ola Hallengren's scripts to decide automatically whether to reorganize or rebuild indexes.)
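
    A small sketch for the monitoring part - this query is an assumption about what to watch, not something prescribed in the thread. log_reuse_wait_desc shows why the log cannot clear; LOG_BACKUP means the next log backup will free VLFs for reuse:

    -- Check what, if anything, is preventing log truncation.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'test';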

  • GilaMonster - Saturday, November 11, 2017 10:54 AM, replying to dmbsseller:

    3) Disable the maintenance plan that rebuilds/reorganizes indexes and updates statistics, and monitor the situation. (Actually I use Ola Hallengren's scripts to decide automatically whether to reorganize or rebuild indexes.)

    No, no, no!!!! Do NOT disable statistics maintenance unless you want some horrible performance problems very soon.

    Some people don't do index maintenance, and if that works for their system, great. Disabling statistics maintenance, however, is a really, really bad idea. The automatic stats updates are not usually sufficient.
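
    A minimal sketch of keeping statistics maintenance while skipping index maintenance - sp_updatestats is the blunt built-in option; Ola Hallengren's IndexOptimize, which the poster already uses, can do the same more selectively:

    -- Refresh every statistic in the current database that has had row
    -- modifications since its last update (default sampling).
    EXEC sys.sp_updatestats;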

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • dmbsseller - Sunday, November 12, 2017 12:45 AM, replying to GilaMonster:

    Persuaded, I will not do it 🙂

  • Jeff Moden, replying to dmbsseller:

    Just to add my 2 cents... Gail is absolutely correct. Never disable statistics maintenance. If anything, you might want to be a bit more aggressive about statistics maintenance on indexes whose leading column is an ever-increasing value such as a date, an IDENTITY column, or a sequential GUID (for example).
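
    A sketch of how you might find those candidates, assuming sys.dm_db_stats_properties is available (SQL Server 2008 R2 SP2 / 2012 SP1 and later):

    -- Statistics with the most modifications since their last update;
    -- ever-increasing leading columns (dates, IDENTITY values) tend to pile up here.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name                   AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE sp.modification_counter > 0
      AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    ORDER BY sp.modification_counter DESC;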

    --Jeff Moden
