FlushCache Cleaned Up Buffs

  • Shayn Thomas

    SSCertifiable

    Points: 5603

    Good Afternoon,

I have recently been seeing temporary "drops" in connectivity to one of my SQL Server 2016 instances, and when checking the error log I am seeing the three lines below:

    FlushCache: cleaned up 3660 bufs with 2427 writes in 99016 ms (avoided 143 new dirty bufs) for db 6:0

    average writes per second: 24.51 writes/sec average throughput: 0.28 MB/sec, I/O saturation: 610, context switches 392

    last target outstanding: 9760, avgWriteLatency 95

The numbers differ, but the database ID stays the same (6:0).

I have been looking into this for a few days now and I cannot seem to find the cause, or the best way to stop it occurring and crashing the client application.

Any help would be greatly appreciated, and if I can provide more information to assist, please let me know.

    Regards

    Shayn Thomas

    ---------------------------------------------------------------------------------------
The more you know, the more you know that you don't know

  • Sue_H

    SSC Guru

    Points: 90673

    The message is from long checkpoints. One of the first things you would want to check is the disk subsystem as that's often (not always) the issue. The following is an article on the message which mostly explains the same thing:

    How It Works: When is the FlushCache message added to SQL Server Error Log?
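Alongside the disk checks, it can help to confirm which database id 6 actually is and how its checkpoints are configured. A minimal sketch (assuming your error message's db id of 6) using the documented sys.databases catalog view:

```sql
-- Identify which database id 6 refers to, and check its checkpoint configuration.
-- target_recovery_time_in_seconds = 0 means classic automatic checkpoints
-- (governed by the 'recovery interval' sp_configure setting); a non-zero value
-- means indirect checkpoints are in use, which changes how dirty pages are flushed.
SELECT name,
       database_id,
       target_recovery_time_in_seconds
FROM   sys.databases
WHERE  database_id = 6;
```

Knowing the checkpoint mode narrows down which part of the linked article applies to your instance.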

    Sue

  • Shayn Thomas

    SSCertifiable

    Points: 5603

    Hi Sue,

    Thanks for the link,

The numbers in my results/errors don't seem too high, at least not compared to the ones mentioned in the link, but I may be understanding it wrong.

Would you expect to see errors or spikes in subsystem CPU/disk reads during the times I am getting the errors?

If there is anything else I can do or run to check what might be causing this, it would be greatly appreciated.

    Thanks again

    Shayn

    ---------------------------------------------------------------------------------------
The more you know, the more you know that you don't know

  • Sue_H

    SSC Guru

    Points: 90673

I would probably capture what's going on with the IO from a SQL Server perspective by querying sys.dm_io_virtual_file_stats. You can see the results by database ID, which should really help since you're seeing the issues with one particular database. SQLSkills has a great post for capturing the information over a period of time - it just runs the same two queries separated by a time delay. You could also just query the DMV without the timed capture, which may give you some useful information on its own. Here is the script for capturing over a period of time:

    Capturing IO latencies for a period of time
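For a quick one-off look before running the timed capture, a sketch of a direct query against the DMV is below. It filters to db id 6 from the error message (an assumption - substitute your database ID), and note the counters are cumulative since the files were opened, which is why the linked script compares two snapshots:

```sql
-- One-off snapshot of cumulative IO stats per file for database id 6.
-- NULLIF avoids divide-by-zero on files with no reads or writes yet.
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM   sys.dm_io_virtual_file_stats(6, NULL) AS vfs
JOIN   sys.master_files AS mf
       ON  mf.database_id = vfs.database_id
       AND mf.file_id     = vfs.file_id;
```

High average write latency on the data files during the FlushCache windows would point back at the storage subsystem.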

    Sue

  • Shayn Thomas

    SSCertifiable

    Points: 5603

    Hi Sue,

My apologies for not replying sooner - I have been out of the office.

I will have a look into this and see what I can find.

Thanks again for your assistance so far.

I will update you when I have any further information.

    Regards

    Shayn

    ---------------------------------------------------------------------------------------
The more you know, the more you know that you don't know

  • Shayn Thomas

    SSCertifiable

    Points: 5603

    Hi All,

I have flagged this with our infrastructure team, who think that the storage subsystem was not set up to be optimized for SQL Server.

With this, and by looking into sys.dm_os_memory_cache_counters and sys.dm_exec_cached_plans, I think I am in a good position to get this sorted.
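A sketch of the kind of check the second DMV supports, in case it helps anyone hitting this thread later (a plan cache bloated with single-use ad hoc plans adds memory pressure, which can make checkpoints more expensive):

```sql
-- Summarize plan cache usage by object type; a large total for 'Adhoc'
-- with a high plan count suggests single-use plan bloat.
SELECT objtype,
       COUNT(*)                                          AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024  AS total_mb
FROM   sys.dm_exec_cached_plans
GROUP  BY objtype
ORDER  BY total_mb DESC;
```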

Thanks again, Sue - your assistance was much appreciated.

    Regards

    Shayn

    ---------------------------------------------------------------------------------------
The more you know, the more you know that you don't know

  • Sue_H

    SSC Guru

    Points: 90673

    Appreciate the update - thanks for posting back!

    Sue
