• GilaMonster (1/24/2013)


    ben.brugman (1/24/2013)


    I'd personally ignore PLE during CheckDB execution. Because it reads the entire database, and because it marks the pages it brings in as 'disfavoured', meaning they'll be the first to be thrown out of cache when space is needed, PLE will be very low; it's kinda expected.

    So even if the PLE goes extremely low during CheckDB, it doesn't mean that the cache gets completely 'refreshed', and pages that are touched a lot effectively remain in the cache?

    I am still wondering how PLE is calculated.

    Suppose four pages, where the second number is the number of seconds since the page was last referenced:

    1. 200

    2. 300

    3. 400

    4. 8000

    The average page life (since last reference) is 2225.

    After reading a new page (page 4 is evicted and replaced by page 5):

    1. 200

    2. 300

    3. 400

    5. 0

    The average page life (since last reference) is 225.

    So this simple average is probably not how the PLE is calculated. And if the PLE were instead calculated from pages leaving memory (being overwritten), then with a PLE of 7 seconds and 8 GB of memory there must be an awful lot of overwriting.
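    To make the arithmetic above concrete, here is a small sketch in plain Python (this is only an illustration of the averaging idea, not how SQL Server actually computes PLE) showing how reading a single new page collapses the average:

    ```python
    # Hypothetical illustration: average "seconds since last reference"
    # over the pages in cache. NOT SQL Server's real PLE formula.

    def average_page_age(ages):
        """Average number of seconds since each page was last referenced."""
        return sum(ages) / len(ages)

    # Four pages, seconds since last reference:
    before = [200, 300, 400, 8000]
    print(average_page_age(before))   # 2225.0

    # The oldest page (8000 s) is evicted and replaced by a freshly read page (0 s):
    after = [200, 300, 400, 0]
    print(average_page_age(after))    # 225.0
    ```

    A single page replacement drops the average from 2225 to 225, which is why a naive average over cached pages would make the counter far too jumpy to be useful as-is.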

    At the moment we are looking into the PLE as a preventive measure: not because we have problems, but because we want to be able to signal problems before they occur.

    As for improvements, one of my suggestions was clustering on the most important key for some tables. Within those tables the number of pages read would be about 1/5th of the pages read now with the heap.

    Yes, more memory would help, and rewriting the queries would help as well. But rewriting the queries would be very expensive; clustering and optimising for that would be less expensive. (But I am not making the decisions :crying:)

    Thanks for your explanation about CheckDB,

    Ben