• fkeuris (7/29/2014)


    If the indexes are created and the number of reads is still that high, it could also mean that the statistics are out of date, so the optimizer is not picking the most cost-effective plan
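    To follow up on the stale-statistics angle, here's a minimal sketch of how you could check when the stats on a table were last updated and how much the data has changed since then. It assumes SQL Server 2008 R2 SP2 or later for sys.dm_db_stats_properties, and dbo.SomeLargeTable is just a placeholder name.

        -- Check statistics age and churn for a table (dbo.SomeLargeTable is a placeholder).
        SELECT  s.name                  AS stats_name,
                sp.last_updated,
                sp.rows,
                sp.modification_counter
        FROM    sys.stats AS s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
        WHERE   s.object_id = OBJECT_ID('dbo.SomeLargeTable');

        -- If they're badly out of date, a full-scan update is the heavy-handed fix.
        UPDATE STATISTICS dbo.SomeLargeTable WITH FULLSCAN;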

    It could also be "normal". The 2 billion reads is trivial on large systems, especially if there's a lot of batch processing. SQL Server can't do a thing with data unless it's in memory. Some folks' goal is to have enough memory for the whole database to live in memory, which would mean that almost everything would be logical reads and very little would be physical.
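    As a rough way to see how close you are to that goal, the sketch below counts how many 8KB pages each database currently has sitting in the buffer pool. On a big instance this query itself can take a moment to run.

        -- How much of each database is currently cached in memory (8KB pages, so /128 = MB).
        SELECT  DB_NAME(database_id)    AS database_name,
                COUNT(*)                AS cached_pages,
                COUNT(*) / 128          AS cached_mb
        FROM    sys.dm_os_buffer_descriptors
        GROUP BY database_id
        ORDER BY cached_pages DESC;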

    Of course, it could also be because of the things you say, or it could just be crap code. It could also be from regular rebuilds or reorgs of large tables, which is a "normal" thing, as well. The overall number of reads means very little unless you can identify the source and then the cause of those reads.
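    On identifying the source, one place to start is the plan cache. The sketch below pulls the top cached statements by total logical reads; it's only as good as what's still in cache, so treat the numbers as a starting point rather than gospel.

        -- Top cached statements by total logical reads.
        SELECT TOP (20)
                qs.total_logical_reads,
                qs.execution_count,
                qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
                SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                    ((CASE qs.statement_end_offset
                          WHEN -1 THEN DATALENGTH(st.text)
                          ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM    sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;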

    I'll emphasize the crap code possibility because I've regularly run into single procs that generate tens of billions of reads (read that as tens of terabytes of I/O), either in a single run or in very short runs that occur 10,000 times in a short period of time. All of those can be fixed.
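    A sketch along the same lines for finding those procs, assuming SQL Server 2008 or later for sys.dm_exec_procedure_stats. The execution count next to the totals is what tells you whether you're looking at one monster run or thousands of small ones.

        -- Top cached procs by total logical reads, with execution counts for context.
        SELECT TOP (20)
                DB_NAME(ps.database_id)                     AS database_name,
                OBJECT_NAME(ps.object_id, ps.database_id)   AS proc_name,
                ps.execution_count,
                ps.total_logical_reads,
                ps.total_logical_reads / ps.execution_count AS avg_logical_reads
        FROM    sys.dm_exec_procedure_stats AS ps
        ORDER BY ps.total_logical_reads DESC;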

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
        Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)