• Indy Mike (2/15/2016)


    I've come to understand that tables below a certain size will not use the indexes assigned to them, because the table is more efficiently joined by loading it entirely into memory. ..... Is there any way to determine on my system's unique configuration of memory and processors what that cutoff should really be?

    There are a number of different variables that will affect whether or not the optimiser decides to use an index, but I really doubt that the amount of RAM comes into play. The optimiser makes a number of assumptions in the interest of producing a plan quickly, and I'm sure the details of the optimiser can and will change between versions.

    A rule of thumb (and rules of thumb are usually wrong) is that you're unlikely to see a non-covering index used unless you are selecting less than 2% of the table, and possibly as little as 0.5%. I have a feeling that one of Itzik Ben-Gan's "Inside SQL Server" books covers this in good detail.

    EDIT:- I've had a look at my old copy of "Inside SQL Server 2005 - T-SQL Querying" and in Chapter 3, page 159, Itzik demonstrates how on his Orders table the selectivity of the query had to be as low as 0.72% before the index was used. He doesn't demonstrate the calculation and/or the assumptions made by the optimiser in its decision, but a trial and error test is quicker and way more reliable. If you really want to know how the optimiser is making the decision, you may be able to find something by Kimberly Tripp about it.
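
    A quick sketch of the kind of trial and error test I mean (table and column names here are made up for illustration; assume dbo.Orders has a non-clustered, non-covering index on OrderDate):

    ```sql
    -- Show logical reads so the two plans can be compared
    SET STATISTICS IO ON;

    -- Narrow range: high selectivity, likely an index seek plus key lookups
    SELECT OrderID, CustomerID, TotalDue
    FROM dbo.Orders
    WHERE OrderDate >= '20160101' AND OrderDate < '20160102';

    -- Wide range: low selectivity, likely a clustered index scan
    SELECT OrderID, CustomerID, TotalDue
    FROM dbo.Orders
    WHERE OrderDate >= '20160101' AND OrderDate < '20160401';
    ```

    Widen or narrow the date range until the actual execution plan flips from a seek to a scan; the selectivity at that point is your tipping point for that table on your hardware, which is far more reliable than trying to reverse-engineer the optimiser's formula.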