• peter.cox (2/14/2014)


    Funnily enough, I've been thinking along the same lines, but the joins are so complex (the tables at the bottom of the script join four or five others further up) that it may be tricky.

    What I've been thinking about this afternoon is the insert part of the script. Trying to insert 6m rows into a table whose indexes were rebuilt in the step just before is ridiculous; one of them even has a fill factor of 0 (i.e. 100). The longer-serving members of staff insist the indexes are needed, as the main table referenced in the select part of the insert is the same table being inserted into!
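    A common alternative to rebuilding before the load is to disable the nonclustered indexes, load, then rebuild once. A minimal sketch, with hypothetical table and index names (disabling a clustered index would make the table inaccessible, so only the nonclustered ones are touched):

    ```sql
    -- Hypothetical names: dbo.BigTable, IX_BigTable_Col1.
    -- Disable nonclustered indexes so the 6m-row insert
    -- does not have to maintain them row by row.
    ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable DISABLE;

    INSERT INTO dbo.BigTable (Col1, Col2)
    SELECT s.Col1, s.Col2
    FROM dbo.SourceTable AS s;  -- placeholder source

    -- Rebuild once, after the data is in place; this also
    -- re-enables the index.
    ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable REBUILD;
    ```

    If the select part of the insert really does read from the same table, the indexes can stay in place for the select and the cost of maintaining them during the insert can be measured separately.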

    I'll do some checking next week, with index hints, to see whether the query performance lost by removing the indexes is gained back by inserting into a heap with none.
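    One way to make that comparison concrete is to time both variants with the standard statistics switches. A rough sketch, again with hypothetical names:

    ```sql
    -- Show logical reads and elapsed/CPU time for each statement.
    SET STATISTICS IO, TIME ON;

    -- Variant A: insert into the existing indexed table.
    INSERT INTO dbo.BigTable (Col1, Col2)
    SELECT s.Col1, s.Col2
    FROM dbo.SourceTable AS s;

    -- Variant B: insert into a heap with no indexes at all
    -- (SELECT ... INTO creates a fresh heap).
    SELECT s.Col1, s.Col2
    INTO dbo.BigTable_Staging
    FROM dbo.SourceTable AS s;

    SET STATISTICS IO, TIME OFF;
    ```

    Comparing the two sets of numbers (plus the cost of any index rebuild afterwards) shows whether the heap load actually wins for the process as a whole.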

    It is likely that, because the query is so complex, you really need to break it down. Small anomalies in row-count estimates get compounded into gross mismatches, leading to suboptimal plans and significantly longer runtimes and total effort.
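    Breaking it down usually means materializing an intermediate result so the optimizer works from real row counts rather than compounded estimates. A sketch of the pattern, with made-up table names:

    ```sql
    -- Step 1: materialize an intermediate set. The temp table
    -- gets its own statistics, so the next step's estimates are
    -- based on actual row counts, not multiplied guesses.
    SELECT o.CustomerID, SUM(o.Amount) AS Total
    INTO #CustTotals
    FROM dbo.Orders AS o
    GROUP BY o.CustomerID;

    -- Step 2: join the small, known-size intermediate set.
    SELECT c.Name, t.Total
    FROM #CustTotals AS t
    JOIN dbo.Customers AS c
      ON c.CustomerID = t.CustomerID;
    ```

    Two or three simple steps like this often beat one five-way join precisely because each plan is built on solid cardinalities.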

    I just posted to another thread on SSC.com about how rarely I have found indexes on temp tables to make the overall process more efficient than leaving them off. Like you, I also question having the indexes on the table in the first place before loading millions of rows of data.
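    The point generalizes to temp tables: an index there only pays off if later steps do enough seeks to amortize the build cost, which is worth measuring rather than assuming. A sketch:

    ```sql
    -- Building this index has a cost; a single scan of an
    -- unindexed #temp is often cheaper than build + seeks.
    -- Time the downstream steps with and without it.
    CREATE INDEX IX_CustTotals ON #CustTotals (CustomerID);
    ```

    If only one subsequent statement touches the temp table, the index almost never earns its keep.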

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on Google's mail service