Elimination of Table spools

What's the best way to eliminate Lazy Table Spools? I have two in a particular query that I'm trying to optimise.

Looking at the XML node data, one of the Lazy Spools has an estimated cost of 30,555 and an EstimateRewinds of 298,457,000; the other is even worse, with an estimated cost of 63,531.8 and estimated rewinds of 620,791,000.

I don't think indexing is going to solve the problem, as the steps preceding this are doing clustered index seeks.

I think the problem is that the culprits are LEFT OUTER JOINs on a derived table that has a UNION ALL.

    I was thinking along the lines of using the

    I've attached the Estimated plan, and the Query.

    _________________________________________________________________________
    SSC Guide to Posting and Best Practices

  • Update statistics on all tables referenced by this query.

    Add an index to the table [section] - a unique clustered index on SectionID perhaps?

    Run the query, capture and post the actual plan (not the estimate).
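The index suggestion above might look like the following; the table name and the uniqueness of SectionID are assumptions from the thread, so treat this as a sketch and verify before creating anything:

```sql
-- Hypothetical DDL: assumes a dbo.Section table in which SectionID is unique
CREATE UNIQUE CLUSTERED INDEX CIX_Section_SectionID
    ON dbo.Section (SectionID);
```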

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Thanks Chris,

I was looking at an NC index on SectionId, with InceptionDate as an included column, but the benefit is minimal (based on an index benefit analysis).

A little bit of background: the database this is on is a static dataset, and it's being used to test the ETL load process and tune it by looking at missing indexes.

The database this runs against has had all its indexes rebuilt, though I've not run sp_updatestats with a full scan against the entire database.
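For reference, sp_updatestats always uses default sampling; a full scan has to be requested per table with UPDATE STATISTICS. A minimal sketch (the table name is just an example):

```sql
EXEC sys.sp_updatestats;                       -- whole database, sampled
UPDATE STATISTICS dbo.Section WITH FULLSCAN;   -- one table, full scan
```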

We also clear down all the system-generated stats and look at what the system is generating, then try to eliminate them with indexes or suggest query rewrites to the main development team.

I should be able to get the execution plan sometime after lunch, as I have a data load running at the moment, but from memory the actual plan is just as bad as the estimate, with these two spools accounting for about 70% of the overall plan cost.


  • No worries mate. There's a lot to do here...

Indexing on Policy.SectionLimitExcess - there are no indexes at all; start with a single-column unique clustered index if possible and relevant. How about SectionID? What's the distribution like?

    ---------------------------------------------

-- used by both queries in UNION: set up as a #temp table
-- convert FX.ValidPeriod to match S.InceptionDate
-- which might mean a value range
left join
(
    select FX.ExchangeRate, FX.SourceCurrencyId, FX.ValidPeriod
    from Common.ExchangeRates FX
    inner join Common.Currency C3
        on FX.DestinationCurrencyId = C3.CurrencyId
        and C3.Code = 'GBP' AND C3.IsConformed = 1
        and FX.RateTypeId = 60
) FX
    on C1.ParentCurrencyId = FX.SourceCurrencyId
    and FX.ValidPeriod = convert(int, left(convert(varchar, S.InceptionDate, 112), 6))

    ------------------------------------------------

    Tidy up your output by putting repeated row-level calculations into CROSS APPLY in the from list:

CROSS APPLY (
    SELECT CalcResult = isnull(
        case
            when isnull(C1.Code, 'UNDEF') = 'GBP' then 1 -- GBP->GBP conversion
            when isnull(C2.Code, 'UNDEF') = 'GBP' and ISNULL(SLE.RateOfExchange, 0) > 0 then SLE.RateOfExchange -- use SLE.RateOfExchange as first preference
            else FX.ExchangeRate -- use FX.ExchangeRate as second preference
        end, 0)
) x
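A self-contained toy example of the same pattern, showing that the outer query can reuse the APPLY's computed column by alias (the values and names here are made up for illustration):

```sql
SELECT v.Amount,
       x.CalcResult,
       v.Amount * x.CalcResult AS AmountGBP   -- calculation written once, reused
FROM (VALUES (100.00, 0.79), (250.00, 0.79)) AS v (Amount, Rate)
CROSS APPLY (SELECT CalcResult = ISNULL(v.Rate, 0)) AS x;
```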


I know, and this isn't the worst query; one of them had a 1.2 million subtree cost at the root.

This particular query is used in a view that is itself referenced by another view, which adds to the problems, as SQL Server doesn't always play nicely with nested views.

Unfortunately I'm in the position of being told I can't change too many things in terms of the queries, as there's a 'costly' integration and retesting phase that would need to be done.

In terms of the underlying database, it's mainly heaps (1.3 TB, yes I know!!) with the odd clustered index, and a number of the heaps are on partitioned tables.
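For anyone following along, heaps can be enumerated from the standard catalog views (a heap is the index_id = 0 row in sys.indexes):

```sql
-- Lists user tables stored as heaps
SELECT s.name AS SchemaName, t.name AS TableName
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0
ORDER BY s.name, t.name;
```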

I'll look at the CROSS APPLY that you've suggested, see what happens with the plan, then pass the advice on.


  • Jason-299789 (4/24/2013)



Indexing is the real issue here, Jason - and statistics. The table spool estimated costs are so high because the stats are out of date. There are many hash joins because of the heaps. Even a little index tweaking could go a very long way; you may not need any query changes at all.

    Most important of all - update the stats then get an actual plan.
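Both steps can be sketched as follows; the table name is a placeholder from the thread, so adjust to your schema:

```sql
UPDATE STATISTICS Common.ExchangeRates WITH FULLSCAN;  -- repeat per referenced table

SET STATISTICS XML ON;    -- returns the actual execution plan alongside the results
-- run the problem query here
SET STATISTICS XML OFF;
```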


I completely agree that the indexes are the real issue. I've been using the script below to see what the engine says it needs, then looking at creating covering indexes for several cases, which has resulted in an additional 40 indexes.

SELECT user_seeks * avg_total_user_cost * ( avg_user_impact * 0.01 ) AS [index_advantage] ,
       dbmigs.last_user_seek ,
       dbmid.[statement] AS [Database.Schema.Table] ,
       dbmid.equality_columns ,
       dbmid.inequality_columns ,
       dbmid.included_columns ,
       dbmigs.unique_compiles ,
       dbmigs.user_seeks ,
       dbmigs.user_scans ,
       dbmigs.avg_total_user_cost ,
       dbmigs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS dbmigs WITH ( NOLOCK )
INNER JOIN sys.dm_db_missing_index_groups AS dbmig WITH ( NOLOCK )
    ON dbmigs.group_handle = dbmig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS dbmid WITH ( NOLOCK )
    ON dbmig.index_handle = dbmid.index_handle
WHERE dbmid.[database_id] = DB_ID()
ORDER BY index_advantage DESC ;

My general advice in terms of the structure is that the DB needs clustered indexes added, but that side is controlled and supported by a third party.


  • Your third party should read this!

