Estimated Number of Rows 4,056,100,000,000,000,000,000,000 !

  • I inherited a long, convoluted query that rebuilds a summary table every morning. It recently started taking much longer than usual. Looking at the execution plan, there is a string of "Nested Loops" and "Compute Scalar" operators, and the estimated number of rows increases wildly with each one.

    There have not been any code changes recently, just gradual increases in data from normal business.

    What possesses SQL Server to calculate such crazy estimates?

  • Could it be that the statistics are not up to date?

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
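    A quick way to check this: on SQL Server 2008 R2 SP2 / 2012 SP1 and later, `sys.dm_db_stats_properties` reports when each statistic was last updated and how many rows have changed since. A sketch (the table name `dbo.SummaryTable` is a placeholder for your own table):

    ```sql
    -- Show statistics freshness for one table.
    -- A large modification_counter relative to rows suggests stale stats.
    SELECT  s.name                  AS stats_name,
            sp.last_updated,
            sp.rows,
            sp.rows_sampled,
            sp.modification_counter  -- rows modified since the last update
    FROM    sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE   s.object_id = OBJECT_ID(N'dbo.SummaryTable');
    ```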
  • Auto Create Statistics & Auto Update statistics are both set to "True"

  • That may not be enough.  You should try to update statistics manually and then see how that affects your query. The symptoms you describe are a classic example of out-of-date statistics.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/
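    A minimal sketch of a manual statistics update, per the advice above (`dbo.SummaryTable` is a placeholder; FULLSCAN reads every row, so it is slower but gives the most accurate histogram):

    ```sql
    -- Rebuild statistics for one table with a full scan of the data:
    UPDATE STATISTICS dbo.SummaryTable WITH FULLSCAN;

    -- Or refresh statistics on every table in the current database,
    -- sampling only tables whose data has changed:
    EXEC sys.sp_updatestats;
    ```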

  • Hi

    As all the other replies say, it looks like a classic bad cardinality estimate. I might add: update statistics and check the execution plan. Perhaps you have more than one execution plan for your query; if this is the case, you can remove it from memory and it will be created from scratch the next time you run the query.

    You can also try the RECOMPILE query hint to force a fresh execution plan.

    Finally, if you restart the instance, all the execution plans will be recreated and the stats will be updated. Of course, you can do all this manually by updating statistics on all the involved tables and executing DBCC FREEPROCCACHE (with options) to clear the plan cache of your instance.

    Hope this helps.
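    The recompile hint mentioned above can be sketched like this (query, table, and column names are placeholders, not from the original post):

    ```sql
    -- Force the optimizer to build a fresh plan for this statement only,
    -- instead of flushing the entire plan cache:
    SELECT  CustomerID,
            SUM(Amount) AS Total
    FROM    dbo.Orders
    GROUP BY CustomerID
    OPTION (RECOMPILE);
    ```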

  • mig28mx wrote:

    Finally, if you restart the instance, all the execution plans will be recreated and the stats will be updated.

    Statistics are not updated when you restart an instance.

    And, re-starting SQL Server and/or running DBCC FREEPROCCACHE is not a solution to a performance problem.  It is a band-aid, and does not "fix" anything, it only masks the issue.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • We really need to see the execution plan.  Estimated is probably good enough, although actual is better.

    SQL DBA,SQL Server MVP(07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.
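    If you are capturing the plan from T-SQL rather than SSMS, a sketch of both options (the `SELECT` shown is a placeholder for the actual query):

    ```sql
    -- Estimated plan: returns the XML plan WITHOUT executing the query.
    -- SET SHOWPLAN_XML must be the only statement in its batch.
    SET SHOWPLAN_XML ON;
    GO
    SELECT CustomerID, SUM(Amount) AS Total
    FROM dbo.Orders
    GROUP BY CustomerID;
    GO
    SET SHOWPLAN_XML OFF;
    GO

    -- Actual plan: executes the query and returns the plan including
    -- runtime row counts, so estimates can be compared to actuals.
    SET STATISTICS XML ON;
    GO
    SELECT CustomerID, SUM(Amount) AS Total
    FROM dbo.Orders
    GROUP BY CustomerID;
    GO
    SET STATISTICS XML OFF;
    GO
    ```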

  • Love to see the execution plan for this. Especially if it's estimating that many rows but choosing to do nested loops instead of hash or merge. It's estimating that many rows because the statistics in support of the query say that's what you're getting. It's a combination of the objects in your database, the statistics on those objects, and the code. When we talk about the objects in the database, it's not just indexes and columns. You also have to talk about unique constraints, foreign keys, and any other rules or constraints. The use of WITH CHECK affects row counts. All sorts of things impact this. Without seeing the structure, the code, or the execution plan, all I can say is: check all the things.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I have come across this with complex SELECT statements using views that call other views that use scalar functions. One example I experienced during an upgrade was code that ran in just a few seconds on SQL Server 2000 and was about 1000 times slower when we upgraded to SQL Server 2008. I think we fixed it by making sure there were indexes on the joined columns and adding MERGE JOIN hints on the join conditions (or OPTION (MERGE JOIN) at the end of the query).

    As wiser folks said above, you need to look at the plan and find the bottlenecks, and take it from there.
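    The two hint placements described above can be sketched as follows (table and column names are placeholders; note that any join hint also forces the join order, so use them sparingly):

    ```sql
    -- Inline hint on a single join:
    SELECT  o.OrderID, c.Name
    FROM    dbo.Orders AS o
    INNER MERGE JOIN dbo.Customers AS c
        ON  c.CustomerID = o.CustomerID;

    -- Or at the end of the query, applying to all joins in it:
    SELECT  o.OrderID, c.Name
    FROM    dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON  c.CustomerID = o.CustomerID
    OPTION (MERGE JOIN);
    ```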
