SELECT statement on a view takes 5 hours to return results...

  • Are the stats up to date and the indexes defragged (or rebuilt)?
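    A minimal sketch of that kind of maintenance, run per table the view touches (the table name below is borrowed from a later post in this thread; adjust for your own schema):

        -- Refresh statistics with a full scan for the most accurate picture.
        UPDATE STATISTICS dbo.PastExperience WITH FULLSCAN;

        -- Rebuild every index on the table (use REORGANIZE instead for light fragmentation).
        ALTER INDEX ALL ON dbo.PastExperience REBUILD;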

  • Craig Farrell (1/10/2011)


    *nearly sprays his tea all over the monitor upon opening the execution plan*

    Yikes. Wow. :w00t:

    Now, THAT'S a Table Spool. Just the estimate, I know, but a 101 million row spool? No wonder those DISTINCTs are crying.

    Really, this is the kind of thing you usually hire specialized consultants for. Is this view eventually used for some kind of data warehouse, or business intelligence? It's practically screaming that it should be going to SSAS for final reporting.

    Really??? Even if the base DB is only 350 MB?

    I have many DBs with 100 GB+ of data and I don't need SSAS or any precompiling of any sort. Just clean design and cleaner queries.

  • Craig Farrell (1/10/2011)


    Grant Fritchey (1/10/2011)


    This results in a merge join on just a few records. From what I can see, I'd try an index on the PastExperience table on the column IdAuthor and probably would add the Title column to the INCLUDE list.

    Grant,

    I'm suddenly worried that once again a core item of my understanding is invalid. Merge Joins are supposed to be the fastest. It's the "card-shuffle" join: one pass on both sides of the data and bam, done. Why would you go out of your way to avoid that?

    It really depends on the data set. In general (not always), I look at a merge join on smaller data sets as an indication of a missing index. Merge can work really well on larger data sets when the data is already ordered. But when the data isn't ordered, the engine has to sort it first, and then merge becomes problematic.
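    For what it's worth, a sketch of the index suggested above (the index name is invented; the table and column names come from the quoted post). With IdAuthor as the leading key, the rows come back already ordered on the join column, so a merge join doesn't need an extra sort:

        CREATE NONCLUSTERED INDEX IX_PastExperience_IdAuthor
            ON dbo.PastExperience (IdAuthor)
            INCLUDE (Title);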

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • JMSM (1/10/2011)


    Hello again,

    I'm really, really sorry, but I'm completely lost and I hope someone can explain to me what I should do as if I were a donkey :crying:

    Sorry, but I really don't know what is wrong or what is right... :sick:

    Thanks and regards,

    JMSM 😉

    The main problem with us helping you is that this query is really large and complicated, and we're just volunteers here. I spent about five minutes looking through the query & execution plan and came up with a couple of suggestions. Others are coming up with more. But it's just not likely that anyone is going to sit down & rework the whole thing for you.

    In general, on a query this large, I'd suggest taking it in very small bites. Set up so that you can measure performance. It's not my favorite way, but you can turn on "Include Client Statistics" in a query window to gather performance metrics for the query. Then make small changes, measure the result, and examine the execution plan. For example, add an index on the foreign key of a table and then see how the query behaves. Eliminate a DISTINCT (if you can) and see how the query behaves. And just work through it, slowly and carefully, recording what you're doing, what had a positive impact and what was negative. Don't do this on the production system, but instead on a test machine, preferably one that you can isolate so that you're not running into contention with other users.
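    A rough sketch of that measure-change-measure loop (the view, table, and index names here are all hypothetical placeholders):

        -- Show I/O and timing for every statement (results appear on the Messages tab).
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- 1. Baseline: run the slow query once and note logical reads, CPU, and elapsed time.
        SELECT * FROM dbo.vwAuthorReport;

        -- 2. Make ONE small change, e.g. index a foreign key used in one of the view's joins.
        CREATE NONCLUSTERED INDEX IX_ChildTable_ParentId
            ON dbo.ChildTable (ParentId);

        -- 3. Run the same query again; compare the numbers and the new execution plan.
        SELECT * FROM dbo.vwAuthorReport;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;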

    Hopefully that helps a bit.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Ninja's_RGR'us (1/10/2011)


    Craig Farrell (1/10/2011)


    *nearly sprays his tea all over the monitor upon opening the execution plan*

    Yikes. Wow. :w00t:

    Now, THAT'S a Table Spool. Just the estimate, I know, but a 101 million row spool? No wonder those DISTINCTs are crying.

    Really, this is the kind of thing you usually hire specialized consultants for. Is this view eventually used for some kind of data warehouse, or business intelligence? It's practically screaming that it should be going to SSAS for final reporting.

    Really??? Even if the base DB is only 350 MB?

    I have many DBs with 100 GB+ of data and I don't need SSAS or any precompiling of any sort. Just clean design and cleaner queries.

    I should have been more clear. It's not the data volume that's screaming SSAS, it's the manipulation that's occurring. This looks (at first glance) like he's trying to do heavy-handed business intelligence metrics via SQL in real time.

    Usually I've tried to steer clients to using the right tool for the job. At the very least, a dedicated warehouse server with indexes to support this kind of work. Otherwise, you *can* run SSAS on the same server with some overnight overhead to build out your cubes.

    I should have said *wants to be in SSAS*, not going to.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Grant Fritchey (1/10/2011)


    Craig Farrell (1/10/2011)


    Grant Fritchey (1/10/2011)


    This results in a merge join on just a few records. From what I can see, I'd try an index on the PastExperience table on the column IdAuthor and probably would add the Title column to the INCLUDE list.

    Grant,

    I'm suddenly worried that once again a core item of my understanding is invalid. Merge Joins are supposed to be the fastest. It's the "card-shuffle" join: one pass on both sides of the data and bam, done. Why would you go out of your way to avoid that?

    It really depends on the data set. In general (not always), I look at a merge join on smaller data sets as an indication of a missing index. Merge can work really well on larger data sets when the data is already ordered. But when the data isn't ordered, the engine has to sort it first, and then merge becomes problematic.

    Hm, I've always looked at it the other way around: when you've got a large set on one side and a small one on the other, sorting the large set can be prohibitive compared to just doing a nested loop join. Re-sorting a few small sets seemed faster to me when I goofed around with it directly.

    I have more food for thought now, thanks Grant.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Now, THAT'S a Table Spool. Just the estimate, I know, but a 101 million row spool? No wonder those DISTINCTs are crying.

    That's your basic accidental cross join, usually caused by a lack of join criteria, which produces a many-to-many join (essentially, a cross join). The missing criteria can come simply from a lack of understanding of the data but, usually, it's due to an, ummm.... "improper" database design.
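    Just to make the failure mode concrete, a contrived example of missing join criteria (the Authors table is invented; the other names are borrowed from earlier in the thread):

        -- Broken: the predicate only references one table, so every row on one side
        -- matches every row on the other... a hidden CROSS JOIN.
        SELECT a.IdAuthor, e.Title
        FROM   dbo.Authors a
        JOIN   dbo.PastExperience e
          ON   a.IdAuthor = a.IdAuthor;

        -- Fixed: the join criteria reference columns from BOTH tables.
        SELECT a.IdAuthor, e.Title
        FROM   dbo.Authors a
        JOIN   dbo.PastExperience e
          ON   e.IdAuthor = a.IdAuthor;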

    If you can't repair the DB design, then the usual workaround is to divide and conquer through the use of interim results stored in Temp Tables. Of course, that will shoot the idea of actually using a VIEW in the head.
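    And a rough sketch of that divide-and-conquer approach with interim results in a temp table (all object names here are hypothetical stand-ins for pieces of the real view):

        -- Step 1: materialize a small, pre-aggregated interim result.
        SELECT e.IdAuthor,
               COUNT(*) AS ExperienceCount
        INTO   #AuthorExperience
        FROM   dbo.PastExperience e
        GROUP BY e.IdAuthor;

        -- Step 2: join the temp table instead of repeating the heavy logic inline.
        SELECT a.IdAuthor,
               x.ExperienceCount
        FROM   dbo.Authors a
        JOIN   #AuthorExperience x
          ON   x.IdAuthor = a.IdAuthor;

        DROP TABLE #AuthorExperience;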

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
