Long running query

  • Yes, the server/hardware are the same. When you say the slow query's temp table is large, may I know how you found that? Technically it should be small, as the database is 300 GB compared to the fast query's database, which is 1.5 TB.

  • Here is what I found: the temp table #revdates has only half a million records, but the execution plan shows it scanning more than 250 million records. How is this possible, and how do I troubleshoot it?

    Here is the query that is causing the problem...

    select distinct a.empnumber, filldate, empcode
    into #tmp_calc
    from Vwrevenue as a inner join #revdates as b on a.empnumber = b.empnumber, #daterange
    where (filldate between empstart and empend) and
    (filldate between effectivedate and expirationdate) and
    (empcode like '5687%')

    In the above query, if I put in actual dates instead of joining the #daterange table, the query is faster, and this is where I think it is scanning so many records instead of just half a million.

  • Do you see any problem in the above query, and why is it scanning more records than required?


  • Same issue that I saw in the other thread, I think:

    the mix of old-style and new-style joins is causing a cross join and increasing the rows processed:

    select distinct a.empnumber, filldate, empcode
    into #tmp_calc
    from Vwrevenue as a inner join #revdates as b on a.empnumber = b.empnumber, #daterange
    where (filldate between empstart and empend) and
    (filldate between effectivedate and expirationdate) and
    (empcode like '5687%')

    Changing that to a true inner join will fix the issue.
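    A rewritten version might look like the following sketch. It assumes empstart/empend come from #revdates and effectivedate/expirationdate from #daterange; adjust the table aliases to match your actual schema:

    ```sql
    -- Sketch: all joins written as explicit ANSI joins, with the
    -- #daterange relationship moved into an ON clause so there is
    -- no accidental cross join (column ownership is assumed).
    select distinct a.empnumber, a.filldate, a.empcode
    into #tmp_calc
    from Vwrevenue as a
    inner join #revdates as b
        on a.empnumber = b.empnumber
       and a.filldate between b.empstart and b.empend
    inner join #daterange as d
        on a.filldate between d.effectivedate and d.expirationdate
    where a.empcode like '5687%';
    ```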

    Lowell


    --Help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With that and your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • The reason you see it reporting 2.5 billion rows is that it is doing a loop join on that table. It looks like there are 5,527 rows in one table and 454,080 rows in the other. With a loop join, each row in the outer input is compared against the inner input, and that is where the 2,509,700,160 rows in total come from. I think either your statistics are off or you need indexes. You should be able to copy the statistics from the other machine and see if you can get the same query plan. You can also work with query hints just to prove the point: either INNER HASH JOIN or OPTION (HASH JOIN).

    Give it a try and let us know.
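    The hint experiment could be sketched like this (the query shape and column names are taken from the problem query above; whether a hash join is actually the better plan here is exactly what the test is meant to show):

    ```sql
    -- Variant 1: a join hint on one specific join.
    -- Note: a join hint also fixes the join order for that join.
    select distinct a.empnumber, a.filldate, a.empcode
    into #tmp_calc_hash1
    from Vwrevenue as a
    inner hash join #revdates as b
        on a.empnumber = b.empnumber;

    -- Variant 2: a query-level hint forcing hash joins throughout.
    select distinct a.empnumber, a.filldate, a.empcode
    into #tmp_calc_hash2
    from Vwrevenue as a
    inner join #revdates as b
        on a.empnumber = b.empnumber
    option (hash join);
    ```

    If the hinted plan runs fast, that points at the optimizer's row estimates (statistics) rather than the hardware.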

    Tom

  • Auto Update Statistics is ON, and indexes are properly built on all the databases, including the problem database. I don't blame the query, as the same query runs faster on a very large database, just not on the database in issue. Please shed some light...
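    Auto Update Statistics being ON does not guarantee the statistics are current; it only triggers updates after enough modifications. A quick way to check freshness on the tables involved is a sketch like this (the table name is a placeholder; substitute your own):

    ```sql
    -- Show, per statistics object on a table: when it was last updated,
    -- how many rows it saw, how many it sampled, and how many
    -- modifications have happened since (sys.dm_db_stats_properties).
    SELECT s.name AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.rows_sampled,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.YourTable');  -- placeholder name
    ```

    A large modification_counter or a low rows_sampled relative to rows suggests running UPDATE STATISTICS ... WITH FULLSCAN on the problem table before comparing plans again.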

