Urgent help

  • Hi, this proc was taking 4 hours to run. I have created indexes, but it is still taking 1 hour 30 minutes to complete.

    Please, any suggestions on this?

  • Hmmm, over 3500 lines. Nice.

    Please see this article on how to get better results from this forum: http://www.sqlservercentral.com/articles/Best+Practices/61537/

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • Waiting for reply.....

  • Could you please provide the schema script?

  • samsql (9/10/2008)

    Waiting for reply.....

    And we are waiting for you to provide the requested table definitions and sample data. No one can reasonably tune 3500 lines of SQL without some way to test it.

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • The first thing I would do is break the proc up into smaller procs and a master proc that calls each smaller proc in sequence. That will make the whole thing easier to deal with.
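
    Something like this, as a rough sketch (the sub-proc names here are illustrative only, not from your code):

    Create Procedure usp_Master_MonthlyReport
        @FromProductID int,
        @ToProductID   int
    as
    begin
        -- Each step becomes its own proc that can be timed and tuned independently.
        Exec usp_Collections_Load @FromProductID, @ToProductID;
        Exec usp_Collections_Aggregate;
        Exec usp_Report_Output;
    end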

    The second thing I would do is find out which part of the proc is the slowest. Judging by the code, it's probably all of it, but it's worth a look to see if there is any one part that is exceptionally slow.

    My first guess as to what's making it slow is this kind of thing:

    Delete From #Collection

    Insert into #Collection(Amount,Date,AccountID)
    select Distinct b.Amount,b.Date,b.AccountID from t_advanceworkflow c
    join t_advances a on c.ApplicationID=a.ApplicationID
    and a.LoanSeries=(Select Max(LoanSeries) from t_advances where AccountID=a.AccountID
    and DisbursedOn is not null) and c.ProductId Between @FromProductID and @ToProductID
    and c.ProductID in (Select Description from t_usercodedetails where id in ('207'))
    and a.DisbursedOn is not null Join t_advanceInstallments d on d.AccountID=a.AccountID
    and a.LoanSeries=d.LoanSeries and d.PaidDate is not null join t_AccountTransactions b
    on a.RepaymentAccountID=b.AccountID and b.ProductID like '%R%' And Crdb='C'
    and b.Date Between @FirstYearDate and @WorkingDate and d.InstallmentDueOn
    Between @FirstYearDate and @WorkingDate and d.installmentdueon > d.PaidDate
    and b.OurBranchID in (Select BranchID From #TempBranchId)

    select @TotSYTDAccounts = Isnull(Sum(Amount),0) from #Collection

    Delete From #Collection

    Why insert it into a temp table, then assign to a variable, then delete from the temp table? You're hammering tempdb and its log file like crazy for no good reason.

    I'd replace all of those (there are dozens of instances of that construct) with something like this:

    select @TotSYTDAccounts = Isnull(Sum(distinct Amount), 0)
    from t_advanceworkflow c
    join t_advances a
        on c.ApplicationID = a.ApplicationID
        and a.LoanSeries = (Select Max(LoanSeries)
                            from t_advances
                            where AccountID = a.AccountID
                              and DisbursedOn is not null)
        and c.ProductId Between @FromProductID and @ToProductID
        and c.ProductID in (Select [Description]
                            from t_usercodedetails
                            where id in ('207'))
        and a.DisbursedOn is not null
    Join t_advanceInstallments d
        on d.AccountID = a.AccountID
        and a.LoanSeries = d.LoanSeries
        and d.PaidDate is not null
    join t_AccountTransactions b
        on a.RepaymentAccountID = b.AccountID
        and b.ProductID like '%R%'
        And Crdb = 'C'
        and b.Date Between @FirstYearDate and @WorkingDate
        and d.InstallmentDueOn Between @FirstYearDate and @WorkingDate
        and d.installmentdueon > d.PaidDate
        and b.OurBranchID in (Select BranchID From #TempBranchId);

    First, it will take less processing to run and should be at least marginally faster. Second, if you change the layout as indicated, it will be much easier to read if you ever need to debug or refactor it. Of the two changes, the layout is the less important, but it still matters quite a bit.

    Next, your joins in this query involve correlated, inline sub-queries in the join clause. Like this:

    and a.LoanSeries = (Select Max(LoanSeries) from t_advances
                        where AccountID = a.AccountID and DisbursedOn is not null)

    Without your table structure and a little sample data to understand how/why this is set up this way, I can't give you a full solution to replace it, but it seems to me that you could pre-calculate that value once and then use it in all the queries. It looks to me like that particular calculation is done 15 times.
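
    For example, you could pre-calculate it once, something like this (a sketch only; the temp table name #LatestLoan is mine, not from your proc):

    Select AccountID, Max(LoanSeries) as LoanSeries
    into #LatestLoan
    from t_advances
    where DisbursedOn is not null
    group by AccountID;

    -- Then each query joins to it instead of re-running the correlated sub-query:
    -- join #LatestLoan ll on ll.AccountID = a.AccountID and ll.LoanSeries = a.LoanSeries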

    Next, with the change to selecting a variable directly from the tables, instead of populating a temp table and then selecting from that, you can probably also consolidate most or all of those 15 selects into a single select. The queries aren't written in such a way that I can easily see the differences in their selection criteria, but they all look pretty much the same to me. So check them carefully; you might be able to consolidate them so you aren't pulling the data multiple times.
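
    If they really do differ only in their filters, you might even collapse several of them into one pass with conditional aggregation. A sketch of the shape (the @TotPriorAccounts variable and the date filters are mine, just to illustrate):

    select
        @TotSYTDAccounts  = Isnull(Sum(case when [Date] Between @FirstYearDate and @WorkingDate then Amount end), 0),
        @TotPriorAccounts = Isnull(Sum(case when [Date] < @FirstYearDate then Amount end), 0)
    from #Collection;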

    If you really need to pull the data 15 times, what you might do is take the parts that don't change, dump those into a temp table ONE time, and then join to that using the variable criteria. That will certainly result in improved performance.
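
    A rough sketch of that pattern (the #BaseData name and its column list are my guesses; keep whatever your 15 variants actually need):

    -- Populate once, with only the criteria that never change between the 15 selects:
    select Distinct b.Amount, b.[Date], b.AccountID, c.ProductID
    into #BaseData
    from t_advanceworkflow c
    join t_advances a on c.ApplicationID = a.ApplicationID and a.DisbursedOn is not null
    join t_AccountTransactions b on a.RepaymentAccountID = b.AccountID and b.Crdb = 'C';

    -- Then each select filters the small temp table instead of re-running the big join:
    select @TotSYTDAccounts = Isnull(Sum(Amount), 0)
    from #BaseData
    where [Date] Between @FirstYearDate and @WorkingDate
        and ProductID Between @FromProductID and @ToProductID;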

    All of those ideas are pretty obvious from the query and will, if done correctly, give you some performance improvements. Basically, the query is doing the same work over and over and over again, and it is having to do a lot of disk IO because of all the overuse of temp tables (the log files on those require disk IO at the very least). Fix those two things and you should see a huge performance improvement.

    Beyond that, I can't say any more without more information: table and index definitions, some sample data, and an idea of how many rows of real data it runs against.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Nice reply, GSquared!

    Wouldn't simply replacing the DELETE with TRUNCATE help a tiny bit?
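
    That is, for the OP's temp table:

    Truncate Table #Collection;  -- deallocates the pages; only the deallocations are logged
    -- instead of:
    Delete From #Collection;     -- logs every deleted row individually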

  • Thanks, GSquared. Please provide more suggestions.

  • These are not scripts.

    I am done here; this one is all yours, Gus. 🙂

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • homebrew01 (9/10/2008)

    Nice reply, GSquared!

    Wouldn't simply replacing the DELETE with TRUNCATE help a tiny bit?

    It would reduce the logging a bit, and you might gain a couple of seconds out of that. But out of an hour and a half, a few seconds saved by TRUNCATE vs. DELETE isn't going to be enough.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

    The spreadsheet would allow me to build some tables. CREATE scripts would be better, but that could work.

    Have you tried the suggestions I already gave you? What result did they have?

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • I'll bet that at least one of these inequality conditions is causing a Cartesian product that is then being collapsed by the DISTINCT:

    select Distinct
    ... and c.ProductId Between
    ... and c.ProductID in
    ... and b.ProductID like '%R%'
    ... and b.Date Between
    ... and d.InstallmentDueOn Between
    ... and d.installmentdueon > d.PaidDate
    ... and b.OurBranchID in

    Any one of them that returns even two rows for one will murder the performance. The query works as is only because it runs all the way through and then DISTINCTs out the duplicates at the end. I'd bet the plan shows some steps with huge estimated row counts. The query needs to be rewritten without these: use derived tables that get the distinct values, and join those tables to the main query.
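
    For example, pushing the transaction-side filters into a derived table that is distinct before the join might look like this (a sketch only; I've dropped the other joins for brevity):

    select @TotSYTDAccounts = Isnull(Sum(b.Amount), 0)
    from t_advances a
    join (select Distinct AccountID, Amount, [Date]
          from t_AccountTransactions
          where ProductID like '%R%'
            and Crdb = 'C'
            and [Date] Between @FirstYearDate and @WorkingDate
            and OurBranchID in (select BranchID from #TempBranchId)) b
        on b.AccountID = a.RepaymentAccountID
    where a.DisbursedOn is not null;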

  • GSquared: "If you really need to pull the data 15 times, what you might do is take the parts that don't change, dump those into a temp table ONE time, and then join to that using the variable criteria. That will certainly result in improved performance."

    I am going with each and every one of your suggestions.

    What about the CASE queries in this proc? How do I improve those, GSquared?

    Thanks.

  • samsql (9/12/2008)

    GSquared: "If you really need to pull the data 15 times, what you might do is take the parts that don't change, dump those into a temp table ONE time, and then join to that using the variable criteria. That will certainly result in improved performance."

    I am going with each and every one of your suggestions.

    What about the CASE queries in this proc? How do I improve those, GSquared?

    Thanks.

    I'm not clear on what you're asking.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • I will have to agree with the other posters:

    a. The proc is way too long, which makes it really hard to pinpoint the bottlenecks, especially since we don't know the data in those tables.

    b. Break this proc down into a few smaller ones, then check how those smaller procs are performing. It's a great way to make the whole thing more readable and also to pinpoint which parts are slower.

    If you want, save the execution plan and post it here; if you are lucky, the plan will highlight a few obvious tweaks... but I suspect the plan will be huge...

    You can also try sticking in a bunch of

    print 'Step XYZ start time: ' + convert(varchar(100), getdate())

    or use SET STATISTICS TIME, and then run the proc. Once it's done running, you will have an idea which chunks of code took the longest to run.
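
    For example (the step names are placeholders):

    set statistics time on;  -- reports per-statement CPU and elapsed time

    print 'Step 1 start: ' + convert(varchar(30), getdate(), 121);
    -- ...first chunk of the proc here...
    print 'Step 1 end:   ' + convert(varchar(30), getdate(), 121);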
