October 4, 2006 at 12:29 pm
The code *refer to the program abstract at the bottom* is used to pull data from different tables. The outer SELECT statement extracts two parameters based on certain conditions, and these parameters are used by the queries inside the scope of that SELECT statement.
The cursor is used to fetch the next line item in the result set for the subqueries to use.
The subquery consists of multiple inner joins to other tables, which might be taxing the processor a little.
But as the number of subqueries in the main fetch/begin loop increases, the execution time grows almost exponentially, even though the individual queries take more or less the same time to retrieve their data when compared against each other.
Consider a brief explanation below:
With regards to the program abstract below: when only the query assigned to Variable 1 or Variable 2 is included, the time to execute is, say, 40 seconds; if I include two, it's 2 minutes; and if I add any more queries, the total execution time climbs to as much as 15 minutes. Is there any way to suggest an improvement?
Can someone refer me to an example of a SET BASED SOLUTION?
_____________________________________________________
Declare variables, cursor
Select x,y for the condition ( ) -- OUTER SELECT STATEMENT
Begin
Variable1 = Sub query 1 --Includes join statements
Variable2 = Sub query 2 --Includes join statements
.
.
.
.
Variable n = Sub query n --Includes join statements
Insert into TABLE(field1, field2, ...) VALUES(Variable1, Variable2, ...)
Fetch the next cursor
End
_____________________________________________________
Thanks folks
October 4, 2006 at 12:57 pm
The absolute best way to improve your performance is to nix the cursor and come up with a set-based solution for your problem. If you post your code, you may get more help.
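To illustrate the general shape of what that means (table and column names below are invented, since the actual code hasn't been posted yet): instead of fetching one row at a time and running a subquery per variable, a set-based version does everything in a single INSERT...SELECT with an aggregate:

```sql
-- Hypothetical tables: Items(ItemId), Amounts(ItemId, Amount), Results(ItemId, Total)
-- One INSERT...SELECT replaces the whole cursor loop
INSERT INTO Results (ItemId, Total)
SELECT i.ItemId, ISNULL(SUM(a.Amount), 0.00)
FROM Items i
LEFT JOIN Amounts a ON a.ItemId = i.ItemId
GROUP BY i.ItemId
```

The engine then makes one pass over the data rather than one query per row.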
October 4, 2006 at 1:53 pm
And add to that sample DDL and data.
Help us help you
Cheers,
* Noel
October 4, 2006 at 2:07 pm
Hi folks,
Here you go, my code is similar to what is shown below.
The subquery inside extracts just one variable (in this case @J2). The entire program has 9 such subqueries and 9 such variables to extract, which is what is causing the problem. It parses through about 9,000 records. The interesting thing to note is that without the subqueries, the whole process takes just 14 seconds or so; every additional subquery increases the execution time exponentially.
DELETE FROM TABLEM
DECLARE @ITEM AS VARCHAR(20), @DIM AS VARCHAR(20), @J2 AS REAL, @TEMP AS REAL
DECLARE @ITEM_CURSOR CURSOR
SET @ITEM_CURSOR = CURSOR
FOR SELECT t1.ItemId, t1.InventDimId FROM table1 t1
JOIN table2 t2 ON t2.InventDimId = t1.InventDimId WHERE t2.wmsLocationId != 'The Room' ORDER BY t1.ItemId, t1.InventDimId
OPEN @ITEM_CURSOR
FETCH NEXT FROM @ITEM_CURSOR INTO @ITEM, @DIM
WHILE @@FETCH_STATUS = 0
BEGIN
SET @DIM = '%' + @DIM
-------------------------------------------------------------------------------------------------------------
--This is for Journal Amounts
SET @TEMP = (SELECT SUM(b.var1) FROM bTable b
JOIN slTable sl ON sl.var3 = b.var4
JOIN stTable st ON (st.var5 = sl.var5)
WHERE (st.stype = 0) AND ((st.sStatus = 1) OR (st.sStatus = 8)
OR (st.sStatus = 12) OR (st.sStatus = 13) OR (st.sStatus = 0)) AND (st.DataAreaId = 'xxx') AND (sl.DataAreaId = 'xxx') AND (b.DataAreaId = 'xxx')
AND (b.ItemId = @ITEM) AND (b.InventDimId LIKE @DIM))
SET @J2 = CASE
WHEN @TEMP IS NULL THEN 0.00
ELSE @TEMP
END
INSERT INTO TABLEM(ItemId, jAmount) VALUES(@ITEM, @J2)
-------------------------------------------------------------------------------------------------------------
FETCH NEXT FROM @ITEM_CURSOR INTO @ITEM, @DIM
END
CLOSE @ITEM_CURSOR
DEALLOCATE @ITEM_CURSOR
October 5, 2006 at 5:13 am
Though it's an incomplete example, for this particular item, it looks like you could scrap the cursor and just write a
SELECT itemID, SUM(b.var1)
FROM yadayada
WHERE ...
GROUP BY itemID
If the other eight variables are to be computed against the same grouping (e.g. itemId), then it's likely you could do all nine in a single select.
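A sketch of that shape, using the posted table names (the second condition here is invented, since only one of the nine subqueries has been shown): each variable becomes its own conditional aggregate column in one pass:

```sql
SELECT b.ItemId,
       SUM(b.var1) AS jAmount,
       SUM(CASE WHEN st.stype = 1 THEN b.var1 ELSE 0 END) AS amount2  -- hypothetical 2nd variable
       -- ...one column per variable, up to all nine
FROM bTable b
JOIN slTable sl ON sl.var3 = b.var4
JOIN stTable st ON st.var5 = sl.var5
GROUP BY b.ItemId
```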
The 'LIKE '%...' criteria will cause a scan due to the leading wildcard, but I don't see that as a big issue on something as small as 9k rows only.
Anyway, so far, it doesn't look like the cursor is necessary.
/Kenneth
October 5, 2006 at 7:07 am
In addition, while you are in the process of re-writing this, you may want to consider changing your JOIN on the != operator to an ANSI OUTER JOIN as the != operator is no longer supported as of SQL Server 2005.
October 5, 2006 at 7:40 am
If I'm not mistaken,
JOIN ... ON a.col1 != b.col1
..is the same as
JOIN ... ON a.col1 <> b.col1
The legacy outer join operator is *= (the one no one should use anymore)
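For reference (generic table names here), the legacy operator lived in the WHERE clause, whereas the ANSI outer join everyone should use today spells it out:

```sql
-- Legacy outer join (deprecated): SELECT a.col1, b.col2 FROM a, b WHERE a.id *= b.id
-- ANSI equivalent:
SELECT a.col1, b.col2
FROM a
LEFT JOIN b ON a.id = b.id
```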
/Kenneth
October 5, 2006 at 7:45 am
Ah, you are correct. I apologize for the incorrect info; that's what I get for coming in early today!!
October 6, 2006 at 2:54 pm
Does something like this do what you want?
INSERT TABLEM(ItemId, jAmount)
SELECT t1.ItemId, ISNULL(SUM(b.var1), 0.00)
FROM table1 t1
JOIN table2 t2 ON t2.InventDimId = t1.InventDimId
AND t2.wmsLocationId != 'The Room'
JOIN bTable b ON b.ItemId = t1.ItemId
AND b.DataAreaId = 'xxx'
AND b.InventDimId LIKE '%' + t1.InventDimId
JOIN slTable sl ON sl.var3 = b.var4
AND sl.DataAreaId = 'xxx'
JOIN stTable st ON st.var5 = sl.var5
AND st.stype = 0
AND st.sStatus IN (0, 1, 8, 12, 13)
AND st.DataAreaId = 'xxx'
GROUP BY t1.ItemId