Row-By-Row Processing Without Cursor
Posted Thursday, December 7, 2006 7:09 AM
I don't think the string processing is a problem, but the query would either a) not work properly or b) fall over due to the 8K varchar limit (as the author in fact mentioned), which only allows about 1.6 characters per insert, so the dynamic SQL would be incorrect.

So the advantage of using cursors is that they would work.

Cursors get a bad press for some reason, but they're much nicer than setting the rowcount to 1 and doing some arcane WHILE loop just to get one row back at a time, which I had to resort to in SQL Server 4.2 and earlier. I think a lot of the bad press comes from cursors being used where they shouldn't be - I saw one example where someone had used a cursor to fetch some variables instead of a singleton select.
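
To illustrate that misuse, here's a minimal sketch with hypothetical table and variable names (not the actual code I saw) - the cursor does nothing a single SELECT can't:

DECLARE @OrderDate datetime, @OrderTotal money

-- The needless cursor version:
DECLARE c CURSOR LOCAL FOR
    SELECT OrderDate, OrderTotal FROM Orders WHERE OrderID = 42
OPEN c
FETCH NEXT FROM c INTO @OrderDate, @OrderTotal
CLOSE c
DEALLOCATE c

-- The singleton select that should have been used:
SELECT @OrderDate = OrderDate, @OrderTotal = OrderTotal
FROM Orders
WHERE OrderID = 42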

Cursors are essential in circumstances where row-by-row processing is necessary. I often have to use cursors on the inserted and deleted tables to update higher-level totals.

This avoids the use of multiple subquery statements to sum quantities to the higher levels, since Transact-SQL only returns a single value from a subquery (an Oracle-like feature for returning several values would be useful), and the update statements got too complex for the compiler, besides being very long-winded.
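
As a rough sketch of the kind of trigger I mean (using a hypothetical OrderLines/Orders schema rather than my real tables):

CREATE TRIGGER trOrderLines_RollUp ON OrderLines AFTER INSERT AS
DECLARE @OrderID int, @Qty int
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID, Qty FROM inserted
OPEN cur
FETCH NEXT FROM cur INTO @OrderID, @Qty
WHILE @@FETCH_STATUS = 0
BEGIN
    -- roll each detail quantity up into the header total
    UPDATE Orders SET TotalQty = TotalQty + @Qty WHERE OrderID = @OrderID
    FETCH NEXT FROM cur INTO @OrderID, @Qty
END
CLOSE cur
DEALLOCATE cur
GO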

Cursors are also useful where different updates are required depending on the row data, or where different calculations may have to be applied before updating.

I'm not an expert, but cursors on the inserted and deleted tables should perform well, since most of the trigger tables should be in memory.

Table variables are an option, but a WHILE loop over a table variable seems pretty similar to using a cursor, and there's the overhead of creating and populating the table variable, so I'm not sure they offer a significant performance advantage. From a maintenance/development point of view, it's easier to skip the table creation by using a cursor definition, as there's less chance of error.

I'll keep on using them and liking it.
Post #328736
Posted Thursday, December 7, 2006 7:13 AM



Hear, hear, Kevin!  Any process other than single-record GUI support that uses RBAR (pronounced "ree-bar", a "Modenism" for "Row By Agonizing Row") either has an incorrect database design or someone who simply doesn't understand the methods used to make a process set-based. And if you write things to handle more than one row at a time, you'll be amazed at how fast just one row will process, and at the kind of "concurrency" you can get out of your GUI procs as well.
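
Case in point: the roll-up trigger sketched earlier in this thread collapses to a single UPDATE against a grouped inserted table (same hypothetical OrderLines/Orders schema), and it handles one row or ten thousand exactly the same way:

CREATE TRIGGER trOrderLines_RollUp ON OrderLines AFTER INSERT AS
UPDATE o
SET TotalQty = o.TotalQty + i.SumQty
FROM Orders o
INNER JOIN (SELECT OrderID, SUM(Qty) AS SumQty
            FROM inserted
            GROUP BY OrderID) i
    ON i.OrderID = o.OrderID
GO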



--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #328739
Posted Thursday, December 7, 2006 7:49 AM

Actually, in SQL Server 2005, the solution would probably work if the VARCHAR(MAX) data type were used, which supports up to 2 GB of data.
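
A quick illustrative snippet of the difference (plain varchar concatenation silently caps at 8,000 characters, while varchar(max) keeps growing - note that something in the expression must already be MAX-typed):

DECLARE @small varchar(8000), @big varchar(max)

SET @small = REPLICATE('x', 8000) + 'y'                        -- still 8000 chars: silently truncated
SET @big   = REPLICATE(CAST('x' AS varchar(max)), 9000) + 'y'  -- 9001 chars

SELECT LEN(@small) AS SmallLen, LEN(@big) AS BigLen            -- returns 8000, 9001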

Post #328760
Posted Thursday, December 7, 2006 8:01 AM

I have a speed example comparing a cursor with a set-based SELECT: in my previous data-migration project, using a cursor took more than TWO days; with a set-based SELECT, it took just 30 minutes. There were about 200K customer records in total.

Post #328774
Posted Thursday, December 7, 2006 8:18 AM



I would need to test this in order to verify (I'm not sure it would work).

Perhaps having a timestamp on the table? This would change when a SELECT is initiated - then again, I believe it only changes when data is changed. With that in mind, you could incorporate a change to a field (a meaningless field used only for this purpose) which would fire the trigger. You could also have the trigger fire only for a change to THIS field, which means you could have a custom process fire for a query without it being included in a record update / delete / add.
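
Something like this is what I have in mind (hypothetical names throughout, untested) - UPDATE() inside the trigger tests whether the column appeared in the statement's SET list, so the custom process fires only when that one field is deliberately touched:

CREATE TRIGGER trCustomers_Touch ON Customers AFTER UPDATE AS
IF UPDATE(TouchFlag)  -- TouchFlag is the dedicated, otherwise meaningless column
BEGIN
    EXEC usp_CustomProcess  -- hypothetical procedure for the custom work
END
GO

-- A caller then fires the process without a meaningful data change:
UPDATE Customers SET TouchFlag = TouchFlag WHERE CustomerID = 'C001'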

With this in mind I was going to post, after reading the article (and still will), about the concept of a TABLE VARIABLE. I use these often when a set-based operation is simply not possible.

Upon entering this forum I see the first post is about the very concept I planned to write about. I believe I would change the logic a bit, however, as the poster only mentions the Identity column, and I don't know that this would work since the complete record is not present until the trigger has completed (committed).

That said, I would implement the TABLE VARIABLE as follows, keeping the logic in the trigger rather than having the trigger call an external procedure. This eliminates the dynamic portion as well as the parsing of XML, comma strings, etc.

Let's say we are inserting simple customer data (multiple records / bulk insert). Another assumption is that we are not inserting more than something like 5,000 records at a time.

DECLARE  @CustomerID varchar(20)
        ,@CustFirstName varchar(30)
        ,@CustLastName varchar(40)
        ,@CustPhone1 varchar(15)
        ,@RecordCount int

DECLARE @ProcessTable as TABLE
(
     CustomerID varchar(20)
    ,CustFirstName varchar(30)
    ,CustLastName varchar(40)
    ,CustPhone1 varchar(15)
    ,Processed bit DEFAULT 0
)

INSERT INTO @ProcessTable
    (CustomerID, CustFirstName, CustLastName, CustPhone1, Processed)
SELECT CustomerID, CustFirstName, CustLastName, CustPhone1, 0  -- Inserted has no Processed column; seed it with 0
FROM Inserted

SELECT @RecordCount = COUNT(*)
FROM @ProcessTable

WHILE @RecordCount > 0
BEGIN

    SELECT TOP 1  @CustomerID = CustomerID
                 ,@CustFirstName = CustFirstName
                 ,@CustLastName = CustLastName
                 ,@CustPhone1 = CustPhone1
    FROM @ProcessTable
    WHERE Processed = 0

    /* Perform your logic here, such as duplicating the record
       somewhere, checking the data, etc. */

    UPDATE @ProcessTable
    SET Processed = 1
    WHERE CustomerID = @CustomerID

    SET @RecordCount = @RecordCount - 1

END

This is a very basic example, but I think you see the point. Now, another consideration is that there could be more than one update to a given CustomerID within this bulk insert. With that in mind I would add a field to my table variable, let's call it (InsertedID int IDENTITY PRIMARY KEY). I would then update my Processed flag based upon this Identity column, so that multiple records affecting the same CustomerID could be handled, as sketched below.
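
Concretely, the declaration above would become something like this, with the loop keyed on the surrogate column instead of CustomerID:

DECLARE @ProcessTable as TABLE
(
     InsertedID int IDENTITY(1,1) PRIMARY KEY
    ,CustomerID varchar(20)
    ,CustFirstName varchar(30)
    ,CustLastName varchar(40)
    ,CustPhone1 varchar(15)
    ,Processed bit DEFAULT 0
)

-- ...and then in the loop:
-- SELECT TOP 1 @InsertedID = InsertedID, ... FROM @ProcessTable WHERE Processed = 0
-- UPDATE @ProcessTable SET Processed = 1 WHERE InsertedID = @InsertedID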

Any thoughts?

Jeff

Regards,

Jeffery Williams
http://www.linkedin.com/in/jwilliamsoh
Post #328786
Posted Thursday, December 7, 2006 8:56 AM
Hey everyone, I haven't posted in a while, so I thought I'd chime in on a good one like this. I've used a lot of the suggestions here in the past, with varying results. It's really hard to design business rules with triggers when there are so many inputs and outputs; this becomes more and more apparent to me every day in the health industry.

I got an idea reading through this discussion... I want to see what y'all think about it.

CREATE TRIGGER tr1 ON Books AFTER INSERT AS
DECLARE @batchUID uniqueidentifier
SET @batchUID = NEWID() -- one GUID shared by the whole batch; NEWID() inline in the UPDATE would generate a different value per row

UPDATE Books
SET batchUID = @batchUID --batchUID being a uniqueidentifier field
FROM Books
INNER JOIN Inserted
ON Inserted.bookCode = Books.bookCode
GO

CREATE TRIGGER tr2 ON Books AFTER INSERT AS
DECLARE @batchUID uniqueidentifier
SELECT TOP 1 @batchUID = Books.batchUID -- read the GUID tr1 assigned; Inserted itself still holds the pre-update values
FROM Books
INNER JOIN Inserted ON Inserted.bookCode = Books.bookCode -- assumes tr1 fired first (use sp_settriggerorder if needed)
EXEC usp_Process @batchUID = @batchUID
GO

Of course this only works if a) you can alter the table schema and b) you can alter the stored procedures to handle optional parameters. The only other exceptions I can see are if the SP uses a non-batch API or if you need to generate some type of customized ID field.
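
On b), giving the new parameter a default keeps existing callers working - something along these lines (the Processed column and the parameter types here are just placeholders):

CREATE PROCEDURE usp_Process
     @bookCode varchar(20) = NULL       -- legacy single-row path
    ,@batchUID uniqueidentifier = NULL  -- new batch path
AS
IF @batchUID IS NOT NULL
    UPDATE Books SET Processed = 1 WHERE batchUID = @batchUID  -- handles the whole batch set-based
ELSE
    UPDATE Books SET Processed = 1 WHERE bookCode = @bookCode
GO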
Post #328810
Posted Thursday, December 7, 2006 10:32 AM
From just looking at it, it seems a bit tricky. Are you planning on recursive triggers? Also, I'm not sure what happens in usp_Process, but it looks like you are only getting the first batchUID that is inserted and calling the proc with that (what if more than one is inserted?). I may be overlooking something because I only gave it an overview, but those are my thoughts. Does it work like you expect?
Post #328850
Posted Thursday, December 7, 2006 10:56 AM
Correct... no recursive triggers (which is the default for SQL Server). The first trigger assigns the same GUID to every record in the insert, so the author of usp_Process has to code based on batchUID instead of bookCode.
Post #328854
Posted Thursday, December 7, 2006 11:01 AM
What if more than one record is inserted?
Post #328856
Posted Thursday, December 7, 2006 11:07 AM
--This code affects all the records inserted into Books with the same uniqueidentifier

CREATE TRIGGER tr1 ON Books AFTER INSERT AS
DECLARE @batchUID uniqueidentifier
SET @batchUID = NEWID() -- one GUID shared by every row in the insert

UPDATE Books
SET batchUID = @batchUID --batchUID being a uniqueidentifier field
FROM Books
INNER JOIN Inserted
ON Inserted.bookCode = Books.bookCode
GO
Post #328861