January 7, 2013 at 12:00 am
All,
I have two queries like the ones below.
#1)
DECLARE c1 CURSOR FOR ...
OPEN c1
FETCH NEXT FROM c1 INTO ...
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRAN
        INSERT t1 ...
        INSERT t2 ...
        INSERT t3 ...
        UPDATE t4 ...
    COMMIT
    FETCH NEXT FROM c1 INTO ...
END
CLOSE c1
DEALLOCATE c1
#2) The same query, but with the BEGIN TRAN moved outside the loop:
BEGIN TRAN
    DECLARE c1 CURSOR FOR ...
    OPEN c1
    FETCH NEXT FROM c1 INTO ...
    WHILE @@FETCH_STATUS = 0
    BEGIN
        INSERT t1 ...
        INSERT t2 ...
        INSERT t3 ...
        UPDATE t4 ...
        FETCH NEXT FROM c1 INTO ...
    END
    CLOSE c1
    DEALLOCATE c1
COMMIT
Which one is better to use? How can I measure the transaction size for the second one? Which one is better if I have 2,000,000 records processed within the cursor?
Note: I know a cursor is not the right tool for processing 2,000,000 records, but that is what the requirement says.
karthik
January 7, 2013 at 3:18 am
It depends.
If the inserts and the update are not inter-related, then I would suggest putting them into different transactions, as this holds transaction locks for a shorter continuous stretch of time.
If they are inter-related, or if other transactions are supposed to read from the tables being inserted into and updated, then it depends on what isolation level you want to implement. From the sample code you have provided, I see that you are working on four different tables: you are inserting into three and updating one. IMHO, you should use separate transactions for the inserts and the update, as that holds locks for a shorter time.
If you use the single-transaction approach you posted (#2), the transaction holds its locks until the entire loop completes, and that would be really costly.
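For illustration, a rough sketch of the loop body with separate transactions (untested; the table names are from your post, but the columns, variables, and predicate are hypothetical placeholders):

-- Each statement runs in its own short transaction, so its locks are
-- released as soon as that statement commits. Note that a single
-- statement is already atomic on its own under autocommit; the explicit
-- BEGIN TRAN/COMMIT just makes the boundaries visible.
-- NOTE: this sacrifices atomicity across the four statements.
BEGIN TRAN
    INSERT INTO t1 (col1) VALUES (@v1)    -- hypothetical column/variable
COMMIT

BEGIN TRAN
    INSERT INTO t2 (col1) VALUES (@v2)
COMMIT

BEGIN TRAN
    INSERT INTO t3 (col1) VALUES (@v3)
COMMIT

BEGIN TRAN
    UPDATE t4 SET col1 = @v4
    WHERE key_col = @k                    -- hypothetical key predicate
COMMIT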
January 7, 2013 at 4:38 am
There's almost no difference between the two. Both are likely to cause HUGE issues. Updating that much data using a cursor AND through individual INSERT & UPDATE statements is going to be a giant transaction. You'll need to have a very large log file. You'll also probably need a much larger tempdb. Plus, you're going to have a very large number of locks inside your database leading to massive blocking of other processes.
In general, this is a very poor choice. The requirements may say this, but this is where you, as the technical resource, push back against the requirements in order to arrive at a technically supportable solution.
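If pushing back succeeds, a set-based rewrite might look something like this rough sketch (untested; 'src' stands in for whatever the cursor selects, and the columns and join key are hypothetical since the original post elides them):

-- One set-based statement per target table replaces the row-by-row loop.
INSERT INTO t1 (col1, col2)
SELECT s.col1, s.col2
FROM src AS s

INSERT INTO t2 (col1, col2)
SELECT s.col1, s.col2
FROM src AS s

INSERT INTO t3 (col1, col2)
SELECT s.col1, s.col2
FROM src AS s

UPDATE t4
SET col1 = s.col1
FROM t4
JOIN src AS s ON s.key_col = t4.key_col

-- For very large row counts, each statement can itself be run in chunks
-- (e.g. by key range) to keep the log and lock footprint down.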
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 7, 2013 at 5:04 am
Karthik, could you brief us on the requirement a bit? It won't take much effort, and you might get opinions that make this requirement easier for you, because in any case this approach is going to kill your performance completely. 😉
PS: Don't forget to set up some sample data and show us a working example. 😀
January 7, 2013 at 1:03 pm
I suggest a hybrid method.
Do something like the first method, except that you begin a new transaction only at certain record counts, for example, every 2,000 rows processed: record #1, record #2001, record #4001, etc. (a sketch follows below).
You can adjust the number up or down as needed to reduce locking.
Committing every row in a separate transaction means huge overhead in creating and committing transactions.
Doing it all in one transaction means a huge cost if it has to roll back after doing a lot of inserts/updates.
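Something along these lines (untested sketch; the cursor query, variables, and per-row DML are hypothetical placeholders):

DECLARE @BatchSize int = 2000
DECLARE @RowCount int = 0
DECLARE @col1 int                          -- hypothetical fetch target

DECLARE c1 CURSOR LOCAL FAST_FORWARD FOR
    SELECT col1 FROM src                   -- hypothetical source query

OPEN c1
FETCH NEXT FROM c1 INTO @col1

BEGIN TRAN
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO t1 (col1) VALUES (@col1)   -- placeholder per-row DML;
    -- ...the inserts into t2/t3 and the UPDATE of t4 go here...

    SET @RowCount = @RowCount + 1
    IF @RowCount % @BatchSize = 0
    BEGIN
        COMMIT                             -- close out this batch of 2000
        BEGIN TRAN                         -- and start the next one
    END

    FETCH NEXT FROM c1 INTO @col1
END
COMMIT                                     -- commit the final partial batch

CLOSE c1
DEALLOCATE c1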
SQL DBA, SQL Server MVP (07, 08, 09)
A socialist is someone who will give you the shirt off *someone else's* back.