July 13, 2006 at 1:31 pm
I've got an application that needs to use data from a detail table to update another table. Then the rows in the detail table need to be updated themselves. But I need to make sure that, in the meantime, no new records that meet the criteria (the GROUP BY) are added to the detail table. What I conceptually want to do is lock the detail table against updates or inserts matching a criterion. For example:
begin transaction
lock all rows in tableA where vendor = xyz from update or insert
Summarize data (read) in tableA where vendor = xyz
update tableB
update tableA where vendor = xyz
commit transaction
I was hoping someone could give me some ideas. The key here is that I want to ensure no inserts are done to tableA for vendor xyz after the summarize is done until the transaction is completed. Currently the only way I know to ensure that is to lock the table.
July 13, 2006 at 3:16 pm
Is the purpose of locking the recordset in TableA to ensure that only the rows that have been summarized into TableB get updated or flagged in TableA, or is there more to it than that?
July 14, 2006 at 12:15 am
Define an ISOLATION LEVEL at the beginning of your stored procedure. An isolation level determines the degree to which data is isolated between concurrent transactions. The default SQL Server isolation level is READ COMMITTED. The four levels, in ascending order of isolation, are READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ and SERIALIZABLE; only SERIALIZABLE prevents phantom rows, i.e. new inserts that match your criteria.
Syntax:
SET TRANSACTION ISOLATION LEVEL
{READ UNCOMMITTED | READ COMMITTED
| REPEATABLE READ | SERIALIZABLE}
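For example, a minimal sketch of how SERIALIZABLE might wrap the steps from the first post (Amount, SummaryTotal and Processed are assumed column names, not from the original posts):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

DECLARE @total money;

-- Under SERIALIZABLE this read takes key-range locks that are held
-- until COMMIT, so no matching row can be inserted in the meantime
SELECT @total = SUM(Amount)
FROM tableA
WHERE vendor = 'xyz';

UPDATE tableB SET SummaryTotal = @total WHERE vendor = 'xyz';

UPDATE tableA SET Processed = 1 WHERE vendor = 'xyz';

COMMIT TRANSACTION;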
July 14, 2006 at 5:51 am
Yes, the idea is that only the records that got summarized get updated, and that no new inserts that match the criteria get caught up in it either.
July 14, 2006 at 6:00 am
What I don't understand is how a transaction isolation level will prevent inserts into tableA unless tableA is completely locked. The best I can come up with is to assume that inserts would be blocked if I locked index records with certain keys, but I don't know how to guarantee that. I can lock records that already exist; I've never tried to block records from being inserted within a transaction.
At this point I think we either have to rewrite the processing logic to ensure I'm only summarizing data that has already been updated (in other words, do the update first and then the summarization), or build some sort of locking mechanism into the code so that while this process runs, the code that might do the inserts is blocked from executing.
July 14, 2006 at 6:00 am
If the problem is excluding new inserts, then I would store the IDs of the summarized rows from tableA somewhere (in a temp table, for example), or use a status column such as Process (bit data type, yes/no; at the start of processing, mark all rows to be processed). Then add a condition that uses it, like:
UPDATE tableA SET ... WHERE vendor = 'xyz' AND Process = 1
(probably followed by resetting Process to 0 so the rows are not processed over and over... but that depends on your DB design; maybe it is just a staging table and rows are deleted once they are processed)
or
UPDATE A
SET A.Process = 0   -- or whatever column change applies here
FROM tableA A
JOIN #summarized s ON s.ID = A.ID
WHERE A.vendor = 'xyz'   -- this could be superfluous now that you have the temp table
This way you can ensure that newly inserted rows will not match the update criteria.
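Putting the pieces together, a rough sketch of the whole sequence with a temp table (ID, Amount, SummaryTotal and Process are assumed column names; adjust to your schema):

-- 1. Capture the keys of the rows being summarized
SELECT ID
INTO #summarized
FROM tableA
WHERE vendor = 'xyz';

-- 2. Summarize exactly those rows and update tableB
DECLARE @total money;
SELECT @total = SUM(a.Amount)
FROM tableA a
JOIN #summarized s ON s.ID = a.ID;

UPDATE tableB SET SummaryTotal = @total WHERE vendor = 'xyz';

-- 3. Update only the captured rows; later inserts can't match the join
UPDATE a
SET a.Process = 0
FROM tableA a
JOIN #summarized s ON s.ID = a.ID;

DROP TABLE #summarized;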
July 14, 2006 at 6:03 am
I think locking is the only way to do exactly what you describe for all SQL statements, though the temp table solution might be better. If you do need to go down this road: since you need to prevent phantoms (new records which would appear only in the second, update, statement) as well as non-repeatable reads (old records which appear only in the first statement), you can use the WITH (SERIALIZABLE) table hint.
If the columns in your criteria are indexed, this hint will cause key-range locks to be taken and held for the duration of the transaction. This blocks inserts and updates which would set a value in that range, as well as deletes and updates which would move a value out of that range.
If some expression in the WHERE clause can only be evaluated by reference to a column on the table, and that column isn't indexed, then coarser-grained locks (which I believe would have to be table locks) will be used to achieve the required level of isolation, resulting in lower concurrency.
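A minimal sketch of how the hint might be applied (same assumed column names as in the earlier sketches):

BEGIN TRANSACTION;

DECLARE @total money;

-- The range locks taken by this read are held until COMMIT, blocking
-- any insert of a new vendor = 'xyz' row for the rest of the transaction
SELECT @total = SUM(Amount)
FROM tableA WITH (SERIALIZABLE)
WHERE vendor = 'xyz';

UPDATE tableB SET SummaryTotal = @total WHERE vendor = 'xyz';

UPDATE tableA SET Processed = 1 WHERE vendor = 'xyz';

COMMIT TRANSACTION;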
Tim Wilkinson
"If it doesn't work in practice, you're using the wrong theory"
- Immanuel Kant
July 14, 2006 at 9:07 am
Both Vladan and Tim have good suggestions. The reason I asked my first question is that I had a couple of solutions in mind. One is the temp-table solution Vladan mentions: store your key values from TableA in a temp table, and then, after updating TableB, update TableA with an INNER JOIN to the temp table. A second solution would be to update TableB and then update the rows in TableA with an INNER JOIN to TableB WHERE TableB.<Column/s> = the summary values. Does this make sense? In other words, you already know what the summary values are because you just wrote them to TableB, so only update the TableA rows whose matching TableB summary rows have the new values in them.
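A sketch of that second variant, assuming tableB carries the vendor key and a SummaryTotal column (both names are placeholders), with the just-written summary value held in a variable:

DECLARE @newSummary money;
-- @newSummary holds the summary value that was just written to TableB

UPDATE a
SET a.Processed = 1
FROM tableA a
INNER JOIN tableB b ON b.vendor = a.vendor
WHERE a.vendor = 'xyz'
  AND b.SummaryTotal = @newSummary;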
July 14, 2006 at 10:59 am
Thank you for all the ideas.
I considered the locking based on an index, but I find that a little too brittle. If the indexes are changed, the app might end up with data integrity errors, and that kind of dependency isn't as apparent as, say, dropping a foreign key would be. I did like what it did; it was exactly what I wanted.
So I was ready to go with a more elaborate update that involved saving keys, or changing the way the updates were done and reading TableA more than once, but the developer insists that for this application locking the table will be sufficient. He doesn't want to go to the trouble of writing the code required to make it scalable because he believes this app doesn't require it. If the table is busy and the exclusive lock can't be granted, the client running this process will simply time out. I don't like the idea because it doesn't scale at all, but for this app it might be sufficient. At least I know what the options are when it causes issues and has to be rewritten.
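For reference, the lock-the-whole-table approach might look something like this; SET LOCK_TIMEOUT controls how long a blocked caller waits before timing out, and TABLOCKX takes an exclusive table lock that is held until commit (column names are assumed, as above):

SET LOCK_TIMEOUT 5000;   -- give up after 5 seconds if the lock can't be granted

BEGIN TRANSACTION;

DECLARE @total money;

-- Exclusive table lock: no other session can read or write tableA until COMMIT
SELECT @total = SUM(Amount)
FROM tableA WITH (TABLOCKX)
WHERE vendor = 'xyz';

UPDATE tableB SET SummaryTotal = @total WHERE vendor = 'xyz';

UPDATE tableA SET Processed = 1 WHERE vendor = 'xyz';

COMMIT TRANSACTION;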
July 14, 2006 at 11:06 am
>If indexes are changed, then the app might have data integrity errors
Not having a usable index will just result in table locks rather than key range locks. It won't decrease the isolation level.
Tim Wilkinson
"If it doesn't work in practice, you're using the wrong theory"
- Immanuel Kant
July 17, 2006 at 8:16 pm
If I understand the problem correctly... this might be a good job for a trigger.
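For illustration only, such a trigger might look like this, assuming a control table (here called VendorLock) that the summarizing process fills before it starts and clears when it commits:

CREATE TRIGGER trg_tableA_blockInserts
ON tableA
FOR INSERT
AS
BEGIN
    -- Reject inserts for any vendor currently being summarized
    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN VendorLock v ON v.vendor = i.vendor)
    BEGIN
        RAISERROR('Vendor is currently being summarized; insert rejected.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;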
--Jeff Moden
Change is inevitable... Change for the better is not.