Row Level Versioning

  • Comments posted to this topic are about the item Row Level Versioning

    -Roy

  • Good article. I was getting a bit lost following your examples (committed selects?) but I think the basic ideas were well covered. If people wish to know more they should look up transaction isolation levels in general to see how locking has always been done and they can then gain an appreciation of the potential problems solved (and introduced) by row level versioning.

    I'm hoping to turn on read committed snapshot in one of our installations at a customer's site because they have a reasonably continuous (once a minute) data import that needs to scan our tables (about 10 of them, each having 20-60k rows) whilst other users are performing occasional transactions against the system. Even though the import is broken up into many small transactions, it can still cause a pause of 5 or so seconds in our client application...
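
    For reference, the change itself is just a database-level switch. A minimal sketch, assuming our database is named OurAppDb (a placeholder), and using WITH ROLLBACK IMMEDIATE to force out open transactions so the option can take effect:

      -- Enable read committed snapshot for the database; the option only
      -- takes effect once no other transactions are open, so roll them back
      ALTER DATABASE OurAppDb
      SET READ_COMMITTED_SNAPSHOT ON
      WITH ROLLBACK IMMEDIATE;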

    I wasn't aware that row level versions were used for the deleted table in triggers but now that I think about it that certainly makes sense 🙂

  • I believe that there is a typo when talking about committing the transactions. The line that reads:

    Just execute the commented statement. Commit tran b.

    Should be:

    Just execute the commented statement. Commit tran a.

  • Let me get this straight.

    If a field has a value of, say, $0 and it is in the process of being committed to $1000000000, then using row versioning will not display the older, inaccurate value until it is committed.

    So are we saying then that a feature like this would only be used in situations like reporting, where it does not matter if the data is to-the-point accurate?

  • Actually if the committed value was 0 and it is going to be updated to 10000000, then when you select the row, you will get 0.

    You will get the latest committed value. That is the true value. This reduces waits for resources and lessens the load on the server.
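
    A quick way to see it for yourself, sketched with a made-up table (dbo.Accounts is hypothetical, and READ_COMMITTED_SNAPSHOT is assumed to be ON for the database):

      -- Connection 1: start an update but do not commit yet
      BEGIN TRAN;
      UPDATE dbo.Accounts SET Balance = 10000000 WHERE AccountID = 1;

      -- Connection 2: instead of blocking on the writer, this reads the
      -- last committed value (0) from the version store in TempDB
      SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;

      -- Connection 1: only after this commit will new readers see 10000000
      COMMIT TRAN;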

    -Roy

  • Yes sorry, a typo. I meant to say "then using row versioning WILL display older...".

    I understand how it works (great article), but I thought the whole idea of locks etc. was for atomicity, isolation and general data integrity. Could this then not be a dangerous feature? Otherwise it would be on by default.

    On the contrary, it helps the ACID properties. As long as you get the right data, you are doing fine. You are actually getting the last committed data. From the example I am providing below, we can argue both ways.

    Let us look at it this way. The last committed value was 0 for a row and it is in the process of being updated to 1000000. At this point, the most accurate data is 0. Why? Because it is the last committed data. What if the update is being rolled back? Even after that transaction completes, the correct value is still 0.

    It all depends on how you are using the data and how your company operates. Even though it is a cool feature, I have reservations about it. Now I am going to counter-argue my own posting...:D

    If I am trying to retrieve data for a report that does not need to be the latest, I would actually go Read Uncommitted. The main reason is I don't want to wait for resources. I will not be doing any locking either, not even a shared lock.
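
    For that report case, the query would look something like this (dbo.Accounts is just an example table):

      -- Dirty read: takes no shared locks and does not wait on writers,
      -- but it can return uncommitted values that later get rolled back
      SELECT AccountID, Balance
      FROM dbo.Accounts WITH (NOLOCK);

      -- The same thing at the session level:
      SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;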

    Now I have defended the article and countered it by myself...LOL

    -Roy

  • I think this is a great article. I like that you explain the concept and then explain the pros and cons.

    I understand the difference between the subject of the article and using NOLOCK hints. (At least I think I do.) But I wonder if it might have made the article a bit stronger to add a paragraph on how this is a different thing entirely from NOLOCK and why. But maybe that would be considered "beyond the scope of the article". I'm just throwing out a thought.

    Thanks for the article.

    Hi JJ, thanks for the compliments. Maybe I could write it as a supplement to this article. I really did not think about it. Thanks for pointing it out.

    -Roy

  • Good article. The basics are well written. 🙂

    Thanks Anirban :). I tried. I did miss a couple of things, as others have pointed out. I have to keep in mind to cover all the bases when I write my next article.

    -Roy

  • Did anyone else try this and not get the expected results? When I tried this, and ran tran a without committing, the value in the select statement was 2, not 1212121.

    Paul, I tried the setup three times, and all three times it worked just fine.
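
    If it still does not behave that way for you, it is worth double-checking that the database options actually took effect (YourTestDb is a placeholder for the demo database):

      -- Both versioning-related options are visible in sys.databases
      SELECT name,
             snapshot_isolation_state_desc,
             is_read_committed_snapshot_on
      FROM sys.databases
      WHERE name = 'YourTestDb';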

    -Roy

  • Thanks Roy, for explaining that.

    We have been running into some deadlocks lately, due to one main Lookup form our app uses. The number of rows generally returned has doubled recently due to some data additions, and this is locking the select statement long enough to cause drama for users who are trying to update individual records, some of which appear in the select.

    I recently changed the Select to use WITH (NOLOCK) much to the chagrin of several DBAs out there who are scared of "dirty reads".

    In the situations where we are using our data, switching the database settings to read-committed row-versioning would probably be a much better long-term solution.

    Although I am still rather concerned about the hit TempDB would take as a result of this. At peak times, our SQL Server is already chewing through somewhere around 6GB of memory, and I'm afraid this would push it over the edge.

    Has anyone done any particular load testing on how the TempDB is impacted by such a setting change?

  • Hi Sharon,

    What you have to make sure of is that the TempDB database files are not on the same drive as your main DB's files. Also make sure that TempDB is not on the same drive as the operating system. 6GB of memory usage is not that drastic; we use way more than that.
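
    If you want to see the actual impact rather than guess, you can also watch how much space the version store reserves in TempDB (pages are 8 KB each, so this comes out in MB):

      -- Version store pages currently reserved in TempDB, converted to MB
      SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
      FROM tempdb.sys.dm_db_file_space_usage;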

    -Roy
