Read Uncommitted, locks, and transactions

  • Great question!! Thanks for it. I had to analyze the lock modes and the MSDN articles in depth to decide which answer was correct, and that helped me learn something new today.

    Thanks again.

    El Jerry.

    "El" Jerry.

    "A watt of Ottawa" - Gerardo Galvan

    To better understand your help request, please follow these best practices.

  • I'll add another "Great Question" - I like the questions that make you sit and think about an answer, rather than either run a snippet of code or look up a command to answer (not that we do that before answering πŸ˜‰ ).

    J

  • jdamm (2/13/2012)


    I'll add another "Great Question" - I like the questions that make you sit and think about an answer, rather than either run a snippet of code or look up a command to answer (not that we do that before answering πŸ˜‰ ).

    J

    +1 I couldn't have said it better myself. πŸ˜€



    Everything is awesome!

  • To add for a moment to Paul's well answered posts, the Snapshot Isolation won't save you from the necessary locks for Truncate, but will save you from the necessary dirty reads of the original issue with DELETE FROM, which is why you went NOLOCKing in the first place. πŸ™‚


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA
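    To make Craig's point concrete, here is a sketch (database name hypothetical, table name taken from this thread's example) of how row versioning covers the DELETE-and-reload case without NOLOCK. Note that, as Craig says, versioning does not help readers against TRUNCATE, whose Sch-M lock still blocks them.

    ```sql
    -- Sketch only: enable READ_COMMITTED_SNAPSHOT so plain READ COMMITTED
    -- readers see the last committed row versions instead of dirty data.
    -- Requires no other active connections while switching.
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    -- Session 1: the refresh, using DELETE (versioned) rather than TRUNCATE
    BEGIN TRAN;
    DELETE FROM dbo.FD;               -- old rows remain visible to readers
    INSERT INTO dbo.FD SELECT ...;    -- load the replacement data
    COMMIT;

    -- Session 2: a concurrent reader, no hint needed
    SELECT * FROM dbo.FD;  -- sees the pre-delete committed rows, not a dirty mix
    ```

    The trade-off is version-store activity in tempdb while the refresh transaction is open.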

  • jdamm (2/13/2012)


    I'll add another "Great Question" - I like the questions that make you sit and think about an answer, rather than either run a snippet of code or look up a command to answer (not that we do that before answering πŸ˜‰ ).

    J

    These are never bad things, at least to me. They encourage me to do it again and I know not everyone wants to throw bricks at my head for it. :w00t:


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Sean Lange (2/13/2012)


    Great question and excellent scenario setup. I am quite surprised that at the time I am posting there are 417 answers and 16% of them still think truncate is not logged. :w00t:

    ... Especially considering it's only been a week since the last discussion on it for a QotD! :crying:


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Great question, very well presented.

    In the bad old days of SQL 2000 I used to solve this sort of problem using sp_rename.

    Something like

    if exists (select * from sys.objects where type = 'U' and name = 'old_FD')
        drop table old_FD

    begin tran

    --code to create new_FD and populate it with the replacement data

    exec sp_rename 'FD', 'old_FD', 'OBJECT'
    exec sp_rename 'new_FD', 'FD', 'OBJECT'

    commit tran

    The Sch-M (schema modification) lock is held only for the duration of the two rename operations.

    I did it this way because it didn't just work for tables; it worked for databases too, and renaming databases is pretty quick. Of course, in the database case, as opposed to the table case, the create-and-populate step was a transfer into the server of a database created and validated elsewhere (moved by DTS, snapshot replication, or importing and restoring a backup, depending on the target server and what our access to it was like; the internet between the UK and developing countries was not always exactly brilliant), not something done in the same transaction as the renaming. Some read-only databases (read-only except when we wanted to replace them, to be precise) were better updated by that method than by updating data piecemeal: take the database describing available media directly off the QA server rather than using a script. And losing the old data only when the current data was out of date allowed us to roll back to the previous state if the unthinkable happened and we had made an error in one of those releases (a symptom of my paranoia).

    Tom
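  • A variation on Tom's rename trick, sketched below with hypothetical object names, repoints a synonym instead of renaming the tables. Readers always query the synonym, so the base tables (and their constraint names) are never touched during the swap:

    ```sql
    -- Assumes dbo.FD_A currently holds the live data and a synonym dbo.FD
    -- is what readers actually query.
    -- Build and populate the replacement table offline:
    -- CREATE TABLE dbo.FD_B (...);  INSERT dbo.FD_B ...;

    -- The swap: only the synonym changes, so the Sch-M lock is momentary.
    BEGIN TRAN;
    DROP SYNONYM dbo.FD;
    CREATE SYNONYM dbo.FD FOR dbo.FD_B;
    COMMIT;
    ```

    The old table stays intact under its own name, which preserves Tom's roll-back-to-previous-state option without any renaming.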

  • L' Eomot InversΓ© (2/13/2012)


    Great question, very well presented.

    In the bad old days of SQL 2000 I used to solve this sort of problem using sp_rename.

    Something like

    if exists (select * from sys.objects where type = 'U' and name = 'old_FD')

    drop table old_FD

    begin tran

    --code to create new_FD and populate it with the replacement data

    exec sp_rename 'FD', 'old_FD', 'OBJECT'

    exec sp_rename 'new_FD','FD','OBJECT'

    commit tran

    Yes, that was the way I always did it also. It was very fast but then you had to deal with all those pesky named constraints and foreign keys.

    I agree, nicely worded question and answers that required some thought.

  • Nice question and great discussion!

  • Good Question!!

    Thank you, Kraig.

    Sandip Pani

    http://sqlcommitted.com

  • Cliff Jones (2/13/2012)


    Yes, that was the way I always did it also. It was very fast but then you had to deal with all those pesky named constraints and foreign keys.

    I agree, nicely worded question and answers that required some thought.

    I still do it this way; that's why I asked if there was a better way to do it, perhaps by using ALTER VIEW or some other method. Everyone keeps saying that they used to populate tables this way, so which way are you doing it now?

    I only use this method for my raw data sources. The presentation layer (not a real name for a data layer) is what has all the constraints and foreign keys. The raw data comes from multiple sources, so there are no constraints at that level. If there were, you'd have to be extremely careful about which tables you load first, and too many tables get populated simultaneously to have to worry about another problem like that.

  • Good question and I am feeling good now as I got it right πŸ™‚

    M&M

  • I have seen it said that using NOLOCK is a good sign of poor design.

    In my current work, we use NOLOCK in all SQL statements (I know it's not good).

    Given the scenario, can anyone advise whether there is a way to bypass the NOLOCK usage and still get the same performance?

    ww; Raghu
    --
    The first and the hardest SQL statement I ever wrote - "select * from customers" - and I was happy and felt smart.
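
  • One answer to Raghu's question, sketched here with a hypothetical database name: rather than changing the database-wide default, individual sessions can opt in to SNAPSHOT isolation, which gives readers consistent committed data without NOLOCK and without reader/writer blocking.

    ```sql
    -- One-time setup (can be done online):
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- In the reading session, instead of sprinkling WITH (NOLOCK):
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRAN;
    SELECT * FROM dbo.FD;  -- committed, transactionally consistent data
    COMMIT;
    ```

    The cost is version-store usage in tempdb while writers and snapshot readers overlap, so it is a trade, not a free replacement for NOLOCK.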

  • Great questions and really well laid out. I prefer this type of question to one which just lists code.

    Thanks.

  • Great question, thanks.

    ----------------------------------------------------------------------------
    Sacramento SQL Server users group - http://sac.sqlpass.org
    Follow me on Twitter - @SQLDCH
    ----------------------------------------------------------------------------

    Yeah, well...The Dude abides.
