SQLServerCentral is supported by Redgate

Huge INSERT ... SELECT Without Blocking, Using Snapshot Isolation

By Jason S Wong

I searched the web for a query technique like this and could not find one anywhere, so I wrote my own to solve the problem.

The business community asked me to run a huge report query over 25 tables, with many outer joins, returning 5 million records.
The query's first run was estimated at 5 hours; after I added 3 indexes it ran in 10 minutes. However, under the default READ COMMITTED isolation level, SQL Server places shared locks on the tables a SELECT reads.
I did not want these source tables blocked against inserts and updates for 10 minutes.

So I used the SNAPSHOT isolation level, enabled with the following commands.
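The article's original command did not survive in this copy; a minimal sketch of enabling and using snapshot isolation (assuming a hypothetical database name MyDb) would be:

```sql
-- One-time, database-level setup: allow snapshot isolation (uses tempdb row versioning).
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, in the reporting session: read committed row versions instead of taking shared locks.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

With this in place, the big SELECT reads a consistent snapshot and neither takes shared locks on the source tables nor is blocked by writers.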

I also did not want the destination table appRpt1.TableInsert to sit under an exclusive lock for 10 minutes during the insert, so I wrote a loop that inserts an incremental number of rows, @rIncnum, per batch.
You can put a WAITFOR command between batches to give other queries on a busy server time to run, or use a smaller @rIncnum.
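The author's loop script is likewise missing from this copy. A minimal sketch of such a batched insert, assuming hypothetical source and destination tables with an Id key column, might look like:

```sql
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

DECLARE @rIncnum int = 50000;   -- rows per batch (assumed value)
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    -- Copy only rows not yet inserted; each batch is a short transaction,
    -- so exclusive locks are held briefly and lock escalation is avoided.
    INSERT INTO appRpt1.TableInsert (Id, Col1)
    SELECT TOP (@rIncnum) s.Id, s.Col1
    FROM dbo.SourceView AS s            -- hypothetical view over the 25-table join
    WHERE NOT EXISTS (SELECT 1
                      FROM appRpt1.TableInsert AS d
                      WHERE d.Id = s.Id)
    ORDER BY s.Id;

    SET @rows = @@ROWCOUNT;

    WAITFOR DELAY '00:00:01';           -- optional breathing room for other queries
END;
```

The TOP (@rIncnum) / NOT EXISTS pattern shown here is one common way to batch; the author's actual script may have tracked a key range instead.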

Nobody got blocked; all queries were happy. The only locks the SNAPSHOT isolation level takes are schema stability (Sch-S) locks, plus some locks on SQL Server internal objects. These have no effect on other queries but ensure the schema cannot change underneath the transaction.
I welcome all comments and shared experiences.


To understand the mechanism:

1) Before reading on, understand how SQL Server implements READ COMMITTED with shared locks, and the read phenomena involved (dirty reads, non-repeatable reads, phantom reads). The books by Thomas Kyte cover this well.
2) A CTE is evaluated in memory, which is much faster if you have enough memory (and you should, barring budget or inherited-hardware issues).
3) The 5 million records come from a 25-table join; you cannot determine which rows changed in any given interval. Also, note that I said the request came "from the business community". A DBA does not get into the business-requirement background (the BSA and developers should); the DBA fixes inefficient queries and advises on the right approach. And there is only so much a DBA can push back on the business community.
4) Ideally, yes, the best practice is to report from a separate server rather than the transactional server. But too many companies report directly against transactional data (for budget, refresh-timing, or various other reasons).

SELECT ... WITH (NOLOCK) is equivalent to READ UNCOMMITTED, and comes with the side effects of dirty reads, non-repeatable reads, and phantom reads. If those are acceptable for your business requirements, you can use it.
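To make the equivalence concrete (a sketch; dbo.Orders is a hypothetical table):

```sql
-- Hint form: applies to this one table reference.
SELECT * FROM dbo.Orders WITH (NOLOCK);

-- Session form: applies to every read in the session.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Orders;
```

Both read uncommitted data without taking shared locks, so either can return rows from in-flight transactions that later roll back.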

Too many people have this problem without understanding it. This script illustrates the core of the issue: isolation level matters, users may run queries all day long, blocking should be as brief as possible, and transactions should be short and fast. Snapshot isolation genuinely reduces locking, blocking, and deadlocks.

Snapshot isolation implements row versioning in tempdb, so clear the way ahead of time: make sure tempdb is on a fast drive and has enough space.
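As a quick check on that tempdb usage, a sketch of a query against the standard file-space DMV:

```sql
-- Approximate size of the row-version store in tempdb, in MB
-- (version_store_reserved_page_count is in 8 KB pages).
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;
```

If this number keeps growing, a long-running snapshot transaction is holding old row versions alive.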

An INSERT takes exclusive locks. Inserting in small batches keeps row locks from escalating to a table lock, lets each transaction complete quickly, and gives other queries breathing room to get through between batches (instead of being blocked by yours).
