tempdb full because of row versioning

  • Msg 3958, Level 16, State 1, Procedure <>, Line 12

    Transaction aborted when accessing versioned row in table 'dbo.deleted' in database 'FINRISK'. Requested versioned row was not found. Your tempdb is probably out of space. Please refer to BOL on how to configure tempdb for versioning.

    We are getting a tempdb-full issue because of row versioning. Our user database uses the READ COMMITTED isolation level and has snapshot_isolation_state = 1, while tempdb has snapshot_isolation_state = 0.

    How can we nail down which process is consuming the lion's share of tempdb space?
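
    A minimal sketch (not from the original thread) of the kind of query that can show which sessions are allocating the most tempdb space; the DMVs are the standard SQL Server 2005+ views, and the column choice here is just illustrative:

    -- Pages allocated in tempdb per session (user + internal objects), busiest first
    SELECT s.session_id,
           s.login_name,
           su.user_objects_alloc_page_count,
           su.internal_objects_alloc_page_count
    FROM tempdb.sys.dm_db_session_space_usage AS su
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = su.session_id
    ORDER BY su.user_objects_alloc_page_count
           + su.internal_objects_alloc_page_count DESC;

    Note that version-store pages are not attributed to individual sessions by this view; they are tracked separately, per file, in sys.dm_db_file_space_usage.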

  • Triggers use tempdb to materialise the inserted and deleted tables. From the message (dbo.deleted) that's what's happening. Probably a trigger on a huge data modification and a tempDB that's too small for the need.

    You have the procedure name. Go through all the data modifications done in there and check for triggers. Then check and reconfigure tempdb; it probably needs to be bigger.
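
    For what it's worth, a quick sketch of the kind of check being described here: list the triggers defined on tables in the user database, and look at the current tempdb file sizes. The catalog views are standard; filter to the tables the procedure actually modifies.

    -- Triggers in the user database and the tables they belong to
    SELECT t.name  AS table_name,
           tr.name AS trigger_name,
           tr.is_disabled
    FROM sys.triggers AS tr
    JOIN sys.tables   AS t
        ON t.object_id = tr.parent_id
    ORDER BY t.name, tr.name;

    -- Current tempdb file sizes (size is in 8 KB pages)
    SELECT name, size * 8 / 1024 AS size_mb, max_size, growth
    FROM tempdb.sys.database_files;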

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • We are facing the tempdb space issue again. Looking at the PerfMon graphs, I found that the version cleanup rate is zero while the generation rate is over 1000 KB/s at times. How should this situation be handled?
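
    A small sketch of how those two rates can be checked from T-SQL as well as PerfMon, together with the rowsets currently generating the most versions. The counter and DMV names are the documented ones; the two rate counters are cumulative, so two samples are needed to compute an actual KB/s figure.

    -- Raw values behind the PerfMon counters; the rate counters are cumulative,
    -- so sample twice and take the difference over the interval
    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Transactions%'
      AND counter_name IN ('Version Generation rate (KB/s)',
                           'Version Cleanup rate (KB/s)',
                           'Version Store Size (KB)');

    -- Which rowsets are producing the most versions right now
    SELECT database_id, rowset_id, aggregated_record_length_in_bytes
    FROM sys.dm_tran_top_version_generators
    ORDER BY aggregated_record_length_in_bytes DESC;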

  • We are using READ_COMMITTED_SNAPSHOT at the database level. Can we still force a particular stored procedure within the same database not to use row versioning?

  • Sure. Use SET TRANSACTION ISOLATION LEVEL to set whatever isolation you want.
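
    A minimal sketch of what that looks like inside a procedure (the procedure name and query are invented for illustration; TableA is the table name used later in this thread):

    CREATE PROCEDURE dbo.usp_ReportExample      -- hypothetical name
    AS
    BEGIN
        -- Set whichever isolation level the procedure should run under;
        -- the setting reverts when the procedure returns
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

        SELECT COUNT(*) AS row_count
        FROM dbo.TableA;                        -- placeholder body
    END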

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Gail. Does that mean that if we have READ_COMMITTED_SNAPSHOT set to ON and we use SET TRANSACTION ISOLATION LEVEL READ COMMITTED, it will disable READ_COMMITTED_SNAPSHOT for that session and not create row versions?

  • We have pinpointed the process that is creating the tempdb space issue. While we are purging the tables, the deleted records go to audit tables. For big tables this creates huge version sets, which is what causes the problem. We do purge in batches: for a table with, say, 10,000 records to be deleted, we delete 1,000 at a time, with a BEGIN TRAN and COMMIT TRAN around each batch of 1,000 records. So technically the versions should be cleared at the end of each batch. That's not what actually happens.
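
    To make the shape of the problem concrete, here is a hypothetical version of such an audit trigger (the table and column names are invented for illustration). On a large delete, the whole deleted pseudo-table is materialised from the version store in tempdb, which is why big batches hit tempdb so hard.

    CREATE TRIGGER trg_TableA_Audit             -- hypothetical trigger
    ON dbo.TableA
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Every deleted row is read back out of the version store in tempdb
        INSERT INTO dbo.TableA_Audit (Id, DeletedOn)
        SELECT d.Id, GETDATE()
        FROM deleted AS d;
    END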

  • That's not going to be fixed with an isolation level change. As I mentioned earlier...

    GilaMonster (5/31/2011)


    Triggers use tempdb to materialise the inserted and deleted tables. From the message (dbo.deleted) that's what's happening. Probably a trigger on a huge data modification and a tempDB that's too small for the need.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Pretty interesting. So is that how triggers always work? Can't we force a trigger not to use the version-based operation?

  • Yes. No.

    The pseudo-tables are materialised from the version store in SQL 2005+

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Gail. I'll look into the detailed functionality of triggers. However, this raises one simple question in my mind. Suppose we have a table TableA with 100,000 records and our logic works like this:

    WHILE <loop condition>
    BEGIN
        BEGIN TRAN;

        DELETE TOP (20000)
        FROM TableA
        WHERE <Condition>;

        COMMIT TRAN;
    END

    In this case, the versions should be created and destroyed for each iteration. However, in our case it seems they are not being destroyed; we have seen cases where the version cleanup rate is 0 KB/s while the generation rate is 800 to 1000 KB/s.
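
    One way to confirm what the version store is doing between those batches is to sample tempdb's reserved page counts before and after each COMMIT. A rough sketch using the standard space-usage DMV (page counts are 8 KB pages):

    -- Run between delete batches: version store vs. other tempdb consumers
    SELECT SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
           SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
           SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
           SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
    FROM tempdb.sys.dm_db_file_space_usage;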

  • Are there any transactions concurrently reading from tableA?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • There might be that possibility. How is that going to affect this scenario?

  • With normal row versions the version can't be removed until all queries that could possibly need it have finished. May be similar with triggers, I'm not sure.
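
    If concurrent readers are the cause, they should show up as long-running snapshot/read-committed-snapshot transactions. A small sketch against the documented DMV; elapsed_time_seconds shows how long each one has been keeping its versions alive.

    -- Open versioning transactions, oldest first; while these run, cleanup
    -- cannot remove the row versions they might still need
    SELECT session_id,
           transaction_id,
           transaction_sequence_num,
           elapsed_time_seconds
    FROM sys.dm_tran_active_snapshot_database_transactions
    ORDER BY elapsed_time_seconds DESC;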

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (6/4/2011)


    With normal row versions the version can't be removed until all queries that could possibly need it have finished. May be similar with triggers, I'm not sure.

    I don't see how it could be any different with triggers. The only difference I see is that you only have one procedure to wait on with triggers, instead of potentially any number of queries for "normal" row versions.
