Updating an Entire Column in a 10 GB Table

  • Hi Folks,

    I have a very big table, around 10 GB in size (data plus nonclustered indexes). I need to update an entire column of this table with new values, and doing that in the QA environment takes a very long time: around 5 hours, to be precise.

    The DEV team has sent the code to us (the DBAs), but that much downtime is unlikely to be acceptable once the change goes through the various approval channels, so I started looking for an alternative approach to this problem.

    One method I considered was to put the database into the bulk-logged recovery model and use a bulk operation to create a new table: the other columns are copied as-is, while the column to be updated gets its value from the joins and criteria. This sounds good to me at this point, because I could create a brand-new table in 4 minutes.

    However, doubts have crept in. I used the SELECT INTO command, which does not give me control over the DDL, so I could not set the primary key (an identity column) up front.
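
    For reference, the shape of what I tried looks roughly like this (table, column, and join names are placeholders for the real ones):

        -- Minimally logged under the bulk-logged (or simple) recovery model;
        -- SELECT INTO creates dbo.BigTable_New as a heap with no PK or indexes.
        SELECT  t.ID,                       -- identity property is lost because of the join
                t.Col1,
                t.Col2,
                x.NewValue AS ColToUpdate   -- value derived from the join criteria
        INTO    dbo.BigTable_New
        FROM    dbo.BigTable AS t
        JOIN    dbo.LookupTable AS x
                ON x.SomeKey = t.SomeKey;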

    Has anybody used a similar approach, or can you suggest something that I am not able to see or visualize?

    Thanks

    Chandan

  • There are several options:

    * recreating the table like you did before, but not with SELECT INTO, and using the simple recovery model.

    * updating the table in batches: run the update statement for e.g. 10,000 rows at a time (see the sketch after this list).

    * maybe you can temporarily disable/drop the indexes and recreate them after the update statement.
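
    A minimal sketch of the batched approach (table and column names are placeholders; tune the batch size to your environment):

        DECLARE @rows INT = 1;

        WHILE @rows > 0
        BEGIN
            -- update the next batch of rows that still need the change
            UPDATE TOP (10000) dbo.BigTable
            SET    ColToUpdate = 2
            WHERE  ColToUpdate = 1;

            SET @rows = @@ROWCOUNT;

            -- under the simple recovery model, a checkpoint between batches
            -- allows the transaction log space to be reused
            CHECKPOINT;
        END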

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • Koen Verbeeck (12/27/2012)


    There are several options:

    * recreating the table like you did before, but not with SELECT INTO, and using the simple recovery model.

    * updating the table in batches: run the update statement for e.g. 10,000 rows at a time.

    * maybe you can temporarily disable/drop the indexes and recreate them after the update statement.

    Thanks for replying.

    - If I create the table in advance with the primary key and identity column, as tried earlier, how am I going to load all the data into it while still getting a minimally logged operation? The SELECT INTO command gave me that flexibility, but without it, how do I proceed?

    - I tried updating my table in batches using the SET ROWCOUNT 10000 option. However, the condition in my update statement is such that a column may get changed from 1 to 2, and on the next batch the rows that now hold '2' get updated again; my join condition forces it. So it looks like I am going to modify a large number of rows many times over. And an UPDATE statement is fully logged, as per my understanding.

Ah yes, UPDATE is fully logged. But when using the simple recovery model you have (somewhat) fewer worries about the transaction log eating all your disk space.

    Maybe you can write your update so that a row gets updated only once?
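
    For example, something along these lines (a hypothetical sketch; the join stands in for your real criteria), so that rows which already hold the target value no longer qualify:

        UPDATE t
        SET    t.ColToUpdate = x.NewValue
        FROM   dbo.BigTable AS t
        JOIN   dbo.LookupTable AS x
               ON x.SomeKey = t.SomeKey
        WHERE  t.ColToUpdate <> x.NewValue;  -- add IS NULL handling if the column is nullable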

    Regarding the indexes: I suggested in the third point that you could add them later.

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

Yes :-) In a fully logged situation, it goes very, very slowly. The reason could be the I/O storage where the log file is kept, which unfortunately cannot be changed. Still, I was surprised by the bulk operation: a new table was ready in 5 minutes, but with no primary key and no clustered index.

    Removing the nonclustered indexes does give me a benefit of 40 minutes to 1 hour, but recreating them afterwards eats up the gain I got. :-)

    Thanks

    Chandan

  • chandan_jha18 (12/27/2012)


    Removing the nonclustered indexes does give me a benefit of 40 minutes to 1 hour, but recreating them afterwards eats up the gain I got. :-)

    Thanks

    Chandan

    Yes, but you also have the gain of having an index with low fragmentation.

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

If the primary key is an identity and it's the clustered index, you could loop through and update records one at a time, with each update's WHERE clause matching a single <identity number>. This causes SQL Server to take a row-level lock rather than a page or table lock, and row locks would likely not be noticed by users. It is less efficient to update one row at a time, but it will be faster than you think, and the impact on users will be close to non-existent.

    Also, if you change the recovery model to simple, make sure to perform a full backup after the job is complete and you have switched back to the full recovery model.
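
    Something along these lines (YourDB and the backup path are placeholders):

        ALTER DATABASE YourDB SET RECOVERY SIMPLE;

        -- ... run the update job here ...

        ALTER DATABASE YourDB SET RECOVERY FULL;

        -- The log backup chain is broken while in simple;
        -- a full backup restarts it.
        BACKUP DATABASE YourDB TO DISK = 'D:\Backups\YourDB.bak';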

    If it were me and I had the disk space, I would create a duplicate table and then use sp_rename to bring the new table online. Downtime would be under a second.
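
    Roughly like this (table names are placeholders; you would build all indexes and constraints on the new table before the swap):

        -- Build dbo.BigTable_New offline, then swap the names
        -- inside one short transaction.
        BEGIN TRAN;
        EXEC sp_rename 'dbo.BigTable',     'BigTable_Old';
        EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
        COMMIT;

        -- Once the swap is verified:
        -- DROP TABLE dbo.BigTable_Old;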

  • Chandan,

    Would like to know what you did in this scenario?

  • ngreene (12/28/2012)


    If the primary key is an identity and it's the clustered index, you could loop through and update records one at a time, with each update's WHERE clause matching a single <identity number>. This causes SQL Server to take a row-level lock rather than a page or table lock, and row locks would likely not be noticed by users. It is less efficient to update one row at a time, but it will be faster than you think, and the impact on users will be close to non-existent.

    Also, if you change the recovery model to simple, make sure to perform a full backup after the job is complete and you have switched back to the full recovery model.

    If it were me and I had the disk space, I would create a duplicate table and then use sp_rename to bring the new table online. Downtime would be under a second.

    Your idea of taking just a row-level lock sounds nice. Could you please elaborate a little on updating the rows one at a time? Do you mean using SET ROWCOUNT 1, or something else?

    Thanks

    Chandan Jha

  • Shadab Shah (12/31/2012)


    Chandan,

    Would like to know what you did in this scenario?

    Due to year-end vacations, things have slowed down a bit. You can watch this thread, as I expect it to yield some conclusions for me.

    Happy new year!

    Thanks

    Chandan

  • chandan_jha18 (12/27/2012)


    the condition in my update statement is such that a column may get changed from 1 to 2, and on the next batch the rows that now hold '2' get updated again

    The batch approach always gives you leverage to manage resources like memory and disk space, and you can also schedule it for off-peak hours. To handle your "ad hoc update hiccups" (a sketch follows these steps):

    1) set an UPDATE trigger so that you can catch the records that get updated during the batch process

    2) save them in a staging table

    3) once the batch has completed, pick those rows from the staging table and update the new column for them

    4) drop the staging table and the trigger once the new column is completely populated.
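
    A minimal sketch of that trigger (all object names are hypothetical; a permanent staging table is used because a #temp table from your session would not be visible to the trigger):

        -- staging table to collect the keys touched while the batch runs
        CREATE TABLE dbo.UpdatedDuringBatch (tableID INT NOT NULL);
        GO

        CREATE TRIGGER trg_BigTable_CatchUpdates
        ON dbo.BigTable
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- record which rows changed so they can be revisited
            -- once the batch completes
            INSERT INTO dbo.UpdatedDuringBatch (tableID)
            SELECT tableID FROM inserted;
        END;
        GO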

    -------Bhuvnesh----------
    I work only to learn Sql Server...though my company pays me for getting their stuff done;-)

  • Hope this helps

    /* Example table to be updated (e.g. your 10GB table) **************/

    --DROP TABLE table1
    CREATE TABLE table1
    (
        tableID     INT IDENTITY(1,1) PRIMARY KEY,  -- clustered index by default
        field1      VARCHAR(50),
        UpdateField VARCHAR(50)
    )
    GO

    -- load 50 sample rows
    INSERT INTO table1 (field1)
    SELECT NEWID()
    GO 50

    SELECT * FROM table1
    /****************************************************/

    /****** explanation of my post ***********/
    IF OBJECT_ID('tempdb..#looptemp') IS NOT NULL
    BEGIN
        DROP TABLE #looptemp
    END

    -- tableid will hold the tableID values from the table above
    CREATE TABLE #looptemp (pkid INT IDENTITY, tableid INT)

    INSERT INTO #looptemp (tableid)
    SELECT tableID FROM table1

    DECLARE @counter INT
    DECLARE @tableid INT
    SET @counter = 1

    WHILE @counter <= (SELECT MAX(pkid) FROM #looptemp)
    BEGIN
        SET @tableid = (SELECT tableid FROM #looptemp WHERE pkid = @counter)

        -- HERE IS THE UPDATE TO YOUR TABLE.
        -- THE WHERE CLAUSE SEEKS ON THE CLUSTERED INDEX (THE IDENTITY INT),
        -- SO THIS SHOULD ONLY CAUSE A ROW LOCK.
        UPDATE table1
        SET UpdateField = GETDATE()   -- implicitly converted to varchar(50)
        WHERE tableID = @tableid

        SET @counter = @counter + 1
    END
    /****************************************/
    GO

    SELECT * FROM table1

Thanks NGreene. I will try that very soon and post the resulting execution times.

    Wish you and your family a very happy new year!!!

    Regards

    Chandan Jha
