Updating an Entire Column in a 10 GB table
Posted Thursday, December 27, 2012 3:48 AM
SSC-Addicted

Hi Folks,

I have a very big table, around 10 GB in size (data + nonclustered indexes). We need to update an entire column of this table with new values. It is taking a very long time in the QA environment: around 5 hours, to be precise.

The DEV team has sent the code to us (the DBAs), but that much downtime might not be acceptable once it goes through the various approval channels, so I have been looking at alternative approaches.

One method I considered was to put the database into the bulk-logged recovery model and perform a bulk operation: create a new table with the rest of the columns copied as-is, while the column to be updated takes its value from the joins and the criteria. This sounds good to me at this point, because I could create the brand-new table in about 4 minutes.

However, doubts have crept in. I used the SELECT INTO command, which gives me no control over the DDL, so I could not define the primary key (an identity column) up front.
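
In outline, what I ran looks roughly like this (every object and column name below is a placeholder, not our real schema):

ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- Minimally logged under BULK_LOGGED: builds the new table as a heap.
SELECT  t.Id,                       -- the identity column in the source
        t.OtherCol1,
        t.OtherCol2,
        x.NewValue AS ColToUpdate   -- new value decided by the join
INTO    dbo.BigTable_staging
FROM    dbo.BigTable AS t
JOIN    dbo.LookupTable AS x
  ON    x.SomeKey = t.SomeKey;

-- The primary key can only be added afterwards, and because the SELECT
-- contains a join, the IDENTITY property does not carry over at all.
ALTER TABLE dbo.BigTable_staging
  ADD CONSTRAINT PK_BigTable_staging PRIMARY KEY CLUSTERED (Id);

ALTER DATABASE MyDb SET RECOVERY FULL;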

Has anybody used a similar approach, or can you suggest something that I am not able to see?

Thanks
Chandan
Post #1400542
Posted Thursday, December 27, 2012 4:49 AM


SSChampion

There are several options:

* recreating the table like you did before, but not with SELECT INTO, and using the simple recovery model.

* updating the table in batches: run the update statement for e.g. 10,000 rows at a time (see the sketch after this list).

* maybe you can temporarily disable/drop the nonclustered indexes and recreate them after the update statement.
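
A minimal sketch of the batch approach (table name, column, and values are placeholders):

-- Repeat until no row still needs the new value. The WHERE clause must
-- exclude rows that already hold it, or the loop never finishes.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.BigTable
    SET    ColToUpdate = 2
    WHERE  ColToUpdate <> 2;       -- placeholder criteria

    IF @@ROWCOUNT = 0 BREAK;       -- nothing left to update

    CHECKPOINT;    -- under SIMPLE recovery this lets log space be reused
END;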




How to post forum questions.
Need an answer? No, you need a question.
What’s the deal with Excel & SSIS?

Member of LinkedIn. My blog at LessThanDot.

MCSA SQL Server 2012 - MCSE Business Intelligence
Post #1400561
Posted Thursday, December 27, 2012 5:15 AM
SSC-Addicted

Koen Verbeeck (12/27/2012)
There are several options:

* recreating the table like you did before, but not with SELECT INTO, and using the simple recovery model.

* updating the table in batches: run the update statement for e.g. 10,000 rows at a time.

* maybe you can temporarily disable/drop the nonclustered indexes and recreate them after the update statement.


Thanks for writing.

- If I create the table in advance with the primary key and identity, as tried earlier, how am I going to get all the data into the table while taking advantage of minimally logged operations at the same time? SELECT INTO gave me that flexibility; without it, how do I proceed? (See the sketch after my second point.)

- I tried updating my table in batches using the SET ROWCOUNT 10000 option. However, in my update statement the condition is such that the column may change from 1 to 2, and on the next pass the rows that now hold '2' get updated again; my join condition forces it. So it looks like I would be modifying a large number of rows many times over, and an UPDATE statement is fully logged, as per my understanding.
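
For the first point, what I would need is roughly the following (names are placeholders, and I am not certain the load stays minimally logged once IDENTITY_INSERT is on, so treat it as a sketch):

-- Pre-create the target with the identity primary key in place:
CREATE TABLE dbo.BigTable_staging
(
    Id          INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_BigTable_staging PRIMARY KEY CLUSTERED,
    OtherCol1   INT NOT NULL,
    ColToUpdate INT NOT NULL
);

-- On SQL Server 2008+, INSERT ... SELECT WITH (TABLOCK) into an empty
-- table can be minimally logged under SIMPLE or BULK_LOGGED recovery.
SET IDENTITY_INSERT dbo.BigTable_staging ON;

INSERT INTO dbo.BigTable_staging WITH (TABLOCK)
        (Id, OtherCol1, ColToUpdate)
SELECT  t.Id, t.OtherCol1, x.NewValue
FROM    dbo.BigTable AS t
JOIN    dbo.LookupTable AS x ON x.SomeKey = t.SomeKey;

SET IDENTITY_INSERT dbo.BigTable_staging OFF;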
Post #1400571
Posted Thursday, December 27, 2012 5:19 AM


SSChampion

Ah yes, an update is fully logged. But with the simple recovery model you have (somewhat) fewer worries about the transaction log eating all your disk space.

Maybe you can write your update so that a row gets updated only once? See the sketch below.

Regarding the indexes: I suggested in the third point that you could add them back later.
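
For example, walk the clustered key in fixed ranges, so no row can be picked up twice (purely illustrative; the names are placeholders, and it assumes the identity values are reasonably dense):

DECLARE @LastId INT = 0,
        @MaxId  INT,
        @Batch  INT = 10000;

SELECT @MaxId = MAX(Id) FROM dbo.BigTable;

WHILE @LastId < @MaxId
BEGIN
    UPDATE t
    SET    t.ColToUpdate = x.NewValue
    FROM   dbo.BigTable AS t
    JOIN   dbo.LookupTable AS x ON x.SomeKey = t.SomeKey
    WHERE  t.Id >  @LastId
      AND  t.Id <= @LastId + @Batch;  -- each key range is touched once

    SET @LastId = @LastId + @Batch;   -- slide the window forward
END;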




Post #1400572
Posted Thursday, December 27, 2012 5:49 AM
SSC-Addicted

Yes, when the operation is fully logged, throughput drops very low. The reason could be the I/O storage where the log file is kept, which unfortunately cannot be changed. Still, I was surprised by the bulk operation: a new table was ready in 5 minutes, but with no primary key and no clustered index.


Removing the nonclustered indexes does give me a benefit of 40 minutes to 1 hour, but recreating them afterwards cancels out the gain.

Thanks
Chandan
Post #1400579
Posted Thursday, December 27, 2012 6:58 AM


SSChampion

chandan_jha18 (12/27/2012)

Removing the nonclustered indexes does give me a benefit of 40 minutes to 1 hour, but recreating them afterwards cancels out the gain.

Thanks
Chandan


Yes, but you also gain an index with low fragmentation.




Post #1400604
Posted Friday, December 28, 2012 1:17 PM
Grasshopper

If the primary key is an identity and it's the clustered index, you could loop through and update the records one at a time, with each update's WHERE clause matching a single identity value. This causes SQL Server to take a row-level lock rather than a page or table lock, and row locks would likely not be noticed by users. It is less efficient to update one row at a time, but it will be faster than you think and the impact on users will be minimal.
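
Something along these lines (names are placeholders):

DECLARE @Id INT = 0, @MaxId INT;

SELECT @MaxId = MAX(Id) FROM dbo.BigTable;

WHILE @Id < @MaxId
BEGIN
    SET @Id = @Id + 1;

    -- A single-row seek on the clustered key, so SQL Server takes a
    -- row lock and concurrent activity is barely affected.
    UPDATE dbo.BigTable
    SET    ColToUpdate = 2    -- placeholder for the real expression
    WHERE  Id = @Id;
END;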

Also, if you change the recovery model to simple, make sure to perform a full backup after the job is complete and you have switched back to the full recovery model.
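
For example (database name and backup path are placeholders):

ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- ... run the update job here ...

ALTER DATABASE MyDb SET RECOVERY FULL;

-- Switching back to FULL leaves the log backup chain broken until a
-- full (or differential) backup is taken, so back up right away:
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_post_update.bak';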

If it were me and I had the disk space, I would create a duplicate table and then use sp_rename to bring the new table online. Downtime would be under a second.
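
The swap itself would just be a pair of renames (table names are placeholders):

BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.BigTable',     'BigTable_old';
    EXEC sp_rename 'dbo.BigTable_new', 'BigTable';
COMMIT;

-- Once the new table checks out, dbo.BigTable_old can be dropped.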
Post #1401035
Posted Monday, December 31, 2012 3:03 AM
SSC Veteran

Chandan,

I would like to know: what did you do in this scenario?
Post #1401331
Posted Monday, December 31, 2012 3:20 AM
SSC-Addicted

ngreene (12/28/2012)
If the primary key is an identity and it's the clustered index, you could loop through and update the records one at a time, with each update's WHERE clause matching a single identity value. This causes SQL Server to take a row-level lock rather than a page or table lock, and row locks would likely not be noticed by users. It is less efficient to update one row at a time, but it will be faster than you think and the impact on users will be minimal.

Also, if you change the recovery model to simple, make sure to perform a full backup after the job is complete and you have switched back to the full recovery model.

If it were me and I had the disk space, I would create a duplicate table and then use sp_rename to bring the new table online. Downtime would be under a second.


Your idea of taking just a row-level lock sounds nice. Could you please elaborate a little on updating the rows one at a time? Do you mean using SET ROWCOUNT 1, or something else?

Thanks
Chandan Jha
Post #1401340
Posted Monday, December 31, 2012 3:28 AM
SSC-Addicted

Shadab Shah (12/31/2012)
Chandan,

I would like to know: what did you do in this scenario?


Due to the year-end vacations, things have slowed down a bit. You can watch this thread, as I expect it to yield some conclusions for me.

Happy new year!

Thanks
Chandan
Post #1401341