Siqing LU wrote:
The client needs to update a big table. The UPDATE will be done in the application, and the application is using a cursor to update the table. I'm worried that it will fill up the transaction log. Will backing up the log help?
I agree with SQL_Hacker and to emphasize a bit...
It doesn't matter if it's done in bulk or row by row using a cursor, nor where the cursor is instigated from: the transaction log is going to record every update for every updated attribute of every updated row, period. Of course, you knew that, but I had to say it out loud.
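If you want to see that for yourself, a quick sanity check is to look at log space usage before and after a test run, and to confirm which recovery model you're in. This is just a minimal sketch; "MyDatabase" is a made-up name you'd swap for your own:

-- Reports log file size and percent used for every database on the instance.
DBCC SQLPERF (LOGSPACE);

-- The recovery model controls when log space can be reused; in FULL recovery,
-- only log backups free it up. ("MyDatabase" is a hypothetical name.)
SELECT name, recovery_model_desc
  FROM sys.databases
 WHERE name = N'MyDatabase';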
You say the "client needs to update a big table" and that they're going to use an application to do so. You've not specifically stated it, but you've made it sound like they're going to update a very large number of rows in that big table.
While I think that's going to totally suck for performance if they do it using RBAR (Row By Agonizing Row), it will also be the easiest on the log file. In fact, doing smaller groups of updates (say, 1,000 rows at a time) is one of the methods folks recommend for monster-sized updates of large tables, precisely so the log file doesn't suffer massive unwanted growth. In such situations, increasing the frequency of transaction log backups will definitely help keep the size of the log file down.
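To make the batched approach concrete, here's a minimal sketch. The table (dbo.BigTable), column (SomeColumn), and value are all made up for illustration; the one real requirement is that the WHERE clause excludes rows that have already been updated, so each pass chews through the next 1,000 rows:

DECLARE @BatchSize INT = 1000;

WHILE 1 = 1
BEGIN
    -- Update just the next batch of not-yet-updated rows.
    UPDATE TOP (@BatchSize) dbo.BigTable
       SET SomeColumn = 'NewValue'
     WHERE SomeColumn <> 'NewValue';  -- must exclude already-updated rows
                                      -- (add IS NULL handling if the column is NULLable)

    -- Check @@ROWCOUNT immediately after the UPDATE; any statement in
    -- between would reset it. Zero rows touched means we're done.
    IF @@ROWCOUNT = 0 BREAK;

    -- In the FULL recovery model, frequent log backups between (or during)
    -- the batches are what keep the log file small, e.g.:
    -- BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';
END;

Each batch commits as its own transaction, so log space from earlier batches can be reused once the log is backed up (FULL recovery) or after a checkpoint (SIMPLE recovery).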
Just to ask the questions...
- Is this a "one-off" effort or something that's going to occur on a regular basis?
- Regardless of which is true above, what will a typical total row count be?
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
"Change is inevitable... change for the better is not".
"If "pre-optimization" is the root of all evil, then what does the resulting no optimization lead to?"
How to post code problems
How to Post Performance Problems
Create a Tally Function (fnTally)