August 18, 2009 at 10:30 am
Hello:
I want to avoid losing data when shrinking the log file in SQL Server 2005.
The situation is that there are no user changes (no T-SQL or stored procedures hitting the tables) at the time of the shrink.
However, I want to guarantee that all the changes in the transaction log have been written to the data file.
How do I do this?
I believe that doing a backup first should be the answer.
The second part of the question is: does the backup work for both the Simple and Full recovery models?
Simple : BACKUP DATABASE
Full : BACKUP LOG
Thanks for your help.
August 18, 2009 at 10:52 am
You don't lose data by shrinking the log file. You can lose your recovery path if you truncate it, but you won't lose data by shrinking it.
The main problem with shrinking the log file is that it usually just ends up having to grow again. And that can end up with serious file fragmentation, which can slow the whole server down.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
August 18, 2009 at 11:40 am
The question I have is: why do you feel it is necessary to shrink the log file?
In most cases, this is due to not having regular transaction log backups in place for a database in full recovery model. Not sure this is your issue though.
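If you want to confirm which model each database is in and what is currently preventing log reuse, a quick check against the standard catalog view (nothing here is specific to your setup):

-- Recovery model and the reason the log can't be truncated, per database.
-- A log_reuse_wait_desc of LOG_BACKUP on a FULL recovery database usually
-- means transaction log backups are missing or infrequent.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;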
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
August 18, 2009 at 12:01 pm
Thanks for your reply.
The situation is:
1) We are using SQL Server 2005 Express.
2) Most of our databases are in simple recovery mode, which means the log file cannot be backed up; however, over time the log file has grown large.
3) I am creating an end-of-day job to back up the databases and shrink the log file.
4) Someone reported that they shrank the log file and lost data. At first I didn't think that was possible, but then I read:
http://articles.techrepublic.com.com/5100-10878_11-5173108.html
SQL Server keeps a buffer of all of the changes to data for performance reasons. It writes items to the transaction log immediately, but does not write changes to the data file immediately. A checkpoint is written to the transaction log to indicate that a particular transaction has been completely written from the buffer to the data file.
So now I'm wondering: what would force the changes in the transaction log to be completely written to the data file?
Thanks
August 18, 2009 at 12:16 pm
russ (8/18/2009)
2) Most of our databases are in simple recovery mode, which means the log file cannot be backed up; however, over time the log file has grown large.
3) I am creating an end-of-day job to back up the databases and shrink the log file.
Have you figured out why the log is growing? Repeated log shrinks are not a good idea. When the log grows it will slow down performance, plus by repeatedly shrinking and growing, you're introducing file-level fragmentation.
What's wrong with leaving the log the size it needs to be for the operation of your database?
4) Someone reported that they shrank the log file and lost data.
Can't happen.
SQL Server keeps a buffer of all of the changes to data for performance reasons. It writes items to the transaction log immediately, but does not write changes to the data file immediately. A checkpoint is written to the transaction log to indicate that a particular transaction has been completely written from the buffer to the data file.
Correct, except that it's the checkpoint operation that writes the data to the data file.
So now I'm wondering: what would force the changes in the transaction log to be completely written to the data file?
CHECKPOINT, but there's no need. A log cannot be shrunk past its active portion, and the active portion is where the log records for active transactions and dirty data pages are kept.
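If you really want to force it anyway, a minimal sketch (the database and logical log file names below are placeholders; check sys.database_files in your database for the real ones):

USE MyDatabase;  -- placeholder database name
CHECKPOINT;      -- force dirty data pages in memory to be written to the data file
DBCC SHRINKFILE (MyDatabase_log, 100);  -- shrink the log file to a target of 100 MB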
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 18, 2009 at 12:19 pm
russ (8/18/2009)
The second part of the question is: does the backup work for both the Simple and Full recovery models?
Simple : BACKUP DATABASE TO DISK = ''
Full : BACKUP LOG TO DISK = ''
In full (or bulk-logged) you need both database and log backups. In simple only database backups are necessary.
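As a rough sketch of what that looks like (database name and paths are placeholders):

-- Simple recovery: database (full/differential) backups only
BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase.bak';

-- Full or bulk-logged recovery: database backups plus regular log backups
BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase.bak';
BACKUP LOG MyDatabase TO DISK = 'C:\Backups\MyDatabase.trn';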
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 19, 2009 at 7:03 am
4) Someone reported that they shrank the log file and lost data. At first I didn't think that was possible, but then I read:
http://articles.techrepublic.com.com/5100-10878_11-5173108.html
I just read that article, and I don't see where data loss is mentioned. What am I missing?
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
August 19, 2009 at 7:21 am
GSquared (8/19/2009)
I just read that article, and I don't see where data loss is mentioned. What am I missing?
Likewise.
It mentions that Shrink truncates the log, which might be true in simple recovery (not sure, I haven't tested or researched it), but it is not true in full or bulk-logged recovery.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 24, 2009 at 7:29 am
The issue I am concerned with is that table changes first go to the log file and then to the data file. I just want to be sure, before shrinking the log file, that no information will be lost. However, if the uncommitted changes remain in the log file through the shrink, then I should be OK.
Thanks
August 24, 2009 at 8:06 am
russ (8/24/2009)
The issue I am concerned with is table changes first go to the log file then the data file.
Mostly correct. More correct would be to say that the changes are made in memory first, then hardened into the transaction log when the transaction is committed. Later the modified page in memory will be written to the disk.
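If you want to watch that happen, here's a rough sketch using the buffer pool DMV (run it in the database in question; the count won't necessarily drop to zero because background activity continues, and it needs VIEW SERVER STATE permission):

-- Dirty pages = modified in memory but not yet written to the data file.
SELECT COUNT(*) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID() AND is_modified = 1;

CHECKPOINT;  -- writes the current database's dirty pages to the data file

SELECT COUNT(*) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID() AND is_modified = 1;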
I just want to be sure before shrinking the log file, that no information will be lost.
Shrink removes unused space from the transaction log. Nothing more.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 24, 2009 at 8:35 am
I'll add that it depends on where the free space is. If it's wrapped by active segments in the log, the shrink might not release all the free space. However, as Gail mentioned, really think about whether you need to do this. Depending on the activity, you may need the log to be that size; it ought to be sized for the peak amount needed between backups.
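One way to see where the active portion of the log actually sits is DBCC LOGINFO (undocumented, so treat the output as informational only):

-- One row per virtual log file (VLF); Status = 2 marks active VLFs.
-- A shrink can only release space after the last active VLF at the end of the file.
DBCC LOGINFO;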
August 25, 2009 at 3:17 pm
In fact, SHRINKFILE is so safely coded that you may have to do another log backup and shrink before you actually see the full shrinkage. This is because, contrary to what the TechRepublic article indicates, SHRINKFILE will not truncate the log or move log records. So when the active part of the log is at the physical end of the file, SHRINKFILE will not shrink into an active virtual log file. Instead, it signals SQL Server to start reusing the beginning of the physical log file, so the next log backup will hopefully leave all the active parts of the log at the beginning, allowing a second SHRINKFILE to work.
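A sketch of that two-pass pattern for a database in full recovery (names and paths are placeholders):

BACKUP LOG MyDatabase TO DISK = 'C:\Backups\MyDatabase_log1.trn';
DBCC SHRINKFILE (MyDatabase_log, 100);
-- If the active VLFs were at the physical end of the file, the first pass
-- won't release much; back up the log again and repeat the shrink.
BACKUP LOG MyDatabase TO DISK = 'C:\Backups\MyDatabase_log2.trn';
DBCC SHRINKFILE (MyDatabase_log, 100);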
If you still have problems with log size, check for excessively large and long-running transactions. If you have a hung process with an open transaction from last week, that sets the beginning of the active part of the log, and the log will just continue to grow (in any recovery mode) until you kill that process. Replication problems can also affect log size.
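To check for that kind of thing, DBCC OPENTRAN reports the oldest open transaction in the current database:

-- Shows the oldest active (and oldest replicated) transaction holding up log truncation, if any.
DBCC OPENTRAN;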
Typically, it's better to shrink logs only after unusual or infrequent high-activity maintenance, load, or purge operations. If you rebuild all the indexes in the database and you're in the full recovery model, you can easily create a log file that's bigger than the actual data file. If you do this every week, I would leave the log files alone, but if you do it once a year then maybe shrink the log files afterward to their normal size. The same goes for an annual data purge process. (I wouldn't recommend weekly reindexing; BOL has guidelines and sample scripts for doing this as needed.)
Remember, if your T-Log fills up and can't grow, it will stall that entire database until you fix the problem. Also, doing a restore will try to make the T-Log file the same size as it was when the backup was made; insufficient space will cause the restore to fail. So don't shrink the log space too much if at all, unless you can guarantee you can free up disk space if needed.
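DBCC SQLPERF (LOGSPACE) is a quick way to keep an eye on how full each log actually gets, so you can size it sensibly instead of repeatedly shrinking it:

-- Log size and percentage of the log in use, for every database on the instance.
DBCC SQLPERF (LOGSPACE);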
David Lathrop
DBA
WA Dept of Health