February 23, 2007 at 1:49 pm
Hello
I am new to SQL Server. Until two days ago the DB size was 3+ GB, but today the log has become 117 GB.
I changed the file properties to grow in increments of 1 MB (earlier it was set to plain auto-grow) and I
have now put a limit on the max size. The physical file on disk shows 120,665,152 KB and has
not shrunk at all. I looked into the following links
<links>
http://msdn2.microsoft.com/en-us/library/aa258824(sql.80).aspx
http://sqlserver2000.databases.aspfaq.com/how-do-i-reclaim-space-in-sql-server.html
http://msdn2.microsoft.com/en-us/library/aa174524(SQL.80).aspx
Shrinking Log Files (*.LDF): Another problem you might be having is that the transaction log is filling up too quickly, or staying far too large. See KB #110139 to understand why the transaction log might fill up. See KB #317375 for information about full or unexpectedly growing log files. See KB #873235 for information about preventing unexpected log file growth.
</links>
I found two scripts in forums that use sp_spaceused to find out which table is using the most space,
but they gave runtime errors and I couldn't fix them.
I tried the scripts against a different DB using the "use dbname" command.
I tried DBCC SQLPERF (LOGSPACE), but it does not show anything unusual:
the log file sizes are all under 28 MB (status = 0). As of now
I have turned off auto-grow and turned on auto-shrink.
I need to reduce the log file size; we do not have enough free space on the disk
to complete a backup.
At this stage, I have the following questions:
1. Is it OK to run SHRINKFILE without any of its options, just giving the file name and a target size?
2. How do I mine the log? The physical file is 100+ GB.
3. How safe is DBCC?
Any inputs on this would be appreciated.
Thanks
Govindan
February 23, 2007 at 1:58 pm
Also, I would appreciate it if the following could be clarified:
- Must I back up the log file before running DBCC? I don't have enough free space for a backup.
- Is there any way to shrink the data file / log file during the backup?
Thanks
February 23, 2007 at 4:50 pm
First things first....
1. what recovery mode is your database set for? Use Enterprise Manager, right click on the database, select properties, go to the options tab.
If it is set for FULL, you MUST do transaction log backups. Otherwise it will never grow smaller.
If you don't need to be able to restore to a point in time, and don't care if you lose any data since your last full backup, switch the mode to SIMPLE. Then you don't need to do transaction log backups and SQL Server will take care of the log.
2. Once you decide the recovery mode and either do transaction log backups (FULL MODE) or don't (SIMPLE MODE), then you need to shrink the files. Look up the syntax for DBCC SHRINKFILE.
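To illustrate, here is a minimal sketch of the commands involved (the database name PServer is taken from later in this thread; the logical log file name is an assumption, so check dbo.sysfiles for the real one):

```sql
-- Check the current recovery model
SELECT DATABASEPROPERTYEX('PServer', 'Recovery');

-- Switch to SIMPLE if point-in-time recovery is not required
ALTER DATABASE PServer SET RECOVERY SIMPLE;

-- Shrink the log file; the target size is in MB
USE PServer;
DBCC SHRINKFILE ('PServer_Log', 100);
```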
-SQLBill
February 23, 2007 at 4:53 pm
I just read the part about being out of space and can't backup the log file.
Okay, I'm assuming this, but would bet I'm right. Your database is set for Full Recovery Mode and you have not been doing transaction log backups.
Set the recovery mode to SIMPLE.
Then run the DBCC SHRINKFILE command on the log file.
Then Run:
BACKUP LOG dbname WITH TRUNCATE_ONLY
(Change dbname to your database name).
That won't back up the log, but it will clear out the inactive log records.
Then do a full backup.
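The steps above, as one sketch script (file id 2 for the log matches the sysfiles output later in this thread; the backup path is hypothetical):

```sql
ALTER DATABASE PServer SET RECOVERY SIMPLE;
USE PServer;
BACKUP LOG PServer WITH TRUNCATE_ONLY;  -- clears inactive log records, writes no backup file
DBCC SHRINKFILE (2, 100);               -- 2 is the log file id; target size is in MB
BACKUP DATABASE PServer TO DISK = 'D:\Backups\PServer_full.bak';
```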
-SQLBill
February 26, 2007 at 1:36 pm
Hello SQLBill
Thanks for the suggestion. Since my experience with SQL Server is only a few days old, I will need assistance from you experts.
My database was in FULL mode. I set the max file size to 120 GB initially (the log was 117 GB) and enabled auto-shrink.
I backed up the DB (but the backup did not include the log).
Then I carried out the following in single-user mode:
use PServer
select fileid,groupid,size,maxsize,status,name from dbo.sysfiles;
BACKUP LOG PServer WITH TRUNCATE_ONLY
DBCC SHRINKFILE (2,28)
DBCC SHRINKDATABASE (PServer,10) <-- took 10 mins 55 secs
The file was shrunk from 117 GB to 1 GB. I have now put an upper limit of 5 GB on my log file.
I am reading the following links:
http://msdn2.microsoft.com/en-us/library/aa173551(SQL.80).aspx <- Transaction Log
http://msdn2.microsoft.com/en-us/library/aa173678(SQL.80).aspx <- Recovery Mode
In the case of Oracle (hmm, I am an Oracle DBA full-time, now asked to handle SQL Server too... I like it, a new venture), I can duplex the log file. How do I do that in SQL Server? Also, if the log fills up and there is no auto-grow, I suppose processing will halt? Is that so?
How do I know whether the log file is being backed up (other than looking at the message given by the Legato client), or whether it is configured for backup? I understand the above link says it is if the DB is in FULL mode, but I need to confirm it somehow.
Your inputs on this would be appreciated.
Thanks
February 26, 2007 at 2:11 pm
Does SQL Server release blocks from the log file once it writes them to the database?
Or
does it write to the database file only when the DB is backed up?
Or
only when the BACKUP LOG command is given?
I presume it must be writing periodically, every few seconds or so. In other words, if I lose my log file, how do I recover from my earlier backup (assuming the log file is not duplexed)?
Thanks
February 26, 2007 at 5:55 pm
Using the "Database Maintenance Plans" GUI I scheduled the full backup once a day and the transaction log backup once every 4 hours. The GUI wizard provides for auto-shrink once a specific size is reached, but there does not seem to be any option for auto-shrinking the log file. My next scheduled transaction log backup is 3 hours away; I will observe it then. If that does not help, would I need to run DBCC SHRINKFILE as a cron job? Any inputs on this would be appreciated.
February 26, 2007 at 6:39 pm
You should run transaction log backups a lot more often. We do them at least every 15 min 24x7, and sometimes as often as every 5 minutes on really active databases.
I do not recommend limiting the size of the transaction log. I normally set the database and log files to auto-grow by a reasonable amount, say 100 MB to 200 MB, after setting the initial size to my best guess for the amount of space they will need. You should turn auto-shrink off.
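For reference, a sketch of both settings (the logical file name and backup path are assumptions, not from the original poster's setup):

```sql
-- Auto-grow by a fixed 100 MB instead of a percentage
ALTER DATABASE PServer
MODIFY FILE (NAME = 'PServer_Log', FILEGROWTH = 100MB);

-- A frequent log backup: this is the statement a 15-minute schedule would run
BACKUP LOG PServer TO DISK = 'D:\Backups\PServer_log.trn';
```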
SQL Server writes to the log file first and then to the data files. This is explained in SQL Server Books Online. If you are new to SQL Server you should start making SQL Server Books Online your primary source of information.
From SQL Server Books Online
"SQL Server Architecture
Write-Ahead Transaction Log
Microsoft SQL Server 2000, like many relational databases, uses a write-ahead log. A write-ahead log ensures that no data modifications are written to disk before the associated log record.
SQL Server maintains a buffer cache into which it reads data pages when data must be retrieved. Data modifications are not made directly to disk, but are instead made to the copy of the page in the buffer cache. The modification is not written to disk until either the database is checkpointed, or the modifications must be written to disk so the buffer can be used to hold a new page. Writing a modified data page from the buffer cache to disk is called flushing the page. A page modified in the cache but not yet written to disk is called a dirty page.
At the time a modification is made to a page in the buffer, a log record is built in the log cache recording the modification. This log record must be written to disk before the associated dirty page is flushed from the buffer cache to disk. If the dirty page were flushed before the log record, it would create a modification on disk that could not be rolled back if the server failed before the log record were written to disk. SQL Server has logic that prevents a dirty page from being flushed before the associated log record. Because log records are always written ahead of the associated data pages, the log is called a write-ahead log."
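On the earlier question of confirming that log backups are actually happening: one way (a sketch, assuming the database name PServer) is to query the backup history SQL Server records in msdb, where type 'L' marks transaction log backups:

```sql
SELECT TOP 5 database_name, backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'PServer' AND type = 'L'
ORDER BY backup_finish_date DESC;
```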
February 27, 2007 at 6:01 pm
The backup is working fine and the transaction log is getting shrunk. However, I will need to lift the upper limit on the transaction log size.
Thanks to all for the suggestions. I am Googling for replication and disaster-recovery scenarios, and for duplicating the DB / restoring onto a different server. I would appreciate it if any of you could point me to a URL where these scenarios are tested; that way I would avoid re-inventing the wheel.
Thanks