Deleting Primary Log file, how?

  • Hi,

    I have a DTS process which is doing quite a lot of work, and it seems that it is bloating the log file (600 MB) for the database. Now I understand that the log file is used in the DTS process for transaction rollbacks, but this is not needed here.

    I want to clear out the log file, but can't find out how. I can see where I can delete it, but SQL Server won't let me, as it's the primary log.

     

    Anyone?

    Thanks

  • Take a backup of the transaction log with the TRUNCATE_ONLY option.

    The command is like this:

    BACKUP LOG <database_name> WITH TRUNCATE_ONLY

     

     

  • Hi,

    I tried this, but it doesn't do anything!

    Regards

  • I generally use BACKUP LOG dbnamehere WITH NO_LOG.

    If you want to get rid of the LOG file, i.e. DELETE it:

    BACKUP your DATABASE.  DETACH it.  Go to the directory where the LDF log file lives and rename it.

    RE-ATTACH the database; it will complain that the LOG is missing and create a new one for you.
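    For reference, the detach/rename/re-attach sequence above might be sketched in T-SQL like this (the database name and file paths are placeholders, and this assumes the SQL Server 2000-era system procedures):

```sql
-- 1. Take a full backup first, in case anything goes wrong
BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb.bak'
GO

-- 2. Detach the database so its files are released
EXEC sp_detach_db @dbname = 'MyDb'
GO

-- 3. Rename MyDb_log.ldf in the file system, then re-attach
--    using only the data file; SQL Server notices the log is
--    missing and builds a new one
EXEC sp_attach_single_file_db @dbname = 'MyDb',
     @physname = 'C:\Data\MyDb.mdf'
GO
```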

    I am just providing this information assuming this is a development system and that you have already received approval for a production system outage....



    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • Thanks

    That worked like a charm!

    One question is that:

    Now I have re-created the log file, and I have restricted the growth of the file to 30 MB. Will this reduce the performance of the database? Because any logs that are created will have to be truncated (as the file fills up).

    Regards

  • Restricting the growth of the log file will not adversely affect performance.  However, you have to walk a fine line between the frequency of the log backups and the size/frequency of transactions.  If the log file fills up, the database will stop responding. 

    In my experience, it is usually better to let the log file grow automatically, just in case, but create it at the anticipated size initially.  This avoids most dynamic file growth (an expensive operation) and still avoids a total outage if you experience an unexpected peak in activity, etc.
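    That sizing advice might look like this (the names and sizes here are illustrative, not recommendations):

```sql
-- Pre-size the log to the expected working size so autogrow
-- rarely fires, but leave growth enabled as a safety valve
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log,
             SIZE = 500MB,         -- anticipated size up front
             FILEGROWTH = 50MB,    -- grow in fixed chunks if needed
             MAXSIZE = UNLIMITED)  -- no hard ceiling, no sudden outage
```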

    /*****************

    If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek

    *****************/

  • Hi

    Thanks for that

    So you are saying that if I restrict the size of the log file, as soon as it hits the maximum size the database itself will stop responding?

    If so, it looks like the only way I can limit these huge log files is to delete them periodically.

    Regards

  • John,

    A transaction log file is like a bucket.  If the bucket gets full, you don't have to throw it away and get a new one, you just empty it.  If the contents of the bucket are important to you, you dump them into another receptacle where you can save them.  When a transaction log gets full, if you want to save the contents, you do a transaction log backup.  That saves the contents while emptying the transaction log, allowing it to hold more transactions.

    There is no need to delete the file or even reduce its size if it is not unreasonable.  If you find that it is growing unreasonably, then you need to take more frequent log backups, which will help to keep the file small.

    In your case, it sounds like a DTS job is what is filling the log and causing it to grow.  You can either allow the log to grow to the size needed to run the job (you might want to take a log backup before and after the job), or, if it's not important to you to keep a copy of the transactions (for recovery), take a log backup before the job, change the recovery model to simple, run the job, change the recovery model back to full, then take a full backup.  That way, the only window for which you do not have point-in-time recoverability is while the DTS job was executing.
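    The second option, wrapped around the DTS job, might be sketched like this (the database name and backup paths are placeholders):

```sql
-- Save the log chain up to this point
BACKUP LOG MyDb TO DISK = 'C:\Backups\MyDb_pre_dts.trn'
GO

-- Simple recovery: the log truncates at each checkpoint,
-- so the DTS job no longer inflates it
ALTER DATABASE MyDb SET RECOVERY SIMPLE
GO

-- ... run the DTS job here ...

ALTER DATABASE MyDb SET RECOVERY FULL
GO

-- A full backup restarts point-in-time recoverability
BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb_post_dts.bak'
GO
```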

    Steve

  • "If so it looks like the only way I can limit these huge log files is to delete them periodically."

    If you don't want the log to grow too large, break the large transaction in your DTS job into multiple small ones, keeping each as short as you can.

    Another thing to do is set your database recovery model to SIMPLE.

    -SQLBill

  • SQLBill has it.  Maybe the full recovery model isn't for you?  Could you bcp-ify the portion of the DTS package that is causing the tlog growth?  That way you could use the bulk-logged recovery model and still maintain a point-in-time recovery strategy.  Check BOL.
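    A sketch of that approach (the table, file, and server names are hypothetical): switch to BULK_LOGGED so bulk loads are minimally logged, run the load with bcp, then switch back and take a log backup.

```sql
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED
GO
-- From a command prompt, load the data with bcp:
--   bcp MyDb.dbo.MyTable in C:\Data\load.txt -c -T -S MYSERVER
ALTER DATABASE MyDb SET RECOVERY FULL
GO
-- Back up the log; note you cannot restore to a point in time
-- *inside* a log backup that contains bulk-logged operations
BACKUP LOG MyDb TO DISK = 'C:\Backups\MyDb_post_load.trn'
GO
```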
