What's a reasonable log file allocation

  • I'm techno-challenged and trying to administer an out-of-the-box portal system on my web site.  What's a reasonable allocation for the log file?  My ISP recommends 50%, but the log file seems to fill up faster than the database.  The ISP says the log file is cleared nightly but keeps the maximum "size" as the highest point it has reached.  If I'm not making a lot of changes to the portal/database, why would the max for my log file keep increasing?  It keeps bumping into the ceiling.

    Any ideas for the non-expert and desperate?

    Thanks,

    Jim



  • >>If I'm not making a lot of changes to the portal/database, why would the max for my log file keep increasing?<<


    I suspect an index rebuild is happening (probably after hours). If that is the case, you should back up the log right after the indexing!

    Do not shrink it unless you have to (i.e., there is no more disk space).
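
    If you can get query access, a quick way to see how full the log really is (this works on SQL Server 7/2000; 'PortalDB' and the backup path below are just placeholders for your own names):

        DBCC SQLPERF(LOGSPACE)
        -- one row per database: log size in MB and the percentage of the log in use

        -- and right after the nightly indexing, something like:
        BACKUP LOG PortalDB TO DISK = 'D:\Backups\PortalDB_log.bak'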


    Cheers,



    * Noel

  • Thanks, Noel.  I'll ask my ISP about this.  They take care of nightly backup, so they would know, I hope, if they have it set to index automatically as well.

    They offer database admin access, but currently I can't seem to get in there to poke around.

    Best regards,

    Jim



  • I am curious about the recommendation not to shrink the log file. I am in a similar situation: I have a database of about 4 GB, and the log file is 2.5 GB. I would like to shrink the log file, but I don't understand why you recommend against it. Thanks.

  • It seems to be 'standard practice' to have the log file and the data file at the same magnitude.  I tried to set up my database with a 10% log file and 90% data, because we don't have frequent changes, but that obviously didn't work.

    So, I too am curious, but my curiosity can be summed up as:  how the hell does a log file get so big when it supposedly is cleared out daily?  In other words, how can the magnitude of daily changes equal the magnitude of the total amount of data?

    (Note:  my situation would be considered trivial compared to Bill's; I have ~20 MB of data ... but it's still an issue in terms of planning for future space requirements.)

    Thanks,

    Jim



  • Bill,

    The reason you don't want to shrink your log file is that it is expensive for it to grow again.  If your log file is 2.5 GB and you shrink it to 100 MB with a job, it will just grow back to 2.5 GB, and that growth is very expensive I/O on the server.  It is better to leave it alone than to keep up the cycle of shrinking and growing.  I have seen similar behavior with databases in full recovery that take few or no log backups during the day.  If my db is in full recovery I personally take log backups every 2 hours to keep the size manageable.
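
    If you really do have to reclaim the space once (say, the disk is full), something along these lines works; 'PortalDB' and 'PortalDB_Log' are placeholders, and sp_helpfile will show you the real logical file name:

        USE PortalDB
        EXEC sp_helpfile   -- logical name, current size, and growth setting of each file

        -- one-off shrink: back up the log first so the inactive part can be released
        BACKUP LOG PortalDB TO DISK = 'D:\Backups\PortalDB_log.bak'
        DBCC SHRINKFILE (PortalDB_Log, 500)   -- target size in MB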

    As for log growth, it could be the index builds, or once again the frequency of the log backups.  If you have 20 MB worth of data and you make 40 MB worth of changes during the course of a normal workday, your log will be 40 MB or so.  Say the 40 MB of changes is spread evenly across an 8-hour workday: that's 5 MB an hour.  If the log were backed up every hour, the log file size would stay around 5 MB, give or take.  If you never back up the log, it will continue to grow until all disk space is used.  When I came to my current job 3 years ago, they were ready to replace 2 SQL machines because of disk space.  I backed up the logs and shrank the databases, and we're still on the same servers. (not sure if that's good)
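
    As a sketch, the hourly backup can be a single statement scheduled as a SQL Server Agent job step (the database name and path are placeholders):

        -- appends each hour's log backup to the same file;
        -- run WITH INIT once a day instead to start the file fresh
        BACKUP LOG PortalDB
            TO DISK = 'D:\Backups\PortalDB_log.bak'
            WITH NOINIT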

    Hope that helps.

    Tom

  • Tom, thank you for the explanation. I have one additional follow-up question. When the database was created, backups were configured using a maintenance plan and done once a day. This once-daily backup includes the database and the transaction logs. Would you offer the same advice if I were to change the maintenance plan to back up the database once per day and back up the transaction logs every two hours? My suspicion is that the transaction log has grown so large because I am only backing it up once a day, so if I were to modify the maintenance plan and shrink the log file, it would grow again, but hopefully not nearly to its current size. Yes, I would experience the performance hit initially, but the log file would not reach a high-water mark anywhere near its current size. Granted, it might still get that big again. I am curious whether this reasoning makes sense. Thank you.

  • I really appreciate the additional insights, but I still don't understand a basic issue:

    Wouldn't 40 MB of changes on a 20 MB database imply that the database is essentially being 'replaced' twice over, or at least something equivalent to that in terms of the magnitude of the changes?  My database isn't undergoing anywhere near that level of change on a daily basis, so I just don't see how the log file is growing.

    What am I missing?

    Thanks,

    Jim



  • Bill,

    Personally, I don't care for the maintenance plans.  I have 2 jobs that back up the databases: 1 for full backups and 1 for transaction logs.  In the script I check for new databases and create the respective backup devices, so I never have to alter a maintenance plan to get all databases backed up.  To answer your question, changing the maintenance plan to back up the log every 2 hours should be sufficient.
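
    The log job is roughly like this (the path and naming are placeholders, and any database in simple recovery would have to be skipped, since you can't back up its log):

        DECLARE @db sysname, @sql varchar(500)

        DECLARE dbs CURSOR FOR
            SELECT name FROM master..sysdatabases
            WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')

        OPEN dbs
        FETCH NEXT FROM dbs INTO @db
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- one backup file per database; WITH NOINIT appends to it
            SET @sql = 'BACKUP LOG [' + @db + '] TO DISK = ''D:\Backups\'
                     + @db + '_log.bak'' WITH NOINIT'
            EXEC (@sql)
            FETCH NEXT FROM dbs INTO @db
        END

        CLOSE dbs
        DEALLOCATE dbs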

    Jim,

    There is more than just the changed data being written out.  The log timestamps each transaction.  It stores the changes, along with enough information to roll back the changes.  It logs the allocation and deallocation of database pages (index fragmentation), and finally the commit/rollback status of the transaction.  Once again, I'll state that I don't know your database structure, but suppose your index fill factor is 100 percent.  You make a change, the page splits, all data on the page is reorganized, and information is written to the log about the new data page allocation.  Or many things are changed in a single transaction.  An example I come across a lot is when I do a DTS import into the database.  On initial setup you can't specify a commit batch size.  If I have a large file, I will save the import as a DTS package so I can adjust the commit batch size in the transformation task, so my log file won't grow enormous.  Just to say that more can be changing than appears on the surface.
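
    If page splits turn out to be the culprit, one fix is to rebuild the indexes with some free space left on each page; the table name here is only an example:

        -- rebuild all indexes on the table with a 90% fill factor, leaving
        -- 10% free per page to absorb changes without page splits
        -- (at the cost of a somewhat larger table)
        DBCC DBREINDEX ('dbo.PortalContent', '', 90)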

    Hope that helps.

    Tom

  • Thanks, Tom.  Yes, that does help explain the situation.


    Best Regards,

    Jim Vernon
