Best practice for full log file during schema expansion?

  • I have a customer who wants to expand one of their fields, which is something we typically do with no problem. However, they have a large database (I am not sure how many records) and not much space. Their database is in full recovery mode, and they only have 24 GB free after they have "cleared out" their log file. I am not sure whether they're backing the log up or truncating it; my guess is they're truncating it.

    The problem is I'm not sure we have a possible way to expand this field within their limitations. It's not a large expansion, but it's an existing field with data in it. He thinks his only option is to add a new drive and migrate the whole database.

    At present I'm thinking he should go with the new drive and just add a new log file there. I'm considering simple recovery mode but don't know what, if any, impact it would really have. I'm guessing, based on our usual configuration, that his log is set to auto-grow without a maximum size specification.

    How have others dealt with this situation, when an ALTER TABLE command fills the log to the point of failure?

  • If the column is varchar, expanding it won't require additional space unless you update those columns with a larger amount of data.

    Once you alter the table, you can update the table in batches and take a log backup between batches. This way you can control the growth of the log file; the data file, however, will definitely increase.



    Pradeep Singh
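
  • The batching pattern above can be sketched roughly as below. This is only an illustration under assumed names: `dbo.MyTable`, its `Id` key, `MyCol`, the `UPPER()` transformation, and the backup path are all placeholders, not anything from the actual system.

    ```sql
    -- Sketch: update in key-range batches, backing up the log between batches
    -- so the log space can be reused (database is in FULL recovery).
    DECLARE @batch  int = 50000;      -- rows per batch; tune to the system
    DECLARE @lastId int = 0, @maxId int;

    SELECT @maxId = MAX(Id) FROM dbo.MyTable;

    WHILE @lastId < @maxId
    BEGIN
        UPDATE dbo.MyTable
        SET    MyCol = UPPER(MyCol)   -- placeholder for whatever rewrite is needed
        WHERE  Id > @lastId AND Id <= @lastId + @batch;

        SET @lastId = @lastId + @batch;

        -- Log backup between batches keeps the log from growing unbounded.
        BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';
    END
    ```

    Each batch generates a bounded amount of log, and the backup after it frees that space for the next batch.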

  • It's a char field; we're just expanding it from 10 to 64. It's not being updated. An index is dropped to allow the expansion, and the index is added back afterward, but they didn't get that far. The field already has data in it, and defaults to blank.

    The database file is around 32 GB. Is it logical to expect the change to potentially take that much space in the log file? That would make sense, then, if he's only got 24 GB free.

  • If you are modifying char(10) to char(64), then each row will expand by 54 bytes. The total depends on the number of rows the table currently holds. Does the 32 GB data file consist of this table only?



    Pradeep Singh
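
  • For reference, the drop-index / alter / re-add sequence being described looks roughly like this (all object names are hypothetical). Note that widening a char column with existing data is a size-of-data operation: every row is rewritten, and all of it is fully logged, which is why the log balloons.

    ```sql
    -- 1. Drop the index that covers the column (name is a placeholder).
    DROP INDEX IX_MyTable_MyCol ON dbo.MyTable;

    -- 2. Widen the column; each existing row grows by 54 bytes,
    --    and every rewritten row is logged.
    ALTER TABLE dbo.MyTable
        ALTER COLUMN MyCol char(64) NOT NULL;   -- was char(10)

    -- 3. Recreate the index (also logged).
    CREATE INDEX IX_MyTable_MyCol ON dbo.MyTable (MyCol);
    ```

    Rough log estimate: row count × 54 bytes for the column growth alone, plus the cost of rewriting each row and rebuilding the index.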

  • No, the .mdf contains the entire database, and there's a single log file. If a new log file were created in a location with more space, I'm thinking that might be all it needs.
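
    Adding a second log file on a drive with free space is a one-statement change; a sketch with hypothetical names and paths:

    ```sql
    -- Add a second log file on a drive with free space.
    -- Database name, logical name, path, and sizes are all examples.
    ALTER DATABASE MyDatabase
    ADD LOG FILE
    (
        NAME       = MyDatabase_log2,
        FILENAME   = N'E:\SQLLogs\MyDatabase_log2.ldf',
        SIZE       = 10GB,
        FILEGROWTH = 1GB
    );
    ```

    SQL Server fills log files sequentially, so the engine will spill into the second file once the first is full.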

  • Creating an extra log file is a good idea if you suspect the log might fill up. However, the space needed will depend on the table size/number of rows in it.

    If you have a test server, create a dummy database, copy the table to that server, do the expansion, and see how much log space it requires, just to be sure.

    Edit - Fixed spellings



    Pradeep Singh
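
  • On the test server, you can measure the log impact directly after running the ALTER; a minimal sketch:

    ```sql
    -- After running the expansion on the test copy, check how much
    -- log space it consumed. Reports size and % used per database.
    DBCC SQLPERF (LOGSPACE);
    ```

    Comparing the log-space-used figure before and after the ALTER gives a realistic estimate of how much free space the production box actually needs.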

  • Can it be varchar instead of char?

    Look at how much space the table is currently using to get an idea of the impact.

    Are the database and log files on the same drive? Move the log file to a separate drive for more space and better performance.
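
    Both suggestions can be checked or done with a few statements; names and paths below are illustrative only:

    ```sql
    -- How much space is the table currently using?
    EXEC sp_spaceused N'dbo.MyTable';

    -- Moving the log file to another drive: repoint the metadata,
    -- take the database offline, move the .ldf at the OS level, bring it back.
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_log,
                     FILENAME = N'E:\SQLLogs\MyDatabase_log.ldf');

    ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- (copy MyDatabase_log.ldf to E:\SQLLogs\ in the file system here)
    ALTER DATABASE MyDatabase SET ONLINE;
    ```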
