log space difference between SQL2000 and 2005

  • Hi,

    I am currently migrating a load from SQL2000 to another server running 2005 SP2. The log space for the DB is currently 70GB and I am trying to copy a table from another app server. The table size is +-35GB, but it keeps running out of log space. The log is clear (4% used) every time before the table starts to copy (via SSIS - copy table from a SQL Server to a SQL Server), yet it fills up the log every time and falls over.

    The same load exists on our current prod environment running SQL2000 SP4. The table structure is exactly the same and the log file is 60GB... and it runs smoothly in 2000 on a daily basis! Same data! Same table; smaller server, older version of SQL (2000 vs 2005)...

    Why does SQL2005 use more log space than 2000? And why more than double the table size? How can I fix this?

    PLEASE HELP!!

  • Try using the BULK_LOGGED recovery model. It sounds like when you are copying all of the data over, it is writing each of the rows to the log. Bulk-logged recovery would prevent this. Just make sure you have a good backup before you start, and if everything goes correctly, switch back to the original recovery model your business requires (see the sketch at the end of this post).

    Fraggle.

    P.S. I would ask why you are using SSIS to copy the data from 2000 to 2005. Have you tried detach and attach, or simply restoring a backup from 2000 on 2005?
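
    As a rough sketch of the recovery model switch suggested above (the database name and backup path are placeholders - adjust them to your environment):

        -- Take a good backup first, then switch to bulk-logged for the load
        BACKUP DATABASE StagingDB TO DISK = N'X:\Backups\StagingDB.bak';

        ALTER DATABASE StagingDB SET RECOVERY BULK_LOGGED;

        -- ... run the table copy here ...

        -- Switch back to whatever recovery model the business requires
        ALTER DATABASE StagingDB SET RECOVERY SIMPLE;   -- or FULL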

  • Thanks!

    Currently the recovery model is set to SIMPLE. It is in our staging area, so I am not so worried about losing data as most of the tables are cleared before every load. We basically only need backups of the structure of the DB. So I can set the recovery model to whichever works best... it's up to me to decide for that DB.

    As for your question about why I am using SSIS... maybe I should have explained a bit better. I am the DBA for a datamart in a bank in SA and we source data from various systems within the bank for MIS reporting etc. We are in the process of upgrading to 2005 and I am converting all our old DTS packages to SSIS. The server that I am copying the data from is one of our in-house source systems. I am basically replicating and optimizing the loads as I migrate/convert them. The 2000 box that I am talking about is my current prod server, which will be replaced by the one that I am having the issues with.

    Now as I said, the DB is exactly the same as on my 2000 box, recovery model and all... the only difference is that on the new server, the log space is bigger and the filegroups (data files) are split over more mountpoints/drives for disk I/O reduction...

  • Are you using the SSIS Compatibility Execute DTS Task, or are you building the package from scratch in SSIS? If you built it from scratch in SSIS, what kind of task are you using to move the data, and what is the destination type in that task?

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

  • I converted it to SSIS with the DTS Import Wizard. Currently it uses a simple data flow task with an OLE DB source to an OLE DB destination inside. No conversion tasks or anything funny! Just a straight table-to-table copy.

  • It seems you copy the data in one transaction. Even if your recovery model is SIMPLE, the whole transaction is registered in your log file (needed by SQL for recovery/rollback in case of a failure). After your commit, this log space is marked as "can be overwritten", but it still occupies space.

    Try chopping this action into smaller parts (committing after every x rows) - see the sketch at the end of this post - or try the BULK_LOGGED recovery model (I have no experience with that one).

    Wilfred
    The best things in life are the simple things
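
    As a rough illustration of the "commit after x rows" idea in plain T-SQL (the table and column names are made up; in the SSIS package the closest equivalent is probably lowering the Maximum insert commit size on the OLE DB destination's fast load options):

        -- Illustrative only: copy in 10,000-row batches so each transaction
        -- (and the log space reserved for its rollback) stays small.
        -- Assumes an integer key column called Id on the source table.
        DECLARE @LastId INT, @MaxId INT;
        SET @LastId = 0;
        SELECT @MaxId = MAX(Id) FROM dbo.SourceTable;

        WHILE @LastId < @MaxId
        BEGIN
            INSERT INTO dbo.TargetTable (Id, Col1, Col2)
            SELECT Id, Col1, Col2
            FROM dbo.SourceTable
            WHERE Id > @LastId AND Id <= @LastId + 10000;   -- one small, auto-committed batch

            SET @LastId = @LastId + 10000;
        END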

  • When you do an update, SQL writes it to the log. SQL also makes an estimate of the amount of log space that would be required to back out the update. The backout will need at least the same amount of log space as the original update, sometimes more.

    Therefore, when you copy a 35GB table in a single transaction, by the time you are copying the last row SQL will require at least 70GB of log space to complete the transaction. If the amount of free space within the original table's pages differs from that in the target, the target table will end up a different size to the source table, often bigger. Also, if page splits occur when loading the target table, more table and log space will be needed. So, even though SQL may have written only about 35GB of log, if less than 70GB of log space is available (because SQL needs to allow for a potential backout) the transaction will fail. (The commands at the end of this post show a quick way to check log usage while the load runs.)

    If you want to reduce the amount of log space required, split your table copy into multiple transactions. A few people have run some tests to work out an optimal transaction size, and 10,000 rows is often a good amount as this minimises both the log space and total elapsed time required. Using a batch size greater than 10,000 rows tends to give only a marginal improvement in overall time required. If you have significantly smaller batches then the time spent in doing many COMMITs will increase the time required.

    The other suggestion that has been given about putting the database into the bulk-logged recovery model will also reduce the log space needed.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ - 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
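
    If you want to watch how close the load is getting to that limit, these standard commands (nothing specific to this setup; the database name is a placeholder) show how much log is in use and what is holding log space:

        -- Percentage of each transaction log currently in use
        DBCC SQLPERF(LOGSPACE);

        -- Recovery model and what, if anything, is stopping log reuse
        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = N'StagingDB';   -- placeholder database name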

  • Be sure to review "Prerequisites for Minimal Logging in Bulk Import" in the 2005 BOL. There are additional requirements for achieving minimal logging beyond just setting the recovery model to 'bulk-logged' or 'simple'.
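
    For example (purely illustrative - the table name and file path are placeholders), one of those prerequisites is that the bulk load takes a table lock and the target is not replicated; in T-SQL that is the TABLOCK hint below, and in an SSIS OLE DB destination the corresponding setting should be the Table lock option under fast load:

        -- Sketch only: TABLOCK is one prerequisite; the target must also not be
        -- replicated, and the rules vary with the table's indexes - see the BOL
        -- topic above for the full list.
        BULK INSERT dbo.TargetTable
        FROM 'X:\Extracts\TargetTable.dat'   -- placeholder data file
        WITH (TABLOCK, BATCHSIZE = 10000);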
