Increase load speed

  • FoxPro data is moved into intermediate SQL tables that the web-based application never sees. A stored procedure then creates events in "event" tables, and finally C# code generates inserts from those events to move the data into the final tables. I'm thinking that if we estimate the SQL data file growth and size the file up front, we can avoid autogrowths altogether and increase load speed (there's a sketch of the pre-sizing at the end of this post).

    Same for the transaction log file (.ldf). I noticed one of our dev SQL Servers had the conversion test database set to simple recovery, and I'm informing the developers that won't work for the actual production conversion since we have log shipping in place. I can't imagine hosing log shipping during conversions: hundreds of other clients' data sit in the same production database, and they would be left without log backups or log shipping for the whole weekend of the conversion.

    Now I'm wondering if simple recovery is making their conversion speed estimates a bit optimistic.
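
    In case it helps to make that concrete, here is a minimal sketch of the pre-sizing, assuming placeholder names (ConvDB, ConvDB_Data, ConvDB_Log) and example target sizes rather than our real ones:

        -- Pre-size the data and log files before the conversion so the
        -- load never pauses for an autogrowth. All names and sizes here
        -- are examples, not the actual production values.
        ALTER DATABASE ConvDB
            MODIFY FILE (NAME = ConvDB_Data, SIZE = 300GB);

        ALTER DATABASE ConvDB
            MODIFY FILE (NAME = ConvDB_Log, SIZE = 100GB);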

  • I think you hit the nail on the head when you were describing the issue at hand. It's not so much that the processing time is going to change; it's that you need to plan for a lot more log to be retained. The likely performance sink is any auto-growth pauses the load hits along the way.

    Ultimately, Simple and Full log the same amount of information. The only difference is how long you hold onto what's in the log: Simple recycles whatever commits, and Full doesn't. I'd say size the log BIG, and get aggressive with T-log backups and ships so that you can avoid the log growing during this load process (see the sketch below), and you really shouldn't see a huge increase.

    You could certainly look at bulk loading the data if that could apply, since minimally logged bulk operations decrease the amount being logged (though under Full recovery bulk loads are still fully logged, so you would need Bulk-Logged or Simple to see the benefit), but that's a process change.
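
    For the "aggressive T-log backups" piece, a minimal sketch of the commands involved, assuming a placeholder database name and backup path; in production the backups should flow through the existing log-shipping backup job (just run more often during the load):

        -- How full is the log right now? Returns one row per database
        -- with Log Size (MB) and Log Space Used (%); handy to watch
        -- while the load runs.
        DBCC SQLPERF(LOGSPACE);

        -- A log backup frees committed log space for reuse. Note that
        -- an ad-hoc backup like this would add a file to the chain that
        -- the log-shipping restore job never sees, so in production this
        -- belongs in the shipping job's schedule, not a one-off command.
        BACKUP LOG ConvDB
            TO DISK = N'\\backupshare\ConvDB\ConvDB_log_001.trn'
            WITH STATS = 10;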


  • Yes, I'll need to estimate the expected growth in both the data file and the t-log file and size them up in advance of the conversion. I don't have a good feel for how long that may take, so I'll probably start on a Sunday and increase them in 10GB increments. If that goes fairly quickly on our SAN, then maybe I'll try 50 or 100GB increments.

    In the test environment, with simple recovery, the t-log of course isn't growing, but the data file is set to autogrow in 1MB increments. I guess that is moot if we've upsized in advance, but normally that seems like a very small auto-growth increment: each growth is quick, but it's going to happen a lot (there's a sketch below for checking and bumping those settings).

    So far the database being used for the test conversion is much smaller than production, so I'm not sure they're getting a realistic conversion speed. With the production data file at about 210GB, comprising several hundred tables and indexes, I would think the new inserts will take longer.
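
    Here is the sketch for checking what the files are actually set to, and for bumping that 1MB autogrow to something saner as a safety net even after pre-sizing; the database and file names are placeholders:

        -- Current size and growth settings for each file in the database.
        -- size and growth are in 8KB pages unless is_percent_growth = 1.
        SELECT name, type_desc,
               size   * 8 / 1024 AS size_mb,
               growth * 8 / 1024 AS growth_mb,
               is_percent_growth
        FROM   sys.master_files
        WHERE  database_id = DB_ID(N'ConvDB');

        -- Even with the files pre-sized, leave a sensible autogrow
        -- increment as a safety net instead of the 1MB setting.
        ALTER DATABASE ConvDB
            MODIFY FILE (NAME = ConvDB_Data, FILEGROWTH = 512MB);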

  • ... 10GB increments....

    sounds like *A LOT* for increments 😉



  • I didn't mean 10GB increments for auto growth. I meant large "manual" increases in file size, done during a maintenance window, to get plenty of free space in the files before a large conversion. If I need to increase the file sizes by 100GB each for data and t-log, I would be forever doing it in small 100MB increments.
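
    To make that concrete, here is a sketch of the kind of manual stepping I had in mind: growing a file 10GB at a time toward a target, timing each step to see how fast the SAN absorbs it. The database name, file name, and sizes are all placeholders:

        -- Grow the data file in 10GB steps up to a 300GB target,
        -- printing the elapsed time per step to gauge the SAN's speed.
        DECLARE @size_gb int = 200,   -- current file size (example)
                @target  int = 300;   -- target file size (example)

        WHILE @size_gb < @target
        BEGIN
            SET @size_gb += 10;

            DECLARE @t   datetime2     = SYSDATETIME();
            DECLARE @sql nvarchar(400) =
                  N'ALTER DATABASE ConvDB MODIFY FILE (NAME = ConvDB_Data, '
                + N'SIZE = ' + CAST(@size_gb AS nvarchar(10)) + N'GB);';

            EXEC sys.sp_executesql @sql;

            PRINT CONCAT(@size_gb, 'GB reached in ',
                         DATEDIFF(SECOND, @t, SYSDATETIME()), ' seconds');
        END;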
