osql /Export

  • Hi,

    I am using a stored procedure with osql and xp_cmdshell to generate INSERT statements for all the data in all the tables in my database and dump them into text files. I then use osql again to run these inserts against another database.

    The problem comes when I load large text files (150,000+ inserts): osql hangs and eventually times out.

    Is there a way I can split these exports into a specified size or number of rows per text file when they are created with osql?

    Thanks

  • Several ideas come to mind:

    - why not create a linked server and load it from that?

    - you could use BCP. This lets you control the batch size (-b, which is the unit of transaction) or, if you want to keep the files smaller, the first and last row to copy (-F and -L). BCP would be the fastest method, especially if you don't have indexes on the tables being loaded, and certainly if you set sp_dboption 'select into/bulkcopy' on, as you would in a staging db (a rough sketch follows at the end of this post).

    - with osql you can also specify the query timeout with the -t switch

    - probably the least convenient option (because it is less transparent) would be to split the export up using TOP or ranking functions.

    I would go for the -t option in osql or the batch size in bcp.
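
    As a rough sketch of the BCP route via xp_cmdshell (server, database, table and file names are placeholders, not taken from this thread): export the table in native format, then load it in batches of 10,000 rows per transaction with -b; -F and -L could be used instead to copy only a range of rows per file.

    declare @cmd nvarchar(1000)
    -- export the source table in native format (placeholder names throughout)
    select @cmd = 'bcp databaseA.dbo.TableA out C:\Temp\TableA.dat -SserverA -Usa -Ppassword -n'
    exec master..xp_cmdshell @cmd
    -- load it into the target, committing every 10,000 rows (-b)
    select @cmd = 'bcp databaseB.dbo.TableA in C:\Temp\TableA.dat -SserverB -Usa -Ppassword -n -b 10000'
    exec master..xp_cmdshell @cmd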

    Jan

  • Thanks,

    I have been using the timeout option with osql, but I have had instances where it was still running 48 hours after I set it in motion, at which point I guess it finally timed out.

    Here's part of my script in its simplest form, working with one table only.

    declare @DirWhereInsertScriptsWillBeCreated nvarchar(300),
            @sql nvarchar(200),
            @cmd nvarchar(1000),
            @FileName nvarchar(100),
            @TableName nvarchar(50),
            @ServerFrom nvarchar(50),
            @DbaseFrom nvarchar(50)

    select @DirWhereInsertScriptsWillBeCreated = 'C:\Temp\'
    select @ServerFrom = 'serverA'
    select @DbaseFrom = 'databaseA'
    select @TableName = 'TableA'
    select @FileName = 'TableA'

    set @sql = 'GenerateInserts @table_name = ' + @TableName + ', @disable_constraints = 1, @disable_triggers = 1, @ommit_computed_cols = 1, @include_column_list = 0'

    select @cmd = 'osql -S' + @ServerFrom + ' -d' + @DbaseFrom + ' -Usa -Ppassword -n -h-1 -w10000 -Q"' + @sql + '" >>"' + @DirWhereInsertScriptsWillBeCreated + @FileName + '.sql"'

    exec master..xp_cmdshell @cmd

    and to import into another db:

    select @cmd = 'osql -q -h-1 -a8156 -t900 -Usa -Ppassword -S' + @ServerTo + ' -d' + @DbaseTo + ' -n -i' + @FileName

    I have been playing around with the timeout and I still have no luck. You mentioned indexes; might that point to my timeout problem?

    Regards,

    Well, 48 hours seems to be an enormous amount of time, even for 150,000+ records.

    I suppose 'GenerateInserts' generates statements of the form INSERT table (col, ...) VALUES (..., ...)? Without any GO between them?

    If this is the case, it is one long transaction. True, indexes will have to be maintained, but even that doesn't justify this amount of time. Are there other users working concurrently and holding locks? Are your database and tlog big enough for this kind of transaction, or are they growing in small increments at a time?

    Jan

    Thanks. Yes, GenerateInserts creates statements of the form INSERT table VALUES (...); you can omit the column names, but it only puts a GO after each table, not between the inserts.

    It times out when it comes across a table with 150,000+ records; as you say, it seems to treat it all as one transaction.

    This is on a dedicated server, which makes it even stranger.

    The database and log file are set to auto-grow and the recovery model is simple.

    I don't have enough details to know what is really going on.

    But even with the recovery model set to simple, the transaction log will grow hugely during the transaction. Simple means 'truncate log on checkpoint', but it truncates the log only up to the oldest active transaction. And if the log grows in small increments and is shrunk again afterwards, it will generate a lot of slow file-allocation calls to the OS. I suggest taking a look at the growth increments and not growing and shrinking unless you need to.
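
    As a minimal example (the database and logical file names here are placeholders; check yours with sp_helpfile), you could give the data and log files a fixed, reasonably large growth increment instead of a small one, and avoid shrinking them between loads:

    ALTER DATABASE databaseB MODIFY FILE (NAME = databaseB_Data, FILEGROWTH = 250MB)
    ALTER DATABASE databaseB MODIFY FILE (NAME = databaseB_Log, FILEGROWTH = 250MB)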

    Could you also consider changing "GenerateInserts" so that, for example, if primarykey % 1000 = 0 it adds a "GO", effectively creating batches of 1,000 rows? Of course, to keep it generic, you would need an int primary key on each table. If you are using SQL 2005, ROW_NUMBER() % 1000 = 0 might be handy.
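
    Something along these lines, as a sketch only (the table and column names dbo.TableA(Id, Name) are made up for illustration, and GenerateInserts itself is not reproduced here): emit a GO after every 1,000 generated INSERT statements so the import file runs as many small batches instead of one huge one.

    -- SQL 2005+; assumes an int key Id and a non-null varchar column Name
    SELECT CASE WHEN rn % 1000 = 0
                THEN stmt + CHAR(13) + CHAR(10) + 'GO'
                ELSE stmt END
    FROM ( SELECT 'INSERT dbo.TableA (Id, Name) VALUES ('
                  + CAST(Id AS varchar(12)) + ', '''
                  + REPLACE(Name, '''', '''''') + ''')' AS stmt,
                  ROW_NUMBER() OVER (ORDER BY Id) AS rn
           FROM dbo.TableA ) AS s
    ORDER BY rn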

    Jan

    Thanks for the help Jan, I have a few things now that I am going to try.

    If you use BCP along with the "Simple" recovery model on a staging table with no indexes and no triggers, the load of 150,000 rows should take no more than 15 seconds, and will usually take only 2 if the batch size is set very high, especially in the presence of a format file.
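
    As a sketch only (the staging table, data file and format file names are made up), the load side could look something like this, with -b set higher than the row count so the whole file goes in as one large batch:

    declare @cmd nvarchar(1000)
    -- -f points at a format file generated beforehand with bcp's format option
    select @cmd = 'bcp StagingDb.dbo.TableA_Staging in C:\Temp\TableA.dat -SserverB -Usa -Ppassword -f C:\Temp\TableA.fmt -b 200000'
    exec master..xp_cmdshell @cmd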

    --Jeff Moden


