Performance problem with Text data type

  • We need to move 14 million rows out to an archive database. This table contains 6 columns with a Text data type. Our initial testing shows (on 9000 rows) that it takes ~5 minutes per 3000 rows. The results are the same whether we use the Copy SQL Objects task or the Data Transformation task. Does anyone know of a way to tweak the package to increase performance?

    MG

    "There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies."
    Tony Hoare

    "If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.

  • You can use the T-SQL BULK INSERT statement or bcp. The BULK INSERT statement can potentially perform faster than the others because it executes completely within the SQL Server process; bcp and DTS are utility programs that run in their own processes and must bear the overhead of interprocess communication to pass data from the source text file to SQL Server.
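
    A rough sketch, assuming the rows have already been exported to a flat file (table and file names are only placeholders):

        -- Export the source table to a native-format file first, e.g. with bcp:
        --   bcp SourceDB.dbo.BigTable out C:\archive\bigtable.dat -n -T
        -- Then load it into the archive table from inside SQL Server:
        BULK INSERT ArchiveDB.dbo.BigTable
        FROM 'C:\archive\bigtable.dat'
        WITH (
            DATAFILETYPE = 'native',  -- matches the bcp -n native export
            TABLOCK,                  -- table lock helps get a minimally logged load
            BATCHSIZE = 50000         -- commit in batches rather than one huge transaction
        )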


    Kind Regards,

    Chintak Chhapia

  • Is this a direct SQL to SQL datapump? If so, do you have any indexes on the archive table? If not, try adding a clustered index; there have been documented cases where this has improved inserts.

    Another option might be to transfer the records a bit at a time: first transfer part of each record across as an insert, then transfer the remaining parts of the record as updates.
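
    Purely illustrative, assuming a RecordID key and made-up column names:

        -- Clustered index on the archive table before the load
        CREATE CLUSTERED INDEX IX_ArchiveTable_RecordID
            ON ArchiveDB.dbo.ArchiveTable (RecordID)

        -- First pass: insert the non-text columns
        INSERT INTO ArchiveDB.dbo.ArchiveTable (RecordID, Col1, Col2)
        SELECT RecordID, Col1, Col2
        FROM SourceDB.dbo.SourceTable

        -- Second pass: bring the text columns across as updates
        UPDATE a
        SET    a.TextCol1 = s.TextCol1
        FROM   ArchiveDB.dbo.ArchiveTable a
        JOIN   SourceDB.dbo.SourceTable s ON s.RecordID = a.RecordID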

     

    --------------------
    Colt 45 - the original point and click interface

  • Make sure your archive database is large enough for the new records, or make sure it is growing in large chunks. Lots of little growths will slow down the copy. Also, make sure the archive database has its recovery model set to 'simple'. Have you tried dropping all indexes from the destination table (before trying Phill's suggestion)?
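
    Along those lines, with the sizes and logical file names as pure guesses:

        -- Pre-size the archive data file and grow it in large chunks
        ALTER DATABASE ArchiveDB
            MODIFY FILE (NAME = ArchiveDB_Data, SIZE = 20000MB, FILEGROWTH = 500MB)

        -- Simple recovery so the copy isn't fully logged
        ALTER DATABASE ArchiveDB SET RECOVERY SIMPLE

        -- Drop nonclustered indexes on the destination before the copy
        -- (run this in the archive database)
        DROP INDEX ArchiveTable.IX_ArchiveTable_SomeColumn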

    Peter

  • Thank you all for your suggestions. We sized the archive database for 2 years of data and the initial load will be for 1 year. I completely blanked out on the Bulk Insert. We'll test that option also. It seems though, that the biggest culprit is the text column data type. We created another table with varchar instead of text and the same 9000 rows load in less than 10 seconds.
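
    In essence the test looked like this (column names changed for illustration; note varchar(8000) truncates anything longer):

        -- Same layout as the real table, but varchar(8000) in place of text
        CREATE TABLE dbo.ArchiveTest_Varchar (
            RecordID int NOT NULL,
            Notes1   varchar(8000) NULL,  -- was text
            Notes2   varchar(8000) NULL   -- the other text columns were converted the same way
        )

        INSERT INTO dbo.ArchiveTest_Varchar (RecordID, Notes1, Notes2)
        SELECT RecordID,
               CAST(Notes1 AS varchar(8000)),
               CAST(Notes2 AS varchar(8000))
        FROM SourceDB.dbo.SourceTable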

    MG

    "There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies."
    Tony Hoare

    "If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.

  • Peter: Yes, good catch on the indexes. I forgot the reverse of my own argument. And good pointers on the recovery model and database growth.

    MG: Well, that's another reason I don't like text columns. I'm sure you'll have your share of problems exporting and importing using BULK INSERT. You do realise you have to export the data to a text file before you can BULK INSERT it? Roll on SQL Server 2005 with varchar(max).

    Are you able to perform partial record transfers, say transferring 1-2 text columns at a time?
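
    For example, one text column per pass, in chunks so the log doesn't blow out (all names are placeholders):

        -- Move one text column at a time, 10000 rows per batch
        SET ROWCOUNT 10000

        WHILE 1 = 1
        BEGIN
            UPDATE a
            SET    a.TextCol1 = s.TextCol1
            FROM   ArchiveDB.dbo.ArchiveTable a
            JOIN   SourceDB.dbo.SourceTable s ON s.RecordID = a.RecordID
            WHERE  a.TextCol1 IS NULL
              AND  s.TextCol1 IS NOT NULL

            IF @@ROWCOUNT = 0 BREAK
        END

        SET ROWCOUNT 0  -- reset so later statements aren't limited

        -- Repeat the same loop for the next text column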

     

    --------------------
    Colt 45 - the original point and click interface

  • You sort of hint that you are creating a new archive database and it doesn't already have records in it. In which case, another idea might be to restore a backup of your main database into the archive database and then delete the records you don't want. You would have a bit of tidying up to do afterwards though, such as rebuilding indexes, shrinking the archive database, etc.
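
    A rough outline of that approach (file names and the cutoff date are just examples):

        -- Restore a copy of the main database as the archive database
        RESTORE DATABASE ArchiveDB
        FROM DISK = 'D:\backup\MainDB.bak'
        WITH MOVE 'MainDB_Data' TO 'E:\data\ArchiveDB.mdf',
             MOVE 'MainDB_Log'  TO 'E:\data\ArchiveDB.ldf'

        -- Delete everything newer than the archive cutoff
        DELETE FROM ArchiveDB.dbo.BigTable
        WHERE CreatedDate >= '20040101'

        -- Tidy up: rebuild indexes and reclaim the space
        DBCC DBREINDEX ('ArchiveDB.dbo.BigTable')
        DBCC SHRINKDATABASE (ArchiveDB)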

    Peter
