Importing data from txt file (fixed length fields)

  • Thanks for all your helpful posts! I knew I would get good answers from this gang 🙂

You were all correct -- the prefix values needed to be zeros. (From a closer reading of BOL, I now gather this is because the file was created externally, so prefix values never figured into its creation.)

Also, the dates must be in ODBC format for SQL 2000, which I'm using. Since I need to import them into a working table, from which I later parcel out the records, I'm able to leave the text file as is (I can't change it anyway) and import the dates as character values.

On my subsequent INSERT INTO ... SELECT ...

to parcel out the data from the working table, I found that CAST(StartDate AS smalldatetime) in the SELECT works great (in my case, the StartDate field in the text file is guaranteed to hold a valid date).
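The parcel-out step described above can be sketched as follows; the table and column names are hypothetical, and only the CAST(... AS smalldatetime) idea comes from the post:

```sql
-- Hypothetical working table populated from the text file; dates arrive
-- as character values. Parcel the records out to the destination table,
-- converting the character date on the way (valid dates are guaranteed
-- upstream, so no error handling for bad dates is shown).
INSERT INTO dbo.Enrollment (ESSN, GroupCode, StartDate)
SELECT ESSN,
       GroupCode,
       CAST(StartDate AS smalldatetime)  -- e.g. '2001-08-08' becomes a datetime
FROM dbo.WorkingTable;
```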

    Thanks again for all your help, guys!

    Best regards,

    SteveR

I am currently importing fixed-length and variable-length text files as well.

I have created a Schema.ini file in which I place all the information about each file: column names, lengths, types, etc.

I then set up a linked server that points at the directory where the Schema.ini (which has to be in the same directory) and the file(s) are.

Once you have set this up, you can use the following to pull the data in from the file:

    SELECT Fields FROM LinkedServerName...TextFile#txt
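The post doesn't show the linked-server setup itself; a sketch using the Jet 4.0 text driver (the server name and directory are assumptions) might look like:

```sql
-- Create a linked server over a directory of text files using the
-- Jet 4.0 OLE DB provider; Schema.ini must sit in the same directory.
EXEC sp_addlinkedserver
     @server     = 'TXTSRV',                    -- hypothetical name
     @srvproduct = 'Jet 4.0',
     @provider   = 'Microsoft.Jet.OLEDB.4.0',
     @datasrc    = 'C:\DataFiles',              -- hypothetical directory
     @provstr    = 'Text';

-- Each file then appears as a table; the '.' in the file name
-- becomes '#' in the four-part name:
SELECT * FROM TXTSRV...SampleFixedLength#txt;
```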

    A Sample Schema.ini is below:

This one file describes a CSV-delimited file, a special-character-delimited file, and a fixed-length file.

    [SampleSpecialDelimiter.txt]

    Format=Delimited(|)

    ColNameHeader=False

    MaxScanRows=10

    Col1=Corp CHAR WIDTH 5

    Col2=CusId CHAR WIDTH 15

    Col3=AMTSEL_14 INTEGER

    Col4=AMTSEL_15 INTEGER

    Col5=AMTSEL_16 INTEGER

    Col6=AMTSEL_17 INTEGER

    Col7=AMTSEL_18 INTEGER

    Col8=AMTSEL_19 INTEGER

    Col9=AMTSEL_20 INTEGER

    Col10=AMTSEL_21 INTEGER

    Col11=CustomerName CHAR WIDTH 52

    Col12=Address1 CHAR WIDTH 24

    Col13=Address2 CHAR WIDTH 24

    Col14=Address3 CHAR WIDTH 24

    Col15=Address4 CHAR WIDTH 24

    Col16=Address5 CHAR WIDTH 24

    Col17=CityName CHAR WIDTH 13

    Col18=State CHAR WIDTH 2

    Col19=ZipCode CHAR WIDTH 5

    Col20=Zip4 CHAR WIDTH 4

    [SampleCSVFormat.txt]

    Format=CSVDelimited

    ColNameHeader=False

    MaxScanRows=10

    Col1=ISOCurrencyCode CHAR WIDTH 3

    Col2=ISOCurrencyDesc CHAR WIDTH 255

    [SampleFixedLength.txt]

    FORMAT=FixedLength

    ColNameHeader=False

    MaxScanRows=10

    Col1=ISOCurrencyCode CHAR WIDTH 3

    Col2=Spacer1 CHAR WIDTH 1

    Col3=BuyRate CHAR WIDTH 18

Col4=Spacer2 CHAR WIDTH 1

    Col5=SellRate CHAR WIDTH 18

Col6=Spacer3 CHAR WIDTH 1

    Col7=RateEffectiveDate CHAR WIDTH 11

You can describe as many different files as you need in this one ini file.

I use this because I have compared loading the same files with DTS and with the method above (the only change being the load process itself): DTS took 50 minutes to 1 hour 15 minutes, while the Schema.ini process takes 5 minutes 20 seconds.

    Just my personal experience in one process as an example.

    Good Luck!

    AJ Ahrens

    SQL DBA

    Custom Billing AT&T Labs



    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • I have used this method before but could never get it to work with fields greater than 255 chars, even when following the documentation!

    Far away is close at hand in the images of elsewhere.
    Anon.

Sounds about right. They are probably still working out the bugs to get varchar/char working at greater than 255.

Still, I am sorry to hear that. Luckily, for the applications I'm using it for, the data comes from an MVS-type mainframe and the largest data element is 255.

Thanks for the feedback on the field-size limit =)

    AJ Ahrens

    SQL DBA

    Custom Billing AT&T Labs




Sorry if I sound stupid, but here is another simple solution.

Export from Access to Excel, then from Excel to SQL.

    paul



I had a similar project that imports data daily from a flat text file with a fixed 80-character record length. Here is a summary of what I did:

1. Create a temp table, and build a string containing the source file's path and name (the file name changes every day based on the date).

2. Bulk insert from the text file into the temp table using the string built in step 1.

3. Read each record from the temp table into separate columns based on character position, and do the proper conversions.

    4. Insert the converted data into destination tables.

5. Remember to verify that the text file exists.

6. Do some duplicate checking to prevent the same file from being imported more than once.

I did this project on SQL Server 7.0, and it has been running for years since then without major problems. I hope it helps someone.
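A minimal sketch of steps 1-3 above; the file path, naming pattern, staging table, and column positions are all hypothetical -- only the fixed 80-character record and the daily date-based file name come from the post:

```sql
-- Step 1: staging table holds each 80-char record as one string, and the
-- file name is built from today's date (path and pattern are assumptions).
CREATE TABLE #Stage (RawLine char(80));

DECLARE @file varchar(255), @sql varchar(500);
SET @file = 'C:\Feeds\DAILY_' + CONVERT(char(8), GETDATE(), 112) + '.txt';

-- Step 2: BULK INSERT does not accept a variable as the file name, so
-- build the statement dynamically and EXEC it.
SET @sql = 'BULK INSERT #Stage FROM ''' + @file + '''';
EXEC (@sql);

-- Step 3: carve columns out of each record by position and convert them
-- (positions here are illustrative, not from the post).
INSERT INTO dbo.Destination (ESSN, GroupCode, StartDate)
SELECT SUBSTRING(RawLine, 1, 9),
       SUBSTRING(RawLine, 10, 5),
       CAST(SUBSTRING(RawLine, 15, 8) AS smalldatetime)
FROM #Stage;
```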

    Jie Ma



(Note: where the format file below shows "\r", it is actually "\r\n" -- slash r, slash n.)

    8.0

    7

    1 SQLCHAR 0 5 "" 2 GroupCode ""

    2 SQLCHAR 0 1 "" 0 Star ""

    3 SQLCHAR 0 9 "" 1 ESSN ""

    4 SQLCHAR 0 1 "" 0 Bogus ""

    5 SQLCHAR 0 2 "" 6 DependentNo ""

    6 SQLCHAR 0 1 "" 3 Gender ""

    7 SQLCHAR 0 8 "\r" 7 StartDate ""

the imported data is:

ESSN       GroupCode  Gender  IntColumn  SmallIntColumn  DependentNo  StartDate                PaidAmount
008573764  11517      F       NULL       NULL            10           2001-08-08 00:00:00.000  NULL
008573764  11517      F       NULL       NULL            10           2001-08-08 00:00:00.000  NULL
008575476  11517      F       NULL       NULL            10           2001-11-05 00:00:00.000  NULL
009069858  11517      F       NULL       NULL            10           2002-05-06 00:00:00.000  NULL
034565873  11517      F       NULL       NULL            10           2002-05-29 00:00:00.000  NULL
395848482  11517      F       NULL       NULL            10           2002-05-29 00:00:00.000  NULL
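A format file like the one above is handed to the load like so; the file paths and table name are assumptions (a server-column ordinal of 0 in the format file, as on the Star and Bogus fields, means that field is skipped):

```sql
-- Load the fixed-width file through the bcp format file shown above;
-- the format file maps each fixed-width field to a table column and
-- skips the filler fields (server column 0).
BULK INSERT dbo.Claims
FROM 'C:\Feeds\claims.txt'
WITH (FORMATFILE = 'C:\Feeds\claims.fmt');
```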
