SSIS Mapping (Audit)

  • I have a question about how SSIS handles column mappings when the number of columns in a file grows over time.

    A client has a file that started off with, say, 10 columns. Over time that has grown to, say, 100 columns.

    The package is mapped to the maximum number of columns. If a file comes in with fewer than the 100 columns, should everything still work properly?

    Basically, rows are being ingested incorrectly.

    Marshall

  • In reply to Jonathan Marshall (8/27/2012):



    Are the missing columns off the end of the file?

    i.e.

    Field1, Field2, Field3, Field4, Field5

    to

    Field1, Field2, Field3

    instead of

    Field1, Field3, Field5

    If it's the second, you will have problems. The mapping will end up as Field1 -> Field1, Field3 -> Field2, Field5 -> Field3. The only solutions I can think of in that case are to create multiple connections for the file and branch depending on which version comes in, or to load the file into a single-column table (varchar(max)) and parse from there.

    I'm not 100% certain what will happen in the first case, but my guess is that it will work fine, leaving NULLs in the affected columns.
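    To make that failure mode concrete, here is a minimal Python sketch of what positional (ordinal) mapping does in the second case; the field names are just the ones from the example above:

        # Target layout the package is mapped to (example names from above).
        target_columns = ["Field1", "Field2", "Field3", "Field4", "Field5"]

        # An incoming file missing the interior columns Field2 and Field4.
        incoming_header = ["Field1", "Field3", "Field5"]

        # Positional mapping pairs columns by ordinal, not by name:
        for target, source in zip(target_columns, incoming_header):
            print(f"{source} -> {target}")
        # Field1 -> Field1   (correct)
        # Field3 -> Field2   (wrong: Field3's data lands in Field2)
        # Field5 -> Field3   (wrong: Field5's data lands in Field3)
        # Field4 and Field5 in the target are never populated.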

    Kenneth Fisher
    I was once offered a wizard's hat, but it got in the way of my dunce cap.
    For better, quicker answers on T-SQL questions: http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better answers on performance questions: http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
    Blog: www.SQLStudies.com

  • I will look into it more deeply.

    Basically, it seems columns have been added to the same file over the years.

    I mapped to the latest file (e.g., File_03312012.txt contains 90 columns; File_06302012.txt contains 99).

    The file with only 90 columns is importing only half of its rows.

    This pattern occurs throughout the process. I will do a deeper analysis of exactly where the rows are falling out.

    It's quite tedious at times. Is there any logic for handling incoming columns, or for checking the file before SSIS ingests it (a rough sketch of such a check follows below)?

    I would expect the process to fail, but it is not failing. The files are tab-delimited.
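    One way to pre-check a file, sketched in Python (the expected column count of 99 is just the figure from the example above, and running it from an SSIS Execute Process Task is one option; this is a sketch, not a drop-in solution):

        import csv
        import sys

        EXPECTED_COLUMNS = 99  # newest layout from the example above

        def check_header(path):
            """Return True if the tab-delimited file's header row
            has the expected column count."""
            with open(path, newline="") as f:
                header = next(csv.reader(f, delimiter="\t"))
            if len(header) != EXPECTED_COLUMNS:
                print(f"{path}: expected {EXPECTED_COLUMNS} columns, found {len(header)}")
                return False
            return True

        if __name__ == "__main__":
            # Exit non-zero so a calling task (e.g., an Execute
            # Process Task) can fail the package before the load.
            sys.exit(0 if check_header(sys.argv[1]) else 1)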

  • It's definitely the second scenario.

    When loading the data manually through the SSMS import wizard, all the rows and columns line up.

    So yes, it's definitely mapping incorrectly.

    Is there any way of auditing files through SSIS?

    By any chance, is the first row the column headings? If not, can it be made so? If so, there is a setting on the connection manager that tells it to use the first row for column headings, and that should resolve your problem. If that won't do it, then you will need some way for the package to tell the difference between the files: either the name of the file or the contents of one of the columns. I've dealt with a file that had 6 different types of rows in it, where the first column was lettered a-f to tell me which was which. That isn't too hard to deal with.

    If you can't get the column headings into the file, then tell me how you can identify which columns a file contains, and we will see what we can do :-). A rough sketch of branching on the header follows below.
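    For the "tell the difference between the files" route, one possibility is to read the header row and branch on its signature; a minimal Python sketch, with made-up layouts and branch names:

        import csv

        # Hypothetical layouts: header tuple -> which load path to take.
        KNOWN_LAYOUTS = {
            ("Field1", "Field2", "Field3"): "old_layout_load",
            ("Field1", "Field2", "Field3", "Field4", "Field5"): "new_layout_load",
        }

        def detect_layout(path):
            """Return the load path for a tab-delimited file,
            based on its header row."""
            with open(path, newline="") as f:
                header = tuple(next(csv.reader(f, delimiter="\t")))
            try:
                return KNOWN_LAYOUTS[header]
            except KeyError:
                raise ValueError(f"{path}: unrecognised header {header}")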

    Kenneth Fisher

  • Yes, each file has column headers.

    I'm retrieving the name of the file.

    Based on the filename, the package populates the correct table.

    The mapping is based on the file with the most recent columns, which has more columns than the older files.

    Even the order of the columns changes from one file to the next.

    The package isn't failing, but the data is not loading correctly.

    I came across this article about loading files whose layout changes.

    I'm not the strongest programmer, but it seems I can give it a try to make the package more scalable.

    Let me know your thoughts:

    http://blog.quasarinc.com/ssis/best-solution-to-load-dynamically-change-csv-file-in-ssis-etl-package/
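    Along the lines of that article (paraphrasing the idea, not its code): since the headers are present but the column order and count vary, one option is to pre-process each file into the full, newest column order, matching columns by header name and padding missing ones, so the package keeps one fixed mapping. A rough Python sketch, with hypothetical column names:

        import csv

        # The full, newest layout the package is mapped to (hypothetical names).
        CANONICAL = ["Field1", "Field2", "Field3", "Field4", "Field5"]

        def normalize(src_path, dst_path):
            """Rewrite a tab-delimited file into the canonical column order,
            matching columns by header name; missing columns come out empty."""
            with open(src_path, newline="") as src, \
                 open(dst_path, "w", newline="") as dst:
                reader = csv.DictReader(src, delimiter="\t")
                writer = csv.DictWriter(dst, fieldnames=CANONICAL,
                                        delimiter="\t", extrasaction="ignore")
                writer.writeheader()
                for row in reader:
                    writer.writerow({col: row.get(col, "") for col in CANONICAL})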
