SSIS: How to pass the value of two arrays into a variable

  • Dear All,

    I have code (a Script Task inside SSIS) that reads a CSV file and extracts only the necessary columns and records from this file.

    The goal is then to load the information in the first array (which holds the column names) and in the second array (which holds the records for those columns) into an SSIS variable that can be used as the source for my data flow.

    I was already able to create the code that populates the two arrays; now I need to populate this variable.

    Can someone help me understand which type of variable I need to declare at the package level, and how I can populate it?

    For the time being I have created an Object variable, but I don't know how to populate it with the contents of my two arrays.

    Below you can find the C# code that populates my arrays:

    public void Main()
    {
        try
        {
            string fileName = Dts.Variables["$Package::FilePath"].Value.ToString();

            string[] myArrColumns = new string[1];
            string[] myArrRecords = new string[1];

            int ColArrayNum = 0;
            int SplitedColumnsCounter = 0;
            int SplitedWordsCounter = 0;
            int RecordArrayNum = 0;
            int Resflag = 0;   // 1 when the optional RES column is present
            int Position = 0;  // index of the RES column when present

            string[] lines = System.IO.File.ReadAllLines(fileName);

            for (int l = 0; l < lines.Length; l++)
            {
                if (l == 0)
                {
                    // Header row: detect the optional RES column (column U, index 20).
                    string[] Columnname = lines[l].Split(',');

                    if (Columnname.Length > 20 && Columnname[20].ToUpper() == "RES")
                    {
                        Resflag = 1;
                        Position = 20;
                    }

                    SplitedColumnsCounter += Columnname.Length - Resflag;
                    if (myArrColumns.Length < SplitedColumnsCounter)
                    {
                        Array.Resize(ref myArrColumns, SplitedColumnsCounter);
                    }

                    // '<' rather than '<=' so we never read past the end of the array.
                    for (int e = 0; e < Columnname.Length; e++)
                    {
                        if (Resflag == 0 || e != Position)  // skip the RES column when present
                        {
                            // MessageBox.Show("Column N" + e + " Valor " + Columnname[e]);  // debug only
                            myArrColumns[ColArrayNum] = Columnname[e];
                            ColArrayNum++;
                        }
                    }
                }
                else
                {
                    // Data rows: copy every field except the RES field.
                    string[] words = lines[l].Split(',');

                    SplitedWordsCounter += words.Length - Resflag;
                    if (myArrRecords.Length < SplitedWordsCounter)
                    {
                        Array.Resize(ref myArrRecords, SplitedWordsCounter);
                    }

                    for (int e = 0; e < words.Length; e++)
                    {
                        if (Resflag == 0 || e != Position)
                        {
                            // MessageBox.Show("Column N" + e + " Valor " + words[e]);  // debug only
                            myArrRecords[RecordArrayNum] = words[e];
                            RecordArrayNum++;
                        }
                    }
                }
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }
        catch (Exception exception)
        {
            // Write the exception to a timestamped log file and flag the task as failed.
            using (StreamWriter sw = File.CreateText(Dts.Variables["User::DestinationFolderPath"].Value.ToString()
                + "\\Error\\" + "ErrorLog_" + DateTime.Now.ToString("yyyyMMddHHmmss") + ".log"))
            {
                sw.WriteLine(exception.ToString());
            }
            Dts.TaskResult = (int)ScriptResults.Failure;
        }
    }

    #region ScriptResults declaration
    /// <summary>
    /// This enum provides a convenient shorthand within the scope of this class for setting the
    /// result of the script.
    ///
    /// This code was generated automatically.
    /// </summary>
    enum ScriptResults
    {
        Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
        Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
    };
    #endregion

    }

    }

  • Can I ask why you are doing this rather than loading directly from the file to a table?

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

  • Why are you using a script task to filter your columns and results, when you can easily do so in the Data Flow task itself? You can use a Conditional Split transformation to filter to the rows you want, and if you don't want to use certain columns, (for lack of better words) don't select them in your source or destination. It seems like you're overcomplicating the matter by trying to do this in a script (especially a Script Task, rather than a Script Component).

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Phil Parkin - Monday, April 16, 2018 7:12 AM

    Can I ask why you are doing this rather than loading directly from the file to a table?

    Seems we had the same thought process Phil. 🙂

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Dear Both,

    Thank you very much for your feedback.

    Problem is this.

    I have a provider (whom I cannot ask to change the format of the files) that sends me the same file, sometimes with the column RES and sometimes without it.

    This column is not needed.

    When present, it is column U in the file, occupying position 20.

    This means that sometimes I have 35 columns and sometimes I have only 34 columns.

    But I only need those 34; I don't need the U column.

    The only way I think it is possible to do this is using a Script Task.

    What do you think?

  • river1 - Monday, April 16, 2018 7:19 AM

    I have a provider (whom I cannot ask to change the format of the files) that sends me the same file, sometimes with the column RES and sometimes without it.

    I have to ask: why can't you? I've been told this countless times in my role, but after badgering the person who tells me this for the distributor's contact details and contacting them, I find out that it's a trivial matter. You don't know unless you try.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Hello,

    Because this file is generated by a provider that is sending this information for free. If we wish to have it customised, then we have to pay, and unfortunately it is very, very expensive ...

  • river1 - Monday, April 16, 2018 7:28 AM

    Hello,

    Because this file is generated by a provider that is sending this information for free. If we wish to have it customised, then we have to pay, and unfortunately it is very, very expensive ...

    There is a way to handle this which is going to be faster and easier to maintain than your proposed solution.

    If you read the file using a Script Component, treating each row as a 'single column' and then break the file into its columns using code within the Script Component, everything should work out just fine.

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.
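As a rough sketch of the split-and-drop logic Phil describes (the helper and its names are hypothetical illustrations; only the RES column name and its 0-based index 20 come from this thread):

```csharp
using System;
using System.Linq;

public static class CsvHelper
{
    // 0-based index where the optional RES column appears when present (column U).
    private const int ResPosition = 20;

    // Inspect the header line: is the optional RES column present?
    public static bool HasResColumn(string headerLine)
    {
        string[] columns = headerLine.Split(',');
        return columns.Length > ResPosition
            && columns[ResPosition].Trim().ToUpperInvariant() == "RES";
    }

    // Split one row into fields, dropping the RES field when the flag says it is there.
    public static string[] SplitRow(string line, bool hasRes)
    {
        string[] fields = line.Split(',');
        if (!hasRes)
            return fields;
        return fields.Where((f, i) => i != ResPosition).ToArray();
    }
}
```

Inside a Script Component source you would call HasResColumn once on the header line, then SplitRow on every data row, and assign the resulting fields to the output columns.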

  • Hello,

    Thank you for your reply. I'm not sure I understood your answer.

    Maybe what I will do is continue with my approach, but instead of saving the content into a variable I will save it into a CSV file. Then another package will read this file.

    Cheers,

  • river1 - Monday, April 16, 2018 7:42 AM

    Hello,

    Thank you for your reply. I'm not sure I understood your answer.

    Maybe what I will do is continue with my approach, but instead of saving the content into a variable I will save it into a CSV file. Then another package will read this file.

    Cheers,

    OK, have a read of this. It describes my suggestion in detail.

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

  • river1 - Monday, April 16, 2018 7:42 AM

    Hello,

    Thank you for your reply. I'm not sure I understood your answer.

    Maybe what I will do is continue with my approach, but instead of saving the content into a variable I will save it into a CSV file. Then another package will read this file.

    Cheers,

    There are 2 Script items in SSIS. The first is the Script Task, which is used in the Control Flow. The other is the Script Component, which is used within a Data Flow task. A Script Component can be a Source, Transformation or Destination. The advantage for what you want to do here is that you don't need to load your data into an object variable; you can use it as an actual source, just like you would a flat file source.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Thank you very much.

    I guess I found a cool solution that I would like to post here.

    I have created two connections, one called "ResColumn" and the other "WithoutResColumn".

    Then I have created for each a data flow.

    Basically, when the first one fails (meaning that the file has NO RES column), I send the flow to execute the other data flow.

    So on failure of Flow 1, move to Flow 2.

    😉

    Cheers

  • Now my problem is this: if Data Flow 1 fails, I would not like the package to fail. Instead I would like to wait for the result of Data Flow 2 to mark the package as failed.

    Can someone tell me how can I do this?

    Basically:

    If Data Flow 1 succeeds, then the package succeeds.
    If Data Flow 1 fails, move to Data Flow 2, but don't mark the package as failed.
    If Data Flow 2 fails, mark the package as failed; otherwise (if it succeeds), as successful.

    Thanks

  • river1 - Monday, April 16, 2018 9:15 AM

    Now my problem is this: if Data Flow 1 fails, I would not like the package to fail. Instead I would like to wait for the result of Data Flow 2 to mark the package as failed.

    Can someone tell me how can I do this?

    Basically:

    If Data Flow 1 succeeds, then the package succeeds.
    If Data Flow 1 fails, move to Data Flow 2, but don't mark the package as failed.
    If Data Flow 2 fails, mark the package as failed; otherwise (if it succeeds), as successful.

    Thanks

    Change the value of the FailPackageOnFailure property on the DataFlow task to False.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • river1 - Monday, April 16, 2018 9:09 AM

    Thank you very much.

    I guess I found a cool solution that I would like to post here.

    I have created two connections, one called "ResColumn" and the other "WithoutResColumn".

    Then I have created for each a data flow.

    Basically, when the first one fails (meaning that the file has NO RES column), I send the flow to execute the other data flow.

    So on failure of Flow 1, move to Flow 2.

    😉

    Cheers

    The better solution (IMO) is to have a first step which checks the file to determine whether it contains the troublesome column and sets a Boolean package variable accordingly, and then to use precedence constraints to control which Data Flow gets executed.

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.
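A minimal sketch of the header check Phil suggests, as it might sit in a Script Task (the variable name User::HasResColumn and the helper are illustrative assumptions, not from this thread; only the RES name and index 20 come from the discussion). The detection itself is plain C#:

```csharp
using System;
using System.IO;

public static class HeaderCheck
{
    // Read only the first line of the file and report whether the
    // optional RES column (column U, 0-based index 20) is present.
    public static bool FileHasResColumn(string path)
    {
        using (var reader = new StreamReader(path))
        {
            string header = reader.ReadLine() ?? string.Empty;
            string[] columns = header.Split(',');
            return columns.Length > 20
                && columns[20].Trim().ToUpperInvariant() == "RES";
        }
    }
}
```

In the Script Task's Main you would then assign the result, e.g. Dts.Variables["User::HasResColumn"].Value = HeaderCheck.FileHasResColumn(path); and put an expression such as @[User::HasResColumn] (or its negation) on each precedence constraint so that only one of the two Data Flows runs.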
