My package doesn't work. Why?

  • My package has always worked fine, and now I get this error:

    SSIS package "C:\Visual Studio Projecten\Integration Services Project1\Integration Services Project1\Incident categorie Problem.dtsx" starting.

    Information: 0x4004300A at Data Flow Task, SSIS.Pipeline: Validation phase is beginning.

    Information: 0x4004300A at Data Flow Task, SSIS.Pipeline: Validation phase is beginning.

    Warning: 0x80047076 at Data Flow Task, SSIS.Pipeline: The output column "inc_cat_sc" (184) on output "OLE DB Source Output" (163) and component "OLE DB Source" (152) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.

    Warning: 0x80047076 at Data Flow Task, SSIS.Pipeline: The output column "inc_cat_n" (185) on output "OLE DB Source Output" (163) and component "OLE DB Source" (152) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.

    Information: 0x40043006 at Data Flow Task, SSIS.Pipeline: Prepare for Execute phase is beginning.

    Information: 0x40043007 at Data Flow Task, SSIS.Pipeline: Pre-Execute phase is beginning.

    Information: 0x4004300C at Data Flow Task, SSIS.Pipeline: Execute phase is beginning.

    Error: 0xC0202009 at Data Flow Task, OLE DB Destination [55]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.

    An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80004005 Description: "The statement has been terminated.".

    An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80004005 Description: "Violation of PRIMARY KEY constraint 'PK_incident_id'. Cannot insert duplicate key in object 'dbo.incident'. The duplicate key value is (129).".

    Error: 0xC0209029 at Data Flow Task, OLE DB Destination [55]: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "OLE DB Destination.Inputs[OLE DB Destination Input]" failed because error code 0xC020907B occurred, and the error row disposition on "OLE DB Destination.Inputs[OLE DB Destination Input]" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.

    Error: 0xC0047022 at Data Flow Task, SSIS.Pipeline: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "OLE DB Destination" (55) failed with error code 0xC0209029 while processing input "OLE DB Destination Input" (68). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

    Information: 0x40043008 at Data Flow Task, SSIS.Pipeline: Post Execute phase is beginning.

    Information: 0x4004300B at Data Flow Task, SSIS.Pipeline: "OLE DB Destination" wrote 128 rows.

    Information: 0x40043009 at Data Flow Task, SSIS.Pipeline: Cleanup phase is beginning.

    Task failed: Data Flow Task

    Warning: 0x80019002 at Incident categorie Problem: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

    SSIS package "C:\Visual Studio Projecten\Integration Services Project1\Integration Services Project1\Incident categorie Problem.dtsx" finished: Failure.

    The program '[8764] DtsDebugHost.exe: DTS' has exited with code 0 (0x0).

  • The error seems pretty clear to me:

    "Violation of PRIMARY KEY constraint 'PK_incident_id'. Cannot insert duplicate key in object 'dbo.incident'. The duplicate key value is (129).".

    Does your process expect to insert everything as new, or should it be updating existing incidents and adding new ones?

    Are you supposed to truncate the table before you insert the data?
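
    If the answer is "update existing incidents and insert new ones", one common pattern is to land the data in a staging table and MERGE it into the target. A sketch, assuming incident_id is the key column and reusing the inc_cat_sc/inc_cat_n column names from your log (the staging table name is made up):

    -- load dbo.incident_staging with the data flow first, then:
    MERGE dbo.incident AS tgt
    USING dbo.incident_staging AS src
        ON tgt.incident_id = src.incident_id
    WHEN MATCHED THEN
        UPDATE SET tgt.inc_cat_sc = src.inc_cat_sc,
                   tgt.inc_cat_n  = src.inc_cat_n
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (incident_id, inc_cat_sc, inc_cat_n)
        VALUES (src.incident_id, src.inc_cat_sc, src.inc_cat_n);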

    Lowell


    --Help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • The table incident gets deleted and will be filled by this package.

  • karim.boulahfa (1/22/2014)


    The table incident gets deleted and will be filled by this package.

    What that error says is that it is trying to insert a row into dbo.incident with a primary key value of 129, but the table already has a row with that value.

    _______________________________________________________________

    Need help? Help us help you.

    Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

    Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
    Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
    Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
    Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/

  • As Lowell pointed out, you are inserting duplicate primary key values into the table which is not permitted.

    You need to add a 'Sort' component in your Data Flow task. Right-click and edit the 'Sort' component, and check all the fields that form your primary key. Be sure to check the "Remove duplicates" option at the bottom. E.g., if your primary key is "EmployeeID", select only "EmployeeID" and check the option to remove duplicates. If your PK is a combination of two or more fields, like "FirstName" and "LastName", check both and the option to remove duplicates.

    Good Luck !

  • riyaziq (11/25/2014)


    As Lowell pointed out, you are inserting duplicate primary key values into the table which is not permitted.

    You need to add a 'Sort' component in your Data Flow task. Right-click and edit the 'Sort' component, and check all the fields that form your primary key. Be sure to check the "Remove duplicates" option at the bottom. E.g., if your primary key is "EmployeeID", select only "EmployeeID" and check the option to remove duplicates. If your PK is a combination of two or more fields, like "FirstName" and "LastName", check both and the option to remove duplicates.

    Good Luck !

    Remember that the Sort component is a blocking component and will read all data into memory before producing even a single row of output.

    This is a no-go for large data sets.

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

    Your OLE DB Source is trying to insert two rows with the same id value (129) into the OLE DB Destination (table incident).

    You need to look into the source component.

    Regards
    VG

  • Koen Verbeeck (11/26/2014)



    Remember that the sort component is a blocking component and will read all data into memory before producing even one single row of output.

    This is a no-go for large data sets.

    Koen Verbeeck,

    I just implemented a Data Flow task with a Sort component on a large dataset and, as you pointed out, it's slow. Could you tell me the ideal way of dealing with this situation? I am thinking of this approach: remove the primary key constraint so that the table accepts duplicate values, and then remove the duplicates with a separate Execute SQL Task:

    ALTER TABLE [Table Name] ADD AUTOID INT IDENTITY(1,1)

    DELETE FROM [Table Name] WHERE AUTOID NOT IN (SELECT min(autoid) FROM [Table Name]

  • riyaziq (11/26/2014)



    Koen Verbeeck,

    I just implemented a Data Flow task with a Sort component on a large dataset and, as you pointed out, it's slow. Could you tell me the ideal way of dealing with this situation? I am thinking of this approach: remove the primary key constraint so that the table accepts duplicate values, and then remove the duplicates with a separate Execute SQL Task:

    ALTER TABLE [Table Name] ADD AUTOID INT IDENTITY(1,1)

    DELETE FROM [Table Name] WHERE AUTOID NOT IN (SELECT min(autoid) FROM [Table Name]

    This will delete all rows whose identity value is not the smallest one in the whole table. In other words, all rows except the first one.

    This will also not work if the table already has an identity column, since a table can have only one.

    The ideal scenario is that your source is a database, and then you can remove duplicates in the source query using ROW_NUMBER() OVER (PARTITION BY keycolumn ORDER BY somecolumn).
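
    For example, assuming incident_id is the key column (the other names here are made up), the source query could look like:

    SELECT incident_id, inc_cat_sc, inc_cat_n
    FROM (
        SELECT incident_id, inc_cat_sc, inc_cat_n,
               ROW_NUMBER() OVER (PARTITION BY incident_id
                                  ORDER BY incident_id) AS rn
        FROM dbo.source_incidents
    ) AS d
    WHERE rn = 1;

    This keeps exactly one row per incident_id and avoids a blocking Sort in the data flow entirely.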


  • Thanks Koen for the suggestion.

    The query I had in mind was incomplete. It should be like this:

    ALTER TABLE [Table Name] ADD AUTOID INT IDENTITY(1,1)

    DELETE FROM [Table Name] WHERE AUTOID NOT IN (SELECT min(autoid) FROM [Table Name]

    GROUP BY [Field1],[Field2])

  • riyaziq (11/26/2014)


    Thanks Koen for the suggestion.

    The query I had in mind was incomplete. It should be like this:

    ALTER TABLE [Table Name] ADD AUTOID INT IDENTITY(1,1)

    DELETE FROM [Table Name] WHERE AUTOID NOT IN (SELECT min(autoid) FROM [Table Name]

    GROUP BY [Field1],[Field2])

    That one does make more sense 🙂
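
    For what it's worth, the same cleanup can be done without adding an AUTOID column, by deleting through a CTE with ROW_NUMBER() (a sketch, reusing the placeholder names from the query above):

    ;WITH dupes AS (
        SELECT ROW_NUMBER() OVER (PARTITION BY [Field1], [Field2]
                                  ORDER BY [Field1]) AS rn
        FROM [Table Name]
    )
    DELETE FROM dupes
    WHERE rn > 1;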

