SQLServerCentral is supported by Red Gate Software Ltd.

My package doesn't work! Why?
Posted Wednesday, January 22, 2014 8:20 AM
SSC Journeyman


Group: General Forum Members
Last Login: Wednesday, November 5, 2014 7:40 AM
Points: 81, Visits: 175
My package has always worked fine, and now I get this error:

SSIS package "C:\Visual Studio Projecten\Integration Services Project1\Integration Services Project1\Incident categorie Problem.dtsx" starting.
Information: 0x4004300A at Data Flow Task, SSIS.Pipeline: Validation phase is beginning.
Information: 0x4004300A at Data Flow Task, SSIS.Pipeline: Validation phase is beginning.
Warning: 0x80047076 at Data Flow Task, SSIS.Pipeline: The output column "inc_cat_sc" (184) on output "OLE DB Source Output" (163) and component "OLE DB Source" (152) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at Data Flow Task, SSIS.Pipeline: The output column "inc_cat_n" (185) on output "OLE DB Source Output" (163) and component "OLE DB Source" (152) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Information: 0x40043006 at Data Flow Task, SSIS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at Data Flow Task, SSIS.Pipeline: Pre-Execute phase is beginning.
Information: 0x4004300C at Data Flow Task, SSIS.Pipeline: Execute phase is beginning.
Error: 0xC0202009 at Data Flow Task, OLE DB Destination [55]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80004005 Description: "The statement has been terminated.".
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80004005 Description: "Violation of PRIMARY KEY constraint 'PK_incident_id'. Cannot insert duplicate key in object 'dbo.incident'. The duplicate key value is (129).".
Error: 0xC0209029 at Data Flow Task, OLE DB Destination [55]: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "OLE DB Destination.Inputs[OLE DB Destination Input]" failed because error code 0xC020907B occurred, and the error row disposition on "OLE DB Destination.Inputs[OLE DB Destination Input]" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
Error: 0xC0047022 at Data Flow Task, SSIS.Pipeline: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "OLE DB Destination" (55) failed with error code 0xC0209029 while processing input "OLE DB Destination Input" (68). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
Information: 0x40043008 at Data Flow Task, SSIS.Pipeline: Post Execute phase is beginning.
Information: 0x4004300B at Data Flow Task, SSIS.Pipeline: "OLE DB Destination" wrote 128 rows.
Information: 0x40043009 at Data Flow Task, SSIS.Pipeline: Cleanup phase is beginning.
Task failed: Data Flow Task
Warning: 0x80019002 at Incident categorie Problem: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "C:\Visual Studio Projecten\Integration Services Project1\Integration Services Project1\Incident categorie Problem.dtsx" finished: Failure.
The program '[8764] DtsDebugHost.exe: DTS' has exited with code 0 (0x0).
Post #1533668
Posted Wednesday, January 22, 2014 8:36 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 12:44 PM
Points: 12,951, Visits: 32,470
The error seems pretty clear to me:

"Violation of PRIMARY KEY constraint 'PK_incident_id'. Cannot insert duplicate key in object 'dbo.incident'. The duplicate key value is (129)."

Does your process expect to insert everything as new, or should it be updating existing incidents and adding new ones?

Are you supposed to truncate the table before you insert the data?
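
If the table is supposed to be emptied before each load, a truncate step ahead of the data flow (in an Execute SQL Task, for example) would remove any leftover rows that could collide with incoming keys. A minimal sketch, using the dbo.incident table named in the error log:

```sql
-- Run before the Data Flow task, e.g. in an Execute SQL Task.
-- dbo.incident is the destination table from the error log.
TRUNCATE TABLE dbo.incident;

-- If TRUNCATE is not allowed (for instance, the table is referenced
-- by a foreign key), fall back to a plain DELETE:
-- DELETE FROM dbo.incident;
```

Note that even with an empty target, the load will still fail if the source itself delivers the same key value twice.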


Lowell

--There is no spoon, and there's no default ORDER BY in sql server either.
Actually, Common Sense is so rare, it should be considered a Superpower. --my son
Post #1533692
Posted Wednesday, January 22, 2014 8:57 AM
SSC Journeyman


Group: General Forum Members
Last Login: Wednesday, November 5, 2014 7:40 AM
Points: 81, Visits: 175
The table incident gets deleted and is then filled by this package.
Post #1533702
Posted Wednesday, January 22, 2014 9:04 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 12:36 PM
Points: 13,318, Visits: 12,801
karim.boulahfa (1/22/2014)
The table incident get deleted en will be filled by this package.


What that error says is that the package is trying to insert a row into dbo.incident with a primary key value of 129, but the table already has a row with that value.
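
A quick way to confirm where the duplicate comes from is to ask the source for key values that occur more than once. A hedged sketch, since the actual source table and column names are not shown in the thread (incident_id is assumed from the constraint name PK_incident_id):

```sql
-- Run against the source database; table/column names are assumptions.
SELECT incident_id, COUNT(*) AS occurrences
FROM dbo.incident_source
GROUP BY incident_id
HAVING COUNT(*) > 1;
```

If this returns rows (the value 129 from the log should be among them), the source query is producing duplicates, and that is what needs fixing.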


_______________________________________________________________

Need help? Help us help you.

Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

Need to split a string? Try Jeff Moden's splitter.

Cross Tabs and Pivots, Part 1 – Converting Rows to Columns
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs
Understanding and Using APPLY (Part 1)
Understanding and Using APPLY (Part 2)
Post #1533705
Posted Tuesday, November 25, 2014 11:51 PM
Forum Newbie


Group: General Forum Members
Last Login: Saturday, December 13, 2014 5:22 AM
Points: 3, Visits: 6
As Lowell pointed out, you are inserting duplicate primary key values into the table, which is not permitted.

You could add a 'Sort' component to your Data Flow task. Right-click and edit the 'Sort' component, check all the fields that form your primary key, and make sure to check the "Remove duplicates" option at the bottom. For example, if your primary key is "EmployeeID", select only "EmployeeID" and check the option to remove duplicates. If your PK is a combination of two or more fields, such as "FirstName" and "LastName", check both fields along with the option to remove duplicates.

Good luck!
Post #1639172
Posted Wednesday, November 26, 2014 1:07 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 10:28 AM
Points: 13,630, Visits: 11,501
riyaziq (11/25/2014)
You need to add a 'Sort' component in your Data Flow task [...] and check the option that says "Remove duplicates" at the bottom.


Remember that the Sort component is a blocking component: it reads all data into memory before producing a single row of output.
That makes it a no-go for large data sets.




How to post forum questions.
Need an answer? No, you need a question.
What’s the deal with Excel & SSIS?

Member of LinkedIn. My blog at LessThanDot.

MCSA SQL Server 2012 - MCSE Business Intelligence
Post #1639186
Posted Wednesday, November 26, 2014 1:52 AM
Valued Member


Group: General Forum Members
Last Login: 2 days ago @ 9:26 AM
Points: 52, Visits: 46
Your OLE DB Source is trying to insert two rows with the same id value (129) into the OLE DB Destination (table incident).
You need to look into the source component.
Post #1639221
Posted Wednesday, November 26, 2014 3:59 AM
Forum Newbie


Group: General Forum Members
Last Login: Saturday, December 13, 2014 5:22 AM
Points: 3, Visits: 6
Koen Verbeeck (11/26/2014)
Remember that the sort component is a blocking component and will read all data into memory before producing even one single row of output. This is a no-go for large data sets.


Koen Verbeeck,

I just implemented a Data Flow task with a Sort component on a large data set and, as you pointed out, it's slow. What is the ideal way of dealing with this situation? I am thinking of this approach: remove the primary key constraint so that the table accepts duplicate values, and then remove the duplicates with a separate Execute SQL Task:

ALTER TABLE [Table Name] ADD AUTOID INT IDENTITY(1,1)
DELETE FROM [Table Name] WHERE AUTOID NOT IN (SELECT min(autoid) FROM [Table Name]

Post #1639260
Posted Wednesday, November 26, 2014 4:07 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 10:28 AM
Points: 13,630, Visits: 11,501
riyaziq (11/26/2014)
I just implemented a Data Flow task with a Sort component on a large data set and, as you pointed out, it's slow. What is the ideal way of dealing with this situation? I am thinking of this approach: remove the primary key constraint so that the table accepts duplicate values, and then remove the duplicates with a separate Execute SQL Task:

ALTER TABLE [Table Name] ADD AUTOID INT IDENTITY(1,1)
DELETE FROM [Table Name] WHERE AUTOID NOT IN (SELECT min(autoid) FROM [Table Name]


This will delete all rows whose AUTOID is not the single smallest identity value; in other words, every row except the first one.
It will also not work if the table already has an identity column.

The ideal scenario is that your source is a database; then you can remove the duplicates in the source query itself, using ROW_NUMBER() OVER (PARTITION BY keycolumn ORDER BY somecolumn).
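
Such a source query could look like the following sketch. The table and column names are placeholders, and the ORDER BY inside the window function decides which of the duplicate rows survives:

```sql
-- Deduplicate in the source query itself; all names are illustrative.
SELECT incident_id, incident_description
FROM (
    SELECT incident_id,
           incident_description,
           ROW_NUMBER() OVER (PARTITION BY incident_id
                              ORDER BY incident_id) AS rn
    FROM dbo.incident_source
) AS deduplicated
WHERE rn = 1;   -- keep exactly one row per key value
```

Because the deduplication happens in the database engine, the SSIS data flow never sees the duplicates, and no blocking Sort component is needed.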




How to post forum questions.
Need an answer? No, you need a question.
What’s the deal with Excel & SSIS?

Member of LinkedIn. My blog at LessThanDot.

MCSA SQL Server 2012 - MCSE Business Intelligence
Post #1639265
Posted Wednesday, November 26, 2014 6:47 AM
Forum Newbie


Group: General Forum Members
Last Login: Saturday, December 13, 2014 5:22 AM
Points: 3, Visits: 6
Thanks Koen for the suggestion.

The query I had in mind was incomplete. It should be like this:

ALTER TABLE [Table Name] ADD AUTOID INT IDENTITY(1,1);

DELETE FROM [Table Name]
WHERE AUTOID NOT IN (SELECT MIN(AUTOID)
                     FROM [Table Name]
                     GROUP BY [Field1], [Field2]);

Post #1639307