Merge with duplicate rows
SQL-Squid
SSC Rookie

Group: General Forum Members
Points: 42 Visits: 306
Tava (11/6/2012)
SQL-Squid,

You're correct: the issue is that your source has duplicates, so the join below will always return multiple matching rows, which is why the MERGE statement returns that error.

MERGE [AssetClassUDF] AS target
USING [RawData_AssetClassUDF] AS source
ON target.ASSETDETAIL_Id = source.ASSETDETAIL_Id
AND target.ASSET_UDF_DESC = source.ASSET_UDF_DESC

Unfortunately, as the error says, MERGE can't update or insert the same target record twice, so you only have two options:

1. Use a GROUP BY in your MERGE source so that duplicates are treated as one value; this should fix your problem (if that's what you want).

or

2. Clean the SOURCE so it doesn't contain duplicates.

I personally don't like option 2 - cleaning the source file - because the raw file is how the data gets delivered, and it's always a good reference point when you want to compare RAW -> STAGING -> LIVE.


Tava,

Please give an example (if you have time) of how to use the GROUP BY in the MERGE. I have been trying to figure this out and I am still lost.

Right now, I am running an update, then an insert, and a CTE to remove duplicates. It's messy and I don't feel it's the best way to handle this. I also have an ON DELETE trigger that archives these updates to an archive table so we keep a full history. With the update-and-insert routine, I now have to run the duplicate-removal CTE against the archive table as well. I'm not getting the warm fuzzy feeling that this is the correct way to implement all of this.
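For context, a minimal sketch of that routine as I understand it, using the table and column names from this thread; the choice of key columns and of which duplicate survives are assumptions:

-- 1. Dedup the staging table: keep one row per (ASSETDETAIL_Id,
--    ASSET_UDF_DESC); which duplicate survives is arbitrary here.
;WITH dups AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY ASSETDETAIL_Id, ASSET_UDF_DESC
                              ORDER BY ASSET_UDF_VALUE) AS rn
    FROM [RawData_AssetClassUDF]
)
DELETE FROM dups WHERE rn > 1;

-- 2. Update rows that already exist in the live table.
UPDATE t
SET    t.ASSET_UDF_VALUE = s.ASSET_UDF_VALUE
FROM   [AssetClassUDF] AS t
JOIN   [RawData_AssetClassUDF] AS s
  ON   t.ASSETDETAIL_Id = s.ASSETDETAIL_Id
 AND   t.ASSET_UDF_DESC = s.ASSET_UDF_DESC;

-- 3. Insert rows that don't exist in the live table yet.
INSERT INTO [AssetClassUDF] (ASSETDETAIL_Id, ASSET_UDF_DESC, ASSET_UDF_VALUE)
SELECT s.ASSETDETAIL_Id, s.ASSET_UDF_DESC, s.ASSET_UDF_VALUE
FROM   [RawData_AssetClassUDF] AS s
WHERE NOT EXISTS
(
    SELECT 1
    FROM [AssetClassUDF] AS t
    WHERE t.ASSETDETAIL_Id = s.ASSETDETAIL_Id
      AND t.ASSET_UDF_DESC = s.ASSET_UDF_DESC
);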

I was told I can request that the app designer include a primary key in the feed I receive; if that happens, the merge will work much better, but if not, I need to have something in place that works.
Tava
SSC-Enthusiastic

Group: General Forum Members
Points: 156 Visits: 774
SQL-Squid (11/7/2012) [quoted above]

I've tried the code you provided and everything you've done is correct. The GROUP BY is right; it's just the MERGE join condition that doesn't exactly fit your requirements - if you add the additional check (commented out below), rows with a different value won't match and will be inserted as new records. The source file is unfortunately a mess, and getting the primary key added would resolve that. In saying that, separate update/insert statements appear to be the way to go if a PK isn't possible. You're best to wait for a more experienced developer, as I've only started using MERGE myself - my source was terrible too, but I got them to fix it up with a PK for me.


MERGE [AssetClassUDF] AS target
USING
(
    SELECT
        ASSET_UDF_VALUE,
        ASSET_UDF_DESC,
        ASSETDETAIL_ID
    FROM [RawData_AssetClassUDF]
    GROUP BY
        ASSET_UDF_VALUE,
        ASSET_UDF_DESC,
        ASSETDETAIL_ID
) AS source (ASSET_UDF_VALUE, ASSET_UDF_DESC, ASSETDETAIL_ID)
ON target.ASSETDETAIL_Id = source.ASSETDETAIL_Id
AND target.ASSET_UDF_DESC = source.ASSET_UDF_DESC

-- NEEDS THIS CONDITION, BUT IT'S NOT WHAT YOU REQUIRE OVERALL:
--AND target.ASSET_UDF_VALUE = source.ASSET_UDF_VALUE
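For completeness, a sketch of how the WHEN clauses might finish this off. Collapsing ASSET_UDF_VALUE with MAX() is an assumption - it forces exactly one source row per join key, which the update branch of MERGE requires, but whether MAX is the right survivor depends on the data:

MERGE [AssetClassUDF] AS target
USING
(
    -- Aggregate so there is exactly one row per join key; MAX() is an
    -- arbitrary choice of which duplicate value wins.
    SELECT ASSETDETAIL_ID,
           ASSET_UDF_DESC,
           MAX(ASSET_UDF_VALUE) AS ASSET_UDF_VALUE
    FROM [RawData_AssetClassUDF]
    GROUP BY ASSETDETAIL_ID, ASSET_UDF_DESC
) AS source
ON  target.ASSETDETAIL_Id = source.ASSETDETAIL_Id
AND target.ASSET_UDF_DESC = source.ASSET_UDF_DESC
WHEN MATCHED THEN
    UPDATE SET target.ASSET_UDF_VALUE = source.ASSET_UDF_VALUE
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ASSETDETAIL_Id, ASSET_UDF_DESC, ASSET_UDF_VALUE)
    VALUES (source.ASSETDETAIL_Id, source.ASSET_UDF_DESC, source.ASSET_UDF_VALUE);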
Tava
SSC-Enthusiastic

Group: General Forum Members
Points: 156 Visits: 774
SQL-Squid (11/7/2012)
"I was told I can request that the app designer include a primary key in the feed I receive; if that happens, the merge will work much better..."

To me it's beneficial to fix the main problem, and that's the source file. If a PK is required for what you need to do, you just have to ask and explain its importance; otherwise these issues will keep recurring. Adding a PK to their extract should be simple for them, unless for some reason it's not allowed.

Obviously, if the source file can't be fixed, then MERGE might not be suited to you (my opinion), and the method you currently have is the only way to go.
demonfox
Ten Centuries

Group: General Forum Members
Points: 1219 Visits: 1192
Tava (11/7/2012) [quoted above]

Uniqueness is required for the MERGE update/delete to work, so you're better off with a separate update. Or, if the duplicate row count is fairly small, you can filter those rows out of the MERGE statement and update them separately afterwards, or clean them before the update in the target, depending on design and requirements.
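A rough sketch of that filtering idea, using the table names from this thread; the ROW_NUMBER() split and the choice of ordering column are assumptions:

-- Number the staging rows so the first row per key can be told apart
-- from its duplicates. The ORDER BY choice of survivor is arbitrary.
SELECT *,
       ROW_NUMBER() OVER (PARTITION BY ASSETDETAIL_Id, ASSET_UDF_DESC
                          ORDER BY ASSET_UDF_VALUE) AS rn
INTO   #numbered
FROM   [RawData_AssetClassUDF];

-- Feed only the unique rows (rn = 1) to the MERGE...
-- USING (SELECT ... FROM #numbered WHERE rn = 1) AS source

-- ...then handle the leftovers with a separate update, or clean them in
-- the target first, depending on design and requirements.
SELECT * FROM #numbered WHERE rn > 1;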

~ demonfox
___________________________________________________________________
Wondering what I would do next, when I am done with this one...
Tava
SSC-Enthusiastic

Group: General Forum Members
Points: 156 Visits: 774
demonfox (11/7/2012) [quoted above]

I agree - don't try to make a MERGE work where it isn't designed for your requirements; stick to the separate insert/update/delete.
SQL-Squid
SSC Rookie

Group: General Forum Members
Points: 42 Visits: 306
I am keeping the insert/update/delete routine in place for now until they give me data with a PK. I agree uniqueness is needed to properly work with the data. Thanks again and have a great day!
hominamad
Forum Newbie

Group: General Forum Members
Points: 7 Visits: 27
Hi - I have a similar issue to the one discussed here. I wasn't sure whether I should create a new thread or not, so first I'll try reviving this one.

I also have a situation where my source data has multiple rows per identifier - but the behavior I would like is for my target rows to be expanded, i.e. a row gets inserted for each duplicate.

I basically want the same behavior as if I were joining the two tables together.

For anyone interested - here is my use case:

My target table is data which originates from a feed we download from an external source. My source data is a table that we control. There is a common identifier in both tables and I am trying to populate IDs and other data from our table into the target table. The thing is, in our source table it is a valid scenario that there could be multiple records with the same identifier. Specifically, some of those records may be "active" and others "inactive" and we need to see all. If my target table has one record that matches 3 records in the source table, I want to end up with 3 records in my target.

Is there a way this can be accomplished using MERGE? I can think of other ways to do it, like joining and inserting into a new table (sketched below), but I'm hoping there is a slicker way.
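For reference, a minimal sketch of that join-and-insert alternative; the table and column names (feed_target, our_source, common_id) are hypothetical, since the post doesn't give any:

-- Hypothetical names throughout. Every source row that shares an
-- identifier with a target row yields one output row, so a target row
-- matching three source rows expands to three rows.
SELECT t.common_id,
       t.feed_data,      -- columns carried over from the downloaded feed
       s.internal_id,    -- IDs and other data from the table we control
       s.active_flag
INTO   expanded_target   -- new table; use INSERT INTO ... SELECT if it already exists
FROM   feed_target AS t
JOIN   our_source  AS s
  ON   s.common_id = t.common_id;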

Thanks,

H
ChrisM@Work
SSCrazy Eights

Group: General Forum Members
Points: 8957 Visits: 19016
You should start a new thread for this, and perhaps include the action you would like to take depending upon the distribution of IDs between the two tables. For instance, define what you plan to do if there's:
One row in each table matching on ID - update, delete & insert, or ignore;
One row in source and two rows in target, matching on ID;
Three rows in source and two rows in target, matching on ID;
No match in target for a source row;
No match in source for a target row.
You're likely to get a better answer if you can provide the structures of the tables with some readily-consumable sample data for folks to code against - if you're not sure how to do this, there's a link in my sig to a forum etiquette article which demonstrates.
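If it helps, "readily-consumable sample data" generally means something a responder can paste straight into SSMS and run; a hedged illustration with made-up structures:

-- Made-up structures and rows, purely to illustrate the convention.
CREATE TABLE #source (common_id int, status varchar(10));
CREATE TABLE #target (common_id int, status varchar(10));

INSERT INTO #source VALUES (1, 'active'), (1, 'inactive'), (2, 'active');
INSERT INTO #target VALUES (1, 'active');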

“Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
Exploring Recursive CTEs by Example Dwain Camps