SQLServerCentral is supported by Redgate
Eliminating Duplicate Rows using the PARTITION BY clause


trubolotta
SSC Rookie (33 reputation)

Group: General Forum Members
Points: 33 Visits: 120
The code in the article works, but is it enough to leave it at that? If you are stuck with a production database that allows duplicate rows in its tables, you have a problem, and you should ask how frequently you will need to purge duplicates. Also ask what prevents the data from being corrupted again seconds after the purge, and then being used to make a critical business decision.

This is the sample table created by the author, renamed only to illustrate a point:


create table Emp_Details_Raw
(Emp_Name varchar(10)
,Company varchar(15)
,Join_Date datetime
,Resigned_Date datetime
)



Of course, the table should look more like this to enforce unique rows; again, the table name was chosen to illustrate a point:


CREATE TABLE [dbo].[Emp_Details_Unique](
[Emp_Name] [varchar](10) NOT NULL,
[Company] [varchar](15) NOT NULL,
[Join_Date] [datetime] NOT NULL,
[Resigned_Date] [datetime] NOT NULL
) ON [PRIMARY]

GO

CREATE UNIQUE NONCLUSTERED INDEX [IX_Emp_Details_Unique] ON [dbo].[Emp_Details_Unique]
(
[Emp_Name] ASC,
[Company] ASC,
[Join_Date] ASC,
[Resigned_Date] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

GO
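An editorial aside on that index: the script above sets IGNORE_DUP_KEY = OFF, so an insert containing duplicates fails with an error. If you would rather have SQL Server silently discard duplicate rows at insert time (with only a warning), the same index can be created with the option turned on. A sketch:

-- Variant of the index above: duplicate rows are discarded with a warning
-- instead of failing the whole insert (other options left at their defaults).
CREATE UNIQUE NONCLUSTERED INDEX [IX_Emp_Details_Unique] ON [dbo].[Emp_Details_Unique]
(
    [Emp_Name] ASC,
    [Company] ASC,
    [Join_Date] ASC,
    [Resigned_Date] ASC
) WITH (IGNORE_DUP_KEY = ON);

Whether silently dropping rows is acceptable is a business decision; for auditable data the explicit error is usually safer.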



Using the author's code to populate the table Emp_Details_Raw pops the duplicates in without any problem because of the missing constraint. This may be similar to the problem you face with third party data in whatever form you receive it:


insert into Emp_Details_Raw (Emp_Name, Company, Join_Date, Resigned_Date)
values ('John', 'Software', '20060101', '20061231')
,('John', 'Software', '20060101', '20061231')
,('John', 'Software', '20060101', '20061231')
,('John', 'SuperSoft', '20070101', '20071231')
,('John', 'UltraSoft', '20070201', '20080131')
,('John', 'ImproSoft', '20080201', '20081231')
,('John', 'ImproSoft', '20080201', '20081231')
,('Mary', 'Software', '20060101', '20081231')
,('Mary', 'SuperSoft', '20090101', '20090531')
,('Mary', 'SuperSoft', '20090101', '20090531')
,('Mary', 'UltraSoft', '20090601', '20100531')
,('Mary', 'UltraSoft', '20090601', '20100531')



I have two choices. First, I can delete the duplicates from the Emp_Details_Raw table, but wouldn't it be a good idea to back that data up first? I would, just for accountability. Once the duplicates are deleted, I can insert the data into my production table.
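That first option might look like the following sketch, which combines a backup copy with the article's ROW_NUMBER()/PARTITION BY technique; the backup table name is mine, purely illustrative:

-- Keep a copy of the raw data first, for accountability
SELECT *
INTO dbo.Emp_Details_Raw_Backup
FROM dbo.Emp_Details_Raw;

-- Number identical rows within each group of matching columns,
-- then delete every row after the first
;WITH Dups AS
(
    SELECT ROW_NUMBER() OVER
           (PARTITION BY Emp_Name, Company, Join_Date, Resigned_Date
            ORDER BY Emp_Name) AS RowNumber
    FROM dbo.Emp_Details_Raw
)
DELETE FROM Dups
WHERE RowNumber > 1;

Deleting through the CTE works here because it targets a single base table.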

My alternative is simply not to insert duplicate records in the production database in the first place:


INSERT dbo.Emp_Details_Unique
(Emp_Name, Company, Join_Date, Resigned_Date)
SELECT DISTINCT
Emp_Name, Company, Join_Date, Resigned_Date
FROM dbo.Emp_Details_Raw



Advantages include less coding, preservation of the original data without creating a duplicate data store and assurance the production table has unique rows at all times. But then again, my solution doesn't offer that delicious complexity some seem to relish or use the whiz-bang new features of SQL Server. To each his own.
GoofyGuy
SSChasing Mays (645 reputation)

Group: General Forum Members
Points: 645 Visits: 971
trubolotta wrote:

My alternative is simply not to insert duplicate records in the production database in the first place [sample code follows] ...

Yes, this is a nice method. Very clean, very elegant.

But then again, my solution doesn't offer that delicious complexity some seem to relish or use the whiz-bang new features of SQL Server. To each his own.

As a developer, I always prefer the KISS approach, and your example certainly offers that. Thank you for posting an alternative.
skamath
Forum Newbie (6 reputation)

Group: General Forum Members
Points: 6 Visits: 96
thisisfutile (9/22/2010) wrote:

@trubolotta Not trying to start an argument here, but hindsight is 20/20 for everyone. Sometimes we find ourselves in a situation like the OP is describing (or something similar) and we need a solution....

This appears to be one of those articles that creates a problem based on poor design and purports to correct it using some bloated functionality of SQL Server. If the table were properly designed with uniqueness constraints, the problem would not exist. Allowing duplicate data into the table in the first place is the problem, not fixing it after the fact. The more likely scenario, and the one I have seen most often, comes from importing data from poorly designed databases or poorly trained users.


I agree that such problems are usually caused by poor design.

But poor design *is* prevalent in the real world.

So, would you agree that if you encounter *exactly* such instances (and you seem to indicate that you do), then unless you have a time machine to travel back and pre-correct the poor design, your choices are:

1. Surrender, stating that the database was poorly designed.
2. Try to correct the mistake.

If you choose 2., what is wrong with using the technique in this article? I do not recall the author suggesting that you should first design poorly and then use his technique to correct it.
tanyauskas
Forum Newbie (3 reputation)

Group: General Forum Members
Points: 3 Visits: 6
But what would you do if you have duplicate rows not in one table, but after joining views? I was unable to delete the duplicate records; I got this error message:

Msg 4405, Level 16, State 1, Line 1
View or function 'a' is not updatable because the modification affects multiple base tables.

delete from a
from
(select v_rpt_Study_details.StudyId
,view1.col1
,view1.col2
,view2.col1
,view2.col2
,ROW_NUMBER() over (partition by view1.col1, view1.col2, view2.col1, view2.col2
                    order by view1.col1, view1.col2, view2.col1, view2.col2) RowNumber
from view1 inner join view2 on
view1.col1 = view2.col1) a
where a.RowNumber > 1
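Editorial note: Msg 4405 is raised because a DELETE through a derived table or view may modify only one base table. A common workaround, sketched below with hypothetical names, is to run the ROW_NUMBER() delete directly against the single base table that holds the duplicates rather than through the join:

-- Sketch: dedupe ONE base table (dbo.BaseTable1 is a hypothetical name
-- standing in for the table behind view1) instead of deleting through
-- the multi-table join, which SQL Server rejects.
;WITH Dups AS
(
    SELECT ROW_NUMBER() OVER
           (PARTITION BY col1, col2
            ORDER BY col1, col2) AS RowNumber
    FROM dbo.BaseTable1
)
DELETE FROM Dups
WHERE RowNumber > 1;

If the duplicates truly span multiple base tables, each table has to be cleaned separately.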
trubolotta
SSC Rookie (33 reputation)

Group: General Forum Members
Points: 33 Visits: 120
skamath wrote:

So, would you agree that if you encounter *exactly* such instances (and you seem to indicate that you do), then unless you have a time machine to travel back and pre-correct the poor design, your choices are:

1. Surrender, stating that the database was poorly designed.
2. Try to correct the mistake.

If you choose 2., what is wrong with using the technique in this article? I do not recall the author suggesting that you should first design poorly and then use his technique to correct it.


You are quite correct. Having encountered such instances, I explain to my client that while I would love to have his business writing a fix for his database, the real problem is the design and my fix will NOT eliminate any problems with the data beyond the moment it is applied. I may risk a contract doing that, but more often than not the client will ask what exactly is the problem and what is the best correction.

There is nothing wrong with the technique in the article, though I say that with some reservation concerning locks, keys, indexes and already-published reports that may have relied on the faulty data. I have also found that most cases involving duplicate rows are not nearly as clean and simple as the sample illustration and need quite a bit of massaging, especially if the offending duplicates are used in foreign keys or indexes. I'm just saying there is much more to it, and even in the simple example there are still pitfalls, and still more efficient ways to achieve the same end.
Andrew Peterson
Right there with Babe (718 reputation)

Group: General Forum Members
Points: 718 Visits: 725
Excellent use of Row_Number.

The more you are prepared, the less you need it.
SQLRNNR
SSC-Dedicated (33K reputation)

Group: General Forum Members
Points: 33276 Visits: 18560
Nice article - well written.



Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server, MVP


SQL RNNR

Posting Performance Based Questions - Gail Shaw

David Lean
SSC Journeyman (98 reputation)

Group: General Forum Members
Points: 98 Visits: 129
So it seems that we are all in agreement.
1. It is best that the system is designed to avoid these issues in the first place. This is similar to every dog owner removing their dog droppings.
2. Sometimes you find yourself in the poo. In this case you need to a) clean the data and b) change the system to prevent it happening again.

There is often a third part to this issue. One that the "Hey I would just write a perfect database" folks may be ignoring. What happens if your perfect DB now starts to reject these dirty rows? Often systems with poor error handling in the DB also have poor error handling in the App tier.
If one insert in a more complex process fails, will it be contained in a transaction to ensure everything fails cleanly? (Most likely not; few developers use transactions, and many prefer WITH (NOLOCK) hints.)
If the app does correctly use transactions, will it check the return code from SQL? (Often not.)
If it does check return codes, will it present some kind of error to the end user in a way the user can correct the issue?
And sometimes the User is already gone. eg: Batch systems, Real-Time capture ie: RFID, CEP, Process control, Toll Booth & Speeding Cameras etc.

In short, it may be cheap to fix the DB, but you occasionally open the door to a huge rectification project, one that will take a long time to get resourced and funded. Which is why you will often hear IT managers ask you to clean up the huge mess their DB is in now, and maybe write some scans they can run periodically, until they get the budget to do it right.

Or to go with the dog analogy: if you've just fallen in the sewer, you need to prioritise. Perhaps wipe it from your eyes, shake off the big chunks, and then figure out how to sort out the rest. Having someone nearby tell you that they wouldn't have fallen in the sewer is rather redundant.
jswong05
SSC Veteran (247 reputation)

Group: General Forum Members
Points: 247 Visits: 476
The principle is the same, the implementation depends on the nature of your business.
If quick fix is applicable and acceptable without incurring much cost, you can quick-fix it everyday.
By the 80-20 rule, if you have to chase down and remedy all child records and parent records and that is too costly, you will have to determine where to cut off: leave them alone, fix at any cost, or something in between.
First, propose your fix and have business and IT managers sign off. If they don't understand, have them sign off anyway.
In general, after the quick fix, it will cost your organization less in the long run if you can fix the root cause. That is not always the case; for example, if you don't have the source code, don't have a Pascal programmer, don't have the original design, or don't have the business rules.

The only time you don't have to fix anything and can hand it back to your manager is when you have already found another job.

Jason
http://dbace.us
:-P
yheon_17 66014
Grasshopper (10 reputation)

Group: General Forum Members
Points: 10 Visits: 61
Nice. Very useful. Thanks.