Dedupe Data CTE

As a DBA working with SQL Server, you will encounter data issues that need to be fixed.  One such issue is duplicate data.  Duplicates can happen for a number of reasons; one is the absence of constraints such as primary keys.

Luckily, there are ways for the Data Professional to clean up such a quagmire.  First, let’s set the stage with a really simple example.

CREATE TABLE DupeDemo (DemoID INT, SomeInt INT);

INSERT INTO DupeDemo (DemoID, SomeInt)
SELECT TOP 10000
		DemoID  = ABS(CHECKSUM(NEWID())) % 100 + 1,
		SomeInt = ABS(CHECKSUM(NEWID())) % 50 + 1
	FROM master.dbo.syscolumns t1
	CROSS JOIN master.dbo.syscolumns t2;

Here, we simply create a table and populate it with two columns of randomized data.  This should give us a significant amount of duplicate data and a good enough test bed to demonstrate how to quickly fix this issue.  Because the inserts are random, some value pairs will have many duplicates while others may have none at all.

Let’s check our work and see how well we did with the duplicate creation.

SELECT *
	FROM DupeDemo
	ORDER BY DemoID,SomeInt

Each time I have run this, I have come up with a good test bed, and it should work relatively well for you too.  Now we can work on eliminating those duplicates.  To better illustrate that we have dupes, and how they are distributed, let's have a look at this query.

SELECT DemoID,SomeInt, COUNT(SomeInt) AS Counted
	FROM DupeDemo
	GROUP BY DemoID,SomeInt
	ORDER BY DemoID,SomeInt

This query will yield results like what we see in the image to the right.

Similarly, I can run a query contained within a CTE that takes advantage of the ROW_NUMBER() function to help find where I may have dupes.

WITH dedupe AS (
	SELECT DemoID, SomeInt
		,ROW_NUMBER() OVER (PARTITION BY DemoID, SomeInt ORDER BY SomeInt) AS RowNum
		FROM DupeDemo
)
SELECT *
	FROM dedupe
	WHERE RowNum > 1
	ORDER BY DemoID, SomeInt;

Now that we know for certain that we have a large number of dupes, let's finally work on eliminating them.  We can alter the previous query a bit so that it deletes the dupes.  Under normal circumstances, I would make absolutely certain that the tables affected by these deletes were backed up.  You can accomplish that either by copying the data in the table into a new table or by running a database backup.
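For the quick-copy option, a SELECT INTO does the job in one statement.  This is a minimal sketch; the backup table name here is illustrative, not part of the demo.

```sql
-- Sketch: copy the table before deleting, so the rows can be restored if needed.
-- DupeDemo_Backup is an illustrative name of my choosing.
SELECT *
	INTO DupeDemo_Backup
	FROM DupeDemo;
```

Keep in mind that SELECT INTO copies only the data, not any indexes or constraints on the source table.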

WITH dedupe AS (
	SELECT DemoID, SomeInt
		,ROW_NUMBER() OVER (PARTITION BY DemoID, SomeInt ORDER BY SomeInt) AS RowNum
		FROM DupeDemo
)
DELETE dedupe
	WHERE RowNum > 1;

A really easy script.  Because we delete only the rows numbered greater than 1 within each partition, this removes every duplicate from the source table while leaving one copy of each row behind.  You can validate that by rerunning either of the first two queries used to demonstrate that duplicate data existed.  This is illustrated in the following image.

Since SQL Server 2005, removing bad duplicate data has become substantially easier.  This example should demonstrate that, as well as provide a method to create a repeatable test while learning the technique.
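Once the table is clean, it is worth closing the door that let the duplicates in.  As a hedged sketch (the constraint name is illustrative), a unique constraint on the pair of columns will cause any future duplicate insert to fail rather than succeed silently:

```sql
-- Illustrative: prevent the same (DemoID, SomeInt) pair from being inserted twice.
-- This will only succeed after the duplicates have been removed.
ALTER TABLE DupeDemo
	ADD CONSTRAINT UQ_DupeDemo_DemoID_SomeInt UNIQUE (DemoID, SomeInt);
```

In a real table, a primary key or unique index chosen to match the business rules would serve the same purpose.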

Comments

Posted by junk.jjk on 22 August 2011

Jason,

Thank you for the simple, clear explanation of using a CTE to handle duplicate records.  As somewhat of a newb, I have a question: does the CTE use log file space for the db in question, or space in tempdb?  I (unfortunately) have a table with over 411 mil records with a high amount of dups.  My server is 8 x 2.8GHz & 12GB RAM, but running a delete script such as the above (literally copy/paste and change table/field names) ran for over 1.5 hours.  Upon cancelling (red stop button), it has been "stopping" for over 1.25 hours since then.  Any help appreciated.
