Remove Duplicate Records
Posted Saturday, April 26, 2008 7:08 AM
SSC-Enthusiastic
Comments posted to this topic are about the item Remove Duplicate Records


Kindest Regards,

Syed
Sr. SQL Server DBA
Post #491052
Posted Monday, May 26, 2008 6:47 AM
Grasshopper
Clever, but on large tables it is probably impractical due to lock escalation and the time taken to do disk-based delete and reindex operations.

Alternatives could be a unique indexed view based around SELECT DISTINCT * FROM PhoneBook, or a SELECT DISTINCT ... INTO PhoneBook2 combined with table rename operations.
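A minimal sketch of the rename approach, assuming the article's PhoneBook table (verify the new copy before dropping anything):

SELECT DISTINCT * INTO PhoneBook2 FROM PhoneBook

EXEC sp_rename 'PhoneBook', 'PhoneBook_Old'
EXEC sp_rename 'PhoneBook2', 'PhoneBook'

-- DROP TABLE PhoneBook_Old once the new copy has been verified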

You could also stop the duplicates getting in there in the first place with an insert trigger: use the BINARY_CHECKSUM(*) function on the inserted table to check against the BINARY_CHECKSUM(*) of the existing rows.
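A rough sketch of that trigger idea (column names are assumed, and note the checksum-collision caveat raised later in this thread):

CREATE TRIGGER trg_PhoneBook_NoDupes ON PhoneBook
INSTEAD OF INSERT
AS
BEGIN
    -- Only insert rows whose checksum does not match an existing row
    INSERT INTO PhoneBook (FirstName, LastName, PhoneNumber)
    SELECT i.FirstName, i.LastName, i.PhoneNumber
    FROM inserted i
    WHERE BINARY_CHECKSUM(i.FirstName, i.LastName, i.PhoneNumber)
          NOT IN (SELECT BINARY_CHECKSUM(FirstName, LastName, PhoneNumber)
                  FROM PhoneBook)
END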
Post #506469
Posted Tuesday, May 27, 2008 5:41 AM
Grasshopper
Surely adding a constraint when designing the table is the best option.

Stop duplicates at source?
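For example, a hypothetical PhoneBook design with the uniqueness rule built in:

CREATE TABLE PhoneBook
(
    FirstName   varchar(50) NOT NULL,
    LastName    varchar(50) NOT NULL,
    PhoneNumber varchar(20) NOT NULL,
    CONSTRAINT UQ_PhoneBook_Entry UNIQUE (FirstName, LastName, PhoneNumber)
)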


Cheap toilet paper is a false economy, beware!
Post #506825
Posted Tuesday, May 27, 2008 10:50 AM
SSC Veteran
I would avoid using CHECKSUM, binary or otherwise, for uniqueness. It is not guaranteed to be unique, as shown by this example I took from somewhere out on the internet (apologies to the author, whom I did not note).



SELECT BINARY_CHECKSUM('where myval in (7004054,7004055)') AS a,
       BINARY_CHECKSUM('where myval in (7003888,7003889)') AS b

This gives the same result for both values: 432179860 and 432179860.

Toni
Post #507090
Posted Tuesday, May 27, 2008 12:27 PM
Grasshopper
No problem, you've illustrated a fair point. I'd like to see the link that adds some context to your example.

Post #507171
Posted Tuesday, May 27, 2008 7:08 PM
SSC-Enthusiastic
Deletes should be based on business logic. In the phone book example, if you create a unique index on the phone number column alone, SQL Server will not allow duplicate phone numbers; that means your phone book cannot hold a husband and wife who share the same home phone number.
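A composite unique index that matches the business rule avoids that problem; a sketch, assuming name columns exist:

CREATE UNIQUE INDEX UX_PhoneBook_Person_Phone
    ON PhoneBook (LastName, FirstName, PhoneNumber)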

CHECKSUM and BINARY_CHECKSUM are both unreliable for this; they were originally designed to check message integrity, so that both parties can detect whether a message was altered. If you are on SQL Server 2005, use the HASHBYTES function instead.

CHECKSUM and BINARY_CHECKSUM can take the whole row, which makes things easy, for example:

SELECT BINARY_CHECKSUM(*) FROM table_name

HASHBYTES is limited to varchar, nvarchar, and varbinary input, so you have to convert your columns to one of the supported data types. Give CONVERT an explicit length; without one, varchar defaults to 30 characters and silently truncates:

SELECT HashBytes('SHA1', CONVERT(varchar(8000), column_name)) FROM table_name

Use it with caution: even a minor change such as varchar to nvarchar changes the HASHBYTES value.

SELECT HashBytes('SHA1', CONVERT(nvarchar(4000), column_name)) FROM table_name

HashBytes
http://msdn.microsoft.com/en-us/library/ms174415.aspx

Binary_CheckSum
http://msdn.microsoft.com/en-us/library/ms173784.aspx

Duplicate rows are common in systems where you are importing data from other systems. There are a few methods to delete duplicate rows from a SQL Server table:

http://www.sqldba.org/articles/34-find-duplicate-records-to-delete-or-update-in-sql-server.aspx


I hope it helps.



Kindest Regards,

Syed
Sr. SQL Server DBA
Post #507355
Posted Tuesday, April 14, 2009 2:57 PM
SSC Rookie
Hey guys,

I have a fairly large table in production with almost a million records, about 18,000 of which are duplicates. I've been trying some SQL suggestions for deleting duplicates, and I found your example here, which I think I was able to adapt to my field information. Running this statement against my test database, which is only half the size of production, the query executed for some 4+ hours. When I stopped it, I found some 38,000+ records had been added to my table... so I'm wondering if I made some error when adapting your SQL to my database. Here is the code I created using your example.

SET ROWCOUNT 1
SELECT @@ROWCOUNT

WHILE @@ROWCOUNT > 0
    DELETE pb
    FROM Expense AS pb
    INNER JOIN
        (SELECT accountCode, expenseDescription, invoiceDate, invoiceNumber, ledgerCode, openItemNumber,
                programCode, transactionAmount, vendorName, warrantNumber, lineNumber, collocationCodeID
         FROM Expense
         WHERE expenseDescription LIKE 'PP%'
         GROUP BY accountCode, expenseDescription, invoiceDate, invoiceNumber, ledgerCode, openItemNumber,
                  programCode, transactionAmount, vendorName, warrantNumber, lineNumber, collocationCodeID
         HAVING COUNT(*) > 1) AS c
        ON  c.accountCode = pb.accountCode
        AND c.expenseDescription = pb.expenseDescription
        AND c.invoiceDate = pb.invoiceDate
        AND c.invoiceNumber = pb.invoiceNumber
        AND c.ledgerCode = pb.ledgerCode
        AND c.openItemNumber = pb.openItemNumber
        AND c.programCode = pb.programCode
        AND c.transactionAmount = pb.transactionAmount
        AND c.vendorName = pb.vendorName
        AND c.warrantNumber = pb.warrantNumber
        AND c.lineNumber = pb.lineNumber
        AND c.collocationCodeID = pb.collocationCodeID

SET ROWCOUNT 0

SELECT * FROM Expense

DROP TABLE Expense

So, my questions are... Will this SQL actually delete what seem to be the duplicate records? With a table this size, should I expect it to execute for hours?

I would appreciate any insight; this is my first time performing a task like this.

Thank you,
Lisa
Post #697067
Posted Monday, June 13, 2011 3:24 PM
SSC Eights!
Smart technique. But beware: SET ROWCOUNT is slated for deprecation in future releases; instead, you could use TOP (1).
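That substitution might look like this; a sketch only, reusing the article's PhoneBook example with assumed column names:

WHILE 1 = 1
BEGIN
    -- Delete one arbitrary row from any group that still has duplicates
    DELETE TOP (1) pb
    FROM PhoneBook AS pb
    INNER JOIN (SELECT FirstName, LastName, PhoneNumber
                FROM PhoneBook
                GROUP BY FirstName, LastName, PhoneNumber
                HAVING COUNT(*) > 1) AS d
        ON  d.FirstName = pb.FirstName
        AND d.LastName = pb.LastName
        AND d.PhoneNumber = pb.PhoneNumber

    IF @@ROWCOUNT = 0 BREAK
END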

An alternative method to remove duplicates is the ROW_NUMBER() technique: number the rows, partitioning over the columns that define a duplicate (here the phone number), then delete every row whose row number is greater than 1. This makes sure that only one row per group is retained and any other repeated instances are deleted; a sketch follows.
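A minimal sketch of that approach, again with assumed PhoneBook columns:

WITH Ranked AS
(
    SELECT FirstName, LastName, PhoneNumber,
           ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, PhoneNumber
                              ORDER BY PhoneNumber) AS rn   -- order only picks which duplicate survives
    FROM PhoneBook
)
DELETE FROM Ranked
WHERE rn > 1   -- keeps exactly one row per duplicate group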

Thanks!


Amol Naik
Post #1124633
Posted Monday, June 13, 2011 10:13 PM


SSC-Dedicated
AmolNaik (6/13/2011)
Smart technique. But beware: SET ROWCOUNT is slated for deprecation in future releases; instead, you could use TOP (1).

An alternative method to remove duplicates is the ROW_NUMBER() technique: number the rows, partitioning over the columns that define a duplicate (here the phone number), then delete every row whose row number is greater than 1. This makes sure that only one row per group is retained and any other repeated instances are deleted.

Thanks!


It'll also be much faster than the looping method.


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1124730