Remove Duplicate Records

  • Comments posted to this topic are about the item Remove Duplicate Records


    Kindest Regards,

    Syed
    Sr. SQL Server DBA

  • Clever, but on large tables probably impractical due to lock escalation and the time taken to do disk-based delete and reindex operations.

    Alternatives could be a unique indexed view based around SELECT DISTINCT * FROM PhoneBook,

    or a SELECT DISTINCT INTO PhoneBook2 combined with table rename operations.
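
    A rough sketch of the SELECT DISTINCT INTO idea, assuming the article's PhoneBook table (all names here are illustrative only):

    -- Copy only the distinct rows into a new table
    SELECT DISTINCT *
    INTO PhoneBook2
    FROM PhoneBook;

    -- Swap the names; any indexes and constraints would need to be recreated on the new table
    EXEC sp_rename 'PhoneBook', 'PhoneBook_old';
    EXEC sp_rename 'PhoneBook2', 'PhoneBook';
    -- DROP TABLE PhoneBook_old;  -- once the result has been verified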

    You could also stop the duplicates getting in there in the first place with an insert trigger: use the BINARY_CHECKSUM(*) function on the inserted table to check against the BINARY_CHECKSUM(*) of the existing rows.
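
    For example, a minimal sketch of such a trigger, assuming a PhoneBook table with FirstName, LastName and PhoneNumber columns (purely illustrative, and matching checksums only flag a probable duplicate):

    -- INSTEAD OF INSERT variant: only insert rows whose checksum is not already present
    CREATE TRIGGER trg_PhoneBook_BlockDupes
    ON PhoneBook
    INSTEAD OF INSERT
    AS
    BEGIN
        INSERT INTO PhoneBook (FirstName, LastName, PhoneNumber)
        SELECT i.FirstName, i.LastName, i.PhoneNumber
        FROM inserted AS i
        WHERE BINARY_CHECKSUM(i.FirstName, i.LastName, i.PhoneNumber)
              NOT IN (SELECT BINARY_CHECKSUM(FirstName, LastName, PhoneNumber)
                      FROM PhoneBook);
        -- Duplicates arriving within the same INSERT batch would still slip through
    END;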

  • Surely adding a constraint when designing the table is the best option.

    Stop duplicates at source?
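
    For the article's PhoneBook example that might look something like this (the column list is just a placeholder for whatever defines a duplicate in your table):

    -- Declare the duplicate rule up front so SQL Server enforces it on every insert
    ALTER TABLE PhoneBook
    ADD CONSTRAINT UQ_PhoneBook_Entry UNIQUE (FirstName, LastName, PhoneNumber);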

    Cheap toilet paper is a false economy, beware!

  • I would avoid using checksums, binary or otherwise, for uniqueness. A checksum is not guaranteed to be unique, as shown by this example I took from somewhere out on the internet (apologies to the author, whom I did not note).

    select binary_checksum('where myval in (7004054,7004055)') a, binary_checksum('where myval in (7003888,7003889)') b

    this gives the same result, 432179860, for both values

    Toni

  • No problem, you've illustrated a fair point. I'd like to see the link that adds some context to your example.

  • Deletes should be based on business logic. Take the phone book example: if you create a unique index on the phone number column, SQL Server will not allow duplicate phone numbers, which means your phone book cannot hold a husband and wife who share the same home phone number.

    CHECKSUM and BINARY_CHECKSUM are both unreliable for this; they were originally designed to check message integrity, so that both parties can detect whether a message was altered. If you are on SQL Server 2005, use the HashBytes function instead.

    CHECKSUM and BINARY_CHECKSUM can take the whole row, which keeps things easy, for example:

    Select Binary_CheckSum(*) from table_name

    HashBytes is limited to varchar, nvarchar and varbinary as input, so you have to convert your columns to one of the supported data types.

    select HashBytes('SHA1', CONVERT(varchar,column_name)) from table_name

    Use it with caution: even a minor change such as converting to nvarchar instead of varchar will change the HashBytes value.

    select HashBytes('SHA1', CONVERT(nvarchar,column_name)) from table_name
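
    As a rough illustration of hashing more than one column at a time (table and column names are placeholders; note that CONVERT to varchar/nvarchar without a length defaults to 30 characters, so give an explicit length):

    -- Group on a hash of the combined columns to list candidate duplicate rows
    SELECT HashBytes('SHA1',
               CONVERT(nvarchar(100), column_one) + N'|' +
               CONVERT(nvarchar(100), column_two)) AS row_hash,
           COUNT(*) AS row_count
    FROM table_name
    GROUP BY HashBytes('SHA1',
               CONVERT(nvarchar(100), column_one) + N'|' +
               CONVERT(nvarchar(100), column_two))
    HAVING COUNT(*) > 1;
    -- Assumes the columns are not NULL; wrap them in ISNULL() if they can be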

    HashBytes

    http://msdn.microsoft.com/en-us/library/ms174415.aspx

    Binary_CheckSum

    http://msdn.microsoft.com/en-us/library/ms173784.aspx

    Duplicate rows are common in systems where you import data from other systems. There are a few methods to delete duplicate rows in a SQL Server table:

    http://www.sqldba.org/articles/34-find-duplicate-records-to-delete-or-update-in-sql-server.aspx

    I hope it helps.


    Kindest Regards,

    Syed
    Sr. SQL Server DBA

  • Hey guys,

    I have a fairly large table in production with almost a million records, and about 18,000 of them are duplicates. I've been trying some SQL suggestions for deleting duplicates, and I found your example here, which I think I was able to adapt to my field names. Running the statement against my test database, which is only about half the size of my production database, I let the query execute for some 4+ hours. When I stopped it, I found I had some 38,000+ records added to my table... so I'm wondering if I made some error when adapting your SQL to my database. Here is the code I created using your example.

    SET ROWCOUNT 1
    SELECT @@rowcount
    WHILE @@rowcount > 0
    DELETE pb FROM Expense AS pb
    INNER JOIN
        (SELECT accountCode, expenseDescription, invoiceDate, invoiceNumber, ledgerCode, openItemNumber,
                programCode, transactionAmount, vendorName, warrantNumber, lineNumber, collocationCodeID
         FROM Expense
         WHERE expenseDescription LIKE 'PP%'
         GROUP BY accountCode, expenseDescription, invoiceDate, invoiceNumber, ledgerCode, openItemNumber,
                  programCode, transactionAmount, vendorName, warrantNumber, lineNumber, collocationCodeID
         HAVING COUNT(*) > 1) AS c
        ON  c.accountCode = pb.accountCode
        AND c.expenseDescription = pb.expenseDescription
        AND c.invoiceDate = pb.invoiceDate
        AND c.invoiceNumber = pb.invoiceNumber
        AND c.ledgerCode = pb.ledgerCode
        AND c.openItemNumber = pb.openItemNumber
        AND c.programCode = pb.programCode
        AND c.transactionAmount = pb.transactionAmount
        AND c.vendorName = pb.vendorName
        AND c.warrantNumber = pb.warrantNumber
        AND c.lineNumber = pb.lineNumber
        AND c.collocationCodeID = pb.collocationCodeID
    SET ROWCOUNT 0
    SELECT * FROM Expense
    DROP TABLE Expense

    So, my questions are... Will this SQL actually delete what seem like temp records? And with a table this size, should I expect the SQL to execute for hours?

    I would appreciate any insight, this is my first time performing a task like this.

    Thank you,

    Lisa

  • Smart technique. But beware: SET ROWCOUNT is set to be deprecated in future releases, so instead you could use TOP (1).
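
    A minimal sketch of that TOP-based loop, assuming the article's PhoneBook table and a PhoneNumber column (illustrative names only):

    -- Remove one surplus row per pass from any group that still has more than one copy
    DECLARE @deleted int;
    SET @deleted = 1;
    WHILE @deleted > 0
    BEGIN
        DELETE TOP (1) pb
        FROM PhoneBook AS pb
        INNER JOIN (SELECT PhoneNumber
                    FROM PhoneBook
                    GROUP BY PhoneNumber
                    HAVING COUNT(*) > 1) AS dup
            ON dup.PhoneNumber = pb.PhoneNumber;
        SET @deleted = @@ROWCOUNT;
    END;
    -- Still one row per pass, so it is no faster than the SET ROWCOUNT loop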

    An alternate method to remove duplicates is the ROW_NUMBER() technique: partition the result set by the phone number column, number the rows, and then delete every row where the row number is greater than 1. This makes sure that only one row per phone number is retained and any other repeating instances are deleted.
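
    Roughly like this for the PhoneBook example (again, table and column names are just placeholders):

    -- Number the rows within each phone number group and delete all but the first
    ;WITH numbered AS
    (
        SELECT PhoneNumber,
               ROW_NUMBER() OVER (PARTITION BY PhoneNumber
                                  ORDER BY PhoneNumber) AS rn
        FROM PhoneBook
    )
    DELETE FROM numbered
    WHERE rn > 1;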

    Thanks!

    Amol Naik

  • AmolNaik (6/13/2011)


    Smart technique. But beware: SET ROWCOUNT is set to be deprecated in future releases, so instead you could use TOP (1).

    An alternate method to remove duplicates is the ROW_NUMBER() technique: partition the result set by the phone number column, number the rows, and then delete every row where the row number is greater than 1. This makes sure that only one row per phone number is retained and any other repeating instances are deleted.

    Thanks!

    It'll also be much faster than the looping method. 🙂

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
