Deleting Duplicate Records

  • Comments posted to this topic are about the item Deleting Duplicate Records

  • Why not just do this?

    Insert into EmployeeCopy
    select * from Employee
    UNION
    select * from Employee

    Voila -- duplicates gone! They do have to be duplicates in every field for this to work, but then again they're not really true duplicates if they don't match in every field.
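
    A quick way to see it work -- a sketch, with a made-up Employee table and sample values just for illustration:

    -- Illustrative schema and data: (100, 1) appears twice
    CREATE TABLE Employee (EmployeeNo int, EmployeeID int);
    CREATE TABLE EmployeeCopy (EmployeeNo int, EmployeeID int);
    INSERT INTO Employee VALUES (100, 1), (100, 1), (200, 2);

    -- UNION removes duplicate rows by itself, so unioning the table
    -- with itself inserts exactly one copy of each row
    INSERT INTO EmployeeCopy
    SELECT * FROM Employee
    UNION
    SELECT * FROM Employee;

    -- EmployeeCopy now holds (100, 1) and (200, 2)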

  • In the case where there's no unique ID, I would think another option would be

    select distinct *
    into EmployeeCopy
    from Employee

    Since I'm typing this before I've had coffee, it could be that you need to replace "*" with a list of all the fields.

  • delete from employee where id NOT IN (select min(id) from employee group by employeeno, employeeid)

    This will delete all the extra copies and leave one row for each set of duplicates
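
    For example -- a sketch with made-up data, assuming id is a unique column on the table:

    -- id is unique; (100, 1) is duplicated across ids 1 and 2
    CREATE TABLE employee (id int IDENTITY(1,1), employeeno int, employeeid int);
    INSERT INTO employee (employeeno, employeeid) VALUES (100, 1), (100, 1), (200, 2);

    -- Keeps the lowest id in each (employeeno, employeeid) group: ids 1 and 3 survive
    DELETE FROM employee
    WHERE id NOT IN (SELECT MIN(id) FROM employee GROUP BY employeeno, employeeid);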

    RickO

  • Now that I read the article again, I come to these conclusions:

    EmployeeNo and EmployeeID are duplicate fields in the Employees table

    The ID field only exists in the EmployeesCopy table because it was added to make the duplicate rows uniquely identifiable (but somehow it's also shown in the Employees table, so I'm a little confused)

    I would hope that if there are any other fields in the table, they would match exactly too -- or you've got bigger problems and you shouldn't be doing this to get rid of them

    So using the select distinct into EmployeesCopy or select * UNION select * will remove the duplicates as they fill EmployeeCopy -- thus eliminating the need to look at ID > ID or MIN(ID) with a self join.

    However, if you choose to use the author's solution and you don't remove the duplicates before filling EmployeesCopy, isn't the delete EmployeesCopy where not in (select min(id)) an example of RBAR?

    Would it be better to do it this way?

    delete a from EmployeesCopy a left join (select min(id) as id from EmployeesCopy group by EmployeeNo, EmployeeID) b on a.id = b.id where b.id is null

    This uses the set-based power of SQL, so shouldn't it be faster?
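
    Spelled out, that set-based delete would look something like this (a sketch; it assumes id is unique in EmployeesCopy and that the MIN(id) row in each group is the one to keep):

    -- Rows that don't match their group's MIN(id) are the extra copies
    DELETE a
    FROM EmployeesCopy a
    LEFT JOIN (SELECT MIN(id) AS id
               FROM EmployeesCopy
               GROUP BY EmployeeNo, EmployeeID) b
        ON a.id = b.id
    WHERE b.id IS NULL;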

  • I think the author meant to say that the ID column is unique

    his query is:

    --Find duplicate records
    --The result of this query is all the duplicate records whose Id is greater.
    select a.* from Employees a
    join Employees b on
        a.[EmployeeNo] = b.[EmployeeNo]
        AND a.[EmployeeID] = b.[EmployeeID]
        AND a.Id > b.Id

    so to delete records without creating a new table

    delete from employee where id NOT IN (select min(id) from employee group by employeeno, employeeid)

    This will leave the min(id) for any duplicated record, correct?
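
    One way to check -- a sketch of a sanity query that should return no rows once the delete has run:

    -- Any row returned here means an (employeeno, employeeid) pair still has extra copies
    SELECT employeeno, employeeid, COUNT(*) AS cnt
    FROM employee
    GROUP BY employeeno, employeeid
    HAVING COUNT(*) > 1;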

  • OK.

    Still, wouldn't it be better to do this (set-based solution):

    delete a from employees a left join (select min(id) as id from employees group by employeeno, employeeid) b on a.id = b.id where b.id is null

    rather than this (RBAR solution):

    delete from employees where id NOT IN (select min(id) from employees group by employeeno, employeeid)

    On small tables there probably wouldn't be much difference in performance, but I'm thinking that the set-based solution would be orders of magnitude faster on very large tables.

  • What is RBAR?

    I agree performance may be better... but in most cases it probably wouldn't matter on tables with fewer than 20,000 records.

  • Row By Agonizing Row -- I'm surprised that you've never heard of it!

    For each row in Employees you have to run the query (select min(id) from Employees group by employeeid, employeeno)

    The set-based solution does this just one time and joins back to Employees to perform the deletes.

    You are right, you probably wouldn't notice any difference in performance on small tables, but I'd bet 20,000 rows would show a noticeable difference, and on tables with 1,000,000-plus rows the difference could be counted in minutes, maybe hours.

  • RBAR...didn't know that...!

    I did check the execution plan for both and, believe it or not, the delete with NOT IN shows the lower cost.

    Is cost the right indicator to look at in an execution plan?

  • Cost is usually a good indicator, but it can be misleading -- especially if you only evaluate it on test servers with limited amounts of data. When you move your code into production where the number of rows may be much larger, you can have disasters.

    The cost of the query is based on SQL Server's estimate of the number of rows. You can have a small table (or a small subset of a large table) where the cost for a certain operation is lower than another, but as the table grows (or the constraints of the query change), the reverse is true. This is one of the dangers of stored procedures because their execution plans get cached and may not get recompiled as your data changes. Don't get me wrong, I love stored procedures, and I never use inline code in my applications; I always call stored procedures.
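
    One way to get past the estimates is to measure actual work on a realistically sized copy of the data. A minimal sketch using standard session settings -- run each candidate delete separately and compare the output:

    -- Report actual logical reads and CPU/elapsed time per statement,
    -- instead of relying on the optimizer's estimated cost
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    DELETE FROM employees
    WHERE id NOT IN (SELECT MIN(id) FROM employees GROUP BY employeeno, employeeid);

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;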

    RBAR can affect all four of the operations (select, insert, update, delete). One thing to always remember is: "RBAR bad, set-based good!"

    Here are a few good articles on RBAR:

    http://www.simple-talk.com/sql/t-sql-programming/rbar--row-by-agonizing-row/

    http://www.sqlservercentral.com/articles/T-SQL/61539/

    http://www.sqlservercentral.com/articles/Performance+Tuning/62278/

  • thanks

  • The query can be composed this way:

    WITH TempUsers (FirstName, LastName, duplicateRecordCount)
    AS
    (
        SELECT FirstName, LastName,
               ROW_NUMBER() OVER (PARTITION BY FirstName, LastName ORDER BY FirstName) AS duplicateRecordCount
        FROM dbo.Users
    )
    DELETE
    FROM TempUsers
    WHERE duplicateRecordCount > 1
    GO

    You can give the CTE any name instead of TempUsers; it exists only for the duration of the statement.
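
    A quick way to try it -- a sketch, with a made-up dbo.Users table just for illustration:

    -- 'John Smith' appears twice; the CTE delete above removes every row
    -- numbered past the first within each (FirstName, LastName) group
    CREATE TABLE dbo.Users (FirstName varchar(50), LastName varchar(50));
    INSERT INTO dbo.Users VALUES ('John', 'Smith'), ('John', 'Smith'), ('Ann', 'Lee');

    -- ...run the WITH ... DELETE above, then:
    SELECT * FROM dbo.Users;   -- two rows remain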

    Cheers,
    Bijayani
    Proud to be a part of Team Mindfire.

    Mindfire: India's Only Company to be both Apple Premier & Microsoft Gold certified.
