Just For Fun: An Impossible Delete

  • Comments posted to this topic are about the item Just For Fun: An Impossible Delete

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • What an amazing article! It covers most everything from womb to tomb for such an "impossible" delete. I remember being amazed when Barry first posted his solution on the original thread. I'm even more amazed that he wove in some of the techniques used in the old days and took the time to explain how each byte of memory was cherished, which made these types of methods absolutely necessary.

    Well done, Barry!

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Nice article. I learned a lot. But for this problem there may be a shorter/simpler solution:

    --====== 1: Insert the records we want to have
    Insert into SOURCE (Name, Age, Sex)
    Select Name
         , Age
         , Case Sex When 'M' Then 'N' Else 'G' End
    From SOURCE
    Group By Name, Age, Sex
    Having Count(*) > 1

    --====== 2: Remove the old rows
    Delete from SOURCE
    Where Sex In ('M','F')

    --====== 3: Update the inserted records accordingly
    Update SOURCE
    Set Sex = Case Sex When 'N' Then 'M' Else 'F' End

    Both solutions depend on the Sex column having one of only two values. If that is not the case, the solution would be to append a character/string that does not exist in the original data (e.g. '~~~~~') in the first step, delete the rows that do not have the appended string in the second step, and remove the appended string in the third step.
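
    A minimal sketch of that sentinel variant (hypothetical code, assuming the Sex column is wide enough to hold the marker and that '~~~~~' never occurs in the data; note it inherits the same duplicate-collapsing behaviour discussed in the next post):

    --====== 1: Re-insert duplicate groups with the sentinel appended
    Insert into SOURCE (Name, Age, Sex)
    Select Name, Age, Sex + '~~~~~'
    From SOURCE
    Group By Name, Age, Sex
    Having Count(*) > 1

    --====== 2: Remove every row without the sentinel
    Delete from SOURCE
    Where Sex Not Like '%~~~~~'

    --====== 3: Strip the sentinel from the surviving rows
    Update SOURCE
    Set Sex = Replace(Sex, '~~~~~', '')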

  • eerkan - I don't think your solution will give what the OP wanted. It'll collapse all duplicated rows into a single row, whereas the OP wanted to retain all dupes as separate rows.

    Great article btw - RBarryYoung bringing back the old skool 😛

    Regards,

    Jacob

  • great article!!

    "Keep Trying"

  • GREAT ARTICLE...

    A lot of learning points in it...

    My first thought: I would have gone for cursors to solve the issue. Secondly, I would certainly NOT go for the conversions as used in the article (who cares about bits and bytes nowadays...). But one MUST know the way to accomplish the task using the limited and KNOWN resources, which makes the solution in the article THE BEST in all aspects...

    BRAVO...

    Atif Sheikh

    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Sometimes, winning is not an issue but trying.
    You can check my BLOG here.

  • eerkan... your solution misses one record

    Try again...:D

    Atif Sheikh

    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Sometimes, winning is not an issue but trying.
    You can check my BLOG here.

  • Hmm, my curiosity is piqued... The solution presented is quite interesting, but I believe the purported interest in bits and bytes is a little exaggerated, as the solution is still quite expensive (it rewrites an entire column in a single update statement, and rewrites/deletes the bulk of the data if there are many duplicates, needing up to almost double the original storage space to complete). So I'm quite curious:

    What would the most efficient way of addressing this problem be? (using least memory and / or using least I/O, especially on a very large data set; also consider whether the data can still be accessed during the update process)

    This should be quite easy to test, given a few hours to create a large (GB-sized?) data set and try out a few different approaches...

    It seems a problem that many more people are likely to face nowadays - powerful hardware, very robust systems that can remain online as you do things like add columns, but extremely large amounts of data (many gigabytes) and no maintenance time / downtime available!
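
    For anyone who wants to test, a rough sketch of building a duplicate-heavy table (the table name, row count, and value distributions here are my own assumptions, not from the article):

    -- ~10M rows of 3-letter names, ages 18-77, and M/F sexes; with only
    -- about 2.1M distinct combinations possible, duplicates arise naturally
    SELECT TOP (10000000)
           CHAR(65 + ABS(CHECKSUM(NEWID())) % 26)
         + CHAR(65 + ABS(CHECKSUM(NEWID())) % 26)
         + CHAR(65 + ABS(CHECKSUM(NEWID())) % 26)            AS Name,
           18 + ABS(CHECKSUM(NEWID())) % 60                  AS Age,
           CASE ABS(CHECKSUM(NEWID())) % 2
                WHEN 0 THEN 'M' ELSE 'F' END                 AS Sex
    INTO   SOURCE
    FROM   sys.all_columns a CROSS JOIN sys.all_columns b;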

    http://poorsql.com for T-SQL formatting: free as in speech, free as in beer, free to run in SSMS or on your version control server - free however you want it.

  • I think this calls for Monty Python's "The Four Yorkshiremen" skit:

    http://www.phespirit.info/montypython/four_yorkshiremen.htm

  • Surely this is the simplest solution? (Be warned of a nasty catch when creating these 'quirky updates': the order of execution within the UPDATE is variable assignments first, then column assignments, both left to right!)

    [font="Courier New"]

    --create the sample table

    DECLARE @Sample TABLE (TheName VARCHAR(4),Identifier INT, sex VARCHAR(1))

    INSERT INTO @Sample (TheName,Identifier,Sex)

       SELECT 'ABC', 24, 'M' UNION ALL

       SELECT 'ABC',  24,'M' UNION ALL

       SELECT 'LMN',  27, 'M' UNION ALL

       SELECT 'LMN',  27, 'M' UNION ALL

       SELECT 'LMN',  27, 'M' UNION ALL

       SELECT 'PQRS',  25, 'F' UNION ALL

       SELECT 'XYZ',  24, 'M' UNION ALL

       SELECT 'XYZ',  25, 'M'

    --@Sex variable only used to force the order of execution of the update

    DECLARE @hash VARCHAR(80), @sex VARCHAR(1)

    UPDATE @sample

       SET

       @sex=Sex = CASE WHEN COALESCE(@hash,'')

                               <>TheName+CONVERT(VARCHAR(5),Identifier)+sex

               THEN 'd' ELSE SEX END,

       @hash= TheName+CONVERT(VARCHAR(5),Identifier)+sex

    DELETE FROM @sample WHERE sex='d'

    SELECT * FROM @sample[/font]

    Best wishes,
    Phil Factor

  • wow, I am in awe (assuming it works 🙂 ). I would venture a guess that this is, indeed, the most efficient solution.

    One fix required here is to separate the identifier from the name in the hash, otherwise "MyName1" with identifier 6 could be confused with "MyName" with identifier 16.
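
    A sketch of that fix applied to Phil's UPDATE above (my code, assuming '|' never appears in the data; any delimiter that cannot occur in the values would do):

    UPDATE @Sample
       SET @sex = Sex = CASE WHEN COALESCE(@hash,'')
                                  <> TheName + '|' + CONVERT(VARCHAR(5), Identifier) + '|' + Sex
                             THEN 'd' ELSE Sex END,
           @hash = TheName + '|' + CONVERT(VARCHAR(5), Identifier) + '|' + Sex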

    http://poorsql.com for T-SQL formatting: free as in speech, free as in beer, free to run in SSMS or on your version control server - free however you want it.

  • Aiee! Yes, I tapped it in a bit of a hurry. Any delimiter between the values will help the hash. My only worry is that it found one more row to delete than the original problem-setter said there would be, but it seems to be a genuine candidate for deletion.

    Also, it sometimes needs an extra '' in the expression to get it to work in SQL Server 2000, due to a bug in SQL Server!

    If you turn the logic round, it selects all the unique rows without a GROUP BY. I wonder if it is quicker than DISTINCT or GROUP BY?
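
    One possible reading of that reversal, as a sketch (my interpretation, not Phil's code): keep the marking UPDATE exactly as above, but read back the unflagged rows instead of deleting the flagged ones:

    -- After the marking pass, the rows not flagged 'd' are the distinct
    -- set; no GROUP BY or DISTINCT required
    SELECT TheName, Identifier, Sex
    FROM   @Sample
    WHERE  Sex <> 'd'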

    Best wishes,
    Phil Factor

  • This was a great article and I really enjoyed reading it. I'm particularly grateful for the excellent explanatory text, which meant I could actually follow most of what was going on despite being fairly new to SQL.

    At the risk of being overly picky, there is one bit I have to query:

    "Likewise, the F's are stored as "01000110" in upper-case and "01000110" in lower (if you look closely, you can tell which bit makes it upper or lower case)."

    Am I simply not looking closely enough or are these values actually identical? Sneaky typo perhaps?
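
    For reference, a quick check (my sketch) of what the article presumably meant: lower-case 'f' should be 01100110, with only bit 0x20 (the ASCII case bit) differing:

    SELECT CAST(ASCII('F') AS BINARY(1)) AS UpperF,  -- 0x46 = 01000110
           CAST(ASCII('f') AS BINARY(1)) AS LowerF   -- 0x66 = 01100110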

    Anyway thanks for the great article.

    All the best,

    John

  • Been waiting for it....:D

    Phil, could you give a more verbose explanation of your code, please?

    Hiding under a desk from SSIS Implementation Work :crazy:

  • Tao Klerks (8/5/2008)


    Hmm, my curiosity is piqued... The solution presented is quite interesting, but I believe the purported interest in bits and bytes is a little exaggerated, as the solution is still quite expensive (it rewrites an entire column in a single update statement, and rewrites/deletes the bulk of the data if there are many duplicates, needing up to almost double the original storage space to complete). So I'm quite curious:

    What would the most efficient way of addressing this problem be? (using least memory and / or using least I/O, especially on a very large data set; also consider whether the data can still be accessed during the update process)

    This should be quite easy to test, given a few hours to create a large (GB-sized?) data set and try out a few different approaches...

    It seems a problem that many more people are likely to face nowadays - powerful hardware, very robust systems that can remain online as you do things like add columns, but extremely large amounts of data (many gigabytes) and no maintenance time / downtime available!

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
