BIG Execution plan from DELETE !!

  • Hi,

    I have a table with almost 1M records, and whenever I delete a record from it, it takes a long time...

    I looked at a trace and saw it was doing over 1M reads... I then looked at the execution plan and saw where the reads were coming from: a table with 15M rows was being scanned...

    I created the index and the delete got a lot faster... but the execution plan is still big...

    Does it have to do with the FKs? Is SQL Server checking the other tables that reference this one for any referencing rows?

    Is this a good reason to build an index on every FK column (a reverse index, since a unique index on the referenced table is already mandatory)??
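    For illustration, a minimal sketch of the kind of index I mean, with hypothetical table names (OrderLines referencing Orders):

    ```sql
    -- Hypothetical schema: OrderLines.OrderID references Orders.OrderID.
    -- Without this index, a DELETE on Orders forces a scan of OrderLines
    -- for the FK check; with it, the check becomes a seek.
    CREATE NONCLUSTERED INDEX IX_OrderLines_OrderID
        ON dbo.OrderLines (OrderID);
    ```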

    Thanks,

    Pedro



    If you need to work better, try working less...

  • Foreign key checks, to see if any rows reference this one and hence this one can't be deleted.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (10/17/2013)


    Foreign key checks, to see if any rows reference this one and hence this one can't be deleted.

    Thanks,

    The software checks for references to alert the user (it runs individual SELECTs on each table to determine whether the record is being used). What should I do:

    1. Remove the FKs, since the software makes the checks;

    2. Disable the FKs and re-enable them WITH CHECK CHECK CONSTRAINT after the delete (probably takes too long);

    3. Leave it as it is to ensure integrity, since someone can always go straight to the database and delete, insert...

    Either way, the indexes on the "details" tables are needed to check for ID usage, and the index on the main table is the PK...
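    A sketch of what option 2 would look like, with hypothetical names (Orders referencing Customers via FK_Orders_Customers):

    ```sql
    -- Disable the constraint so the delete skips the FK check:
    ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;

    DELETE FROM dbo.Customers WHERE CustomerID = 42;

    -- Re-enable AND re-validate existing rows. The double CHECK matters:
    -- without WITH CHECK, the constraint comes back marked untrusted and
    -- the optimizer can no longer rely on it. This revalidation is the
    -- step that can take very long on big tables.
    ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;
    ```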

    Thanks,

    Pedro



    If you need to work better, try working less...

  • 3)

    Show me a database with integrity enforced by the application, I'll show you bad data in it.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (10/18/2013)


    3)

    Show me a database with integrity enforced by the application, I'll show you bad data in it.

    Thanks 🙂

    That's also why we have triggers on some tables too... to guard against bad users who go in directly through SSMS...

    Pedro



    If you need to work better, try working less...

  • GilaMonster (10/18/2013)


    3)

    Show me a database with integrity enforced by the application, I'll show you bad data in it.

    +1 You have no idea how many arguments I had with various vendors with some latest and greatest application that used the database as simply a container with no referential integrity built into it. Along with that I'd hear things like "we've built a smart application where data integrity has been enforced in the front end and really doesn't need to be on the database"..... I would simply get up and show them the door...

    Some people don't get it. I certainly will not stand for bad design.... not in my shop!

    Kurt

    Kurt W. Zimmerman
    SR DBA
    Lefrak Organization
    New York, NY

    http://www.linkedin.com/in/kurtwzimmerman

  • The reason I asked about the FKs is that the database has 1300 tables and almost 2000 FKs...

    On the referenced tables we already have, as mandated, almost 500 unique indexes...

    Indexing the referencing columns means adding 2000 new indexes (probably 400 or 500 are already covered by existing indexes, just reorder the columns and voilà), but at least 1500 brand-new indexes would be added...

    My plan is to create a job, or to check regularly myself, with a query that looks for referencing tables whose FKs have no supporting index, but only where both tables (referencing and referenced) have records and the referencing table has at least 20 pages... Is this a good approach, instead of creating the 1500 new indexes up front?! If the tables don't have any data the indexes would be empty anyway, and this avoids unnecessary indexes in the database...
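    Something like this, as a simplified sketch that covers only single-column FKs (the 20-page threshold and the "leading column" test are my assumptions):

    ```sql
    -- Single-column FKs on the referencing side that have no index
    -- whose leading key column is the FK column, on tables of >= 20 pages.
    SELECT fk.name                              AS fk_name,
           OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
           c.name                               AS referencing_column
    FROM sys.foreign_keys AS fk
    JOIN sys.foreign_key_columns AS fkc
         ON fkc.constraint_object_id = fk.object_id
    JOIN sys.columns AS c
         ON c.object_id = fkc.parent_object_id
        AND c.column_id = fkc.parent_column_id
    WHERE NOT EXISTS (SELECT 1
                      FROM sys.index_columns AS ic
                      WHERE ic.object_id   = fkc.parent_object_id
                        AND ic.column_id   = fkc.parent_column_id
                        AND ic.key_ordinal = 1)
      AND (SELECT SUM(ps.used_page_count)
           FROM sys.dm_db_partition_stats AS ps
           WHERE ps.object_id = fk.parent_object_id) >= 20;
    ```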

    Thanks,

    Pedro



    If you need to work better, try working less...

  • Hi,

    I've built a statement that searches the existing indexes for ones that are suitable as FK indexes on the referencing tables (since the referenced tables already have unique indexes, there's no need to worry about those).

    I've found lots of missing indexes, and some could be covered by switching the column order on existing indexes... but that's risky, right? An index on (colA, colB) can be used for searches on colA alone as well as on colA + colB, and nothing guarantees it's only being used for colA + colB, so switching the order to (colB, colA) so it can serve a constraint check on colB might break existing queries... So is it best to have a new index on colB?
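    To make the column-order point concrete, a small sketch with hypothetical names:

    ```sql
    -- Keyed (colA, colB): supports seeks on colA alone and on colA + colB,
    -- but NOT on colB alone. Reordering it to (colB, colA) would break any
    -- query relying on the colA-leading seek, so a separate narrow index
    -- on colB is the safer choice for the FK check:
    CREATE NONCLUSTERED INDEX IX_Child_colB
        ON dbo.Child (colB);
    ```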

    Also, and this only comes with time and making mistakes: I used the DMVs to see which indexes looked bad (lots of writes, 10% or fewer reads...) and I disabled them... Big mistake, because some of those were serving FK constraint checks on the referencing tables... A DELETE operation took 9s instead of 1ms... Lesson learned: when disabling an index, make sure it's not used for FK constraint checks... The one time it is used, you'll notice it!!!

    Thanks,

    Pedro



    If you need to work better, try working less...

