• As Drew shared, error handling is very unlikely to be the cause of your performance problems. You should expect the operation to take longer as the volume of data increases; the goal is to keep that increase as small as possible. A couple of things are probably happening here that hurt performance:

    1. Your transaction log is probably growing during this operation, and that will slow things down. You can do a couple of things to help with this:

    • Pre-size the log file so it does not have to grow during the purge.
    • Use explicit transactions for each table so that log space can be reused when a checkpoint occurs (simple recovery) or a log backup runs (full and bulk-logged recovery). There is a sketch of both ideas at the end of this post.

    2. Table variables and table-valued parameters do not have statistics, so the optimizer estimates 1 row. Once you pass a certain number of rows in a table variable/TVP, the nested loop execution plan that is most likely generated will NOT be a good choice, and performance will get dramatically worse as the row count grows. Some options are:

    • Use a real table to hold the IDs that need to be deleted and you will get statistics. You can truncate and reload this table each run, or you could store a client and a date in the table to record which client you deleted and when. You would just need to index the table on the client and date in addition to the ID.
    • Use the TVP to populate a temp table in each procedure; the temp table will get statistics (the second sketch at the end of this post shows this pattern).
    • Use a global temporary table instead of a TVP to hold the IDs to be deleted.
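
    Here is a minimal sketch of point 1, assuming a database named YourDb whose log file has the logical name YourDb_log, and two hypothetical tables being purged by client; the names, sizes, and client ID are all illustrative, so adjust them to your environment:

        -- Pre-size the log once, up front, so it never has to auto-grow mid-purge.
        ALTER DATABASE YourDb
            MODIFY FILE (NAME = YourDb_log, SIZE = 8192MB, FILEGROWTH = 1024MB);

        DECLARE @ClientID int = 123;  -- the client being purged (illustrative)

        -- Wrap the delete for each table in its own explicit transaction so the
        -- log space can be reused after a checkpoint (simple recovery) or the
        -- next log backup (full / bulk-logged recovery).
        BEGIN TRANSACTION;
            DELETE FROM dbo.ChildTable
            WHERE ClientID = @ClientID;
        COMMIT TRANSACTION;

        BEGIN TRANSACTION;
            DELETE FROM dbo.ParentTable
            WHERE ClientID = @ClientID;
        COMMIT TRANSACTION;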
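
    And here is a minimal sketch of the temp-table option from point 2, assuming a hypothetical user-defined table type dbo.IDList, a procedure dbo.DeleteClientRows, and a target table dbo.SomeTable; the same pattern applies inside each procedure that receives the TVP:

        -- Hypothetical table type for the TVP; created once.
        CREATE TYPE dbo.IDList AS TABLE (ID int NOT NULL PRIMARY KEY);
        GO

        CREATE PROCEDURE dbo.DeleteClientRows
            @DeleteIDs dbo.IDList READONLY
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Copy the TVP into a local temp table. Unlike the TVP, #DeleteIDs
            -- gets statistics, so the optimizer can estimate the real row count
            -- instead of assuming 1 row and locking in a nested loop plan.
            SELECT ID
            INTO #DeleteIDs
            FROM @DeleteIDs;

            CREATE CLUSTERED INDEX cx_DeleteIDs ON #DeleteIDs (ID);

            DELETE t
            FROM dbo.SomeTable AS t
            JOIN #DeleteIDs AS d
                ON d.ID = t.ID;
        END;
        GO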