CASCADE - 1

  • Lokesh Vij (3/20/2013)


    db4breakfast (3/20/2013)


    I never use cascade delete on a production db. It seems safer to allow users to mark an order as "error" instead.

    +1

    Same here. I feel it is dangerous to use ON DELETE CASCADE in production... as a mistake in deleting a parent record will trigger deletes from all the child records :w00t:

    We use triggers to delete the child records when a parent record is deleted. Is that generally considered a bad thing to do?

  • This is a question about table design raised because of this QoD.

    Re this part of the question:

    "CREATE TABLE [dbo].[OrderDetail](

    [OrderDetailID] [int] NOT NULL,

    [OrderID] [int] NULL,

    CONSTRAINT [PK_OrderDetail] PRIMARY KEY CLUSTERED

    ([OrderDetailID] ASC))"

    why would you allow what is going to be the foreign key (OrderID) to be NULL in the table definition? I'm assuming you would not want an OrderDetail record without an Order record so why is OrderID not defined as NOT NULL in the definition of OrderDetail? Or does the setting up of the FK constraint mean it doesn't actually matter?
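
    In case it helps, here is a sketch of the same table with the foreign key column declared NOT NULL and the constraint in place (the constraint name and the referenced Order table are illustrative, not taken from the question):

    ```sql
    CREATE TABLE [dbo].[OrderDetail](
        [OrderDetailID] [int] NOT NULL,
        [OrderID]       [int] NOT NULL, -- NOT NULL: every detail row must have a parent order
        CONSTRAINT [PK_OrderDetail] PRIMARY KEY CLUSTERED ([OrderDetailID] ASC),
        CONSTRAINT [FK_OrderDetail_Order] FOREIGN KEY ([OrderID])
            REFERENCES [dbo].[Order] ([OrderID])
    );
    ```

    Note that a FOREIGN KEY constraint only validates non-NULL values; it does not by itself forbid NULLs, so the NOT NULL declaration still matters even with the constraint in place.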

  • Toreador (3/21/2013)


    Using cascade delete. If the business rule says that is the appropriate behaviour, then why avoid using inbuilt functionality that achieves exactly what you need?

    One possible reason is that you can't control the order of locks being taken with cascading deletes and cascading updates. That makes it harder to avoid deadlock scenarios if you are in a high concurrency scenario.

    Other than that, I see no reason not to use this feature *IF* that is what the business wants. However, in my experience the number of scenarios where the business actually wants to delete dependent rows automatically is very small.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • marlon.seton (3/21/2013)


    We use triggers to delete the child records when a parent record is deleted. Is that generally considered a bad thing to do?

    Bad? I would not say so (but others will disagree with me).

    But it's not smart either. If you have a business need for deleting child rows, why go to the lengths of building your own solution when the functionality you need is available out of the box?

    Okay, if you run into the issue with multiple cascading paths, you have no choice but to roll your own. And if you get a lot of deadlocks because of cascading updates, using triggers might alleviate that a bit. But other than that, why bother?
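
    A rough sketch of the trigger-based alternative being discussed (all object names are illustrative; this assumes there is no enforcing FK on the child table, since with one in place the parent delete would be rejected before an AFTER trigger ever fires):

    ```sql
    CREATE TRIGGER [dbo].[trg_Order_Delete]
    ON [dbo].[Order]
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Set-based delete of the child rows of every deleted parent
        DELETE od
        FROM [dbo].[OrderDetail] AS od
        INNER JOIN deleted AS d ON od.[OrderID] = d.[OrderID];
    END;
    ```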


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hi Hugo,

    Thanks for your reply.

    By 'why bother' I guess you're meaning why bother using triggers instead of cascade delete? Because the 4GL in which we write the application can't handle FK constraints (and please don't ask why we use such a 4GL, that's a whole other bag of hurt).

  • marlon.seton (3/21/2013)


    This is a question about table design raised because of this QoD.

    Re this part of the question:

    "CREATE TABLE [dbo].[OrderDetail](

    [OrderDetailID] [int] NOT NULL,

    [OrderID] [int] NULL,

    CONSTRAINT [PK_OrderDetail] PRIMARY KEY CLUSTERED

    ([OrderDetailID] ASC))"

    why would you allow what is going to be the foreign key (OrderID) to be NULL in the table definition? I'm assuming you would not want an OrderDetail record without an Order record so why is OrderID not defined as NOT NULL in the definition of OrderDetail? Or does the setting up of the FK constraint mean it doesn't actually matter?

    In this specific case (**), it does indeed make no sense. I cannot think of any scenario where you would want an OrderDetail that is not associated with an order.

    But more in general, there definitely are situations where I would use a nullable foreign key. For instance in an Employee table, I might have a column AssignedToDept. Not all employees are assigned to a department, so for some this column is NULL. But for the employees that do have a value in this column, I want to make sure that this is a valid department, so I would add a foreign key constraint between this column and the Departments table.
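
    That example, sketched as DDL (all names are illustrative):

    ```sql
    CREATE TABLE [dbo].[Employee](
        [EmployeeID]     [int] NOT NULL,
        [AssignedToDept] [int] NULL, -- NULL = not assigned to any department
        CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED ([EmployeeID]),
        CONSTRAINT [FK_Employee_Departments] FOREIGN KEY ([AssignedToDept])
            REFERENCES [dbo].[Departments] ([DepartmentID])
            -- checked only for rows where AssignedToDept IS NOT NULL
    );
    ```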

    (**) Many bad table designs come from mindlessly adding a surrogate key to each table. A proper database design method would first create an entire data model without any surrogate key. Only when ready to actually implement this logical data model into a physical database would surrogate keys be added to tables where this is warranted.

    If that had been done for the use case of this question, the logical primary key of the OrderDetail table would have been a composite key, consisting of the combination of the primary key of the Order table and one extra column (probably LineNumber or DetailNumber). As the primary key, this would have been not nullable. Then, if at implementation time a surrogate key would have been considered appropriate for this table, the corresponding changes would have been made, but that would not have affected the nullability of this column.
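
    That logical design might be sketched like this (the LineNumber column and constraint names are illustrative):

    ```sql
    CREATE TABLE [dbo].[OrderDetail](
        [OrderID]    [int] NOT NULL, -- part of the primary key, so necessarily NOT NULL
        [LineNumber] [int] NOT NULL,
        CONSTRAINT [PK_OrderDetail] PRIMARY KEY CLUSTERED ([OrderID], [LineNumber]),
        CONSTRAINT [FK_OrderDetail_Order] FOREIGN KEY ([OrderID])
            REFERENCES [dbo].[Order] ([OrderID])
    );
    -- If a surrogate key is later judged appropriate, an OrderDetailID column can
    -- be added and made the primary key, without touching OrderID's nullability.
    ```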

    And for full disclosure - the above is not a criticism of the question; I am aware that the question is a very simplified scenario targeted at testing a specific behaviour of foreign key constraints. But I do see mistakes like this in actual tables, and they never would have been made if the designer had not jumped the gun and started with a surrogate key on each and every table.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • marlon.seton (3/21/2013)


    By 'why bother' I guess you're meaning why bother using triggers instead of cascade delete? Because the 4GL in which we write the application can't handle FK constraints (and please don't ask why we use such a 4GL, that's a whole other bag of hurt).

    That's exactly what I meant. 😉

    And you seem to have answered the question yourself. I already gave two examples where specific SQL Server limitations make it impossible or undesirable to use cascading deletes, so that triggers are the next best option. You give a great example where a limitation of another platform has the same result.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hugo Kornelis (3/21/2013)


    Toreador (3/21/2013)


    Using cascade delete. If the business rule says that is the appropriate behaviour, then why avoid using inbuilt functionality that achieves exactly what you need?

    One possible reason is that you can't control the order of locks being taken with cascading deletes and cascading updates. That makes it harder to avoid deadlock scenarios if you are in a high concurrency scenario.

    Other than that, I see no reason not to use this feature *IF* that is what the business wants. However, in my experience the number of scenarios where the business actually wants to delete dependent rows automatically is very small.

    +1 .. I have been, in the past, specifically asked to avoid cascade deletes (after I suggested them; that's a different case 😀 ).

    If the business says so .. I am not sure ..

    I really can't think of a scenario where the business would allow that .. although I would definitely code it for a Police database, if they got a record of mine 😀

    ~ demonfox
    ___________________________________________________________________
    Wondering what I would do next , when I am done with this one :ermm:

  • One possible reason is that you can't control the order of locks being taken with cascading deletes and cascading updates. That makes it harder to avoid deadlock scenarios if you are in a high concurrency scenario.

    I didn't know that; I need to research more ..

    Thanks for the information, Hugo!

    ~ demonfox
    ___________________________________________________________________
    Wondering what I would do next , when I am done with this one :ermm:

  • demonfox (3/21/2013)


    Dineshbabu (3/20/2013)


    Lokesh Vij (3/20/2013)


    db4breakfast (3/20/2013)


    I never use cascade delete on a production db. It seems safer to allow users to mark an order as "error" instead.

    +1

    Same here. I feel it is dangerous to use ON DELETE CASCADE in production... as a mistake in deleting a parent record will trigger deletes from all the child records :w00t:

    Why are you allowing a person who is prone to making mistakes to delete in a production environment?????

    We should always try to avoid manual DML operations in a production environment. Everything has to be done through the application. If really required, it can be done within a BEGIN TRAN...

    I am not sure what he meant by that, but sometimes, because of some junk records, a delete may be carried out as a quick solution ..

    And why are there junk records? It can happen .. but maybe it's an update cascade scenario fix .. maybe a delete and reload ...

    Edit :

    To be clear, we don't allow users to delete any record in our application. However there's a function for users to flag a record as error. Instead of cascade delete, the application just flag the record as "error" in the parent table. All data are still there.
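
    The "flag as error" approach might look like this in outline (the Status column and its values are illustrative):

    ```sql
    -- Soft delete: mark the parent row instead of deleting it; child rows stay intact
    UPDATE [dbo].[Order]
    SET [Status] = 'error'
    WHERE [OrderID] = @OrderID;

    -- Queries then simply exclude flagged orders
    SELECT o.[OrderID], od.[OrderDetailID]
    FROM [dbo].[Order] AS o
    INNER JOIN [dbo].[OrderDetail] AS od ON od.[OrderID] = o.[OrderID]
    WHERE o.[Status] <> 'error';
    ```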

    --------------------------------------------------------------------------------------
    Hai Ton
    My Db4Breakfast blog.

  • db4breakfast (3/21/2013)

    To be clear, we don't allow users to delete any record in our application. However there's a function for users to flag a record as error. Instead of cascade delete, the application just flag the record as "error" in the parent table. All data are still there.

    Well, we do allow it .. it depends on the business scenario ..

    In our case, it just marks a record for delete, followed up by a workflow (background) process that deletes the records and their associations ..

    ~ demonfox
    ___________________________________________________________________
    Wondering what I would do next , when I am done with this one :ermm:

  • marlon.seton (3/21/2013)


    Lokesh Vij (3/20/2013)


    db4breakfast (3/20/2013)


    I never use cascade delete on a production db. It seems safer to allow users to mark an order as "error" instead.

    +1

    Same here. I feel it is dangerous to use ON DELETE CASCADE in production... as a mistake in deleting a parent record will trigger deletes from all the child records :w00t:

    We use triggers to delete the child records when a parent record is deleted. Is that generally considered a bad thing to do?

    I don't want to be provocative or offensive here, but I really don't know how to avoid it: writing a trigger to do what "with cascade" would do is pure illogical insanity. If my saying that helps you change your mind, I'll have done you a favour, so I guess being blunt is excusable.

    Tom

  • Hugo Kornelis (3/21/2013)


    Toreador (3/21/2013)


    Using cascade delete. If the business rule says that is the appropriate behaviour, then why avoid using inbuilt functionality that achieves exactly what you need?

    One possible reason is that you can't control the order of locks being taken with cascading deletes and cascading updates. That makes it harder to avoid deadlock scenarios if you are in a high concurrency scenario.

    Other than that, I see no reason not to use this feature *IF* that is what the business wants. However, in my experience the number of scenarios where the business actually wants to delete dependent rows automatically is very small.

    Let's be clear here: under what circumstances is the order indeterminate? If we are talking about a single constraint with cascade, the only order of interest is the order in which the referring rows in the single referring table are addressed - because the exclusive locks on the target table must be taken before what referring rows are affected can be determined. Unless you write row by agonizing row code instead of set-based code, you have no control at all over that order in a trigger. Assuming you don't write such code, the only case of interest must then be when there are multiple referring tables. This is pretty much a minority case, so usually there is no order issue, whether you are in a high concurrency scenario or not.

    So I stick by my statement that it is usually an error to eschew cascade.

    When there are multiple referring tables, either you deal with all the referring tables in at most 3 triggers, or you can't determine the order, because you can't specify more than which tables are hit in the first trigger and which in the last; so your triggers may need to be a bit unwieldy if there are more than 3 referring tables and you want to force the order.

    Of course, if you are in a situation where you need INSTEAD OF triggers (for some reason or other) you don't have the option of using cascade - but INSTEAD OF triggers on tables, as opposed to views, are pretty rare.

    Tom

  • marlon.seton (3/21/2013)


    Hi Hugo,

    Thanks for your reply.

    By 'why bother' I guess you're meaning why bother using triggers instead of cascade delete? Because the 4GL in which we write the application can't handle FK constraints (and please don't ask why we use such a 4GL, that's a whole other bag of hurt).

    I guess if you are in a situation where you can't use foreign keys, which is what you appear to be saying, the question of using cascade doesn't arise at all - even I, an ardent supporter of cascade, wouldn't dream of suggesting you use cascade in that case; unlike many, I refuse to believe in or advocate impossible things (not even before breakfast).

    edit: typing!!

    Tom

  • L' Eomot Inversรฉ (3/21/2013)


    Let's be clear here: under what circumstances is the order indeterminate?

    I never investigated this myself. I have heard this from many people - people I value and trust sufficiently to believe them.

    Is this a problem with a single foreign key? Could be - after all, there are still two tables involved in that case, and you can't control the order in which locks are taken.

    More likely, this is indeed referring to a scenario with either one table referencing two or more other tables, or with a chain of tables. The more tables there are, the higher the chance that the order the optimizer chooses to access the table causes deadlocks with other transactions.

    And for the record, I was indeed referring to the order in which TABLES are accessed, not the order in which ROWS are processed. The common advice for avoiding deadlocks is to always access tables in the same order, and that can only be guaranteed if you write code that touches one table at a time.

    When there are multiple referring tables, either you deal with all the referring tables in at most 3 triggers, or you can't determine the order, because you can't specify more than which tables are hit in the first trigger and which in the last; so your triggers may need to be a bit unwieldy if there are more than 3 referring tables and you want to force the order.

    I never create more than a single trigger per operation on each table. If I have to do a lot of stuff on inserts of table X, then the ON INSERT trigger for table X will be long - but I have full control over the order in which operations are performed.

    (I am in fact so used to writing triggers that way that I often forget that it's possible to have multiple triggers for the same table/operation combination)
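
    A sketch of that single-trigger pattern, with the referring tables always touched in the same fixed order (all table names here are illustrative):

    ```sql
    CREATE TRIGGER [dbo].[trg_Order_Delete]
    ON [dbo].[Order]
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- One trigger per table/operation; child tables are accessed in a
        -- fixed order, which is the usual advice for avoiding deadlocks.
        DELETE s  FROM [dbo].[Shipment]    AS s  INNER JOIN deleted AS d ON s.[OrderID]  = d.[OrderID];
        DELETE p  FROM [dbo].[Payment]     AS p  INNER JOIN deleted AS d ON p.[OrderID]  = d.[OrderID];
        DELETE od FROM [dbo].[OrderDetail] AS od INNER JOIN deleted AS d ON od.[OrderID] = d.[OrderID];
    END;
    ```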


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

Viewing 15 posts - 31 through 45 (of 53 total)
