IMO, the best way to think about this is with the fiefdom analogy (from Pat Helland's fiefdoms-and-emissaries model). Think of a fiefdom as a self-contained system: messages/data can be sent into it, and it can send messages/data out. The systems that talk directly to a fiefdom use emissaries to review outgoing messages/data, which improves the odds that they will be accepted. However, when messages/data arrive at the fiefdom, it re-checks them regardless of who submitted them.

Applying this analogy to a typical n-layer system (as opposed to n-tier), you have three layers: presentation, middle layer, and database. Presentation talks only to the middle layer, and the middle layer talks to the database. When data is sent from the presentation layer, it is checked before being submitted to the middle layer, which checks it again. Why? Multiple applications can submit messages/data to the middle layer, and the middle layer cannot assume that those applications have done the proper checks. When the data is submitted to the database, it is checked yet again, for the same reason: the database cannot guarantee that the system sending it data has properly checked it.
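
To make the re-checking concrete, here is a minimal Python sketch of the pattern. The names (validate_email, presentation_submit, service_create_user) are hypothetical, invented purely for illustration: the presentation layer plays the emissary and pre-checks input, while the middle layer re-validates everything on arrival because it cannot know who the caller is.

```python
# Sketch of the emissary/fiefdom pattern: the presentation layer
# pre-checks input to improve the odds of acceptance, but the middle
# (service) layer re-validates regardless of who submitted the data.

def validate_email(email: str) -> bool:
    """Shared validation rule: a crude shape check, not RFC-complete."""
    return "@" in email and "." in email.split("@")[-1]

# --- Presentation layer (the "emissary") ---
def presentation_submit(email: str) -> None:
    if not validate_email(email):
        raise ValueError("Please enter a valid email address.")  # fail fast in the UI
    service_create_user(email)

# --- Middle layer (the "fiefdom") ---
def service_create_user(email: str) -> None:
    # Re-check even though the presentation layer already validated:
    # other applications may call this layer without doing any checks.
    if not validate_email(email):
        raise ValueError("Rejected: invalid email reached the service layer.")
    print(f"User {email} accepted by the service layer.")

presentation_submit("alice@example.com")  # passes both checks
```

The point is not the duplicated if statement; it is that the middle layer's check cannot be delegated to its callers.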

If you can prevent any and all access to the database except through your service layer, then you can indeed put all of your logic, including data integrity rules, into your service layer. In my experience, however, that kind of guarantee is a fool's hope. Companies get bought, new systems get built, additional teams write more applications, and report generators hit the database directly; these are just some of the scenarios where that locked-tight control over database access is lost. A database should be designed with the assumption that some nutty app developer will eventually try to write crap into it directly, bypassing any middle-layer code.
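
As a sketch of that defensive stance, here is how integrity rules can live in the schema itself, using Python's standard-library sqlite3 module (the users table and its rules are hypothetical). Declarative NOT NULL and CHECK constraints are enforced on every write, including ones that bypass the middle layer entirely.

```python
# Integrity rules pushed into the database: the constraints hold even
# if some application skips the service layer and writes directly.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL CHECK (email LIKE '%_@_%._%'),
        age   INTEGER NOT NULL CHECK (age BETWEEN 0 AND 150)
    )
""")

# A well-behaved middle layer inserts valid data: accepted.
conn.execute("INSERT INTO users (email, age) VALUES (?, ?)",
             ("alice@example.com", 30))

# A rogue writer bypassing the service layer: rejected by the database.
try:
    conn.execute("INSERT INTO users (email, age) VALUES (?, ?)",
                 ("not-an-email", -5))
except sqlite3.IntegrityError as e:
    print("Database rejected bad row:", e)
```

Running it prints the IntegrityError for the rogue insert: the database, like the fiefdom, re-checks regardless of who submits.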