Application Design Question

  • Hi,

     

    I am currently building an application that must satisfy the following user requirements, among others.  I have described my strategies for satisfying them below, and I would appreciate your feedback or suggestions for a better approach.

     

    Requirements

     

    1.  The application should be usable with or without a network connection.

     

    2.  The database contains data from 13 agencies.  Each agency owns its rows and is allowed to edit them.  Other agencies can query/view the rows they don’t own.

     

    3.  There is a gatekeeper for the centralized database who certifies each row as valid before it becomes part of the database.

     

    My Strategy

     

    1.  Install MSDE on individual desktops.  Before the application is used in an environment without a network connection, a replica of the database is created on the user’s MSDE instance with the out-of-the-box SQL Server 2000 replication tools.  Replication is used again to synchronize the disconnected database with the central database when the disconnected database contains new or edited records (see the replication sketch after this list).

     

    2.  Create a table (USERS) that maps each application user to one of the 13 agencies.  Add an “AGENCY” column to each table.  Write a stored procedure that generates an insert/update/delete trigger on each table of the database.  This trigger identifies the logged-in user and allows or disallows modification/deletion of records based on the user’s agency membership (see the trigger sketch after this list).

     

    3.  Create a table (MODIFICATIONS) that records the table name, row key, and action (add/edit/delete) for each change in the database.  The same generator stored procedure creates an insert/update/delete trigger on each table that populates the MODIFICATIONS table based on the user’s action (also covered in the trigger sketch below).
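
    For point 1, a rough sketch of what the publication side of merge replication might look like, assuming the central database is named CentralDB and one of the shared tables is called PROJECTS (both names are placeholders); the MSDE desktops would then be attached as pull subscribers through the wizard or the corresponding subscription procedures.

        -- Run at the central SQL Server 2000 publisher (hypothetical names throughout).
        -- Enable the database for merge publishing.
        EXEC sp_replicationdboption
             @dbname  = N'CentralDB',
             @optname = N'merge publish',
             @value   = N'true'
        GO

        -- Create a merge publication for the MSDE desktops to subscribe to.
        EXEC sp_addmergepublication
             @publication = N'CentralDB_Merge',
             @retention   = 14,            -- days a subscription may stay unsynchronized
             @sync_mode   = N'native'
        GO

        -- Add each shared table as an article of the publication.
        EXEC sp_addmergearticle
             @publication   = N'CentralDB_Merge',
             @article       = N'PROJECTS',
             @source_owner  = N'dbo',
             @source_object = N'PROJECTS'
        GO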
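
    For points 2 and 3, a minimal sketch of the kind of trigger the generator stored procedure might emit for one table (a hypothetical PROJECTS table keyed on PROJECT_ID, together with the USERS and MODIFICATIONS tables described above; the MODIFIED_BY and MODIFIED_AT columns are added here purely for illustration).  It maps the logged-in user to an agency, rejects changes that touch rows owned by another agency, and records the change in MODIFICATIONS.

        -- Hypothetical generated trigger for one table; all names are placeholders.
        CREATE TRIGGER trg_PROJECTS_audit
        ON dbo.PROJECTS
        FOR INSERT, UPDATE, DELETE
        AS
        BEGIN
            DECLARE @agency INT

            -- Map the logged-in user to an agency via the USERS table.
            SELECT @agency = AGENCY
            FROM   dbo.USERS
            WHERE  USER_NAME = SUSER_SNAME()

            -- Point 2: reject changes to rows owned by another agency, or attempts
            -- to create/reassign rows under another agency.
            IF EXISTS (SELECT * FROM deleted  WHERE AGENCY <> @agency)
               OR EXISTS (SELECT * FROM inserted WHERE AGENCY <> @agency)
            BEGIN
                RAISERROR ('You may only modify rows owned by your own agency.', 16, 1)
                ROLLBACK TRANSACTION
                RETURN
            END

            -- Point 3: record the action in MODIFICATIONS.
            INSERT INTO dbo.MODIFICATIONS (TABLE_NAME, ROW_KEY, [ACTION], MODIFIED_BY, MODIFIED_AT)
            SELECT 'PROJECTS',
                   COALESCE(i.PROJECT_ID, d.PROJECT_ID),
                   CASE
                       WHEN i.PROJECT_ID IS NOT NULL AND d.PROJECT_ID IS NOT NULL THEN 'edit'
                       WHEN i.PROJECT_ID IS NOT NULL THEN 'add'
                       ELSE 'delete'
                   END,
                   SUSER_SNAME(),
                   GETDATE()
            FROM   inserted i
            FULL OUTER JOIN deleted d ON d.PROJECT_ID = i.PROJECT_ID
        END
        GO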

     

    Is this strategy sound?  If not, how would you satisfy the above requirements?  I am not a DBA.  Is it possible to satisfy part or all of the requirements using the audit or transaction logs?

     

    Thank you

  • You've hit the nail on the head with the most fundamental solution in distributed data: data ownership (by site/database).  You're on the right track and you'll be OK.

    (So many people make the mistake of believing you can seamlessly allow multiple users on multiple separate databases to modify the same piece of data.)

    As for the rest... I tend to simplify even further:

    I usually put a last_update DATETIME column on each row, which can then be used for (simple) replication.  That is, select all rows with last_update greater than the last time the sync was done, then update/insert those rows.
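
    For example, a minimal sketch of that pull step, assuming a hypothetical CUSTOMERS table keyed on CUSTOMER_ID, a linked server named CENTRAL pointing at the central database, and a local LAST_SYNC table holding the time of the previous synchronization:

        -- One-way pull of central changes into the local copy (all names hypothetical).
        DECLARE @since DATETIME
        SELECT  @since = LAST_SYNC_AT FROM dbo.LAST_SYNC

        -- Update local rows that changed at the centre since the last sync.
        UPDATE  c
        SET     c.NAME        = r.NAME,
                c.AGENCY      = r.AGENCY,
                c.LAST_UPDATE = r.LAST_UPDATE
        FROM    dbo.CUSTOMERS c
        JOIN    CENTRAL.CentralDB.dbo.CUSTOMERS r ON r.CUSTOMER_ID = c.CUSTOMER_ID
        WHERE   r.LAST_UPDATE > @since

        -- Insert rows that are new at the centre.
        INSERT INTO dbo.CUSTOMERS (CUSTOMER_ID, NAME, AGENCY, LAST_UPDATE)
        SELECT  r.CUSTOMER_ID, r.NAME, r.AGENCY, r.LAST_UPDATE
        FROM    CENTRAL.CentralDB.dbo.CUSTOMERS r
        WHERE   r.LAST_UPDATE > @since
          AND   NOT EXISTS (SELECT * FROM dbo.CUSTOMERS c WHERE c.CUSTOMER_ID = r.CUSTOMER_ID)

        -- Remember when this sync ran.
        UPDATE  dbo.LAST_SYNC SET LAST_SYNC_AT = GETDATE()

    The same shape of query, run in the other direction, pushes local changes back up to the central database.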

    If you're going to allow removal of rows, you then have to either rely on a trigger (not my preference) or on a 'to-be-deleted' flag on the row (which will get replicated by the replication process above).
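
    For instance, a sketch of the flag option, assuming a TO_BE_DELETED column on the same hypothetical CUSTOMERS table:

        -- Mark the row rather than deleting it, so the change replicates like any other
        -- update (hypothetical names; the example key value is arbitrary).
        DECLARE @id INT
        SET @id = 42

        UPDATE dbo.CUSTOMERS
        SET    TO_BE_DELETED = 1,
               LAST_UPDATE   = GETDATE()
        WHERE  CUSTOMER_ID = @id

        -- After a successful sync, each side can purge the flagged rows.
        DELETE FROM dbo.CUSTOMERS WHERE TO_BE_DELETED = 1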

    I find that building your own simple mechanism like this, or like the one you described, is always going to be less troublesome than using the generic replication tools provided by others.

    CAVEAT:

    When you have business processes triggered by certain data modifications (like a status change) that can be overwritten, you need a mechanism like the one you describe, which allows each modification to be applied in the sequence in which it originally happened, if those processes also need to run in the other databases.

    However, 1) it is rare for this kind of business functionality to be required to cross over into databases that own other data, and 2) transitional data that triggers other business processes is usually recorded in a history-type table (like an order_status_history), and if so, that table/process can be used (as sketched below)...
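
    For example, a minimal sketch of that kind of history table (hypothetical names); because every status change is a new row, the original sequence of events survives replication with the last_update approach above:

        -- Hypothetical status-history table; each status change is appended as a new row.
        CREATE TABLE dbo.ORDER_STATUS_HISTORY (
            ORDER_ID     INT          NOT NULL,
            STATUS       VARCHAR (20) NOT NULL,
            CHANGED_BY   VARCHAR (50) NOT NULL,
            LAST_UPDATE  DATETIME     NOT NULL DEFAULT (GETDATE()),
            CONSTRAINT PK_ORDER_STATUS_HISTORY PRIMARY KEY (ORDER_ID, LAST_UPDATE)
        )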
