SQL Replication - Uncooperative Databases

  • I have a database from a vendor which I need to mirror.  As a product, this database has tables which are dropped and recreated, truncated regularly, etc., and that creates many issues that prevent replication from being set up the way I normally would.

    I very much wanted to have a transactional replication of this database as a "read-only" version.  Or at least mirror the data somewhat frequently.

    My problem is that I cannot get a definitive list from the vendor of which tables change regularly.  This is also a product that runs 24 hours a day, with no potential for any regular downtime.

    The interim solution is a manual mirror run by some SQL scripts.

  • No, the better alternative is probably an availability group.

    That will handle dropping and creating tables.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Thank you very much, I had not looked into that before.  Is that available in the SQL Standard edition?  I am reading through the MS documentation now to see about implementation.

    I'm assuming I can prevent any automated fail-over from taking place?

    Thanks again for the quick and good direction provided, much obliged!

     

  • Ahhh, it appears it's available in Standard, but only as a non-readable secondary...so I'd either need to upgrade SQL or look elsewhere.

    Thanks again for the direction, that provided a lot of good ideas

  • The inactive copy of the database is read-only but you can switch which copy is active.

    You can set failover to manual so it only occurs if you action it.

     

  • What is the purpose of mirroring the database?  In a normal mirror - the mirrored database is not accessible.  To get a read-only version of a mirrored database you have to create a database snapshot on the mirror, and that has a lot of potential problems that can cause issues for the production instance.

    It appears you want a copy of the database that can be queried and not affect the production/primary instance.  If so - then your choices are going to be:

    1. Transactional replication
    2. Log Shipping
    3. Backup/Restore

    With either option 2 or 3 - there will be a lag involved where the data is not going to be current.  For log shipping, you would keep the database in standby mode so users can access and query it, then periodically switch out of standby mode, apply transaction logs up to the current point in time, and switch back to standby mode.
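    As a rough illustration of that standby cycle (database name and file paths below are placeholders, not anything from the vendor product), the restore step might look like this:

    -- Log shipping restore in standby mode (names/paths are hypothetical).
    -- Applying a log backup needs exclusive access, so this runs on a schedule
    -- and users are briefly disconnected while it runs.
    RESTORE LOG [VendorDB_Copy]
        FROM DISK = N'\\backupshare\VendorDB\VendorDB_20200115_0600.trn'
        WITH STANDBY = N'D:\SQLData\VendorDB_Copy_undo.dat';
    -- WITH STANDBY leaves the database read-only between restores;
    -- WITH NORECOVERY would keep it inaccessible until the next restore.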

    For backup/restore - you would restore the most recent full backup, apply transaction logs up to the current point in time, and then restore with recovery.  Until the next scheduled restore, the data is only available up to the point in time you restored.
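    A minimal sketch of that sequence (database name, file paths, logical file names, and the STOPAT time are all placeholders):

    -- Restore the latest full backup without recovering it.
    RESTORE DATABASE [VendorDB_Report]
        FROM DISK = N'\\backupshare\VendorDB\VendorDB_full.bak'
        WITH MOVE N'VendorDB'     TO N'D:\SQLData\VendorDB_Report.mdf',
             MOVE N'VendorDB_log' TO N'L:\SQLLogs\VendorDB_Report.ldf',
             NORECOVERY, REPLACE;

    -- Roll the log forward to the chosen point in time, then recover.
    RESTORE LOG [VendorDB_Report]
        FROM DISK = N'\\backupshare\VendorDB\VendorDB_log.trn'
        WITH STOPAT = N'2020-01-15T06:00:00', RECOVERY;
    -- After RECOVERY the copy is static until the next scheduled restore.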

    If you don't need the data to be current - I would recommend the backup/restore option.  This would give you the opportunity to add indexes to the restored copy as needed for reporting and then set the database to read-only.  You could also apply other changes...for example - adding local users to that database that do not exist in production.
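    For example, those post-restore changes could be applied like this before the next refresh cycle (index, table, login, and database names are purely illustrative):

    USE [VendorDB_Report];
    GO
    -- Reporting-only index that does not exist in production.
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
        ON dbo.Orders (OrderDate) INCLUDE (CustomerID, TotalAmount);

    -- Local reporting user that does not exist in the production database
    -- (assumes the login already exists on the instance).
    CREATE USER [ReportReader] FOR LOGIN [ReportReader];
    ALTER ROLE db_datareader ADD MEMBER [ReportReader];
    GO
    -- Lock the copy down so nothing can modify it between restores.
    ALTER DATABASE [VendorDB_Report] SET READ_ONLY WITH ROLLBACK IMMEDIATE;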

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Hi Jeffrey Williams 3188,

    A read-only version that can be accessed without affecting production.

    Transactional replication was my first desire, but due to constraints in the product that's not doable without causing some serious issues as tables are dropped and recreated and such.

    My current solution is a scheduled migration, but it has a bit of a lag.  I can keep specific areas more current than others, but the data isn't fully current, and this isn't fully acceptable as a long-term solution.

    All in all, it looks like you came to the same reasoning I did.  I think the AlwaysOn would work great, but that would need to be Enterprise Edition to get a read-only copy of the database.

    Thanks for y'alls responses, it's much appreciated

  • How large are these tables, and how frequently are they created?

    I'm thinking that you set up transactional replication, and then automate the creation and initialization of a subscription for these new tables.

     

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • The table drops, recreates, and so forth are controlled by the vendor application.  Size varies, though the affected areas are not large, I believe.  Sometimes the same tables are dropped and recreated, while at other times they are truncated or new tables are created on the fly.

    My initial plan was to turn on replication and, when an error popped up, add that table to an exclusion in the replication (to be managed some other way).  Unfortunately that option wasn't desirable to the powers that be.

    The vendor is unable or unwilling to provide a list of table objects I could exclude; their response is just that their product doesn't support replication.  Their structure doesn't follow a lot of good practices, lol...

    I have thought of utilizing temporal tables, if I can offload the actual tables elsewhere, or else shadow-copied tables in a separate database that I could mirror from, with views that display only the current data...that way it would be transactional and more current.
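    On the temporal-table thought, this is only a rough sketch of what it might look like for a single vendor table (table/column names are invented): it requires a primary key on the table, and dropping or truncating a system-versioned table isn't allowed while versioning is on, which may be exactly the constraint that makes it impractical against this vendor app.

    -- Make one (stable) vendor table system-versioned; updates and deletes
    -- accumulate in a separate history table.
    ALTER TABLE dbo.VendorOrders
        ADD ValidFrom datetime2 GENERATED ALWAYS AS ROW START HIDDEN
                CONSTRAINT DF_VendorOrders_From DEFAULT SYSUTCDATETIME(),
            ValidTo   datetime2 GENERATED ALWAYS AS ROW END HIDDEN
                CONSTRAINT DF_VendorOrders_To DEFAULT CONVERT(datetime2, '9999-12-31 23:59:59.9999999'),
            PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

    ALTER TABLE dbo.VendorOrders
        SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.VendorOrders_History));

    -- In the reporting database, a view that exposes only the current rows.
    CREATE VIEW dbo.VendorOrders_Current
    AS
    SELECT OrderID, CustomerID, OrderDate, TotalAmount
    FROM VendorDB.dbo.VendorOrders;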

  • My next question would be - is the requirement for near real-time reporting data actually needed?  In almost every case I have dealt with - the reality was that the requester believed they needed the data to be current but never actually reported on current data.

    I have also found that in most cases - the near real-time reporting requirements could be handled directly on the production system with minimal impact because those reports only deal with the current data.  Those queries generally run in sub-second time and have little to no impact on the application.

    To minimize the little impact those reports do have - you can look at enabling snapshot isolation for those queries.  Turning that feature on is not the same as turning on RCSI.  It only enables the capability to set the snapshot transaction isolation level, and only for those specific reporting queries that require it.
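    For reference, that pattern looks roughly like this (database and table names are placeholders):

    -- One-time, at the database level: allow snapshot isolation.
    -- This is not RCSI; nothing changes for existing queries until a
    -- session explicitly asks for the SNAPSHOT isolation level.
    ALTER DATABASE [VendorDB] SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- In the specific reporting query/session only:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    SELECT OrderID, Status, LastUpdated
    FROM dbo.Orders
    WHERE LastUpdated >= DATEADD(HOUR, -1, SYSUTCDATETIME());
    -- Readers under SNAPSHOT use row versions, so they neither block
    -- nor are blocked by the application's writers.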

    If that is a possibility - it can remove the requirement for a near real-time replica and allow for a backup/restore process to provide for all other reporting requirements.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • @Jeffrey Williams 3188...real-time (or close-to-current) data is needed in a couple of places.  Of course, most users begin by saying yes, they need real-time access, lol.  I believe we could tolerate a 5-minute delay in the areas where the data really needs to be "most current".

    Unfortunately there isn't a "report" running per se.  Data is gathered by multiple means, and users who really shouldn't have direct table access do have it.  My goal is to remove all direct access from those who don't need it and implement an isolated area for those users which will not affect production adversely.

  • Here's a thought: would it be possible to script up something that re-initialises the subscriptions and runs the snapshot job when a DDL event occurs (e.g. DROP TABLE)?  You could use a database trigger to kick off the process..... just throwing out ideas.
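    A very rough sketch of that idea, purely as a starting point (all object names invented).  Rather than re-initialising inside the trigger itself, the trigger just records the DDL event, and a SQL Agent job can poll the queue and do the replication work outside the vendor's transaction:

    -- Queue table the trigger writes to.
    CREATE TABLE dbo.ReplicationDDLQueue
    (
        QueueID    int IDENTITY(1,1) PRIMARY KEY,
        EventType  nvarchar(128) NOT NULL,
        ObjectName nvarchar(128) NOT NULL,
        LoggedAt   datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
        Processed  bit           NOT NULL DEFAULT 0
    );
    GO
    -- Database-level DDL trigger.  Note that TRUNCATE TABLE does not raise
    -- a DDL event, so truncates would need to be detected another way.
    CREATE TRIGGER trg_TrackVendorDDL
    ON DATABASE
    FOR CREATE_TABLE, DROP_TABLE
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @e xml = EVENTDATA();
        INSERT dbo.ReplicationDDLQueue (EventType, ObjectName)
        VALUES (@e.value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(128)'),
                @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)'));
    END;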

    MVDBA

  • MVDBA (Mike Vessey) wrote:

    Here's a thought: would it be possible to script up something that re-initialises the subscriptions and runs the snapshot job when a DDL event occurs (e.g. DROP TABLE)?  You could use a database trigger to kick off the process..... just throwing out ideas.

     

    Kind of what I thought also.  If you go this route, I would suggest creating a separate subscription for the new table, initializing it from a snapshot, and then letting it run.

    I think you may need to check for the existence of the tables, and drop the subscriptions if a table is dropped.

     

    The other thing I thought of was that if there are tables where the schema is the same with each drop/create, you can set up the replication using a view as the source and a table as the destination.  I THINK that will work.
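    If that view-as-source idea is worth testing, I believe the way to express it in transactional replication is publishing an indexed view as a log-based article, which delivers an ordinary table at the subscriber.  A rough sketch only (publication/object names invented; the view has to be schema-bound with a unique clustered index, which may itself conflict with the vendor dropping the base table):

    EXEC sp_addarticle
        @publication       = N'VendorDB_Pub',
        @article           = N'vOrders_Stable',
        @source_owner      = N'dbo',
        @source_object     = N'vOrders_Stable',         -- the indexed view
        @type              = N'indexed view logbased',  -- replicated as data, not just schema
        @destination_owner = N'dbo',
        @destination_table = N'Orders_Stable';          -- plain table at the subscriber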

     

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Michael L John wrote:

    MVDBA (Mike Vessey) wrote:

    Here's a thought: would it be possible to script up something that re-initialises the subscriptions and runs the snapshot job when a DDL event occurs (e.g. DROP TABLE)?  You could use a database trigger to kick off the process..... just throwing out ideas.

    Kind of what I thought also.  If you go this route, I would suggest creating a separate subscription for the new table, initializing it from a snapshot, and then letting it run.

    I think you may need to check for the existence of the tables, and drop the subscriptions if a table is dropped.

    The other thing I thought of was that if there are tables where the schema is the same with each drop/create, you can set up the replication using a view as the source and a table as the destination.  I THINK that will work.

    Are you suggesting a separate publication/subscription for each table?? I'm not sure about that approach.

    You should be able to do this in a single publication with a couple of commands (see the sketch after this list):

    drop article

    add article

    re-initialise subscription

    Create snapshot

    Obviously, if you have multiple table drops/recreates in one batch, then you would want to wait until the end of the process before creating the snapshot (otherwise you might get a snapshot per table).
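    As a sketch of those steps against a hypothetical publication (all names are placeholders, and the exact behaviour depends on publication options such as immediate_sync):

    -- 1. Drop the article for the table the vendor dropped/recreated.
    EXEC sp_dropsubscription @publication = N'VendorDB_Pub',
         @article = N'VolatileTable', @subscriber = N'all';
    EXEC sp_droparticle @publication = N'VendorDB_Pub',
         @article = N'VolatileTable', @force_invalidate_snapshot = 1;

    -- 2. Add the article back once the table exists again.
    EXEC sp_addarticle @publication = N'VendorDB_Pub',
         @article = N'VolatileTable', @source_owner = N'dbo',
         @source_object = N'VolatileTable', @force_invalidate_snapshot = 1;

    -- 3. Pick the new article up in the existing subscription(s)
    --    (for push subscriptions, sp_addsubscription for the new article
    --    may be needed instead).
    EXEC sp_refreshsubscriptions @publication = N'VendorDB_Pub';

    -- 4. Generate the snapshot, ideally once, after the whole batch of
    --    tables has been re-added.
    EXEC sp_startpublication_snapshot @publication = N'VendorDB_Pub';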

     

    MVDBA

  • MVDBA (Mike Vessey) wrote:

    Are you suggesting a separate publication/subscription for each table?? I'm not sure about that approach.

    you should be able to do this in a single publication with a couple of commands

    Yes, I think they almost have to do something like this, as much as it pains me to think of that.   There's no control over when and if a table, or tables, are created.

    If each table is its own subscription, you are only going to deal with one thing in the event of some kind of issue.  Let's say 5 new tables are created, and one subscription is created for all of them.  If there is an issue with one of the tables in that subscription, and it needs to be reinitialized, the 4 other tables are affected as well.

    Another thought I had was SSIS, which is another pain point for me!  It may be easier to trigger execution of an SSIS package that queries the new tables, creates them on the second server, and fills them.
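    The trigger/queue side of that could stay very small: assuming the SSIS package is deployed and wrapped in a SQL Agent job (job name invented here), kicking it off is just:

    -- Start the Agent job that runs the SSIS package; sp_start_job returns
    -- immediately, so the caller is not held up while the package runs.
    EXEC msdb.dbo.sp_start_job @job_name = N'Copy New Vendor Tables';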

     

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

