• I've seen CA XOsoft protect a large number of MS SQL servers in a variety of configurations. I've never seen issues with data consistency as described here.

    Let's first review MS SQL Server and how transactional updates are applied. MS SQL is a 'write-ahead' transactional database: when updates are applied to an MS SQL DB, the changes are first written to the transaction log, then the update is asynchronously applied to the DB file. If an update were being applied to the transaction log and the server went down hard, then when MS SQL is brought back online, SQL executes a recovery in order to bring the DB online. SQL compares the transaction log to the DB file, reading through the checkpoints in the log. Completed transactions recorded after the last checkpoint represent updates which may not yet have been applied to the DB file. During the recovery process, SQL applies those completed transactions to the DB file and sets a checkpoint in the log file (roll forward). If a transaction is incomplete, SQL removes the partial update from the transaction log, 'rolling back' to the last consistent checkpoint.
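
    To make the roll forward / roll back idea concrete, here is a heavily simplified Python sketch of write-ahead-log recovery (the log record format and the apply()/undo() calls are invented for illustration; SQL Server's actual recovery process is far more involved):

        def recover(log_records, db_file):
            # Bring the DB file to a consistent state after a hard crash.
            committed = {r["txn"] for r in log_records if r["type"] == "COMMIT"}

            # Redo phase (roll forward): re-apply every logged change that
            # belongs to a committed transaction. apply() is assumed to be
            # idempotent in this toy model, so re-applying already-flushed
            # changes is harmless.
            for rec in log_records:
                if rec["type"] == "UPDATE" and rec["txn"] in committed:
                    db_file.apply(rec["operation"])

            # Undo phase (roll back): reverse, newest first, any change that
            # belongs to a transaction that never committed.
            for rec in reversed(log_records):
                if rec["type"] == "UPDATE" and rec["txn"] not in committed:
                    db_file.undo(rec["operation"])

            return db_file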

    If your server went down hard with transactional updates partially completed, then those updates would be lost when the server came back online. This is a function of how MS SQL maintains data consistency.

    CA XOsoft synchronizes and replicates both the SQL transaction logs (.LDF files) and the SQL DB files (.MDF & .NDF files), and it preserves write order in its replication process. An update made to SQL is first written to the transaction log. CA XOsoft acts as a file system filter driver which captures changes as they pass through the kernel and are applied to the file system. It creates a replication file which corresponds directly to the update being applied to the file system and sends that replication file to the Disaster Recovery server (Replica) in real time.

    When an update is applied to the SQL transaction log, it is captured, sent to the Replica, and applied there. When SQL then asynchronously updates the DB file, CA XOsoft creates a replication file for that write and sends it to the Replica server as well.
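
    To illustrate what preserving write order means, here is a hypothetical Python sketch (the class and method names are mine, not CA XOsoft's; the real engine is a kernel-mode filter driver, not Python):

        class Master:
            def __init__(self, link):
                self.link = link   # transport to the Replica

            def on_write(self, path, offset, data):
                # Called for every write applied to a protected file, whether
                # it targets the .LDF or the .MDF/.NDF. Each write is shipped
                # in exactly the order it occurred on the Master.
                self.link.send({"path": path, "offset": offset, "data": data})

        class Replica:
            def __init__(self, files):
                self.files = files   # replica copies of the .LDF/.MDF/.NDF files

            def on_receive(self, event):
                # Events are applied strictly in arrival order, so the log and
                # DB files can never get ahead of one another on the Replica.
                f = self.files[event["path"]]
                f.seek(event["offset"])
                f.write(event["data"])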

    If the Production server (Master) goes offline, all the updates that were applied to the file system have already been sent to the Replica server. When a CA XOsoft switchover occurs, CA XOsoft brings the SQL instance on the Replica online. MS SQL executes a recovery and either rolls forward the completed transactions in the logs and sets a checkpoint in the log file, or rolls back the uncompleted transactions to the last consistent checkpoint.

    There is no way to get the transaction logs and DB files "out of sync" because CA XOsoft is replicating the updates made to the DB files in the exact order they are being applied. Claiming that the .ldf and .mdf files are "out of sync", as a previous poster did, implies that transactional updates are being sent and applied out of order. This is absolutely not true.

    What would cause corruption is failing to replicate a particular DB or log file. Mounting the DBs on the Replica while a CA XOsoft scenario is running, or stopping and then restarting replication and skipping synchronization, are about the only ways to corrupt a DB. These are unfortunate events, but all are the result of administrative oversights, not the fault of the product itself.

    Regarding the comments about forward vs. backward performance...

    When you run a CA XOsoft scenario, CA XOsoft needs to synchronize the data. The synchronization process compares the data on the active server with the data on the inactive server in order to identify the differences. Once the data which differs is identified, only the differences are sent from the active server to the inactive server. (I use the terms active and inactive because a switchover may have occurred, and in a backward scenario the Replica is the active server and the Master is the inactive server.)
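
    A rough Python sketch of difference-only synchronization, assuming a block-checksum comparison (the block size and hashing scheme are my own assumptions, not CA XOsoft internals):

        import hashlib

        BLOCK = 64 * 1024  # 64 KB blocks, an arbitrary choice for the example

        def block_hashes(path):
            # Checksum each fixed-size block of a file.
            hashes = []
            with open(path, "rb") as f:
                while chunk := f.read(BLOCK):
                    hashes.append(hashlib.sha1(chunk).hexdigest())
            return hashes

        def synchronize(active_path, inactive_hashes, send_block):
            # Compare the active copy against the inactive side's block hashes
            # and transmit only the blocks that differ.
            with open(active_path, "rb") as f:
                for i, local in enumerate(block_hashes(active_path)):
                    remote = inactive_hashes[i] if i < len(inactive_hashes) else None
                    if local != remote:
                        f.seek(i * BLOCK)
                        send_block(i, f.read(BLOCK))  # only changed data crosses the wire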

    It may take some time to complete the comparison and send the data. If the backward synchronization takes a long time, as reported, AND all of the available bandwidth is consumed during the process, then I would be looking at the amount of available throughput. This, in itself, is a bit of a paradox: if we use less bandwidth it will take longer; if we try to complete synchronization faster, we need to use even more bandwidth. The amount of data that has changed dictates how much data needs to be sent. The amount of data that needs to be sent dictates how much bandwidth is used and, more importantly, how long that bandwidth will be used to complete the process. No product can suspend the laws of physics. A T-1 will only run at 1.544 Mb/s, which means you will never push more than about 16.6 GB of data through that circuit in a 24-hour period (provided you completely saturate the circuit). That is a raw amount of throughput; TCP overhead and network latency will reduce the 'realistic' throughput on a circuit. A properly sized network will accommodate the synchronization process without any issues.
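
    As a back-of-the-envelope check on the T-1 figure (raw arithmetic only; TCP overhead and latency would reduce the real-world number further):

        line_rate_mbps = 1.544    # raw T-1 rate, megabits per second
        changed_data_gb = 16.6    # differing data that has to be sent

        bytes_per_sec = line_rate_mbps * 1_000_000 / 8
        hours = changed_data_gb * 1_000_000_000 / bytes_per_sec / 3600
        print(f"{hours:.1f} hours at full saturation")  # ~23.9 hours, about a full day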

    We should also try to keep in mind that during the reverse synchronization and subsequent replication, the users are still up and running on an active server. This is really the key we should be focusing on here.

    Some of the benefits of CA XOsoft over native SQL replication are:

    Auto-detection of databases and the subsequent auto-configuration of the replication scenario. In SQL replication you would need to manually modify the SQL job to add new or modified databases. CA XOsoft lets you run Auto-Detect Databases, which automatically adds all new or modified DBs to the replication scenario for you.

    Assured Recovery is a tool included in the product which allows you to verify database consistency on the Replica server.

    Data Rewind allows you to rewind the DB to a point in time in the past, such as prior to a corruption event. This significantly reduces data loss and improves the recovery point objective as well as the recovery time objective as compared to traditional backups/restores.

    A centralized management utility allows you to manage DR replication and high availability for a variety of servers, including MS SQL, MS Exchange, File Server, Oracle, IIS, and BES, as well as a variety of operating systems: Windows, Solaris, Red Hat Linux, AIX. This unifies your DR solution, as opposed to maintaining a skill set in a variety of replication solutions that vary by application and operating system.