Latency issues in Transactional Replication

  • rogelio.vidaurri - Monday, January 30, 2017 10:09 AM

    TheSQLGuru - Monday, January 30, 2017 9:06 AM

    rogelio.vidaurri - Monday, January 30, 2017 7:14 AM

    Hi again,

    Something strange though, just with this new database's replication:

    instance A (active):
    database          replicated transactions   replication rate (trans/sec)   replication latency (sec)
    mydatabase        9                         125                            0.016
    myNEWdatabase     0                         0.1059547                      10.073

    instance B (passive):
    database          replicated transactions   replication rate (trans/sec)   replication latency (sec)
    myactivedatabase  0                         62.5                           0.073
    myNEWdatabase     0                         0.2172968                      5.223

    Even when there is no activity on either end, the replication latency (sec) value is high.
    Any ideas?

    thanks

    You put the tlog for the new database on a slow USB key drive??

    no, we didn't
    mdf and ldf files are on the same drive (same folders) where myactivedatabase's files are

    Is it possible to place the .mdf and .ldf files on separate drives to reduce/eliminate the disk head contention? Is the distributor on this machine as well? If so, where is the distribution database?

    Steve Jimmo
    Sr DBA
    “If we ever forget that we are One Nation Under God, then we will be a Nation gone under." - Ronald Reagan

  • sjimmo - Monday, January 30, 2017 11:00 AM

    rogelio.vidaurri - Monday, January 30, 2017 10:09 AM

    Is it possible to place the .mdf and .ldf files on separate drives to reduce/eliminate the dis head contention? Is the distributor on this machine as well? if so, where is the distribution database?

    we can't, they are all on the same drive (RAID 10)

    how big should a distribution db be?
    ours is 44.2GB (mdf), 1.69GB (ldf)
    simple recovery mode, a full backup every night

    our active db is 122GB (mdf), 28GB (ldf)
    full recovery mode, a full backup every night, a log backup every hour

    our new db is 20MB (mdf), 120MB (ldf)
    full recovery mode, a full backup every night, a log backup every hour

    When performance was critical, the max latency value was 90 seconds, so we increased the latency warning threshold (for the new db's replication) from 30 to 120 seconds and the critical performance icon stopped showing up.

    We are still testing and see no errors; data is replicated correctly, but sp_replcounters still shows high values. Right now it says 8 seconds under "replication latency" with no activity.
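    One way to cross-check what sp_replcounters reports is to post a tracer token and read back the measured latencies per leg. A sketch only, for the transactional replication described in this thread; N'MyPublication' is a placeholder for the actual publication name:

    ```sql
    -- Run in the published database on the publisher.
    DECLARE @token_id int;
    EXEC sys.sp_posttracertoken
        @publication = N'MyPublication',          -- placeholder name
        @tracer_token_id = @token_id OUTPUT;

    -- After giving the token time to flow through, inspect the measured
    -- publisher->distributor and distributor->subscriber latencies:
    EXEC sys.sp_helptracertokenhistory
        @publication = N'MyPublication',
        @tracer_id = @token_id;
    ```

    Unlike the sp_replcounters figures, tracer tokens measure an actual round trip, which can help separate real latency from stale counter values.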

  • rogelio.vidaurri - Monday, January 30, 2017 11:18 AM

    sjimmo - Monday, January 30, 2017 11:00 AM

    Steve Jimmo
    Sr DBA
    “If we ever forget that we are One Nation Under God, then we will be a Nation gone under." - Ronald Reagan

  • sjimmo - Monday, January 30, 2017 11:38 AM

    rogelio.vidaurri - Monday, January 30, 2017 11:18 AM

    Even with nothing to replicate, replication is doing things behind the scenes: there are various jobs scheduled to run every 10 minutes, plus connection checks. But now that I understand your system better, I believe the latency problems are magnified by your configuration.

    Having all of your databases and t-logs on the same disk will cause read/write contention and will show up as high disk IO. It will also show up as paging.

    The size of the distribution database varies and depends upon many factors such as the number of packages, subscribers, commands and how long history is kept. Monitor your free space and you will eventually figure out the size needed.

    In SQL Agent there are some jobs that run every 10 minutes for replication. One is called "Agent history clean up: distribution". I suggest you change its schedule so it runs for a few hours during a slow/off period. This job cleans up history in your distribution database, and it adds to your load considerably: every time it runs it does DBCCs against several tables, rebuilding indexes/primary keys. That is great for keeping statistics up to date, but it also contributes to blocking, which causes other problems. (We have actually had to modify the stored procedures it uses to reduce blocking.) I happen to run this job between 9 PM and 2 AM; it has been running that way for a few years without issues. This will take a great load off your disks during production hours.

    This change will not affect your replication but you may see some improvement.
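    If a nightly window is chosen, the reschedule itself can be done from msdb. A sketch only; the schedule name below is a placeholder, so list the job's real schedules first:

    ```sql
    USE msdb;
    GO
    -- List the existing schedules attached to the cleanup job:
    EXEC dbo.sp_help_jobschedule
        @job_name = N'Agent history clean up: distribution';

    -- Then move the (placeholder-named) schedule from every 10 minutes
    -- to daily at 9:00 PM:
    EXEC dbo.sp_update_schedule
        @name = N'Replication agent schedule.',   -- placeholder; use the real schedule name
        @freq_type = 4,                           -- daily
        @freq_interval = 1,
        @active_start_time = 210000;              -- 21:00:00
    ```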

    Steve Jimmo
    Sr DBA
    “If we ever forget that we are One Nation Under God, then we will be a Nation gone under." - Ronald Reagan

  • sjimmo - Monday, January 30, 2017 11:59 AM

    sjimmo - Monday, January 30, 2017 11:38 AM

    You are right. I've checked the job and it does this every 10 minutes:
    "EXEC dbo.sp_MShistory_cleanup @history_retention = 48"

    Unfortunately, we are talking about a 24/7 system; we don't have maintenance windows.
    I'm afraid that if we schedule that job to run for hours at night, it will affect our system's performance during those hours, correct?

    Do you think this job is causing our latency issues? or such high "replication latency" value?

    More details:
    free space on that disk is 315 GB.
    db server's average CPU usage is less than 20%.
    db server's disk IO is less than 25%, almost always under 10%.

    thank you

  • A) IIRC both secondary databases have their files on the same disk system, so latencies between files "should" be identical and thus not a factor here. Something to verify though.

    B) What about the tlog and data file growths on each secondary database and the percentage full for all? Perhaps full and getting tlog growths?

    C) Do you happen to have Always On set up for the poorly performing database with a readable secondary configured? That causes 14-byte version-store pointer space to be added to all rows being modified so the secondary will work properly for read workloads. And THAT causes extra tlog activity that has to be replayed AND possibly lots of page splits for databases with the default 0 fill factor on all their indexes.

    D) Speaking of fill factor, could page splits be a difference between the good and bad databases??
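    One hedged way to check whether page splits differ between the good and bad databases is to compare index fragmentation. A sketch, using the database name from this thread:

    ```sql
    USE myNEWdatabase;  -- database name as used in this thread
    GO
    -- Average fragmentation per index; high fragmentation on a low-write
    -- database can hint at page splits from the default 0 fill factor.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```

    Running the same query in both databases and comparing the results would show whether fragmentation (and by implication page splitting) is a real difference between them.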

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • rogelio.vidaurri - Monday, January 30, 2017 12:15 PM

    sjimmo - Monday, January 30, 2017 11:59 AM

    Your latency is caused by multiple factors, as I stated previously, none of which it appears you are able to control. Thus you need to find a way to reduce it.

    I understand that you are a 24x7 shop, but when do you do your dumps? Are there periods when the system is slower than others, even an hour here and an hour there? You could create multiple schedules on that cleanup job to run during those periods. This job isn't the cause of your latency, just one of the contributors.

    Steve Jimmo
    Sr DBA
    “If we ever forget that we are One Nation Under God, then we will be a Nation gone under." - Ronald Reagan

  • TheSQLGuru - Monday, January 30, 2017 12:31 PM

    B) all set to 10% autogrowth and unlimited
    C) we don't use Always On, our failovers are manual, just traditional sql transactional replication
    D) SELECT * FROM sys.configurations WHERE name ='fill factor (%)'  returns:
    configuration_id    name    value    minimum    maximum    value_in_use    description    is_dynamic    is_advanced
    109    fill factor (%)    0    0    100    0    Default fill factor percentage    0    1

    - The cleanup job runs every 10 minutes and finishes in a second or less; we'll try scheduling it to run every hour instead.
    - We'll switch the load from the active db to the new db for that table only and see how it works.
    - We are also migrating data from the active db to a data warehouse db, to remove such unnecessary data from the active db and see if it helps.

    thank you

  • rogelio.vidaurri - Monday, January 30, 2017 1:24 PM

    TheSQLGuru - Monday, January 30, 2017 12:31 PM

    Sounds like a plan to try.

    Good luck.

    Steve Jimmo
    Sr DBA
    “If we ever forget that we are One Nation Under God, then we will be a Nation gone under." - Ronald Reagan

  • Hello,

    "we'll switch the load from the activedb to the newdb on that table only and see how it works"
    We did this yesterday and it worked. Very odd as it's the same load but to a different db/table/replication.

    We'll start working on the other two taks and see how everything goes.
    Once we move all reporting data from the operating database to a new one, do you think we should shrink the MDF file of the replicated operating database? as we would have removed tens of millions of rows.

    Thank you

  • rogelio.vidaurri - Thursday, February 2, 2017 6:37 AM

    Personally I am not a fan of shrinking database files unless the amount of space recovered is significant and you know the files will not grow again. The reason: assuming the database files currently sit in one contiguous area of the drive, each time they grow the new allocations land wherever free space is found, so the files end up non-contiguous and spread out over the drive. There is actually a science behind this, and worse yet, there is a performance hit as the drive heads have to seek across the drive to gather the data.

    I have seen databases speed up after taking the physical files off and copying them back onto the disk so that they are contiguous.

    After deleting the data, though, you may want to defrag the indexes on those tables to reclaim/clean up the space that was used inside the database container. Again there is a science involved, but essentially, if you just delete data the records are still there and the space is merely marked for reuse. Like the physical files, this space is not contiguous: a record being inserted will go into whatever physical space it fits, so records on a page end up all over the place. A reindex will clean up the space and rearrange things to make them more optimal.
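    The reindex step described above might look like the following. A sketch only; dbo.MyReportingTable is a hypothetical table name:

    ```sql
    -- REORGANIZE is lightweight and always online; REBUILD reclaims space
    -- more aggressively after mass deletes (ONLINE = ON needs Enterprise Edition).
    ALTER INDEX ALL ON dbo.MyReportingTable REORGANIZE;
    -- or, for heavier cleanup:
    ALTER INDEX ALL ON dbo.MyReportingTable REBUILD;
    ```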

    I am giving you a couple of links:

    https://techexplainer.wordpress.com/2012/03/28/benefits-of-defragging-of-hard-drives-3/
    https://www.mssqltips.com/sqlservertip/2348/clean-unused-space-when-a-sql-server-table-with-a-variable-length-column-is-dropped/
    Good luck

    Steve Jimmo
    Sr DBA
    “If we ever forget that we are One Nation Under God, then we will be a Nation gone under." - Ronald Reagan

  • sjimmo - Thursday, February 2, 2017 7:46 AM

    "After deleting the data though you may want to defrag the indexes on those tables in order to reclaim/cleanup the space that was used inside the database container."
    Sorry, I think I said it wrong. We'll delete the tables (they'll be removed from the subscription first), by doing that we believe the indexes are deleted as well, aren't they?
    We expect to not see a decrease in the MDF file but expect to see a decrease in the BAK file tonight. 

    I'll take a look at your links.

    We really appreciate it your help, thanks.

  • Hi guys,

    Replication continues to work correctly on the new db.
    I guess we'll never know what happened; maybe we reached some limit and that's why it started to have latency issues?

    At least it helped us notice that we should split our database and separate reports from active tables.
    By the way, if we want to see how big each table is, in megabytes, can we rely on the info provided by sp_spaceused?
    I mean the reserved, data, index_size and unused columns.
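    A per-table size check with sp_spaceused might look like this; dbo.MyTable is a placeholder for one of the reporting tables:

    ```sql
    -- Returns name, rows, reserved, data, index_size and unused
    -- (the size columns are reported in KB).
    EXEC sp_spaceused @objname = N'dbo.MyTable';
    ```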

    One last question: do you have experience/comments/suggestions on this same type of transactional replication between a web hosting provider and an Azure VM?
    We've been doing some testing the last couple of days and it seems to work correctly.

    thanks for your help
