August 14, 2014 at 4:20 pm
I have several SQL Server 2012 availability groups running on a cluster. One database is bulk loaded every 30 minutes and is about 1 GB in size. To be in an availability group it has to use the full recovery model, but simple or even bulk-logged would obviously be better. Is there a better way to handle transaction log size other than running a log backup after each bulk load, which adds extra overhead? With mirrors you could use simple, but since those are going away...
Thanks.
August 14, 2014 at 7:49 pm
Thor Bev (8/14/2014)
I have several SQL Server 2012 availability groups running on a cluster. One database is bulk loaded every 30 minutes and is about 1 GB in size. To be in an availability group it has to use the full recovery model, but simple or even bulk-logged would obviously be better. Is there a better way to handle transaction log size other than running a log backup after each bulk load, which adds extra overhead? With mirrors you could use simple, but since those are going away... Thanks.
1) Who cares if mirroring is deprecated? IIRC, deprecated features only go away in the THIRD version after the deprecation announcement (and some deprecated features stick around much longer than that). But even then it doesn't matter. I have clients still running SQL 2000 because it does what they need just fine. You can run mirroring for the next 15 years too if you want/need to.
2) Just make transaction log management part of your load process (see the sketch below).
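A minimal sketch of what that could look like, assuming the advice here means taking a transaction log backup as the final step of the 30-minute load job. The database name (BulkLoadDB) and the backup path are placeholders, not from the thread:

-- Hypothetical final step of the 30-minute load job: back up the transaction
-- log so it can be truncated, since the database must stay in the full
-- recovery model while it belongs to an availability group.
-- BulkLoadDB and the UNC path are placeholders; substitute your own.
DECLARE @file nvarchar(260) =
    N'\\backupshare\BulkLoadDB\BulkLoadDB_log_'
    + REPLACE(CONVERT(nvarchar(19), SYSDATETIME(), 126), N':', N'') + N'.trn';

BACKUP LOG BulkLoadDB
    TO DISK = @file
    WITH COMPRESSION;

In an availability group the log backup can be taken on the primary or on a secondary replica, depending on the AG's backup preference; either way, regular log backups are what keep the log from growing, since the recovery model can't be switched.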
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
August 15, 2014 at 10:05 am
As I understand it, transaction log management as part of the load process would mean switching to the simple or bulk-logged recovery model, which cannot be done in an AG.