June 25, 2015 at 5:32 am
We are developing a data warehouse. The original instructions were to include HA in the solution, and a decision was made to use Always On Availability Groups on the SQL Server 2014 platform, so full recovery is needed. But we do not need to restore to a point in time; weekly full and daily differential backups are sufficient. Disk space is an issue, both for the size of TLog growth and for backups of the TLog. To keep the AG and the full recovery model, is there a way I can consistently overwrite the TLog without actually backing it up?
June 25, 2015 at 5:36 am
No.
Availability groups require the full recovery model. In full recovery model, the transaction log is only marked reusable when a log backup runs.
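As a quick check, you can see what is preventing log reuse for a database (databaseName is a placeholder):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'databaseName';
-- LOG_BACKUP here means a log backup is what is needed before the log can be reused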
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 25, 2015 at 5:36 am
Why does a DW need high availability?
Why can't you have a good backup strategy in place plus SQL clustering?
June 25, 2015 at 5:47 am
Thanks for your reply.
This was one option I was considering running after a daily insert into the DW tables
BACKUP LOG [databaseName]
TO DISK = 'nul:' WITH STATS = 10
My understanding is limited, but is this not technically just a backup being run but not saved anywhere?
June 25, 2015 at 6:10 am
Clustering uses shared storage, which we have had problems with in the past, and we are trying to move away from that. HA was a project requirement, and after the costs of licensing and man-hours, I am reluctant to just write it off without first trying everything to make it work.
June 25, 2015 at 8:50 am
aido-1116077 (6/25/2015)
Thanks for your reply. This was one option I was considering running after a daily insert into the DW tables:
BACKUP LOG [databaseName]
TO DISK = 'nul:' WITH STATS = 10
My understanding is limited, but is this not technically just a backup being run but not saved anywhere?
I'd run that frequently, say every 10 minutes, during your load process; otherwise the log may grow to a very large size. Also, there's no need for the STATS option. That adds overhead you don't need.
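Something along these lines, with STATS removed (databaseName is a placeholder):

BACKUP LOG [databaseName]
TO DISK = 'nul:';
-- the backup is discarded, so the log records are gone and point-in-time recovery is lost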
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 25, 2015 at 9:57 am
Thanks for your help folks.
Thanks Grant - I will take the stats out
I tested that command this afternoon on a 65 GB TLog at 95% log usage. It took 5 min 30 sec to run and left the log at 65 GB with 2.97% usage.
Building on Grant's suggestion: the data load, currently a one-step procedure, can be split into two steps, and I can insert steps in between and at the end to run my "false backup".
Or can I create a completely separate Agent job to just run the code every 10 minutes or so?
June 25, 2015 at 11:24 am
aido-1116077 (6/25/2015)
Thanks for your help folks.
Thanks Grant - I will take the stats out
I tested that command this afternoon on a 65 GB TLog at 95% log usage. It took 5 min 30 sec to run and left the log at 65 GB with 2.97% usage.
Building on Grant's suggestion: the data load, currently a one-step procedure, can be split into two steps, and I can insert steps in between and at the end to run my "false backup".
Or can I create a completely separate Agent job to just run the code every 10 minutes or so?
I'd suggest the separate Agent job. I don't know how often your current process performs transactions or commits, but you'd want the log cleaned out as much as you can.
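A rough sketch of such a job using the msdb job procedures (the job name, schedule, and databaseName are placeholders, not a tested setup):

USE msdb;
GO
EXEC sp_add_job @job_name = N'Log backup to NUL';
EXEC sp_add_jobstep @job_name = N'Log backup to NUL',
    @step_name = N'Backup log',
    @subsystem = N'TSQL',
    @command = N'BACKUP LOG [databaseName] TO DISK = ''nul:'';';
EXEC sp_add_schedule @schedule_name = N'Every 10 minutes',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,       -- subday unit: minutes
    @freq_subday_interval = 10;  -- every 10 minutes
EXEC sp_attach_schedule @job_name = N'Log backup to NUL',
    @schedule_name = N'Every 10 minutes';
EXEC sp_add_jobserver @job_name = N'Log backup to NUL';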
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 26, 2015 at 1:39 pm
Why can't you schedule a log backup that overwrites the existing .bak file, with the compression option?
You can do it with a SQL script or with maintenance plans.
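For example, something like this (path and databaseName are placeholders); WITH INIT overwrites the existing file instead of appending to it:

BACKUP LOG [databaseName]
TO DISK = N'D:\Backups\databaseName_log.bak'
WITH INIT, COMPRESSION;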
June 30, 2015 at 5:00 am
Even with compression, it still takes up a certain amount of space.