Diff backups every 2 hours.

  • I have a DB that I'm in the process of inheriting, and it already has a backup schedule: full every week, diff every 2 hours, and transaction log every 15 minutes. Full backup and Diff backup tend to take all the resources available on the system and can slow down performance. Some users have complained about brief, random slowness in the system, so I was thinking about changing the diff backup schedule from every 2 hours to every 12 or 24 hours. I am also aware that in case of disaster this can be a problem, because if the diff backups run every 12 hours I will have to apply up to 48 tlog backups, which can be time consuming. I am not sure how busy the system gets while these diff backups are running.

    So instead of writing a whole book about my concerns, all I need is some advice from the experts. Which scenarios should I keep in mind before changing the diff backup from every 2 hours to every 24 hours? The MAX memory setting has been set to the default, which is 217%. Will it help if I change the max memory setting to, let's say, 80% of whatever the total server memory is? Will it help with performance if the diff backup keeps running every 2 hours? Please advise.
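
    To make the worst case concrete, this is roughly the restore chain I'd be looking at if the diffs ran every 12 hours; the database name and file paths are made up for illustration:

    -- Restore the last full, the last diff, then every log backup taken since the diff
    RESTORE DATABASE SalesDB
        FROM DISK = N'X:\Backups\SalesDB_FULL.bak'
        WITH NORECOVERY;

    RESTORE DATABASE SalesDB
        FROM DISK = N'X:\Backups\SalesDB_DIFF.bak'
        WITH NORECOVERY;

    -- ...worst case, up to 48 log backups (12 hours x 4 per hour), in order:
    RESTORE LOG SalesDB
        FROM DISK = N'X:\Backups\SalesDB_LOG_001.trn'
        WITH NORECOVERY;
    -- ...repeat for each remaining log backup, then bring the database online:

    RESTORE DATABASE SalesDB WITH RECOVERY;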

  • Monitoring and observing the system will certainly tell you more than I can remotely. The one thing I'll say is that with a differential every two hours, the differentials at the end of the week are going to be big (depending on the system) and certainly slower, both for the backup process and for recovery. We were doing weekly fulls and daily differentials (and yeah, the end-of-week ones were large), with logs every 15 minutes. Worst case scenario: a day's worth of logs. However, you very seldom hit the worst case. You do need to take that into account when looking at your Recovery Time Objective, but it's probably OK (again, depending on the system).

    I would recommend that you get wait statistics on the server to understand where it's running slow. Especially get them before and after the backup process runs (or even during, using Extended Events). That will tell you more about why things are running slow. It might not be the backups at all.
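
    A minimal sketch of the kind of wait-stats snapshot I mean; take one before the diff backup window and one after, and compare the deltas (the exclusion list here is only a starting point, not exhaustive):

    SELECT TOP (10)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'CHECKPOINT_QUEUE',
                            N'SQLTRACE_BUFFER_FLUSH', N'XE_TIMER_EVENT', N'BROKER_TASK_STOP')
    ORDER BY wait_time_ms DESC;   -- highest cumulative waits since the counters were last cleared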

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey - Friday, January 26, 2018 9:40 AM

    I would recommend that you get wait statistics on the server to understand where it's running slow. Especially get them before and after the backup process runs (or even during, using Extended Events). That will tell you more about why things are running slow. It might not be the backups at all.

    Which makes sense.

  • Full backup and Diff backup tend to take all the resources available on the system and can slow down performance. 

    That is unacceptable in my book and should be corrected ASAP. Until you get it corrected there may be things you can do to reduce the impact of backups.

    I have never heard of 2-hour DIFF backups, and without further details on your needs I will say that it is probably rather silly. If you are changing a lot of data, your DIFFs will get big, as Grant mentioned (unless you are changing the SAME DATA over and over). But there are likely scenarios where applying a big DIFF is still significantly more efficient/faster than replaying logs.
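
    If you want to see how big those DIFFs actually get across the week, a rough check against the backup history is below; msdb retention has to cover the period you care about, and the database name is just a placeholder:

    SELECT  bs.database_name,
            bs.backup_start_date,
            bs.backup_finish_date,
            bs.backup_size / 1048576.0 AS backup_size_mb
    FROM msdb.dbo.backupset AS bs
    WHERE bs.database_name = N'YourDatabase'   -- placeholder name
      AND bs.type = 'I'                        -- I = differential, D = full, L = log
    ORDER BY bs.backup_start_date;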

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Diffs more often mean fewer logs to restore. However, the diff does grow, as Grant mentioned, and it's rare that you'll actually restore. I would tend to say that you're trading some slowness now for a potentially higher RTO later. I'd lean towards slowly backing off the diffs to maybe every 4 or 8 hours and seeing if that addresses your issue.
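
    If the diffs run from an Agent job, backing them off is just a schedule change. A hypothetical example, assuming a schedule named 'Diff - every 2 hours' (the name is an assumption; check msdb.dbo.sysschedules for yours):

    EXEC msdb.dbo.sp_update_schedule
         @name                 = N'Diff - every 2 hours',  -- assumed schedule name
         @freq_type            = 4,   -- daily
         @freq_interval        = 1,   -- every day
         @freq_subday_type     = 8,   -- subday unit: hours
         @freq_subday_interval = 4;   -- every 4 hours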

    Is the memory setting a typo? Set to 217% of RAM? Why bother then?

  • NewBornDBA2017 - Friday, January 26, 2018 9:10 AM

    ...The MAX memory setting has been set to the default, which is 217%. Will it help if I change the max memory setting to, let's say, 80% of whatever the total server memory is?

    Yikes! You shouldn't have the maximum server memory set to more than you have physical RAM. The OS needs some memory for itself, and once all physical memory is used then you're hitting the page file. That adds an extra disk write and disk read for SQL Server to do an operation that it thinks is for data it already has in memory. I'd suggest you do some research on best practices for this.
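
    For reference, the change itself is a one-liner; the 80% figure would need to be worked out in MB for your box (the value below assumes 64 GB of RAM and is only an example):

    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max server memory (MB)', 51200;   -- example: ~50 GB on a 64 GB server
    RECONFIGURE;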

  • Chris Harshman - Tuesday, January 30, 2018 10:30 AM

    once all physical memory is used then you're hitting the page file.

    Once all physical memory is depleted, you'll get a BSOD.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Perry Whittle - Thursday, February 1, 2018 4:31 AM

    Chris Harshman - Tuesday, January 30, 2018 10:30 AM

    once all physical memory is used then you're hitting the page file.

    Once all physical memory is depleted, you'll get a BSOD.

    That doesn't sound right...

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • If the MAX memory was not set, have you also checked other basic settings, like the number of tempdb files being set to the number of CPU cores, and the placement of those files as per best practice? It could make a huge amount of difference. I don't know what the application is, or whether you are able to get indexes added, but this can also make all the difference to basic performance. I usually run this script and try to address a few indexes at a time until the advantage figure is below 5000; I would then point the missing indexes out to the developers or the vendor and, once they have agreed to them, apply them: http://www.sqlservercentral.com/scripts/DMV/71995/
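
    A quick way to check the tempdb data file count and placement, for example:

    SELECT  name,
            physical_name,
            size * 8 / 1024 AS size_mb      -- size is stored in 8 KB pages
    FROM tempdb.sys.database_files
    WHERE type_desc = N'ROWS';              -- data files only, excludes the log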

    On the backups: if it's not a 24/7 system, could a full backup not be run out of hours every day, with a diff every half day, or even just a full once a day and then transaction logs every 15 minutes for the rest? I suppose it also depends on what backup retention you need and the amount of storage available.
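
    In backup terms, that suggested schedule is just the three standard commands run at different frequencies; the database name and paths below are only placeholders:

    BACKUP DATABASE YourDatabase TO DISK = N'X:\Backups\YourDatabase_FULL.bak' WITH INIT;          -- nightly, out of hours
    BACKUP DATABASE YourDatabase TO DISK = N'X:\Backups\YourDatabase_DIFF.bak' WITH DIFFERENTIAL;  -- optional midday diff
    BACKUP LOG YourDatabase TO DISK = N'X:\Backups\YourDatabase_LOG.trn';                          -- every 15 minutes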

  • Chris Harshman - Tuesday, January 30, 2018 10:30 AM

    NewBornDBA2017 - Friday, January 26, 2018 9:10 AM

    ...The MAX memory setting has been set to the default, which is 217%. Will it help if I change the max memory setting to, let's say, 80% of whatever the total server memory is?

    Yikes! You shouldn't have the maximum server memory set to more than you have physical RAM. The OS needs some memory for itself, and once all physical memory is used then you're hitting the page file. That adds an extra disk write and disk read for SQL Server to do an operation that it thinks is for data it already has in memory. I'd suggest you do some research on best practices for this.

    The server has 64 GB of physical memory. I had a baseline set up and monitored performance for a couple of days, and memory consumption has been around 50%, so I ended up leaving 10 GB for the OS and giving roughly 90% of the 64 GB to SQL Server. So far no issues.

  • OK, maybe I oversimplified; technically Windows does start using the page file before all physical memory is used up. The main point I was trying to make is that SQL Server manages its own memory, and DBAs try to set max server memory to a point below the amount of physical memory to minimize the number of hits the OS makes against the page file (a quick way to check the current numbers is sketched below the links). Here are some articles on the subject:

    https://technet.microsoft.com/en-us/library/ms180797(v=sql.105).aspx
    https://www.brentozar.com/blitz/max-memory/
    https://www.sqlskills.com/blogs/jonathan/how-much-memory-does-my-sql-server-actually-need/
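
    As mentioned above, a quick DMV check of how much physical memory the box has and what the SQL Server process is actually using:

    SELECT  total_physical_memory_kb / 1024     AS total_physical_mb,
            available_physical_memory_kb / 1024 AS available_physical_mb,
            system_memory_state_desc
    FROM sys.dm_os_sys_memory;

    SELECT  physical_memory_in_use_kb / 1024 AS sql_physical_in_use_mb,
            memory_utilization_percentage
    FROM sys.dm_os_process_memory;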
