February 11, 2016 at 8:54 am
Hi,
We have a database that supports our CRM, used by several customers.
The database has some tables with historical data (changes made to the primary tables) and the primary data tables themselves (very important tables..).
The server has both SSD and SATA disks.
The recovery model should be FULL, since the primary data is very, very important..
Since the historical data table is very large and grows a lot every day, what's the best way to lay out the database? Should I create a FILEGROUP and another database file to store the history, with its own backup plan (once a day is all that's necessary), and on the PRIMARY filegroup take a full backup every 12h, a diff every hour and a log backup every 5min?
Thanks,
Pedro
February 11, 2016 at 9:11 am
This really depends on your application and control of the database. If you can move historical data to another database, that's what I'd recommend there. Then you can have separate backup plans for each. This also allows you to move this to another server later if you really need to. However, your application needs a way to query other databases/servers. Linked servers are problematic, and ideally you'd start building this in now, even using two connection strings if you have the data in the same database.
You can do the filegroup, but you certainly will need that filegroup backup in the event of a restore, so periodically you'd need to take another one and restore it as a test. Also, practice restoring filegroups separately just to keep skills sharp.
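For reference, a filegroup-level backup along those lines might look like this (the database name, filegroup names, and paths are placeholders, not from the thread):

```sql
-- Frequent backup of just the PRIMARY filegroup (hypothetical names/paths)
BACKUP DATABASE CRM
    FILEGROUP = 'PRIMARY'
    TO DISK = 'D:\Backups\CRM_primary.bak'
    WITH CHECKSUM, INIT;

-- Less frequent (e.g. daily) backup of the filegroup holding historical data
BACKUP DATABASE CRM
    FILEGROUP = 'HISTORY'
    TO DISK = 'E:\Backups\CRM_history.bak'
    WITH CHECKSUM, INIT;
```

As noted above, restoring filegroup backups is its own skill, so the practice restores matter as much as the backups.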
February 11, 2016 at 9:27 am
Thanks for the reply...
Initially I thought of that too... Create another database with its files on the SATA disks and a synonym in the CRM database for the tables in the history database, so the app won't even know the difference... But I assumed it was a bad idea... Apparently it isn't 🙂 Thanks... It can be in the SIMPLE recovery model, so it won't have any impact on the main database...
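A synonym of the kind described can be sketched like this (table and database names are made up for illustration):

```sql
-- History table lives in its own database; the synonym keeps the app unchanged
CREATE SYNONYM dbo.ChangeHistory
    FOR CRM_History.dbo.ChangeHistory;
```

The application keeps querying `dbo.ChangeHistory` in the CRM database and never sees that the data moved.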
If a full backup takes less than a minute, do you think it's a good idea (or better than a full every 12h, diff every hour and log every 5min) to take full backups every hour, diffs every 15min and log backups every 5min?
Thanks,
Pedro
February 11, 2016 at 11:13 am
More fulls can be better, but I like simple over complex. I would think about the issues with administratively managing full/diff/logs and performing restores.
That being said, I might do fulls every hour and just restore them somewhere. That way I have a copy elsewhere if things go south.
One note: be careful about dropping logs too quickly. I might still keep a couple of days of logs, even if I have hourly fulls. If this process were to fail and I stopped getting fulls, I'd want to be able to get back up on Monday AM if the process had failed on Friday.
February 11, 2016 at 11:30 am
Something else to think about is growth. Databases grow over time, and CRM systems are certainly not known for being small. Take the total size of all your backups for a day and think about how many days' worth of backups you want to keep. Multiply those numbers together and that's how much space you're going to need on your backup volume to support your backup plan. Example:
Space per Day = 1TB of full + 100 GB of logs
Total Required = 1.1 TB * 5 days
Total Required = 5.5 TB
Your requirements will increase as the size of your database increases, so keep that in mind when you put together your backup strategy.
Also, make sure you have time allocated to test those backups and verify that they're good. You don't want your first restore to be when you really need it and you figure out the backups are no good. That would be a really bad day.
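The arithmetic above can be sketched directly in T-SQL, using the example figures:

```sql
-- Estimate backup storage from daily backup volume and retention
DECLARE @FullTBPerDay  decimal(10, 2) = 1.0,  -- 1 TB of full backups per day
        @LogTBPerDay   decimal(10, 2) = 0.1,  -- 100 GB of log backups per day
        @RetentionDays int            = 5;    -- days of backups to keep

SELECT (@FullTBPerDay + @LogTBPerDay) * @RetentionDays AS TotalRequiredTB;  -- 5.50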
February 11, 2016 at 12:20 pm
Steve Jones - SSC Editor (2/11/2016)
More fulls can be better, but I like simple over complex. I would think about the issues with administratively managing full/diff/logs and performing restores.
That being said, I might do fulls every hour and just restore them somewhere. That way I have a copy elsewhere if things go south.
One note: be careful about dropping logs too quickly. I might still keep a couple days of logs, even if I have hourly fulls. If this process were to fail and I stopped getting fulls, I'd want to be able to get back up on Monday am if the process had failed on Friday.
Ed Wagner (2/11/2016)
Something else to think about is growth. Databases grow over time and CRM systems are certainly not known for being small. Take the total size of all your backups for a day and think about how many days worth of backups you want to keep. Multiply those numbers together and that's how much space you're going to need on your backup volume to support your backup plan. Example:
Space per Day = 1TB of full + 100 GB of logs
Total Required = 1.1 TB * 5 days
Total Required = 5.5 TB
Your requirements will increase as the size of your database increases, so keep that in mind when you put together your backup strategy.
Also, make sure you have time allocated to test those backups and verify that they're good. You don't want your first restore to be when you really need it and you figure out the backups are no good. That would be a really bad day.
I forgot to mention I don't need to restore to a point in time...
I just need to restore if the database crashes or something else goes wrong.
I don't need to keep N full backups, just the last one and the log from that backup forward...
So in this scenario the best option is the FULL recovery model, a full backup every hour, and a transaction log backup (to truncate the log) every hour as well, but offset 30min from the full backup?! If anything goes wrong I can still try a tail-log backup to restore the database to a fully working state...
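A tail-log restore of the kind described can be sketched like this (database name and paths are hypothetical):

```sql
-- After a failure, capture the tail of the log, even if the data files are damaged
BACKUP LOG CRM
    TO DISK = 'E:\Backups\CRM_tail.trn'
    WITH NO_TRUNCATE;

-- Then restore the last full backup and finish with the tail of the log
RESTORE DATABASE CRM FROM DISK = 'E:\Backups\CRM_full.bak' WITH NORECOVERY;
RESTORE LOG CRM FROM DISK = 'E:\Backups\CRM_tail.trn' WITH RECOVERY;
```

`NO_TRUNCATE` is what lets the tail-log backup succeed when the database itself is damaged.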
Thanks,
Pedro
February 11, 2016 at 1:19 pm
Always, always, always keep two fulls and the logs from the earliest one forward.
You never know when you will lose a file or have disk corruption on that file.
February 11, 2016 at 2:26 pm
Steve Jones - SSC Editor (2/11/2016)
Always, always, always keep two fulls and the logs from the earliest one forward.
You never know when you will lose a file or have disk corruption on that file.
Amen. I've had this happen to me. The disk went haywire and I lost a bunch of files. It's a good thing I was checking my backups, or I never would have found the drive problem. I think I was lucky to have found it. I had 5 days of full and log backups and was glad I did. The SAN guys disconnected the LUN, moved what was recoverable and gave me a new one. Thankfully, I didn't need the backups, but I was happy to have them.
February 11, 2016 at 5:10 pm
Steve Jones - SSC Editor (2/11/2016)
Always, always, always keep two fulls and the logs from the earliest one forward.
You never know when you will lose a file or have disk corruption on that file.
😀 Thanks for the advice... Disks aren't forever 🙂
We write the backups, with verification, to the SATA disks (mounted as RAID 5).
The disks have 2TB of storage, so we can probably keep one full day of backups... We'll be tuning the process as we go 🙂
We also haven't forgotten the integrity maintenance task for the database (DBCC CHECKDB), a common omission... (I've seen people lose their data because the backups contained the corrupted data..). If the integrity task ends with no error, the backup runs normally (overwriting the existing one); otherwise it creates a new backup file and sends an alert email.
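The check-then-backup logic described can be sketched like this (database name, paths, mail profile, and recipient are all assumptions for illustration):

```sql
BEGIN TRY
    -- Integrity check first; corruption raises a catchable error
    DBCC CHECKDB (CRM) WITH NO_INFOMSGS;

    -- Integrity OK: overwrite the existing backup
    BACKUP DATABASE CRM
        TO DISK = 'E:\Backups\CRM.bak'
        WITH INIT, CHECKSUM;
END TRY
BEGIN CATCH
    -- Corruption found: keep the last known-good backup, write a new file, alert
    BACKUP DATABASE CRM
        TO DISK = 'E:\Backups\CRM_suspect.bak'
        WITH CHECKSUM;

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'DBA',              -- assumed Database Mail profile
         @recipients   = 'dba@example.com',  -- assumed recipient
         @subject      = 'DBCC CHECKDB failed on CRM';
END CATCH;
```

Running CHECKDB before overwriting the previous backup is exactly what keeps a corrupt database from silently replacing the last good copy.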
Thanks for your advice,
Pedro
February 11, 2016 at 6:11 pm
You might not need a full day's worth, but I'd roll one of those fulls each day (first/last) over to another disk.
February 12, 2016 at 3:59 am
Steve Jones - SSC Editor (2/11/2016)
You might not need a full day's worth, but I'd roll one of those fulls each day (first/last) over to another disk.
I'll configure the maintenance task to keep just 3 (each one takes 1GB).
I moved the history table to another database and created the synonym, and the app works like nothing changed :-)... Yay for synonyms 😀
One thing I noticed, though... The main database was 3GB and took 25 secs for a full backup... Now it's 1.3GB and takes 4 secs.
The history database holds the remaining 1.5GB, just one table, and takes 24 secs to back up... The table has a varbinary(max) column I use to store a compressed XML of the original data (we don't have Enterprise Edition, so I don't have compression). Does it take that long because of the varbinary column?!
Thanks,
Pedro
February 12, 2016 at 5:55 am
Sorry, I'm confused, you say
PiMané (2/11/2016)
The recovery mode should be FULL, since the primary data is very, very important..
Then you say
PiMané (2/11/2016)
I forgot to mention I don't need to restore to a point in time...
I just need to restore if the database crashes or something else goes wrong.
Why set Full recovery in the first place?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
February 12, 2016 at 6:01 am
Perry Whittle (2/12/2016)
Sorry, I'm confused, you say
PiMané (2/11/2016)
The recovery mode should be FULL, since the primary data is very, very important..
Then you say
PiMané (2/11/2016)
I forgot to mention I don't need to restore to a point in time...
I just need to restore if the database crashes or something else goes wrong.
Why set Full recovery in the first place?
If the recovery model is not FULL and the database has problems, I'll only have the last full backup (which can be an hour old), and without log backups I can't rebuild to the point when it crashed! Or can I?!
And the tail-log backup won't exist, since the log is effectively "disabled"...
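For reference, the recovery model is set per database, so the two databases in this thread can differ (names assumed):

```sql
-- FULL preserves the log chain, enabling log and tail-log backups
ALTER DATABASE CRM SET RECOVERY FULL;

-- In SIMPLE the log truncates on checkpoint, so BACKUP LOG is not allowed
ALTER DATABASE CRM_History SET RECOVERY SIMPLE;
```

That is the trade-off in a nutshell: SIMPLE keeps the log small but gives up point-in-time and tail-log recovery.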