
Part 2: Why you should be using the Full Recovery Model
Posted Monday, September 21, 2009 3:59 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Friday, July 18, 2014 8:44 AM
Points: 166, Visits: 199
... other types of database that would not necessarily be suited to operation under the Full Recovery Model include ... databases that experience discrete infrequent modifications.


I cannot agree with you here.

If you are on Simple, and your log was truncated this morning, and today you have had "discrete infrequent modifications", aren't you going to want to recover them?

Frequency of modification has nothing to do with the choice of recovery model. What matters is how much data loss you can tolerate, however small the changes are.

In other words, a database with infrequent modifications could be an excellent candidate for the Full Recovery Model.
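
To make the point concrete, here is a minimal sketch (the database name, file paths and timestamp are hypothetical) of the kind of point-in-time restore that is only possible when log backups exist, i.e. under the Full Recovery Model:

-- Restore the last full backup, leaving the database ready for log restores
RESTORE DATABASE SalesDB
    FROM DISK = N'D:\Backups\SalesDB_full.bak'
    WITH NORECOVERY;

-- Roll the log forward to just before the unwanted change
RESTORE LOG SalesDB
    FROM DISK = N'D:\Backups\SalesDB_log.trn'
    WITH STOPAT = '2009-09-21T14:30:00', RECOVERY;

Under Simple there are no log backups, so there is nothing to STOPAT against.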




Post #791452
Posted Monday, September 21, 2009 6:51 PM


Right there with Babe


Group: General Forum Members
Last Login: Friday, April 4, 2014 4:40 PM
Points: 751, Visits: 917
In general, a database should be in the Full recovery model for precisely the reasons described here. That said, there are advantages to using the Simple recovery model.

The prime one is less concern about the transaction log growing or filling up. The log can still grow under Simple, of course (assuming autogrowth is on), but under normal usage patterns this happens less often than under Full, and there is no need to take the time to back up and truncate the log.

Also, certain operations (bulk loads, SELECT INTO and index rebuilds, for example) can be minimally logged and so run faster under the Simple or Bulk-Logged model.
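
For reference, a quick sketch of checking and switching the recovery model (the database name is hypothetical; note that after switching to Full, the log backup chain only starts once a subsequent full backup has been taken):

-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'SalesDB';

-- Switch models as appropriate
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;   -- or FULL / BULK_LOGGED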



---
Timothy A Wiseman
SQL Blog: http://timothyawiseman.wordpress.com/
Post #791477
Posted Wednesday, September 23, 2009 8:55 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, July 19, 2013 7:44 AM
Points: 38, Visits: 162
I almost can't believe that *any* recovery model is presented like this as a 'best practice'; and certainly not as WHY you SHOULD be using this or that recovery model.

It depends totally on the following factors:

- What kind of organisation is it?
- How many transactions are there per hour/minute/second?
- Downtime? How long does it take to go back to that point in time?
- If you take a daily full backup and a differential every quarter of an hour, you can lose 15 minutes of transactions at most. Can those transactions be re-entered manually? What kind of transactions are they?

So many questions that this depends on.

What is worse? Taking two full hours to restore to a certain point in time (thereby losing two hours of downtime while gaining perhaps half an hour of data), or losing half an hour of data and having the organisation re-enter it? Most often we find that organisations have shorter downtime (even though there is some data loss) with the Simple recovery model, because restoring is easy and often the transactions can be resubmitted by the users themselves.

There is NO set of rules or laws that says you should use either the Simple or the Full recovery model; thinking in that manner is shortsighted at best. A smart DBA checks each database and chooses a recovery model for each one depending on its use.

In practice, most databases using the Simple recovery model can be brought online far quicker than those using the Full recovery model. Please do not take this shortsighted article at face value; think a little beyond it. I've rated the article as 'awful', simply because it says that people SHOULD use the Full recovery model, which is a mind-bogglingly dumb statement. Sorry for my rude reaction, but my point should be made with a little force behind it.
Post #792708
Posted Wednesday, September 23, 2009 10:01 AM


Old Hand


Group: General Forum Members
Last Login: Saturday, August 23, 2014 6:03 AM
Points: 351, Visits: 1,556
bas de zwart (9/23/2009)
I almost can't believe that *any* recovery model is presented like this as a 'best practice'; and certainly not as WHY you SHOULD be using this or that recovery model.

It depends totally on the following factors:

- What kind of organisation is it?
- How many transactions are there per hour/minute/second?
- Downtime? How long does it take to go back to that point in time?
- If you take a daily full backup and a differential every quarter of an hour, you can lose 15 minutes of transactions at most. Can those transactions be re-entered manually? What kind of transactions are they?

So many questions that this depends on.

What is worse? Taking two full hours to restore to a certain point in time (thereby losing two hours of downtime while gaining perhaps half an hour of data), or losing half an hour of data and having the organisation re-enter it? Most often we find that organisations have shorter downtime (even though there is some data loss) with the Simple recovery model, because restoring is easy and often the transactions can be resubmitted by the users themselves.

There is NO set of rules or laws that says you should use either the Simple or the Full recovery model; thinking in that manner is shortsighted at best. A smart DBA checks each database and chooses a recovery model for each one depending on its use.

In practice, most databases using the Simple recovery model can be brought online far quicker than those using the Full recovery model. Please do not take this shortsighted article at face value; think a little beyond it. I've rated the article as 'awful', simply because it says that people SHOULD use the Full recovery model, which is a mind-bogglingly dumb statement. Sorry for my rude reaction, but my point should be made with a little force behind it.


Thank you for your comments.

You are absolutely right. Each and every database environment is unique, and a robust DR strategy requires that evaluation be performed on a case-by-case basis.

To emphasise this need, one of the closing points of the article reads:

“Database backups and SQL Server disaster recovery planning are detailed topics and require much more consideration, research and planning than are presented in this article alone.”

I would also like to highlight that designing a DR strategy is not the topic being presented by the article. Taken from the article:

“To summarise the key message here, in order to be able to restore your database to a particular point in time you "must" be using the Full Recovery Model.”

Once more, thank you for your feedback. It’s always good to see database professionals who are passionate about their work.



John Sansom (@sqlBrit) | www.johnsansom.com
Post #792763
Posted Wednesday, September 23, 2009 10:16 AM


Old Hand


Group: General Forum Members
Last Login: Saturday, August 23, 2014 6:03 AM
Points: 351, Visits: 1,556
Festeron (9/21/2009)
... other types of database that would not necessarily be suited to operation under the Full Recovery Model include ... databases that experience discrete infrequent modifications.


I cannot agree with you here.

If you are on Simple, and your log was truncated this morning, and today you have had "discrete infrequent modifications", aren't you going to want to recover them?

Frequency of modification has nothing to do with the choice of recovery model. What matters is how much data loss you can tolerate, however small the changes are.

In other words, a database with infrequent modifications could be an excellent candidate for the Full Recovery Model.



Hi Festeron,

Thank you for your comments.

There is no one-size-fits-all solution. Perhaps I should have made my original point clearer, although I'm pretty sure I did NOT say "All databases with discrete infrequent modifications should use the SIMPLE Recovery Model".

To reiterate my prior statement, with emphasis:

In addition to those you highlighted, other types of database that would not necessarily be suited to operation under the Full Recovery Model include Read Only databases and databases that experience discrete infrequent modifications.


So the point I was trying to make is that using the FULL recovery model for static (read-only) or relatively static databases might be overkill in some scenarios. For example, if a database is only updated by, say, a batch process once a month, the SIMPLE recovery model may be a more appropriate choice. Provided a FULL database backup is taken after the batch process, the database can be restored to its current state at any point within the month.
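
As a rough sketch of that scenario (the database name, procedure name and backup path are hypothetical), the monthly job would simply be followed by a full backup:

-- Monthly batch load into a database running under the SIMPLE model
EXEC dbo.MonthlyBatchLoad;   -- hypothetical batch process

-- Full backup taken immediately afterwards; the database can now be
-- restored to its post-load state at any point during the month
BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_after_batch.bak'
    WITH INIT, CHECKSUM;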

Thanks again for your comments!



John Sansom (@sqlBrit) | www.johnsansom.com
Post #792778
Posted Wednesday, September 23, 2009 6:25 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Monday, November 30, 2009 6:40 PM
Points: 127, Visits: 66
Jeffrey Williams-493691 (9/21/2009)
No, BackupExec and other utilities like that won't back up SQL Server database files that are open. SQL Server locks the files and will not allow them to be backed up.

This should be corrected to say: BackupExec and other utilities won't back up SQL Server database files unless their SQL Server "agent" software is purchased separately and installed. Please see: http://www.backupexecfaq.com/articles/concepts/backing-up-microsoft-sql-server.html Other enterprise backup programs have similar offerings - ask your vendor's rep.
Post #793014
Posted Monday, April 25, 2011 8:08 AM
SSC Rookie


Group: General Forum Members
Last Login: Thursday, July 17, 2014 11:04 AM
Points: 47, Visits: 313
Quick question hopefully :).

The network team manages backups to tape, and said tape system/application cannot be set up to back up at specific times. It has some kind of queue system with windows of opportunity. Also, we have a SAN backup process that does 4 backups during business hours, for which I control the times and frequency to some degree. The network team configures the backend filer/SAN system, which is a black hole to me at the moment.

Now all my databases are set to Simple. My problem is, I do think we could benefit from the Full Recovery Model for many of our databases. So how would one overcome the challenge of balancing the tape and SAN methods? And how does one deal with difficult Network Admins :) and managers who don't understand a thing you are talking about :).

I absolutely hate tape backups :) and it's a love/hate relationship with the SAN. Occasionally I have issues with the SAN method filling up and needing to remove snapshots due to a large transaction. I would love to write to disk and have them back up the disk drive :).
Post #1098122
Posted Monday, April 25, 2011 2:52 PM
SSC Rookie


Group: General Forum Members
Last Login: Monday, August 25, 2014 2:09 PM
Points: 27, Visits: 138
Sophisticated method: tape backup systems (the one we use, for example) will work with a purchased agent that can be installed on the server hosting SQL Server. When the tape backup system wants to back up, it contacts the agent, which uses VSS (the Volume Shadow Copy Service, working with the SQL Writer service on the SQL Server machine) to back up the databases to tape.

Dumbed-down method (the one we use): pick a time that is likely to fall before the tape backup window of opportunity, and create a SQL Server maintenance plan/job that backs up the databases at that time to a specified folder (for example, SQLBackup). Tell the tape backup system to back up the SQLBackup folder. This method wastes resources, including the space needed for the backups (stored on disk as well as on tape), I/O (written to disk and then copied to tape), etc. We use it because our databases tend to be small, that is, 100 MB to 1 GB, and we have long backup windows.
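
The job behind that maintenance plan boils down to something like the following sketch (the database name and folder are hypothetical):

-- Nightly full backup to a folder that the tape system later sweeps up
BACKUP DATABASE SalesDB
    TO DISK = N'E:\SQLBackup\SalesDB_full.bak'
    WITH INIT, CHECKSUM;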

I have so many questions for your network admin; you really, really need a better relationship with that person. (Example questions: what are the SAN backups actually doing? Are they interfacing correctly with SQL Server? If so, your backups may be right there...) Perhaps you can convince the network admin that they could solve some of their existing problems by working more closely with you... Good luck!
Post #1098298
Posted Tuesday, April 26, 2011 8:14 AM
SSC Rookie


Group: General Forum Members
Last Login: Thursday, July 17, 2014 11:04 AM
Points: 47, Visits: 313
Your sophisticated method for tape sounds similar to what we have (NetBackup).
I would totally like to write to disk and have them back that up to whatever medium they wish. This method would cost a good bit in storage, but I think it is well worth the cost for stability, and I can manage the backups for SQL. We have a happy mix of large and small databases: the majority range from 1 to 10 GB, a handful are in the upper 30 GB range, and one or two are around 300 GB.

As for the relationship with the Network Admins, it's not that it is bad; I guess it's just that neither side knows enough about the other's world :). I have been digging into the SAN documentation (NetApp) to understand it better and to try to speak intelligently about the products, but without being able to touch and play with it, that is very difficult.

After a little further research, the SAN uses a client-side component to interface with SQL Server, much like the tape method you mentioned. The client pauses I/O, takes a snapshot of SQL Server, and then I think it triggers a drive-level copy (I'm not 100% sure how that fully works yet). This side works, and I use it on occasion to make copies of the larger databases for dev. Keep in mind that I use the client side to control how often the backups occur and what type they are. Currently we treat this as a nice-to-have backup, with tape as the major line of defence.

As I mentioned, if the SAN backup method runs out of space, I have to clear some or all of the backups or I lose the drive. If I were in Full recovery mode, taking log backups every 15 minutes, and had to clear backups, I would lose the integrity of the log chain - with the added question of what to do about the tape backup if log backups are occurring every 15 minutes from another system (the SAN). I would imagine a scheduling conflict could occur, let alone it being a pain to restore. I haven't seen anyone mention how they balance the different methods, so I cannot make a recommendation to our team.
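
For what it's worth, those 15-minute log backups would look roughly like the sketch below (names and path hypothetical); the key point is that the files form a chain, and deleting any one of them means you can no longer restore past that gap until the next full or differential backup:

-- Scheduled every 15 minutes while the database is in FULL recovery
BACKUP LOG SalesDB
    TO DISK = N'E:\SQLBackup\SalesDB_log_201104261015.trn'
    WITH CHECKSUM;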

The SAN we are using could be used as a source for tape backups, which I asked about. The reply was "We just don't have the resources to dedicate to figuring out how, as it requires lots of custom scripting". Anyway, perhaps someday we will find a better solution :).
Post #1098660
Posted Tuesday, April 26, 2011 10:04 AM
SSC Rookie


Group: General Forum Members
Last Login: Monday, August 25, 2014 2:09 PM
Points: 27, Visits: 138
Interesting. Thanks for sharing.
Post #1098770