SQLServerCentral

Avoiding Logging


Steve Jones
SSC Guru (148K reputation)

Group: Administrators
Points: 148432 Visits: 19444
Comments posted to this topic are about the item Avoiding Logging

Follow me on Twitter: @way0utwest
Forum Etiquette: How to post data/code on a forum to get the best help
My Blog: www.voiceofthedba.com
Jim McLeod
Ten Centuries (1.2K reputation)

Group: General Forum Members
Points: 1235 Visits: 1121
Great editorial, Steve. You can't make solid, durable furniture (or databases) without logging!
Jim Murphy
Ten Centuries (1.2K reputation)

Group: General Forum Members
Points: 1231 Visits: 1265
HA!

If it weren't for the Murphys of the world, we wouldn't need a tlog in the first place. But here we are, and now everyone gets to become very familiar with recoverability strategies. You're welcome for the contribution.

I think some folks believe that if a database is in the Simple recovery model, the tlog is not used. Wrong. The .ldf is still locked by the OS, indicating it is still in use. And deleting the .ldf with the services stopped is, uh, bad. Why? Because SQL Server needs and uses the tlog even on databases in Simple; it just doesn't keep the log records long term. Well, "long term" is relative: the space is marked reusable once the next checkpoint occurs.
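Jim's point is easy to demonstrate. A rough sketch, assuming a throwaway database named StageDB (the name and table are illustrative, and fn_dblog is an undocumented function, so treat this as a test-server-only experiment): even under SIMPLE recovery, the insert is logged, and the log records are visible until a checkpoint frees the space.

```sql
-- Sketch only: assumes a disposable database named StageDB.
ALTER DATABASE StageDB SET RECOVERY SIMPLE;

USE StageDB;
CREATE TABLE dbo.Demo (id INT, payload CHAR(100));

INSERT INTO dbo.Demo VALUES (1, 'x');

-- The insert was still logged: the (undocumented) fn_dblog function
-- shows LOP_INSERT_ROWS records for it.
SELECT COUNT(*) AS LogRecords
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_INSERT_ROWS';

CHECKPOINT;  -- under SIMPLE, this is what marks the log space reusable
```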

I'm changing my name to Jones. Just trying to keep up with Steve.

Jim

Jim Murphy
http://www.sqlwatchmen.com
@SQLMurph
tim.elley@commarc.co.nz
Grasshopper (20 reputation)

Group: General Forum Members
Points: 20 Visits: 246
Have to agree, Steve. Here in Christchurch, thanks to the earthquake, I'm becoming intimately familiar with the other side of disaster recovery. Fortunately without too much drama, and I'm very thankful it's so robust!

Tim Elley
Christchurch, New Zealand



Koen Verbeeck
SSC Guru (63K reputation)

Group: General Forum Members
Points: 63526 Visits: 13298
In the somewhat special case of ETL, I'd sometimes like to turn off logging.
If a data load fails, the destination table can be truncated and the load can start over again, so you would not have to worry about inconsistency.
I'm only talking about the import layer here (the E of ETL). If updates are performed on datasets in the database, I would very much like logging, as I would like to go back to a previous state if necessary. But for imports, nah, I don't need logging :-)


How to post forum questions.
Need an answer? No, you need a question.
What’s the deal with Excel & SSIS?
My blog at SQLKover.

MCSE Business Intelligence - Microsoft Data Platform MVP
GSquared
SSC Guru (58K reputation)

Group: General Forum Members
Points: 58717 Visits: 9730
Koen Verbeeck (3/8/2011)
In the somewhat special case of ETL, I'd sometimes like to turn off logging.
If a data load fails, the destination table can be truncated and the load can start over again, so you would not have to worry about inconsistency.
I'm only talking about the import layer here (the E of ETL). If updates are performed on datasets in the database, I would very much like logging, as I would like to go back to a previous state if necessary. But for imports, nah, I don't need logging :-)


Would the Bulk Logged recovery model accomplish what you want on that?

You can also have a staging database, where you bulk import, et al, kept in Simple recovery, and just leave it out of the backup and maintenance plans. The log will grow to accommodate your imports, but it's simpler and less critical than a "real" database. If needed/wanted, keep that one on a cheap RAID 0 array. If it crashes and burns, replace the disks and re-run the create script from source control, and don't worry about recovery. Just make sure it's set up so that you don't lose anything that matters if you lose the whole database.
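GSquared's staging-database suggestion can be sketched roughly like this (all names here, Staging, dbo.ImportTarget, SourceDb, are illustrative; the minimal-logging behavior of INSERT ... SELECT WITH (TABLOCK) into an empty heap applies in SQL Server 2008 and later):

```sql
-- A disposable staging database in SIMPLE recovery, rebuilt from a
-- source-controlled script if it is ever lost.
CREATE DATABASE Staging;
ALTER DATABASE Staging SET RECOVERY SIMPLE;

USE Staging;
CREATE TABLE dbo.ImportTarget (id INT, payload VARCHAR(200));

-- With SIMPLE (or BULK_LOGGED) recovery, an empty target heap, and a
-- TABLOCK hint, this insert qualifies for minimal logging.
INSERT INTO dbo.ImportTarget WITH (TABLOCK)
SELECT id, payload
FROM SourceDb.dbo.SourceTable;
```

The log still grows during the load, as GSquared notes, but far less than with fully logged row inserts.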

- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread

"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
Lynn Pettis
SSC Guru (96K reputation)

Group: General Forum Members
Points: 96395 Visits: 38981
Actually, it is possible to do this in Oracle (the NOLOGGING option). I had a nice discussion about it with one of the Oracle gurus where I work. He would never recommend using this capability on an online database, but would use it in a tightly controlled batch process where online activity is prevented from accessing the database. He would also take precautions, including ensuring that there was a backup prior to and after the batch process.

Perhaps it is because this can be done in Oracle that people think SQL Server has a similar capability.

Lynn Pettis

For better assistance in answering your questions, click here
For tips to get better help with Performance Problems, click here
For Running Totals and its variations, click here or when working with partitioned tables
For more about Tally Tables, click here
For more about Cross Tabs and Pivots, click here and here
Managing Transaction Logs

SQL Musings from the Desert Fountain Valley SQL (My Mirror Blog)
Koen Verbeeck
SSC Guru (63K reputation)

Group: General Forum Members
Points: 63526 Visits: 13298
GSquared (3/8/2011)
Koen Verbeeck (3/8/2011)
In the somewhat special case of ETL, I'd sometimes like to turn off logging.
If a data load fails, the destination table can be truncated and the load can start over again, so you would not have to worry about inconsistency.
I'm only talking about the import layer here (the E of ETL). If updates are performed on datasets in the database, I would very much like logging, as I would like to go back to a previous state if necessary. But for imports, nah, I don't need logging :-)


Would the Bulk Logged recovery model accomplish what you want on that?

You can also have a staging database, where you bulk import, et al, kept in Simple recovery, and just leave it out of the backup and maintenance plans. The log will grow to accommodate your imports, but it's simpler and less critical than a "real" database. If needed/wanted, keep that one on a cheap RAID 0 array. If it crashes and burns, replace the disks and re-run the create script from source control, and don't worry about recovery. Just make sure it's set up so that you don't lose anything that matters if you lose the whole database.


Bulk Logged recovery model certainly is an option. So is the Simple recovery model.
My point is that when the ETL tightly controls the batch process and the destination database is only used as a "dump" for the data (aka a volatile staging area, where destination tables are cleared before the import process), all the logging is just extra overhead interfering with (BULK) INSERT performance. For the same reason it is also recommended not to have constraints (be it foreign keys or check constraints) and to minimize indexing (you can even drop the indexes and recreate them after the import process).

If it is a non-volatile staging area, then a backup before the import process is sufficient, as Lynn already mentioned.
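The drop-load-recreate pattern Koen describes looks roughly like this (a sketch only: the table, index, and file path are illustrative, and the TABLOCK hint plus SIMPLE or BULK_LOGGED recovery are what make the load eligible for minimal logging):

```sql
-- Volatile staging: drop the index, clear the table, reload, re-index.
DROP INDEX IX_Stage_Key ON dbo.Stage;

TRUNCATE TABLE dbo.Stage;

BULK INSERT dbo.Stage
FROM 'C:\loads\extract.csv'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

CREATE INDEX IX_Stage_Key ON dbo.Stage (KeyCol);
```

Rebuilding the index once after the load is usually far cheaper than maintaining it row by row during the insert.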

But maybe I'm preaching to the choir :-)


How to post forum questions.
Need an answer? No, you need a question.
What’s the deal with Excel & SSIS?
My blog at SQLKover.

MCSE Business Intelligence - Microsoft Data Platform MVP
taylor_benjamin
SSC Journeyman (80 reputation)

Group: General Forum Members
Points: 80 Visits: 51
Earlier versions of SQL Server did support no logging.

Bulk Copy was the reason. In those days BCP was non-transacted, resulting in higher performance. For it to be non-transactional you had to set the database to what is now the Simple recovery model (there really wasn't a recovery model back then, but that's the closest equivalent today). In the oldest versions (4.21, say), BCP.EXE didn't even update indexes when you did a bulk copy of this nature.

After doing a BCP you had to update statistics, rebuild indexes, and back up your database. Basically, it was the fastest way to get staging data into SQL Server. It was a pain. IT WAS FAST!

As you say, this mode is not supported with SQL Server today, but the idea is not a fantasy...it used to exist. Why it doesn't exist today, I don't know or care. Frankly, with today's hardware I see reasonable performance with logging, so why turn it off?

A second method of not using the transaction log was

SELECT ... INTO #SomeTempTable FROM ...

This query did not use the transaction log in any of the databases, including tempdb. The technique had problems, fixed in version 7.0: the SYSOBJECTS table in tempdb was locked until the SELECT statement completed (a potential disaster), so the technique was not often used.

In SQL Server 2005 and later, this technique still performs faster than creating a temp table and inserting data into it with a SELECT statement. However, I know there is logging involved, because I can use it inside transactions with savepoints. I don't know the level of logging, nor why the performance is better, but I do have performance history demonstrating the technique is still valid.
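The two patterns Ben is comparing, side by side (table and column names are illustrative; tempdb always runs in SIMPLE recovery, which is part of why SELECT ... INTO a temp table can be minimally logged):

```sql
-- One step: create and fill the temp table; eligible for minimal logging.
SELECT id, payload
INTO #FastCopy
FROM dbo.BigTable;

-- Two steps: the explicit alternative, typically fully logged row inserts.
CREATE TABLE #SlowCopy (id INT, payload VARCHAR(200));
INSERT INTO #SlowCopy (id, payload)
SELECT id, payload
FROM dbo.BigTable;
```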

There were lots of things we did, or had to do, in the old days that are no longer relevant, or where the methods have changed. That doesn't mean they were never true. For example, MS best practices used to teach us to use a non-sequential column with low data distribution for a clustered index. It was on their SQL Server certification tests. Today the best practice is the exact opposite: MS recommends a sequential value as the clustered index, even if it is not the primary key.
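The current guidance Ben cites looks like this in practice (a sketch; the table and columns are illustrative): a narrow, ever-increasing clustered key, an IDENTITY column here, so new rows are appended at the end of the index rather than causing page splits throughout it.

```sql
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY(1,1) NOT NULL,
    OrderDate DATETIME NOT NULL,
    -- Sequential clustered key: inserts always land on the last page.
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);
```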

Thanks for reminding me how old I am, and how long I have been working with SQL Server. :-)

Ben
Koen Verbeeck
SSC Guru (63K reputation)

Group: General Forum Members
Points: 63526 Visits: 13298
taylor_benjamin (3/8/2011)

Thanks for reminding me how old I am, and how long I have been working with SQL Server. :-)


There was a 4.21 version??? :-P


How to post forum questions.
Need an answer? No, you need a question.
What’s the deal with Excel & SSIS?
My blog at SQLKover.

MCSE Business Intelligence - Microsoft Data Platform MVP