
Avoiding Logging
Posted Tuesday, March 8, 2011 7:49 AM


SSC-Forever

Koen Verbeeck (3/8/2011)
If a data load fails, the destination table can be truncated and the load can start over again, so you would not have to worry about inconsistency.


What would happen if the server failed at the point SQL was modifying the allocation constructs or system tables in the database and there was no logging? Not so easy to fix.

Logging is not just for the user data. It's for page allocations, allocation page modifications, system table modifications, and a whole lot more.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1074813
Posted Tuesday, March 8, 2011 8:14 AM


SSCommitted

Transaction logging usually isn't an issue unless you're inserting 100,000+ records one.. at.. a.. time.. in a loop, or the DBA allows the transaction log to grow until it fills up the available disk space. It's rarely an issue during the normal operation of an OLTP database. You can mitigate the performance cost of logging by placing the transaction log files on a separate drive system, using the bulk copy (bcp) utility for your batch loads, and perhaps the BULK_LOGGED recovery model on occasion.
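
For illustration only, a rough sketch of the bulk-logged route (the database, table, and file names are hypothetical, and this assumes you can temporarily take the database out of the FULL recovery model):

-- Hypothetical database and staging table.
ALTER DATABASE SalesDW SET RECOVERY BULK_LOGGED;

-- TABLOCK is one of the conditions needed for the bulk load to be minimally logged.
BULK INSERT dbo.StagingOrders
FROM 'D:\loads\orders_201103.dat'
WITH (TABLOCK, FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', BATCHSIZE = 50000);

-- Switch back to FULL and take a log backup so point-in-time recovery resumes from here.
ALTER DATABASE SalesDW SET RECOVERY FULL;
BACKUP LOG SalesDW TO DISK = 'D:\backups\SalesDW_log.trn';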
Post #1074840
Posted Tuesday, March 8, 2011 8:43 AM


SSC-Dedicated

GilaMonster (3/8/2011)
Koen Verbeeck (3/8/2011)
If a data load fails, the destination table can be truncated and the load can start over again, so you would not have to worry about inconsistency.


What would happen if the server failed at the point SQL was modifying the allocation constructs or system tables in the database and there was no logging? Not so easy to fix.

Logging is not just for the user data. It's for page allocations, allocation page modifications, system table modifications, and a whole lot more.


I had this argument with someone recently about loads. They were assuming that the load would fail and they'd restart it, but you need logic to allow restarts, and not everyone builds that into their processes. Even if you don't have restart logic, the cleanup of old data needs to be transactional. So have you saved anything by allowing imports without logging? Not sure, and honestly, I'm not sure the vast majority of people are qualified to decide this.

Even if you are, is that the best decision for the company? The next person who does your job might not understand this type of feature and might use it in other places.

If you don't need transactional logging during imports, then commit periodically, inserting batches of 10,000 rows or so, and run some log backups in between. If you're that busy and you have that much data, then I assume you ought to have some money to put the log backups on separate spindles, and the t-log on separate spindles, and get better performance.
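
A minimal sketch of that batched approach, assuming hypothetical names (SalesDW for the database, dbo.ImportStage as the already-staged source, dbo.Orders as the destination):

-- Each batch commits on its own, so the log can be backed up and reused between batches.
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.Orders (OrderID, CustomerID, Amount)
    SELECT TOP (10000) s.OrderID, s.CustomerID, s.Amount
    FROM dbo.ImportStage AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Orders AS o WHERE o.OrderID = s.OrderID)
    ORDER BY s.OrderID;

    SET @rows = @@ROWCOUNT;

    -- Periodic log backups keep the transaction log from growing during the load.
    BACKUP LOG SalesDW TO DISK = 'D:\backups\SalesDW_load.trn';
END;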

Ultimately, I don't think it's worth the risk to relational data to allow this. If the data isn't that important, or can be reloaded easily, perhaps it would be better to put this into some other structures. Maybe a NoSQL or columnar construct.







Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1074862
Posted Tuesday, March 8, 2011 8:44 AM


SSC-Dedicated

Koen Verbeeck (3/8/2011)
taylor_benjamin (3/8/2011)

Thanks for reminding me how old I am, and how long I have been working with SQL Server.


There was a 4.21 version???


There was a version that didn't run on Windows ;)







Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1074863
Posted Tuesday, March 8, 2011 9:00 AM
Grasshopper

Wow! That really goes back. Yes, the first MS version of SQL Server was a port of Sybase to run on OS/2.

I am not sure, but as I recall, version 4.21 was the first version to run under the Windows OS (Windows NT Advanced Server was the branding, as I recall).

Version 4.21a was the first version of SQL Server to break away from Sybase as a purely MS code base.
Post #1074884
Posted Tuesday, March 8, 2011 9:11 AM
Grasshopper

A little history about why one would ever even want no transactions...

Back in the old days, when you spent $6k for a server with 32 MB (yes, MB) of RAM so you could have awesome performance, it really helped when importing lots of data to turn off transaction logging, drop indexes, import the data, rebuild statistics, and rebuild indexes.

I worked on a data warehouse that imported a million health care claims monthly from a month-end extract. Using BCP to bring the data into staging tables cut the import of those million rows to one hour, where previously it took a whole week.

The need for this kind of tuning no longer exists with today's hardware and software capabilities. SSIS can do the kind of imports we did standing on its head, drinking coffee, smoking a non-politically correct cigarette, reading a book, and watching 2 movies.

So, how did we get around the possibility of corruption?

1) Back up the whole database prior to any BCP activity, and start over if the load failed.
2) Use a staging database. If the import failed, we simply deleted the database and replaced it with a template. Where we could, we broke the work down into batches and picked up where the last batch failed.

BCP was dangerous in those days. You were completely unprotected until the load was complete and you had performed a full database backup. But with a data warehouse bringing in millions of rows, there were very few options outside of going to bigger hardware with another database vendor.
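
For what it's worth, here is a rough sketch of that old staging pattern in today's T-SQL (all names are hypothetical, and the original workflow used the bcp command-line utility rather than BULK INSERT):

-- Safety net: full backup of the staging database; start over from this if the load fails.
BACKUP DATABASE ClaimsStage TO DISK = 'D:\backups\ClaimsStage_pre_load.bak';

-- Disable nonclustered indexes before the load (disabling the clustered index would block inserts).
ALTER INDEX IX_ClaimsStaging_MemberID ON dbo.ClaimsStaging DISABLE;

-- Load in batches so each batch commits separately instead of in one giant transaction.
BULK INSERT dbo.ClaimsStaging
FROM 'D:\extracts\claims_month_end.dat'
WITH (TABLOCK, BATCHSIZE = 100000);

-- Rebuild the disabled index (which refreshes its statistics) and update the remaining statistics.
ALTER INDEX IX_ClaimsStaging_MemberID ON dbo.ClaimsStaging REBUILD;
UPDATE STATISTICS dbo.ClaimsStaging;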

So, hats off to MS...look how far you've come baby!
Post #1074896
Posted Tuesday, March 8, 2011 10:14 AM


SSC-Dedicated

taylor_benjamin (3/8/2011)
Wow! That really goes back. Yes, the first MS version of SQL Server was a port of Sybase to run on OS/2.

I am not sure, but as I recall, version 4.21 was the first version to run under the Windows OS (Windows NT Advanced Server was the branding, as I recall).

Version 4.21a was the first version of SQL Server to break away from Sybase as a purely MS code base.


Actually, 4.20 came from Sybase and ran on OS/2, and was then ported to Windows NT Advanced Server 3.1. I'm not sure if it was 4.21a or 4.21b that made the move, but I went through all of them.

SQL Server 6.0/6.5 still contained some Sybase code as well. SQL Server 7.0 was a rewrite that was completely MS.







Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #1074951
Posted Tuesday, March 8, 2011 10:49 AM


SSChampion

GilaMonster (3/8/2011)
Koen Verbeeck (3/8/2011)
If a data load fails, the destination table can be truncated and the load can start over again, so you would not have to worry about inconsistency.


What would happen if the server failed at the point SQL was modifying the allocation constructs or system tables in the database and there was no logging? Not so easy to fix.

Logging is not just for the user data. It's for page allocations, allocation page modifications, system table modifications, and a whole lot more.


As a BI developer, I can only say the following:
Oink?



I guess you're right. As an ETL developer, I only think about moving data, not so much about the features you just described...




How to post forum questions.
Need an answer? No, you need a question.
What’s the deal with Excel & SSIS?

Member of LinkedIn. My blog at LessThanDot.

MCSA SQL Server 2012 - MCSE Business Intelligence
Post #1074976
Posted Tuesday, March 8, 2011 11:49 AM
Grasshopper

I thought I remembered a Version 1.0 of SQL Server, and sure enough, there was. According to Wikipedia, Version 1.0 was released in 1989 as a 16-bit OS/2 version. Version 1.1 was released in 1991, again on OS/2. Then in 1993 came the first Windows NT edition, which was SQL Server 4.21. http://en.wikipedia.org/wiki/Microsoft_SQL_Server
Post #1075017
Posted Tuesday, March 8, 2011 1:12 PM


SSC-Forever

Koen Verbeeck (3/8/2011)
GilaMonster (3/8/2011)
Koen Verbeeck (3/8/2011)
If a data load fails, the destination table can be truncated and the load can start over again, so you would not have to worry about inconsistency.


What would happen if the server failed at the point SQL was modifying the allocation constructs or system tables in the database and there was no logging? Not so easy to fix.

Logging is not just for the user data. It's for page allocations, allocation page modifications, system table modifications, and a whole lot more.


As a BI developer, I can only say the following:
Oink?



I guess you're right. As an ETL developer, I only think about moving data, not so much about the features you just described...


Exactly. I suspect most people asking for no logging don't think about the system/structure changes that get logged. If a load were to fail partway through, it would not just be a case of truncating the destination table and starting again.



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1075086