SQLServerCentral is supported by Red Gate Software Ltd.
Transaction Log Growth, do you need it?
Posted Saturday, May 29, 2010 8:49 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 9:21 PM
Points: 11,192, Visits: 11,090
Gary7512 (5/29/2010)
Thanks Paul, I will take note of what you're telling me. It does seem like a good magic solution, but I suppose some fables warn that the use of magic for self-gain comes at a price.

Well put




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #930053
Posted Saturday, May 29, 2010 1:14 PM
Mr or Mrs. 500


Group: General Forum Members
Last Login: Today @ 1:57 PM
Points: 595, Visits: 1,671
Our IT management has talked about "snap" capabilities with our NetApp disk device. This reminds me of my post the other day about "DBA - Phantom job title." It may just be what I'm seeing at my company (~300 employees), or perhaps a trend for smaller companies: "DBA by committee," where the various duties a DBA or DBA team would handle are parceled out to various people, including some who have little if any SQL Server knowledge.

As far as the implications for successful disaster recovery go, the thing that works against us is how rarely you actually need to restore from whatever kind of backups you use. Add to that non-technical managers running technical departments with a constant eye on the budget, and where does it lead?

If you've already gone 5 years without ever needing to restore due to failure or corruption, why bring a $75k+/year DBA into a group of employees who have already established work habits and turf to protect? (Some of them have found DBAs to be what they consider a roadblock to rapid development in the past.)



Post #930083
Posted Sunday, May 30, 2010 12:02 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 9:21 PM
Points: 11,192, Visits: 11,090
Indianrock (5/29/2010)
Our IT management has talked about "snap" capabilities with our NetApp disk device. This reminds me of my post the other day about "DBA - Phantom job title."

Exactly - and an interesting thread to read.
I would actually be pleased if the snap/clone type technology could be made to work. I would not particularly miss worrying about backups.
Sadly, every demonstration I have ever been to has either failed to work properly or had at least one glaring flaw.
No doubt it can be made to work, to some extent, on certain systems if you have enough money to spend.
Maybe one day snap technology will make SQL Server backups a thing of the past - but that day is not today, for most people...




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #930115
Posted Wednesday, June 2, 2010 11:51 AM


SSC Rookie


Group: General Forum Members
Last Login: Friday, July 18, 2014 12:06 AM
Points: 27, Visits: 462
bring a $75k+/year DBA

I don't know which economy you are in, but don't make everybody believe that is what a good DBA is worth. I say $150K. At $75K you get a guy who shuts down the database and deletes that log file. Hahaha... then you have to fire the guy you hired.


Jason
http://dbace.us
Post #931559
Posted Wednesday, June 2, 2010 11:54 AM
SSC Journeyman


Group: General Forum Members
Last Login: Tuesday, January 21, 2014 3:17 PM
Points: 88, Visits: 285
Personal motto: don't shrink the log unless unexpected transactions occur. The prior DBA had incorporated shrinks into the maintenance jobs, shrink-only jobs, etc. This left what appeared to be lots of free space. We do quite a bit of batch loading over the weekends from mainframes; the log would grow to 30-40 GB and then be shrunk to 5 GB after the job. It appears 25-35 GB is free, so why not create a new database, plenty of space... and then the batch job crashes for lack of disk space for the log file.

On the subject of SNAP and NetApp, that is a new discussion that is taking place here. Could someone direct me to a good overview of SNAP with SQL Server and the pros/cons of using it?

Thanks


Thom Bolin - MCITP SQLServer 2005
NTXDBA Blog - http://ntxdba.blogspot.com/
Post #931564
Posted Monday, June 7, 2010 1:11 PM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, March 15, 2011 9:21 AM
Points: 2, Visits: 38
the log would grow to 30-40gb and then be shrunk to 5gb after the job


Thom, I am not sure I understand why that is a bad thing to do.

Everyone, I could use some help with the following questions:

Why is it bad to shrink the log file?

Why is a log file necessary if I am taking backups of the database on a daily or sometimes hourly basis? If there is a crash, I would use the backup as the restore point.

Our log file seems to grow very large and can easily be shrunk down from over 1 GB to about 1,000 KB. I did not think that was a bad thing until I started reading through some of the posts in this thread.

Thanks for all the information everyone is posting. I am learning a lot.

Jarrod
Post #933667
Posted Tuesday, June 15, 2010 2:02 PM


Say Hey Kid


Group: General Forum Members
Last Login: Thursday, July 17, 2014 2:01 PM
Points: 687, Visits: 3,002

Thom, I am not sure I understand why that is a bad thing to do.

Everyone, I could use some help with the following questions:

Why is it bad to shrink the log file?

Because it will (probably) only need to grow again to something near its original size as you perform more transactions, and each auto-growth of the t-log file causes a performance hit while it happens.
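The grow-shrink-grow cycle can be avoided by sizing the log once to its working size and giving it a fixed growth increment. A minimal sketch; the database name, logical file name, and sizes below are hypothetical, so substitute your own:

```sql
-- Pre-size the log to its observed working size, so it never has to
-- auto-grow during normal operation, and use a fixed increment rather
-- than many small growths if it ever does need to expand.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, SIZE = 8192MB, FILEGROWTH = 512MB);
```

Find the logical file name with `SELECT name FROM sys.database_files` in the target database before running this.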


Why is a log file necessary if I am taking backups of database on a daily or sometimes hourly basis? If there is a crash, I would use the backup as the restore point.

As someone pointed out earlier, it's necessary because SQL Server won't work without one. You don't "need" it if you don't mind "sometimes" losing up to an hour's worth of work. Seriously, if you or your business owners are satisfied with that risk of data loss (and you don't care about point-in-time recoveries), then you don't need to worry about transaction log backups, and you could probably run in Simple recovery mode.
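For reference, switching recovery models is a one-line change. A sketch, assuming a hypothetical database named SalesDB:

```sql
-- SIMPLE recovery: the log is truncated at checkpoints, so it stays
-- small without log backups -- but point-in-time restores are impossible.
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

-- Confirm the current recovery model of every database:
SELECT name, recovery_model_desc
FROM sys.databases;
```

FULL recovery plus regular log backups is the usual choice when point-in-time recovery matters.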

Honestly, I could probably tolerate losing 1 hour or even 3 hours' work on our prime d.b., if it happened once every 5 years or so. But I like knowing that I can do point-in-time recoveries to undo a stupid mistake, especially since I'm the one who will likely execute that stupid mistake! Think about it: my users work on one row at a time with a beta-tested web app. containing lots of built-in error-checking. I make updates across thousands of rows, often with one-off, ad hoc code that uses INT ID values I type by hand. Whose error is more likely to cause havoc system-wide?


Our log file seems to grow very large, and can easily be shrunk down from over 1GB to about 1000K. I did not think that was a bad thing until I started reading through some of the posts on this thread.

Thanks for all the information everyone is posting. I am learning a lot.

Jarrod

If you want to manage your t-logs, read the excellent articles posted here by others who have written about this admittedly sometimes confusing subject.

Yours,
Rich
Post #937794
Posted Tuesday, June 15, 2010 2:35 PM


SSC-Dedicated


Group: Administrators
Last Login: Today @ 10:25 PM
Points: 33,051, Visits: 15,160
Short answer: you can't turn off the log. It's part of how the ACID properties are maintained; the log is what keeps the database system intact.

So, with that being the case, why shrink it if it will just grow again to the same size? When you take log backups, the space gets reused, so set those up, get to a good stable size, and leave it alone, checking on it periodically (say, monthly).
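The advice above amounts to two commands; the database name and backup path here are hypothetical:

```sql
-- Regular log backups let SQL Server reuse log space internally,
-- so the file stops growing once it reaches a stable working size.
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';

-- Periodic check: log file size and percentage actually in use,
-- for every database on the instance.
DBCC SQLPERF(LOGSPACE);
```

In practice the BACKUP LOG statement runs on a schedule (e.g. every 15-60 minutes) via a SQL Agent job or maintenance plan.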


A few articles:
http://www.sqlservercentral.com/articles/64582/
http://www.sqlservercentral.com/articles/69476/

Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #937816
Posted Tuesday, June 15, 2010 2:39 PM
Forum Newbie


Group: General Forum Members
Last Login: Thursday, April 10, 2014 6:38 PM
Points: 9, Visits: 41
How are you supposed to recover deleted or severely altered records if you have dozens or hundreds of users simultaneously inserting and updating records? How can you recover by going back to a point in time before a screw-up without undoing the thousands of good updates and inserts done by other users?

That's what I don't understand. If you're the only person using the db, then having point-in-time recovery is brilliant, but with loads of users you can't jump back to undo what "Dizzy DB Dave" did without losing all the good changes made by other people. (Assume everyone logs in via a web app, so everyone appears to SQL Server as the same web user.)

Maybe I have overlooked something?
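One common answer to this question (my own assumption here, not something stated in the thread) is that you don't roll the live database back at all: you restore WITH STOPAT into a separate copy, then merge only the damaged rows back, leaving everyone else's work untouched. A sketch; all names, paths, and times are hypothetical:

```sql
-- Restore the full backup into a side-by-side copy, leaving it
-- ready to accept log backups.
RESTORE DATABASE SalesDB_Recover
FROM DISK = N'D:\Backups\SalesDB_full.bak'
WITH MOVE 'SalesDB'     TO N'D:\Data\SalesDB_Recover.mdf',
     MOVE 'SalesDB_log' TO N'D:\Data\SalesDB_Recover.ldf',
     NORECOVERY;

-- Roll the copy forward to just before the screw-up.
RESTORE LOG SalesDB_Recover
FROM DISK = N'D:\Backups\SalesDB_log.trn'
WITH STOPAT = '2010-06-15 14:30:00', RECOVERY;

-- Then repair only the affected rows in the live database, e.g.:
-- UPDATE live SET ...
-- FROM SalesDB.dbo.Orders live
-- JOIN SalesDB_Recover.dbo.Orders old ON live.OrderID = old.OrderID
-- WHERE ...;
```

The live database and everyone's good changes stay online the whole time.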
Post #937818
Posted Tuesday, June 15, 2010 2:58 PM
Forum Newbie


Group: General Forum Members
Last Login: Tuesday, March 15, 2011 9:21 AM
Points: 2, Visits: 38
Thanks Steve and Rich for your replies. I actually have done a lot more research since posting this, and I am even more confused about the log file. :)

I first had the toughest time trying to figure out how to even read or use the log file. I came across some expensive applications that would let me read it, but I couldn't find any reason to justify that kind of money. And I still have no idea how to use the log file in the way everyone here is talking about. Is there something built into SQL Server that would allow me to restore to a specific point using my log file, or does that take another application purchase?

I then read that I could use DBCC LOG to view the contents of the log file (it might have been in this thread or an article linked from it; I cannot remember where I found it now). This doesn't really give me any easily readable data; I would have to understand the log file "code" to follow what is going on. But using this, I found that my 1 GB log file contained only 198 rows. Why is it so large, then?
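The confusion here is usually that the log *file* size and the amount of active log inside it are two different things. A couple of built-in commands make the difference visible, with no paid tools; DBCC SQLPERF is documented, while fn_dblog is undocumented but widely used, so treat it as an at-your-own-risk diagnostic:

```sql
-- File size vs. percentage of the log actually in use,
-- for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- Count of active log records in the current database only;
-- a large file can hold very few active records, and vice versa.
SELECT COUNT(*) AS active_log_records
FROM fn_dblog(NULL, NULL);
```

A 1 GB file with 198 active records simply means the file was grown (or never truncated) at some point and the space is now sitting empty inside it.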

The reason I am trying to figure all of this out is that we have a database that is created from script. When it is created, it has about 300k initial records in one table. There are some other tables that have some data, but not much, and there are 20 or 25 tables that have nothing and are used by our program. So really, all that has happened to this database is that the tables, indexes, stored procs, etc. have been created from script and some initial data has been imported. Why is the log file 1 GB?

After I shrank it and then used the application against it for about 2 weeks, I ran DBCC LOG against it again. Right now there are 1,192,301 rows in the log file, but it is only 768 KB in size.

Is there something I should check in our creation scripts?
Should we just be shrinking it at the end of the script, after the database has been created?

Thanks in advance for the help.
Post #937843