SQL Server 2005 Log File Affinity

  • I wonder if someone can help me with an issue I have on a database server I have been working on.

I have a reasonably large SQL 2005 database, circa 300GB, with two 60GB log files on two separate 15k RPM SCSI drives, and another log file on a RAID 5 set. The third log file was created on the RAID 5 set and set to autogrow in case the server filled the other two fixed-size log devices. This happened a few days back, and it looks like the server started using the third log file.

My problem is I'd like to get SQL to stop using the log file on the RAID 5 disks and start using either of the two log files on the mirror sets. I have no idea how to do this. I already tried shrinking the third log file with the option to migrate the data to another device in the same filegroup, but I don't think that worked, as there was still data on the third log device.

    Any help you can offer me will be more than appreciated.

    Regards

    Stevie

If SQL started using the third log file it obviously needed it, so getting rid of it might not be a good idea; SQL might need the space again.

That said, the log file is written sequentially: SQL will fill one file, then move on to the next, so there is no performance gain in having three log files. Perhaps make the first two (or just the first one) bigger?

You tried DBCC SHRINKFILE(,EMPTYFILE) followed by ALTER DATABASE ... REMOVE FILE, right?

Did you back up the log beforehand to truncate it, or have the database in SIMPLE mode? Perhaps you had a long transaction, so the third file remained active. Try doing this at a quiet time for the app, or better, when there are no connections.

After this action, take a full backup of the database and of master.
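A minimal sketch of that sequence; the database name MyDB, the logical file name MyDB_Log3, and the backup paths are all hypothetical and would need to match your setup:

```sql
-- Back up the log first so the inactive portion can be truncated
BACKUP LOG MyDB TO DISK = N'E:\Backups\MyDB_log.trn';

-- Move any remaining active log records off the third file
DBCC SHRINKFILE (N'MyDB_Log3', EMPTYFILE);

-- Once the file is empty, drop it from the database
ALTER DATABASE MyDB REMOVE FILE MyDB_Log3;

-- Afterwards, take the full backups as suggested
BACKUP DATABASE MyDB TO DISK = N'E:\Backups\MyDB_full.bak';
```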

    ---------------------------------------------------------------------

SQL did need the log file, quite desperately in fact; this was down to a bit of a schoolboy error on a colleague's part, which ended up filling 160GB on the RAID 5 set.

I had kind of realised the log file use was sequential. The first two are as big as I can make them on the mirror sets, and in normal operations should never really be used fully; the third one was a few MB in size and placed on a 3TB RAID 5 set, just in case it was ever required, which it was. My problem is getting SQL to stop using it 🙂

You tried DBCC SHRINKFILE(,EMPTYFILE) followed by ALTER DATABASE ... REMOVE FILE, right? - yep, it says there is still data in the log file; the DB is in FULL recovery mode.

Did you back up the log beforehand to truncate it, or have the database in SIMPLE mode? Perhaps you had a long transaction, so the third file remained active. Try doing this at a quiet time for the app, or better, when there are no connections. - yes, although now I am stopping SQL Agent and setting the DB to single-user mode while I try again; there's no one else on the database right now.

After this action, take a full backup of the database and of master. - will do.

    Thanks kindly for the advice George - much appreciated.

  • Second time round -

There must have been some remnants in the third transaction log between the backup and the DBCC SHRINKFILE(,EMPTYFILE) and ALTER DATABASE ... REMOVE FILE commands.

Re-ran the transaction log backup, immediately ran DBCC SHRINKFILE(,EMPTYFILE) and ALTER DATABASE ... REMOVE FILE, and it worked. I now have only the two transaction logs on the mirrors, as they should be. I can go to bed now!!!
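When the removal fails like it did the first time, it can help to check what is keeping the log active before retrying; a sketch, assuming a database named MyDB (hypothetical):

```sql
-- See what, if anything, is preventing log truncation
-- (values such as LOG_BACKUP or ACTIVE_TRANSACTION)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';

-- Check for a long-running open transaction holding the log active
DBCC OPENTRAN (N'MyDB');
```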

    Thanks again.

Good to hear; it can just be a case of timing sometimes.

    ---------------------------------------------------------------------

That, and the fact that I can be a bit thick sometimes 😉

  • The question of why there was a delay can be answered by considering the internal structure of log files.

    Books Online has an excellent description of it:

    Transaction Log Physical Architecture

    Transaction Log Logical Architecture

    In essence, you just needed to wait for natural log activity to move the active portion of the log entirely out of the overflow file.
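One way to watch this happening: DBCC LOGINFO lists the virtual log files (VLFs) inside each physical log file, and a file can only be emptied once none of its VLFs are part of the active log. A sketch, assuming a database named MyDB (hypothetical):

```sql
USE MyDB;  -- hypothetical database name

-- One row per virtual log file; FileId identifies the physical
-- log file, and Status = 2 means the VLF is still active
DBCC LOGINFO;
```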

    Paul

Thanks for that Paul, I'll make a point of having a look at this. I've had an interesting journey over the weekend, highlighting my lack of knowledge on logging. The database I'm working on is quite large, and the index rebuilds/reorgs we do on it every week seem to have helped fill the logs. A bit of reading turned up the BULK_LOGGED recovery model, which looks like it will make my life a bit easier 🙂

Yes, minimally-logged index rebuilds can be very efficient, especially if you are able to take advantage of parallelism.

    Be sure to back up your log immediately before switching to BULK_LOGGED mode from FULL, and again when you switch back to FULL recovery.

The second log backup will be much larger than usual (it includes a copy of all data pages changed by the minimally-logged operations).

    Reorganization of an index is always fully logged.
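A sketch of that sequence; the database name MyDB, the table dbo.BigTable, and the backup paths are hypothetical:

```sql
-- Log backup immediately before leaving FULL recovery
BACKUP LOG MyDB TO DISK = N'E:\Backups\MyDB_log_pre.trn';

ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- Rebuilds can be minimally logged in this mode (reorganize cannot)
ALTER INDEX ALL ON dbo.BigTable REBUILD;

ALTER DATABASE MyDB SET RECOVERY FULL;

-- This backup will be larger than usual: it includes the data
-- extents changed by the minimally-logged operations
BACKUP LOG MyDB TO DISK = N'E:\Backups\MyDB_log_post.trn';
```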

    Paul

  • George / Paul

Guys, thanks for your help with this. As I may have indicated, this has been a very 'interesting' weekend; the logging issue has been only part of the overall problem, but solving it quickly with your help has allowed me to focus on other, more pressing issues.

Since Wednesday last week I have had drive failures, drive rebuilds, subsequent parity errors on new drives, and poorly configured backups (mostly my own doing, I have to admit). All on a massive database of which we only really look at 1% at any one time 🙂

I obviously have a big list of actions (database pruning/archiving, backups, sack the hardware vendor) and a load of reading, but I am much better positioned than I was on Friday last week.

    Again thanks - much appreciated

    Cheers

    Stevie

  • No worries Stevie. Sounds like you've had a tough week!

    Good luck!

Stevie, if not already doing so, it sounds like you would benefit from cutting down/spreading out the reindexing you are doing. If you search BOL for sys.dm_db_index_physical_stats, that entry contains a script to rebuild only the indexes that are in fact fragmented.

You could also perhaps change your strategy to do different tables on different days; it all depends on your setup/requirements. This would spread out the log activity and therefore the maximum log size required.
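A minimal sketch of that fragmentation-driven approach, assuming a database named MyDB (hypothetical); a common rule of thumb is to reorganize between roughly 10% and 30% fragmentation and rebuild above that:

```sql
-- List indexes in MyDB worth attention, ordered by fragmentation
SELECT OBJECT_NAME(ips.object_id)      AS table_name,
       i.name                          AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(N'MyDB'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 100          -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```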

    ---------------------------------------------------------------------

  • george sibbald (2/22/2010)


Stevie, if not already doing so, it sounds like you would benefit from cutting down/spreading out the reindexing you are doing. If you search BOL for sys.dm_db_index_physical_stats, that entry contains a script to rebuild only the indexes that are in fact fragmented...

    Michelle Ufford (@SQLFool) has a widely-used script that is very flexible: http://sqlfool.com/2009/06/index-defrag-script-v30/

I'm already using a version of that very script 🙂 - plus we are re-indexing only the most recent DB partition, which is about one month's worth of data (approx 30GB).

My big gripe at the moment is the size of the database, circa 300GB, meaning any recovery won't be quick, and when I start looking at replicating the database for fault tolerance I'll need a big disk on any second box. I was thinking that I could easily offload the large historical portion of the current DB to a separate archive database, allowing me to have a slightly more nimble 'front end' database.

Do you guys think this is a viable option? We have no real burning desire to look at the historical data; it's only queried on occasion.

  • stoviber (2/22/2010)


    I'm already using a version of that very script 🙂

    Woot! Were you a version 3 beta tester?

As far as the archiving thing is concerned, yes, it is very much a viable option. The details depend on the details, however 😉

    Paul
