Speed Issues

  • We have a database which is over a GB in size. The Transaction Log is currently over 14GB in size. This is an online system with users logging in from all over the country. At any given time there are over 40 users logged in to the system.

    Of late we have been experiencing system delays and slow connections.

    Can anyone please tell me the following:

    1. Does the size of the log file in any way affect the performance of the Database Server? What would be the optimal size of the log file?

    2. We have a maintenance plan in place which rebuilds the Indexes once a week, when the usage is low. Usually between 2250 and 2500 records are added to 8 (of 22) tables per week. Do we need to execute the maintenance plan daily? There are over 400K records in the Transaction table.

    Any suggestions or links to relevant articles will be appreciated.

    Thanks.

  • I would recommend checking the index fragmentation level on the tables that are used in the slow queries as often as possible (daily if you can), so that you can trend their fragmentation level over the course of the week. If you see a particular table or index that is fragmenting quickly, look at adjusting the fillfactor for the index (a sketch follows at the end of this post).

    The trans log seems very large for a system that is only adding a couple of thousand records a week. I would look at doing some transaction log dumps to disk to reclaim that space. Dumping the tran log every 30 minutes would definitely reduce that size and help you out. In fact, I would investigate why the trans log is that large.



    Shameless self promotion - read my blog http://sirsql.net
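
    A minimal sketch of the kind of daily check described above, assuming a placeholder table name and a 90 fillfactor (adjust both for your own schema):

    -- Report fragmentation for every index on the table
    DBCC SHOWCONTIG ('YourTable') WITH ALL_INDEXES

    -- If an index fragments quickly, rebuild all indexes on the table with a lower
    -- fillfactor; 90 leaves 10% free space per page for new rows
    DBCC DBREINDEX ('YourTable', '', 90)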

  • You could use DBCC SHOWCONTIG to check index fragmentation.

    Also check the Scripts page of this site; there you will find ready-made scripts that you can use or modify.

  • The size of the log file definitely affects performance. The log file should be around 20% of the database size. A 1GB database should not have a 14GB transaction log. Have you ever backed up the transaction log? Does the transaction log have free space, or is it continually growing?

  • I agree your trans log is way too big for the number of transactions. My 12GB DBs have 4GB trans logs with over 700,000 transactions per day. The size of the transaction log will certainly affect your performance. Are you truncating the log at checkpoints? What is your backup method like? Can you move to bulk-logged transactions instead of full logged?

    If I were you I would set truncate log on checkpoint to true, and do hourly (or every 3 hrs or something - depending on the data you can afford to lose) transaction log backups to keep things small. Then analyze your free space in the log, shrink it to maybe 1GB, and watch to see if it grows with your new maintenance process. You should then have fewer fragments to worry about and you can reduce your defrags to monthly.
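
    A rough sketch of those steps, with placeholder database/file names and paths. Note that with truncate log on checkpoint enabled, transaction log backups are no longer possible, so treat the two as alternatives:

    -- Option A: truncate the log at checkpoints (same effect as Simple recovery)
    EXEC sp_dboption 'YourDB', 'trunc. log on chkpt.', 'true'

    -- Option B: keep full logging and back the log up on a schedule instead
    BACKUP LOG YourDB TO DISK = 'D:\Backups\YourDB_log.bak'

    -- Once the log has been emptied, shrink the log file to around 1GB (target size in MB)
    DBCC SHRINKFILE ('YourDB_log', 1024)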

  • I don't know if the Transaction Log has ever been backed up before.

    Can someone tell me how to check the free space in the Transaction Log file?

    What I did was to back up the Transaction Log file and shrink it using DBCC. I did this on a test server which is an exact mirror copy of the Production boxes in terms of the data and log files.

    BACKUP LOG dbName WITH TRUNCATE_ONLY
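    -- (TRUNCATE_ONLY discards the inactive log records without writing them to a backup
    --  device, so point-in-time recovery is lost until the next full backup)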

    GO

    DBCC SHRINKFILE (file_id, TRUNCATEONLY)
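    -- (TRUNCATEONLY releases unused space at the end of the file back to the OS without moving pages)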

    GO

    This reduced the Transaction Log File from 14GB to 1MB.

    Another question that comes to my mind is: would it be a problem to execute the above commands during the day, when users are on the system?

    I also executed DBCC SHOWCONTIG yesterday. I am going to do the same today, for comparison, and try to figure out if the maintenance plan we have scheduled for the weekend needs to be scheduled 2-3 times a week.

    Another thing that I noticed with Performance Monitor is that the cache hit ratio on the Database file is 61% and on the Transaction Log is 49%. Is this something that needs to, or can, be improved on? What are optimal cache hit ratios for the Database as well as the Transaction Log file?

    Thanks.

  • Your cache hit ratio should be up around 99%. You are probably running a lot of ad-hoc queries against this box (at least I hope so) to get a hit ratio that low.

    To back up the log use BACKUP LOG <DBNAME> TO DISK = '<LOCATION>'. I would run this frequently, every 15-30 minutes, to prevent the log from growing to the massive proportions it's reaching now. By putting the log backups on disk you can then dump them to tape, which will enable you to recover to a point in time if you ever lose the database for some reason (a sketch of these commands follows at the end of this post).

    You can use DBCC SQLPERF(LOGSPACE) to find out the size and space free within the log files.



    Shameless self promotion - read my blog http://sirsql.net
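
    For reference, a sketch of the commands mentioned above, with placeholder names and paths:

    -- Size and percent of space used in every database's log file
    DBCC SQLPERF(LOGSPACE)

    -- Frequent log backup to disk (every 15-30 minutes)
    BACKUP LOG YourDB TO DISK = 'D:\Backups\YourDB_log.bak'

    -- Those backups are what make a point-in-time restore possible later, e.g.
    -- RESTORE LOG YourDB FROM DISK = 'D:\Backups\YourDB_log.bak'
    --     WITH STOPAT = '<point in time>', RECOVERY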

  • Although all the suggestions above have their merits, I doubt they are actually the problem.

    When you say you have people connecting from all over the country, what do you mean? Are they making direct connections over the internet, or are they going over the internet and connecting to a LAN? The biggest reason for intermittent connection problems is usually network connectivity, so if you can reply with your basic configuration that would be helpful.

    Definitely follow the other posters' advice and set up a maintenance plan to back up your transaction log if you want to have the ability to do full recovery and point-in-time recovery from your SQL backups. But if you are only really interested in having a full backup to restore from, then set your database properties to "Simple" recovery mode. (Simple if you are using SQL2k. Set "Truncate on Checkpoint" if you are on SQL7.) Then attempt to shrink the Transaction log for the database to no more than 500MB...depending on your recovery model (a sketch follows at the end of this post).

    Hope that helps.
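
    A sketch of the Simple-recovery route, with placeholder names (ALTER DATABASE ... SET RECOVERY is the SQL2k form, sp_dboption the SQL7 form):

    -- SQL2k: switch the database to Simple recovery
    ALTER DATABASE YourDB SET RECOVERY SIMPLE

    -- SQL7: the equivalent database option
    EXEC sp_dboption 'YourDB', 'trunc. log on chkpt.', 'true'

    -- Then shrink the log file toward the 500MB suggested above (target size in MB)
    DBCC SHRINKFILE ('YourDB_log', 500)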

  • I would recommend you set up a database maintenance plan. Enterprise Manager has a nice wizard for doing this. Open Enterprise Manager, select your server, then go to Tools and Database Maintenance Planner. Here you can schedule index reorgs, integrity checks, and DB and log file backups. I usually do index reorgs and integrity checks once a week (Sunday nights). Do full backups nightly and transaction log backups every 3 hrs.

    The easiest way to check free space is to open EM, open the DB, go to View and choose Taskpad; at the bottom you'll see a nice graph.

    Yes you can do log backups during the day, with users connected. Once you start doing this you shouldn't have to shrink the log. Do an initial shrink, then this maintenance should keep it at a good size.

    Your cache hit ratio is too low; it should typically be above 85% - ADD RAM, but only after you take the above steps, as they may remedy the situation.
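
    If you want to watch the cache hit ratio from T-SQL rather than Performance Monitor, here is a sketch against the SQL2k sysperfinfo view (the ratio counter has to be divided by its base counter; counter names assumed here, so verify them on your build):

    -- Buffer cache hit ratio = ratio counter / base counter * 100
    SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS BufferCacheHitRatio
    FROM master.dbo.sysperfinfo r
    JOIN master.dbo.sysperfinfo b ON b.object_name = r.object_name
    WHERE r.counter_name = 'Buffer cache hit ratio'
      AND b.counter_name = 'Buffer cache hit ratio base'
      AND r.object_name LIKE '%Buffer Manager%'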

  • What do I mean by "users connecting from all over the country"? Ours is an OLTP system, with users logging in to the Application. The configuration is Clustered DB servers (2 nodes) and Clustered Web Servers (2 nodes). We have 2GB of RAM on the DB servers. The SQL Server process uses over 1GB. Would a large Transaction Log file be responsible for this, or could it be that's just the way SQL Server works (using memory and freeing it up only when the OS requires it)?

    We have 2 firewalls set up: one at the internet connection and another between the web and DB servers.

    Our recovery model currently is set to FULL. We do have jobs which take backups every night. Would it be a better option to change that to Simple or Bulk?

    Using the Taskpad Option, I found that 901MB is free and 13GB is used on the Transaction Log File. What does this mean?

    Also, I just found out that at the start of February of this year the Transaction Log was 2+GB, which means that it has added ~12GB in 2 months. Is this the norm, or does this mean that something somewhere needs to be fixed?

    Thanks

  • You definitely want a FULL recovery model. This allows you to do FULL, DIFFERENTIAL, and LOG backups. For heavily updated databases I do a weekly full backup, differential backups about every 12 hours and hourly transaction log backups. This way I can restore to within an hour if there is a crash. It also means the most I have to do when restoring is 1 full restore, 1 differential restore and 11 TX log restores. If I were doing TX log backups every 15-30 minutes as suggested above I would also increase the frequency of differential backups. BOL does a good job of explaining the different backups.

    The first TX log backup you run will probably take a good 30 minutes because of the log size. After that it should take less than 2 minutes depending on the frequency.

    There is an option on DBs to Autoshrink, which will keep the log and DB files reduced in size as long as you're backing up regularly; opinions vary on using this option. There is a performance hit with it, but I use it and have not noticed a big hit on a system that has one table with over 20 million rows since 6/1/2002 (about 31000 inserts a day).
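
    A sketch of that schedule as plain BACKUP commands, with placeholder names and paths (in practice these would be scheduled jobs or a maintenance plan):

    -- Weekly full backup
    BACKUP DATABASE YourDB TO DISK = 'D:\Backups\YourDB_full.bak' WITH INIT

    -- Differential backup roughly every 12 hours
    BACKUP DATABASE YourDB TO DISK = 'D:\Backups\YourDB_diff.bak' WITH DIFFERENTIAL

    -- Transaction log backup every hour
    BACKUP LOG YourDB TO DISK = 'D:\Backups\YourDB_log.bak'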

  • I do agree that 14GB of log file space is big, but I don't think it should really hit performance. The only time it can hit is depending on what your growth factor is for your logs. I would prefer to have the growth increment at 500MB-1GB depending on the usage. Regarding the indexes, as all suggested, check the fragmentation more often.
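
    To set a fixed growth increment along those lines (logical log file name and size are placeholders):

    -- Grow the log in fixed 500MB steps instead of a percentage
    ALTER DATABASE YourDB
    MODIFY FILE (NAME = 'YourDB_log', FILEGROWTH = 500MB)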

  • I am trying to figure out whether dbreindex or indexdefrag is required for my database. I took one table as an example. I first executed dbcc showcontig and it gave me the following results:

    Scan Density [Best Count:Actual Count].......: 98.52% [667:677]

    Extent Scan Fragmentation ...................: 29.99%

    I read somewhere that extent scan fragmentation > 0% means external fragmentation. Is 29.99% on the higher side? Would it affect performance to a great extent?

    I then executed dbcc dbreindex and got the following results for showcontig.

    Scan Density [Best Count:Actual Count].......: 99.70% [667:669]

    Extent Scan Fragmentation ...................: 18.24%

    Better than what it was but still not close to 0%.

    I then executed dbcc dbreindex a second time and got the following results for showcontig.

    Scan Density [Best Count:Actual Count].......: 99.70% [667:669]

    Extent Scan Fragmentation ...................: 8.07%

    My question is: do I need to execute DBREINDEX more than once to achieve close to 0% Extent Scan Fragmentation? Or is there another way to maximize the effect of DBREINDEX? (See the comparison sketch at the end of this post.)

    Total Pages was 5331 and total records in this table was 112684.

    Also when I do sp_spaceused (tablename) on the same table I get:

    records: 112684      

    reserved: 118672 KB 

    data: 52904 KB 

    index size: 11920 KB 

    unused: 53848 KB

    Is the unused space a high percentage, or is this normal? I did the same on some of the other tables and they had over 90MB of unused space.

    Thanks.
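
    For reference, the two commands being compared, with placeholder names. DBCC DBREINDEX rebuilds the indexes from scratch and can apply a new fillfactor; DBCC INDEXDEFRAG defragments a single index in place and can run while users are online. Note also that Extent Scan Fragmentation can stay well above 0% when a database spans multiple data files, so Scan Density and Logical Fragmentation are usually the better numbers to trend.

    -- Rebuild all indexes on the table, applying a 90% fillfactor
    DBCC DBREINDEX ('YourTable', '', 90)

    -- Online defragmentation of one index: database, table, index
    DBCC INDEXDEFRAG ('YourDB', 'YourTable', 'IX_YourTable_SomeColumn')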

  • Note

    You must make a Full backup daily (at least) and a Transaction Log backup every hour (at most), so the data and log files will not grow that much.


    Alamir Mohamed
    Alamir_mohamed@yahoo.com
