June 5, 2009 at 9:53 am
Hi,
I have a Windows 2003 Server (32-bit), running SQL 2000. It has 2GB Ram.
I've noticed one of my databases is running extremely slowly and timeouts are occurring. SQL Server is set to use all of the memory and is currently maxing out at 1.6GB in Task Manager. I also noticed I only have 50MB of memory left.
I've read somewhere that, given the above scenario, it is best to restrict SQL Server to 1.5GB of memory so that Windows has some room to operate and less virtual memory is used. This sounds reasonable, but I wanted to check whether anyone else has any guidance on this.
Many thanks
Rob
June 8, 2009 at 1:10 am
I don't think it would be helpful to customize memory settings on a server with 2GB of RAM. Windows Server 2003 has a good mechanism for dealing with memory allocation. Rather than changing memory settings, you may want to performance-tune your database objects.
DBDigger Microsoft Data Platform Consultancy.
June 9, 2009 at 2:49 am
The first thing you should do is to try to find the cause of the slow performance.
Use Perfmon to look at items such as:
* Page file write operations (page file read operations are normally not a problem)
* Disk I/O, especially read and write queuing
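If you want a quick look at some of these counters without leaving Query Analyzer, SQL Server 2000 exposes its own Perfmon counters through master.dbo.sysperfinfo. A minimal sketch (the counter names shown are the usual SQL Server 2000 ones, so verify them on your instance):

-- Read SQL Server's own performance counters (SQL Server 2000).
-- These are the same values Perfmon shows under the SQLServer: objects.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio',    -- ratio counters need their matching 'base' counter to be meaningful
                       'Page life expectancy',      -- seconds a page survives in the buffer pool
                       'Total Server Memory (KB)')  -- memory SQL Server is currently using

This won't show page file or disk queue activity (those are operating system counters, so use Perfmon for them), but it is a quick way to see whether SQL Server itself is under memory pressure.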
Also identify if there is anything running on your server that does not need to be there.
Also look at your SQL Server setup. Make sure you have Auto Close and Auto Shrink turned OFF - these can be very harmful to performance.
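If you prefer T-SQL to Enterprise Manager for this, something like the following should work on SQL Server 2000 - MyDatabase is a placeholder for your database name:

-- Check the current settings (1 = on, 0 = off)
SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoClose') AS AutoClose,
       DATABASEPROPERTYEX('MyDatabase', 'IsAutoShrink') AS AutoShrink

-- Turn both options off
ALTER DATABASE MyDatabase SET AUTO_CLOSE OFF
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF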
Next, look at when you last rebuilt the indexes on your critical tables. Use Google to research index rebuild issues - there is a lot of good advice available.
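Before rebuilding, it is worth measuring how fragmented your indexes actually are. On SQL Server 2000 the tool for this is DBCC SHOWCONTIG; a sketch, with the table name as a placeholder:

-- Report fragmentation for a table's indexes (SQL Server 2000)
DBCC SHOWCONTIG ('dbo.MyBigTable') WITH TABLERESULTS
-- Look at 'Logical Scan Fragmentation'; values well above 10-20%
-- usually mean the index is worth rebuilding.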
Also look for the Microsoft paper Troubleshooting Performance Problems - this gives a good methodology to help you find and fix your problem.
After you have done all this, you may conclude that 2GB is not enough memory to get the performance you need, but you will probably find a lot of other things that need fixing while you are working on this.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
June 9, 2009 at 5:22 am
Thanks for the info.
I agree, analysing with Perfmon will probably show the issues more clearly. I know the database schema is inefficient - the main table has 142 columns (!) and is over 500MB in size. Unfortunately, I'm not in a position to change this easily, because a legacy application that I can't modify depends on it.
The server is only used for SQL Server, and nothing else is running on it.
I've just switched off Auto Shrink, so hopefully this will help. I back up and shrink both the database and transaction logs daily, as well as rebuilding the indexes using a maintenance plan.
Given the above restrictions, I still need to know whether it's worth adjusting SQL Server's memory for performance. Unfortunately, I can't increase the RAM above 2GB.
Thanks
Rob
June 9, 2009 at 5:31 am
Your problem may be related to the repeated shrinking and growing of your database and log files. You need to stop doing this. Both database files and log files should be sized to cope with your normal workload. If you plan to shrink a file and expect it to grow again within 3 months, then don't shrink it.
Your database files could be in thousands of fragments, which will definitely harm your performance.
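If you decide to pre-size the files rather than letting them repeatedly grow, you can set an explicit size with ALTER DATABASE. A sketch - the database name, logical file name, and size are placeholders, and you can find the logical names with sp_helpfile:

-- List the logical file names for the current database
EXEC sp_helpfile

-- Grow the data file once to a size that covers normal workload plus headroom
-- (MODIFY FILE can only set a size larger than the current one)
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 2000MB)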
Try to find some quiet time and run an NTFS defrag on the volumes holding your database files. The normal tool that comes with W2003 is fine for this job but will process the whole volume; alternatively, you can download a tool from Sysinternals that allows you to defrag one file at a time.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
June 9, 2009 at 6:46 am
Both the data and transaction log files grow continuously, and truncating the transaction log certainly improves performance. I understand this can fragment the files, but the database seems to get slower the larger they are. I could try backing up the DB, removing it from the disk, and then restoring it, if you think that would help?
I'd still like to know whether lowering the memory available to SQL Server will help, given the scenario initially outlined, and independently of any specific database issues.
Thanks
Rob
June 9, 2009 at 7:17 am
There are two ways to deal with transaction log growth that do not involve manually shrinking the file:
1) Do more frequent Transaction log backups.
If your recovery SLA requires you to support recovery to the point of failure, you will need to have your database in Full recovery mode and take transaction log backups. Try running transaction log backups once per hour, as this limits how large the log file can grow.
2) Put your database into Simple recovery mode.
If your recovery SLA only requires you to recover to specific times (e.g. 18:00 on the previous business day), you can do database backups to safeguard the SLA and keep your database in Simple recovery mode. In this situation, SQL Server automatically discards committed data from the transaction log, and you do not need to do any further log maintenance.
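In T-SQL the two options look something like this - the database name and backup path are placeholders:

-- Option 1: Full recovery, with frequent log backups (schedule hourly via a job)
ALTER DATABASE MyDatabase SET RECOVERY FULL
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'

-- Option 2: Simple recovery; SQL Server reuses committed log space by itself
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE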
If you do index rebuilds of the entire database, you will need a bit over 50% free space in your database to provide the work space needed for the index rebuilds. If you rebuild indexes for each table individually, you will only need enough free space to keep a copy of your largest table. This can help keep your overall database size under control.
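On SQL Server 2000, per-table rebuilds can be done with DBCC DBREINDEX, one table at a time; the table name below is a placeholder:

-- Rebuild all indexes on a single table
-- ('' = all indexes on the table, 0 = keep the existing fill factor)
DBCC DBREINDEX ('dbo.MyBigTable', '', 0)
-- Repeat per table: this only needs work space for the table being rebuilt,
-- not 50% free space across the whole database.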
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
June 9, 2009 at 7:21 am
Windows 2003 is fairly good at managing memory, but it is not as sophisticated as Windows 2008. Many people find that setting a maximum memory size for SQL Server can help performance. Try reducing your maximum memory size by 100MB at a time until you find no further benefit. On your machine, if performance has not improved by the time you get down to 1GB, then limiting memory probably won't help you.
Leave at least 24 hours between each reduction, so you can get enough feedback about performance changes. There is no need to restart SQL Server when you adjust memory in this way.
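To apply this in T-SQL, set 'max server memory' through sp_configure. It is an advanced option, so 'show advanced options' must be enabled first; starting at the 1.5GB mentioned earlier in the thread:

-- Make the advanced options visible
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Cap SQL Server's memory at 1536MB; this takes effect without a restart
EXEC sp_configure 'max server memory (MB)', 1536
RECONFIGURE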
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
June 9, 2009 at 8:37 am
As Ed stated, shrinking the database files is never a recommended approach. Do you rebuild indexes after you've shrunk the data file (.mdf)? If not, the shrink will definitely ruin your indexes and consequently hurt performance. Are your data and log files on the same set of drives? If so, perhaps you could move one of them to a separate disk array?
-- You can't be late until you show up.