
Should a DBA be involved in the physical set up of a Database Server? Part 3


This series of blogs gives a general idea of what to look at when setting up a new database server. The first part talked about what can happen when you have a badly configured DB server, while the second part listed the steps that can be carried out to get much better performance out of your SQL Server.

Once we had a good setup for the server (hardware configuration), we started looking at how SQL Server was performing and whether there was anything more that could be done to tweak it.

There are some basic things that can be done when installing and configuring SQL Server. As stated in the second part of this series, we had 128 GB of RAM to play around with. We followed the general rule of thumb of giving about 90% of the total RAM to SQL Server and the rest to the OS, so we dedicated 116 GB to SQL Server using sp_configure. When we did this, the first thing the admins told us was that we were wasting RAM because the OS does not need 12 GB. We had to explain that the 116 GB is just for the buffer pool, and that there are other SQL Server processes that will use the memory left to the OS, such as multi-page allocations and linked servers, to name a couple.
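For anyone wanting to make the same change, setting the memory cap with sp_configure looks roughly like the sketch below. The 116 GB figure is the one we used, expressed in MB as the option expects; adjust it for your own server.

-- Enable advanced options so 'max server memory (MB)' is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool at 116 GB (116 * 1024 = 118784 MB).
EXEC sp_configure 'max server memory (MB)', 118784;
RECONFIGURE;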

We decided to do a load test in a controlled environment. We restored our production database and set up an environment where we could run a controlled load test. For load testing we used LoadRunner, since it was already being used by our QA department.
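If you want to build a similar test environment, restoring a copy of the production backup is the starting point. The sketch below shows the general shape of that restore; the database name, backup path, and file locations are placeholders, not our actual values.

-- Restore a copy of the production backup into the load-test environment.
-- Names and paths below are placeholders.
RESTORE DATABASE OurAppDB_LoadTest
FROM DISK = N'\\backupshare\OurAppDB_full.bak'
WITH MOVE N'OurAppDB_Data' TO N'E:\SQLData\OurAppDB_LoadTest.mdf',
     MOVE N'OurAppDB_Log'  TO N'F:\SQLLogs\OurAppDB_LoadTest.ldf',
     STATS = 10;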

Now a confession: we have a badly designed database inherited from third-party software that we were using. We had already started replacing some of the application with our own, but we still had to use some of the third-party tools. One quarter of the tables did not have a primary key defined, and there were no foreign keys defined at all in the database. When we ran the Upgrade Advisor we found out that these third-party tools would work only in SQL Server 2000 compatibility mode, since we did not have the code base to rebuild the application and change the T-SQL code.
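If you are curious how many of your own tables are in the same shape, a quick check against the catalog views will list user tables with no primary key. This is just a diagnostic sketch, not part of our upgrade scripts.

-- List user tables that have no primary key defined.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (
    SELECT 1
    FROM sys.key_constraints AS kc
    WHERE kc.parent_object_id = t.object_id
      AND kc.type = 'PK'
);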

We did hours of load testing that pounded the database server. One of the most significant things we saw was contention on TempDB. At that point we decided to have multiple files for TempDB. How many files to create was the point we were pondering, so we read through blogs and articles on the subject. Almost every one of them agreed to disagree on the number of files to create, and some stated that creating multiple files might not help at all depending on the scenario. We decided on having one TempDB data file for every two cores, and created twelve files of 10 GB each for TempDB.
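The file changes themselves are plain ALTER DATABASE statements. Below is a trimmed-down sketch of the approach: resize the existing data file and add the extra ones. The drive letter and file names are placeholders; the 10 GB size matches what we used.

-- Resize the default tempdb data file to match the others.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 10GB);

-- Add additional equally sized data files (placeholder paths).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 10GB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 10GB);

-- ...and so on, up to the twelfth file. A restart is needed for the
-- tempdev resize to take effect if the file is being shrunk.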

We also saw parallelism at work when executing four particular stored procedures. Instead of setting MAXDOP to 1 for the whole server, we added a MAXDOP hint to just those four stored procedures. I know this is not generally recommended, but we did not see any reason why we should not do it this way.
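The hint itself is just an OPTION clause on the offending statement inside each procedure. The example below is made up for illustration (the procedure and table names are not from our system); it shows the general pattern of limiting a single query rather than the whole server.

-- Hypothetical example: restrict only this query to a single scheduler,
-- leaving the server-wide MAXDOP setting untouched.
CREATE PROCEDURE dbo.usp_GetOrderSummary
AS
BEGIN
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM dbo.Orders
    GROUP BY CustomerID
    OPTION (MAXDOP 1);
END;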

Once these changes were made, we did another round of load testing. The results from these tests were quite positive as far as we were concerned. During the load test we were able to handle 8,000 batch requests per second, CPU usage stayed around twenty percent, and we did not see any significant disk queue length. The test showed that we were ready to move this server into production.
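If you want to watch the same batch-request counter during your own tests without Perfmon, it is exposed through a DMV as a cumulative value, so you sample it twice and take the difference. A rough sketch:

-- Sample the cumulative Batch Requests/sec counter twice, ten seconds
-- apart, and compute the average per-second rate over that interval.
DECLARE @start BIGINT, @end BIGINT;

SELECT @start = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';

WAITFOR DELAY '00:00:10';

SELECT @end = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';

SELECT (@end - @start) / 10 AS batch_requests_per_sec;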

I was hoping that with this post I would be able to complete the series, but I still have not touched on the upgrade method we used. Maybe I will write a part 4 of this series so I can cover that as well.
