March 28, 2006 at 7:36 am
Hi,
I'm using a "normal" P4 (2.60 GHz) with 1GB ram and a 222 GB hard drive as my SQL server.
Until now I didn't had any issue with the amount of data to store neither the performance of the database queries, so the machine worked fine.
But in the near future I'll have to store a huge amount of data. I'm planning to receive 500,000,000 lines to import to my server.
As a test I built a table with 135,000,000 lines. The .mdf file is 24 GB.
Using indexes I get fairly results when running queries that limit the number of rows to fetch data. The problem occurs when I have to cross all the information in the table to get results. It takes a lot of time to run!
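Just to illustrate what I mean (the table and column names below are only placeholders, not my real schema), this is the kind of query that has to read the whole table, with the statistics options switched on so I can see the cost:

-- Placeholder names only: dbo.BigTable stands for my 135,000,000-row test table.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- No WHERE clause that an index can use, so SQL Server scans every row.
SELECT CustomerID, SUM(Amount) AS TotalAmount
FROM dbo.BigTable
GROUP BY CustomerID;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;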
So the issue is:
1) Is my machine capable of working with tables four or five times larger than the one I used as an example?
2) If not, what should I consider purchasing to get good performance?
Thanks
Regards
March 28, 2006 at 9:04 am
Hi,
I normally ask for 4 GB of RAM for SQL Server. I don't usually enable AWE (though it is a good idea to enable it if you give SQL Server more than 2 GB), so only 2 GB are allocated to SQL Server, while the other 2 GB are for the OS, applications, etc. When the server works with a big amount of data it has to return it to the front end anyway.
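If you do give SQL Server more than 2 GB and decide to enable AWE, the settings look roughly like this (the memory value is only an example; the service account also needs the 'Lock Pages in Memory' right and the service must be restarted):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;                -- takes effect after a service restart
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;  -- example value: leave the rest for the OS
RECONFIGURE;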
So just ask your support staff to add memory. Also make sure you have a BIG tempdb on a BIG drive. Put the page file on its own BIG drive, put the big table and frequently used indexes on separate physical drives, and put tables that are frequently joined on separate physical drives.
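Something like this is what I mean by putting tempdb and the big table on their own drives (all drive letters, sizes, database and object names here are just placeholders for your environment):

-- Move/grow tempdb onto a big, fast drive (takes effect after a restart).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf', SIZE = 10240MB);
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLLogs\templog.ldf', SIZE = 2048MB);

-- Give the big table its own filegroup on a separate physical drive.
ALTER DATABASE MyBigDB ADD FILEGROUP FG_BigTable;
ALTER DATABASE MyBigDB ADD FILE
    (NAME = BigTable_Data, FILENAME = 'G:\SQLData\BigTable_Data.ndf', SIZE = 30720MB)
    TO FILEGROUP FG_BigTable;
-- Building the clustered index on the new filegroup moves the table's data there
-- (if the table already has a clustered index, rebuild it with DROP_EXISTING instead).
CREATE CLUSTERED INDEX IX_BigTable_ID ON dbo.BigTable (ID) ON FG_BigTable;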
But first of all I would watch common performance counters while running your query to see where the bottleneck is: processor queue length, disk queue length, available memory or whatever. Then you will be able to see what you have to change on your server.
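If you are on SQL Server 2005, a complementary view from inside SQL Server (in addition to the Performance Monitor counters above) is the wait statistics DMV; a quick sketch:

-- Shows which wait types the server has spent the most time on since startup.
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;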
Regards, Yelena Varsha
March 28, 2006 at 9:09 am
When you say 'normal' P4, I assume you mean you are using a workstation-class machine with a single processor, 1 GB of RAM and a single 222 GB hard disk. Please correct me if I am wrong. This will work OK for small databases, or even a test server, but if you are running a production application database on it, I would suggest getting a server-class machine to replace it.
So, to answer your questions:
1) Yes, your machine is capable of working with large tables, as long as long waits are acceptable.
2) To host a database of the size you've mentioned and have 'good' performance, I would consider purchasing more processors, memory, and a disk array... in other words, purchase a server-class machine with dual processors, at least 4 GB of RAM, and an external disk array. There are so many posts out here on sizing servers, so I will keep this short and to the point.
In my opinion, you are severely undersized to host a database of that size on your existing server. You need more processor and memory, but if you only buy more processor and memory, you will most likely still have contention issues with your disk. Although your hard disk may be physically large enough to hold the data, you will run into contention when joining tables, because the tables physically reside on separate parts of the disk and, with only one disk, SQL Server can only access one area of the disk at any given time. Not to mention, you have no redundancy in case of a disk failure (which may only be a matter of time, given how busy your disk will become).