February 10, 2010 at 4:57 am
I am trying to get my head around memory usage with SQL Server - my questions stem from running multiple instances of SQL 2005 on one machine.
First let me give you the background -
We are trying to save money on hardware, licensing, and server room space by having all of our development and test environments on one server. However, for testing this we have a low-spec server...
It is running SQL 2005 64-bit Enterprise. It has 4GB total RAM, 500GB of disk space, and one quad-core processor.
We have two SQL instances on it. Now, we aren't exactly bombarding this server with work, yet if we have one big query running on one instance it does seem to slow everything down. Just one big query?!?! This is where I don't think I understand memory management that well. Why would SQL Server use all available memory for a standard query? Is it just the nature of the system? (Reading white papers as we speak.)
Previously, with two virtual instances on one host, we experienced one server locking up the other. With two SQL instances on one server... what should I look out for? I have set max memory on both instances to 1.5GB, but we have some small UPDATE queries timing out on instance 2 if a query is running on instance 1.
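For reference, this is roughly how I set the 1.5GB cap on each instance (going from memory, so treat it as a sketch; the 1536 is just 1.5GB expressed in MB):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- cap this instance's buffer pool at 1.5GB (value is in MB)
EXEC sp_configure 'max server memory (MB)', 1536;
RECONFIGURE;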
Any general help on this would be great.
Thanks,
Shark
February 10, 2010 at 10:24 am
If you have set 1.5GB as the max memory for each instance, that is not the maximum memory SQL Server will use. There is additional overhead associated with managing the memory and some other objects, but that should not be very large.
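If you want to see what each instance is actually committing versus targeting for the buffer pool, something along these lines works on 2005 (just a rough check - run it on each instance; for a named instance the object_name column will carry the instance name):

-- actual vs. target buffer pool size for this instance, in MB
SELECT counter_name, cntr_value / 1024 AS size_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');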
I would not say that you have memory issues necessarily. The symptoms you are seeing could be CPU or I/O related, and I would say those are the most likely culprits in this situation.
When you run an update query, it could simply be getting a parallel plan and maxing out all the CPUs on the machine, which would block the other instance, or you could be performing a table scan and saturating the I/O.
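If it does turn out to be a parallel plan eating all four cores, one option is to cap the degree of parallelism on each instance - just a sketch, the value of 2 is only an example and you should test whatever you pick:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- limit any single query on this instance to two of the four cores
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;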
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
February 10, 2010 at 1:31 pm
I'm going to go with Jeff on most of this. I'd almost be willing to bet money that you are experiencing I/O bottlenecks. How is your disk subsystem built? How many disks, what is the RAID configuration (if any), and do the instances share the disks or are they split between them?
Jeff is right, 1.5GB is not ALL the memory it will use, but even with those settings it is probably no worse than 1.75GB x 2.
Also, with a quad core it is possible to use processor affinity and assign cores to each instance. HOWEVER, I would not do this without thoroughly reviewing your disk configuration, because I am almost certain that is where you are getting hosed.
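For what it's worth, if you ever do go down that road (and again, only after the disks check out), affinity on 2005 is set with a bitmask. This is just an example split for a quad core, not a recommendation:

-- instance 1: bind to cores 0 and 1 (binary 0011 = 3)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 3;
RECONFIGURE;
-- instance 2 would then get cores 2 and 3 (binary 1100 = 12)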
Also, keep in mind your "big query" might not be using indexes effectively, so it is reading a lot more data than it needs. Look at its query plan to see what it is doing and whether that is what you are expecting.
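A quick way to see how much it is actually reading (just a sketch - substitute your own query) is to run it with the I/O and time statistics on and look at the logical/physical reads per table in the Messages tab:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run your big query here, then check the reads and CPU/elapsed time it reports
-- SELECT ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;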
Cheers.
CEWII
February 10, 2010 at 3:54 pm
Thanks guys.
I can confirm all indexes are where they should be and the query is OK. It runs in 50 minutes on another server; on this one it can take 5 hours!
The server is on a SAN (I am not sure of the disk configuration, as it is a test box from our network guys). However, I am spreading the system databases, data, and logs over three physical (make that virtual) disks, with both instances sharing this spread.
What are the most effective perfmon counters or Profiler checks to see what impact memory is having?
I can see in Task Manager that the machine's available memory really nosedives whenever this query kicks off.
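Would something like this be the right sort of thing to start with? (Just my guess at a starting point - page life expectancy and pending memory grants on each instance, alongside the Memory Manager and Buffer Manager counters in perfmon.)

-- rough check of buffer pool pressure and waiting memory grants on this instance
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Memory Grants Pending');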