March 5, 2008 at 7:47 am
When I look at windows task manager, the SQL Server process is using about 330 megabytes of RAM.
I would like SQL Server to use approximately 2gb instead of 330 mb. How is this accomplished?
March 5, 2008 at 1:34 pm
byron.baker (3/5/2008)
When I look at windows task manager, the SQL Server process is using about 330 megabytes of RAM. I would like SQL Server to use approximately 2gb instead of 330 mb. How is this accomplished?
First off, Task Manager is almost never accurate for SQL Server memory usage. Use the perfmon counter SQLServer:Memory Manager -> Total Server Memory instead.
Regardless, SQL Server will not consume the memory until it is actually needed.
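If perfmon isn't handy, SQL Server 2005 exposes the same memory-manager counters through a DMV. A quick sketch (counter values are reported in KB, so this divides down to MB):

```sql
-- Actual vs. target memory as seen by SQL Server itself,
-- from the same counters perfmon reads.
SELECT counter_name,
       cntr_value / 1024 AS memory_mb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');
```

If Total is well below Target, the instance simply hasn't needed the memory yet.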
Now you say that the SQL instance has 14gb allocated? How so? The Max Server Memory setting? That is only a ceiling on how much memory the buffer pool may consume. It does not mean SQL Server starts up with that memory already allocated (a common assumption carried over from SQL Server 2000).
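To answer the original question directly, the cap is set with sp_configure. A minimal sketch, capping the instance at the 2 GB the poster asked about (the value is in MB):

```sql
-- 'show advanced options' must be on before this option is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';        -- view the current setting
EXEC sp_configure 'max server memory (MB)', 2048;  -- cap the buffer pool at 2 GB
RECONFIGURE;
```

Again, this only sets the ceiling; SQL Server still grows into it on demand.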
There are many more areas to discuss on this topic, but help us understand what exactly you're trying to accomplish, along with any relevant details (dedicated SQL box? editions of SQL Server and Windows? etc.)
Thanks
March 5, 2008 at 6:03 pm
This is a SQL Server 2005 Standard Edition instance with the latest patches, running on
a 32-bit Windows XP server attached to an EMC SAN. The host has 2 dual-core processors and 16gb of RAM. This is a dedicated server.
I did not build this server, I inherited it. The configuration indicates that server memory can grow up to 14gb.
The issue is that during busy hours of the day, CPU utilization climbs into the 80s and at times hits 100%. This is a high-volume OLTP system. When SQL Server pegs the CPUs at 100% for a sustained period, it starts denying login attempts and eventually the web servers time out.
There is some ugly SQL and I am fixing it, but I still think we are hitting something else. Unfortunately I am an Oracle/Teradata DBA with minimal SQL Server expertise.
The server should not crash, just slow down. I need to determine why it is crashing.
I cannot turn on tracing or monitoring as this makes it crash more often.
The SQL server error log just lists failed logins when this event occurs.
Any ideas or should I convert the app to Oracle 🙂
March 6, 2008 at 5:58 pm
Couple quick things to check ...
- boot.ini
-- Is the /pae switch enabled?
- sp_configure
-- Post your output here
- Lock pages in memory (are you using it?)
- SQL Error logs
-- Might want to attach a couple in a reply, look for any errors or anything out of the normal
- System event viewer
-- Anything worth noting in system/application during these times?
- Perfmon
-- Start capturing some of the more important statistics
- PSSDiag
-- Do you have an MSDN partnership? You can have Microsoft configure a small executable that captures more or less your entire system state over a specified duration, and have them analyze it. Basically all they do is run some DMV queries and tracing, then run the output through SQL Nexus to produce a fancy report. You can recreate this quite easily by setting up an appropriate trace.
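Tying the boot.ini and lock-pages items together: on a 32-bit build, the buffer pool can only reach that 14gb if /PAE is set in boot.ini, AWE is enabled in SQL Server, and the service account holds the "Lock pages in memory" privilege. A sketch of the SQL side, assuming the Windows side is already in place:

```sql
-- Enable AWE so the 32-bit buffer pool can address memory above 4 GB.
-- Takes effect only after a service restart, and only does anything
-- if /PAE is on and the service account can lock pages in memory.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
```

If AWE is off on this box, that alone would explain SQL Server sitting at a few hundred MB despite 16gb of RAM.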
You may have some configuration issues, but from the sounds of it, you have some seriously poor-performing code. My initial route would be to use the DMVs to find your worst-performing queries and determine where you are missing indexes. The following queries are based on what is in the plan cache, meaning the output only covers activity since your last SQL restart.
Statements with highest AVG CPU time:
SELECT TOP 50
    qs.total_worker_time / qs.execution_count AS [Avg CPU Time],
    SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
        ((CASE WHEN qs.statement_end_offset = -1
               THEN DATALENGTH(qt.text)
               ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text,
    qt.dbid,
    dbname = DB_NAME(qt.dbid),
    qt.objectid
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY [Avg CPU Time] DESC
Statements with highest IO:
SELECT TOP 50
    (qs.total_logical_reads + qs.total_logical_writes) / qs.execution_count AS [Avg IO],
    SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
        ((CASE WHEN qs.statement_end_offset = -1
               THEN DATALENGTH(qt.text)
               ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text,
    qt.dbid,
    dbname = DB_NAME(qt.dbid),
    qt.objectid,
    qs.sql_handle,
    qs.plan_handle
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY [Avg IO] DESC
Missing indexes:
DECLARE @runtime datetime
SET @runtime = GETDATE()
PRINT '-- Missing Indexes --'
SELECT CONVERT (varchar, @runtime, 126) AS runtime,
mig.index_group_handle, mid.index_handle,
CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) AS improvement_measure,
'CREATE INDEX IX_MI_' + object_name(mid.object_id, mid.database_id) + '_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)
+ ' ON ' + mid.statement
+ ' (' + ISNULL (mid.equality_columns,'')
+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END + ISNULL (mid.inequality_columns, '')
+ ')'
+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
PRINT ''
GO