SQL Server memory performance

  • Hi, I've been playing around with creating big tables (>10M rows) to see how it affects overall computer performance.

    One thing I've noticed is that SQL Server keeps the selected data in memory even after the view has been closed.

    This decreases performance: RAM stays above 90% used. Is there any tweak or way to free the RAM faster?

    Here's the query:

    BEGIN TRAN;

    SET NOCOUNT ON;

    SELECT TOP 10000000
           ID     = IDENTITY(INT, 1, 1),
           igroup = 1 + CAST(ABS(CHECKSUM(NEWID()) % 4) AS INT)
    INTO test1
    FROM sys.all_columns ac1        -- contains 4000 rows even on a brand new system,
    CROSS JOIN sys.all_columns ac2; -- so a single cross join is more than enough for up to 16 million rows

    ALTER TABLE [dbo].[test1] ADD CONSTRAINT [PKst_] PRIMARY KEY CLUSTERED ([ID] ASC);
    GO

    CREATE NONCLUSTERED INDEX [ix] ON [dbo].[test1] ([igroup] ASC);

    SELECT * FROM test1;

    ROLLBACK;

    Thanks for your time.

  • memymasta (3/7/2012)

    This decreases performance: RAM stays above 90% used. Is there any tweak or way to free the RAM faster?

    This increases performance! The next time you query that table, the data pages will be read from memory (Buffer Cache) rather than from disk.

    SQL Server decides what to keep in the Buffer Cache and what to discard using complex LRU algorithms that ensure that data read frequently is kept in the cache.

    So, why would you like SQL Server to release memory? Is it starving other processes?
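
    If you're curious what it's keeping, here's a sketch (not from the original posts; assumes SQL Server 2005 or later) that shows how much of each database currently sits in the buffer cache, using the sys.dm_os_buffer_descriptors DMV:

    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024 AS cached_mb  -- data pages are 8 KB each
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC;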

    -- Gianluca Sartori

  • This increases performance!?

    It feels counterintuitive to see RAM hovering at 100% 24/7 on a production server and to think that it increases performance.

    I'd like to have at least 10% spare for other OS processes.

  • In fact, it should not reach 100%.

    You can set a maximum limit for SQL Server's memory using SSMS or sp_configure; see the sketch below.

    However, SQL Server can (and will) release memory back to the operating system when other processes need it.
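
    A minimal sketch of that setting via sp_configure (the 6144 MB cap here is just an illustration; size it for your own server):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 6144;
    RECONFIGURE;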

    -- Gianluca Sartori

  • Chapter 4: http://www.simple-talk.com/books/sql-books/troubleshooting-sql-server-a-guide-for-the-accidental-dba/

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • If a chunk of data was used, there is a chance it will be used again, so why flush it?

    Memory will be flushed on demand, since erasing memory is cheap and loading it is expensive.

    You can find more about the memory-loading and paging strategies used by the DBMS and the OS.

  • Found this at Microsoft, which supports the idea that more cached data is better.

    Microsoft

    By default, SQL Server dynamically grows and shrinks the size of its buffer pool (cache) depending on the physical memory load reported by the operating system. As long as enough memory is available to prevent paging (between 4 - 10 MB), the SQL Server buffer pool will continue to grow. As other processes on the same computer as SQL Server allocate memory, the SQL Server buffer manager will release memory as needed. SQL Server can free and acquire several megabytes of memory each second, allowing it to quickly adjust to memory allocation changes.

    For what it's worth, our server has 8 GB of RAM, and SQL Server uses 6-6.5 GB of it at all times.
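
    As a rough check of what the instance is actually holding, here's a sketch (assuming SQL Server 2008 or later, where this DMV exists) reading sys.dm_os_process_memory:

    SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
           memory_utilization_percentage
    FROM sys.dm_os_process_memory;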

  • It doesn't mean performance problems per se (in fact, I believe it's normal behaviour).

    You must check the Buffer Cache Hit Ratio and other counters; see the spot-check sketch below.

    Try to find some "memory pressure" articles.

    There are some good ones here on SSC, and you can also find nice sources on Jeff's and Gail's blogs.

    As Gail suggested, you can read some books to go deeper into this topic.
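
    A hedged way to spot-check those counters from T-SQL (counter names can vary slightly across versions) is the sys.dm_os_performance_counters DMV:

    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE '%Buffer Manager%'
      AND counter_name IN ('Page life expectancy', 'Free pages', 'Buffer cache hit ratio');
    -- note: the raw 'Buffer cache hit ratio' value is meaningless until it is
    -- divided by its base counter (see the sketch further down)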

  • Buffer cache hit ratio is a counter that should be ignored mostly.

    http://www.simple-talk.com/sql/database-administration/great-sql-server-debates-buffer-cache-hit-ratio/

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Great article! Thanks for the tip, Gail.

    In my defense, I never said the BCHR is the only counter to check. :-D

    In fact, no single counter will tell the whole story.

    Reading the article, I can tell the low PLE and the high variation in the free-pages graph show that a lot of pages are being flushed and (re)populated.

    To satisfy my curiosity: did you know that read-ahead behaviour "masked" the BCHR in the 7.0 and 2000 versions?

    I remember one time the BCHR helped me find a problem (but I also remember there was something to do with a disk I/O problem).

  • GilaMonster (3/7/2012)

    Buffer cache hit ratio is a counter that should be ignored mostly.

    http://www.simple-talk.com/sql/database-administration/great-sql-server-debates-buffer-cache-hit-ratio/

    I read Jonathan's article with interest. But I found it telling that nobody has answered the following question from Michal...

    "If read-aheads are fast enough to get the needed pages in the buffer (i.e. BCHR = 100), wouldn't query performance be the same?

    In other words, BCHR might not say whether you have enough memory, but it does tell you memory (or slow disks) is causing queries to suffer."

    Can you add any more to this, please, Gail?

  • In as few words as possible: don't trust it when it's high, start worrying when it's low.
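
    For reference, the ratio is not stored as a ready-made percentage; a sketch of computing it (the raw counter divided by its "base" counter):

    SELECT 100.0 * r.cntr_value / b.cntr_value AS bchr_pct
    FROM sys.dm_os_performance_counters r
    JOIN sys.dm_os_performance_counters b
      ON b.[object_name] = r.[object_name]
    WHERE r.counter_name = 'Buffer cache hit ratio'
      AND b.counter_name = 'Buffer cache hit ratio base';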

    -- Gianluca Sartori

  • This book is a great resource. I found it very useful!
