"There is insufficient system memory in resource pool 'internal' to run this query"

  • Hi,

We have a certain customer running SQL 2008 64-bit Standard on Windows 2008 Standard.

The server is using about 3.8GB of its 4GB of physical memory. We can run queries in Query Analyser, but when our web application tries to run even a simple SQL query against the database we get

    "There is insufficient system memory in resource pool 'internal' to run this query"

Is this purely down to the fact that the server is running out of physical memory? Will SQL not use the page file, or will it just throw this error when it runs out of physical memory? Also, I've never come across this problem before, so is it a specific 2008 issue or a 64-bit issue?

    Any help would be much appreciated.

    Many Thanks,

    Andy

  • From the error it sounds like the resource governor's enabled. What's the configuration for the internal resource pool?
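
    Something like this will show whether it's on and how the pools are set up (a quick sketch against the SQL 2008 catalog views, so double-check the columns on your build):

    -- Is Resource Governor enabled?
    SELECT is_enabled FROM sys.resource_governor_configuration;

    -- Configured memory limits per pool (includes 'internal' and 'default')
    SELECT name, min_memory_percent, max_memory_percent
    FROM sys.resource_governor_resource_pools;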

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi Andy,

We've been encountering the same issue in similar circumstances, albeit we're only using Windows 2003 Standard Server. We're migrating our SQL Server 2000 systems, which have been running 24/7 quite happily since 2001, up to SQL 2008, and we're finding on one of the servers that memory is quite a problem. I don't have an answer yet, but I'll share some of our thoughts.

    The first time we saw this was when our web servers started throwing ugly error messages when they called any stored procedures that involved further calls to extended stored procedures. The SQL error logs had corresponding entries with the exact text you describe regarding the internal resource pools.

    At this point it's worth mentioning that the Resource Governor is NOT enabled.

Further examination of the SQL logs showed that the memory issue was a clear deterioration over a number of days; prior to the extended SPs failing we could see there were issues with our regular transaction log backups, where the following message was written:

    Memory constraints resulted reduced backup/restore buffer sizes. Proceding with 7 buffers of size 64KB.

Going back further in the logs we get to a point where the transaction log backup was working fine, but there were issues with Backup Exec which resulted in the output of DBCC MEMORYSTATUS being dumped to the error log - I'm not familiar with that one, so I need to find out more.
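
    (For reference, the same breakdown can be produced on demand with:)

    -- Dumps the full memory status report as a series of result sets
    DBCC MEMORYSTATUS;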

We're running on HP servers that include their Insight Manager software. This creates its own SQL instance to work with, and was never a problem under SQL 2000, but for some reason it now grabs the best part of 500MB for its database. Together with its resident processes it's using in excess of 750MB of memory, which for something we don't really need is a bit wasteful. So we've just disabled it and will see how it goes over the next few days.

If we do still encounter the problem I might play with some of the other DBCC commands that could help - FREESYSTEMCACHE, FREESESSIONCACHE and FREEPROCCACHE. That might resolve the problem without restarting SQL Server.
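
    (Roughly what I'd run, during a quiet period since FREEPROCCACHE flushes the plan cache and forces recompiles:)

    -- Clear the various memory caches without restarting the instance
    DBCC FREESYSTEMCACHE ('ALL');
    DBCC FREESESSIONCACHE;
    DBCC FREEPROCCACHE;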

    Overall, I'm with you Andy; why doesn't SQL start paging when physical RAM gets short? OK, performance will go downhill, but at least it keeps running.

    Any further hints will be much appreciated.

    Jon

  • Hi Gail,

Thanks for your response; unfortunately the Resource Governor isn't enabled on this server.

    Many Thanks,

    Andy

  • Hi Jon,

Thanks for your thoughts. I've spent about three days trying to find helpful information on this memory issue.

    Our situation was that our customer needed to decommission one of their servers and move their database from 2005 to 2008. The application worked fine on 2005, but we just can't get it to work on 2008 at all.

    I do suspect that it is purely a physical memory issue in our case.

    Many Thanks,

    Andy

I'm probably tempting fate, but I think we've solved this issue; we've been running for over a week without any problems, which beats any of our previous experiments.

We have quite a legacy of extended stored procedures, OLE Automation and distributed queries in our application, all of which use RAM outside of the main memory pool. This can be configured using the -g startup flag - under 2008 it defaults to 256MB, but under SQL 2000 it was 384MB. Having restarted SQL Server with -g384 added to the startup options, all now seems to be OK.
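
    (For anyone else trying this: it's worth confirming after the restart that the flag was actually picked up. One way - using the undocumented xp_readerrorlog, so treat the exact syntax as an assumption - is to search the current error log for the registered startup parameters:)

    -- List error log entries that mention the startup parameters;
    -- the -g384 value should appear among the lines returned
    EXEC master.dbo.xp_readerrorlog 0, 1, N'Registry startup parameters';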

    Cheers

    Jon

  • Hi Jon,

    Thanks for the update. I will have a look at our set up and see if that makes a difference.

    Andy

Yesterday it also happened to me on a server that will go into production in a week, so I can do all the tests I want now. I wasn't at the office when it happened and the developers restarted the server to solve it.

It's a server with 1 processor (4 cores) and 4GB of RAM (without using /3GB or /AWE).

At that time, mainly stored procedures had been executed. From '2009-05-12 18:03:51.293' to '2009-05-12 18:05:03.293' there were 650 stored procedure executions doing inserts, updates and deletes. No extended stored procedures were executed.

It's SQL Server 2008 Standard Edition, RTM version, and I think I'll install Service Pack 1, because among the things that have been fixed there's:

"SQL Server 2008 seems to make slow progress and may return error 701 if the buffer pool size is very small." - and I'm not sure if it has something to do with it...

    http://support.microsoft.com/?scid=kb%3Ben-us%3B968369&x=3&y=4
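
    (To check exactly what build I'm on before applying the service pack - standard server properties:)

    -- Current build, patch level and edition of the instance
    SELECT SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('ProductLevel')   AS product_level,
           SERVERPROPERTY('Edition')        AS edition;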

    Here is a summary of what I've found in the error file:

2009-05-12 18:15:37.48 spid30235 Failed allocate pages: FAIL_PAGE_ALLOCATION 1

2009-05-12 18:15:37.48 spid30235
Memory Manager                           KB
---------------------------------------- ----------
VM Reserved                              1681456
VM Committed                             1641124
AWE Allocated                            0
Reserved Memory                          1024
Reserved Memory In Use                   0
[....]

2009-05-12 18:15:37.60 spid30235
Memory Pool (internal)                   Pages
---------------------------------------- ----------
Allocations                              194519
Predicted                                236669
Private Target                           0
Private Limit                            0
Total Target                             194529
Total Limit                              194529
OOM Count                                1
[...]

2009-05-12 18:15:37.60 spid30235
Buffer Pool                              Value
---------------------------------------- ----------
Committed                                199936
Target                                   204768
Database                                 5280
Dirty                                    426
In IO                                    0
Latched                                  1
Free                                     137
Stolen                                   194519
Reserved                                 0
Visible                                  204768
Stolen Potential                         10
Limiting Factor                          11
Last OOM Factor                          0
Page Life Expectancy                     522708

2009-05-12 18:15:37.60 spid30235
Process/System Counts                    Value
---------------------------------------- ----------
Available Physical Memory                1617272832
Available Virtual Memory                 227524608
Available Paging File                    3527794688
Working Set                              1762062336
Percent of Committed Memory in WS        100
Page Faults                              492460
System physical memory high              1
System physical memory low               0
Process physical memory low              1
Process virtual memory low               0

2009-05-12 18:15:37.60 spid30235
Procedure Cache                          Value
---------------------------------------- ----------
TotalProcs                               1
TotalPages                               534
InUsePages                               0

2009-05-12 18:15:37.60 spid30235
Global Memory Objects                    Pages
---------------------------------------- ----------
Resource                                 191
Locks                                    180
XDES                                     21523
SETLS                                    4
SE Dataset Allocators                    8
SubpDesc Allocators                      4
SE SchemaManager                         215
SE Column Metadata Cache                 313
SQLCache                                 497
Replication                              2
ServerGlobal                             26
XP Global                                2
SortTables                               68
[...]

2009-05-12 18:15:37.71 spid30238 Error: 701, Severity: 17, State: 130.

    2009-05-12 18:15:37.71 spid30238 There is insufficient system memory in resource pool 'internal' to run this query.

    [...]

    2009-05-12 18:15:38.46 spid30241 There is insufficient system memory in resource pool 'internal' to run this query.

    2009-05-12 18:15:38.46 Error: 18456, Severity: 14, State: 40. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

2009-05-12 18:15:38.48 spid30241 Error: 701, Severity: 17, State: 89.

    2009-05-12 18:15:38.48 spid30241 There is insufficient system memory in resource pool 'internal' to run this query.

    [...]

2009-05-12 18:28:56.91 Server Error: 17186, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

2009-05-12 18:28:56.91 Server Error: 17186, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

2009-05-12 18:28:56.91 Server Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

2009-05-12 18:28:56.91 Server Failed to enqueue kill_session task. There may be insufficient memory.

2009-05-12 18:28:56.91 Server Error: 17186, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

2009-05-12 18:28:56.91 Server Error: 17186, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

And it continues with the following line until the SQL Server service is restarted:

2009-05-12 18:28:56.91 Server Error: 17186, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

It has happened again and this time I could start checking what was going on 🙂

The first problem was connecting to the database: I couldn't log in because I got error 701. So I logged in successfully using the DAC connection.
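
    (In case it helps anyone else locked out the same way: the DAC can be opened from a command prompt with sqlcmd's -A switch, or by prefixing the server name with admin: in SSMS, e.g.)

    sqlcmd -S MyServer -E -A

    where "MyServer" is just a placeholder for the instance name; -E uses Windows authentication and -A requests the dedicated admin connection.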

I started running the following, although I didn't have much hope because we use quite a few stored procedures:

DBCC FREESYSTEMCACHE ('ALL');
DBCC FREESESSIONCACHE;
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;

    Nothing changed.

Then I tried sp_who2 and got this:

Msg 701, Level 17, State 123, Procedure sp_who2, Line 49

    Error: 701, Severity: 17, State: 123. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.

I executed the following query to see all open connections:

SELECT
    spid, status, sid, hostname, program_name, cmd, cpu, physical_io, blocked, dbid,
    CONVERT(sysname, RTRIM(loginame)) AS loginname,
    spid AS 'spid_sort',
    SUBSTRING(CONVERT(varchar, last_batch, 111), 6, 5) + ' '
        + SUBSTRING(CONVERT(varchar, last_batch, 113), 13, 8) AS 'last_batch_char',
    last_batch,
    DB_NAME(dbid)
FROM master.dbo.sysprocesses (NOLOCK)
ORDER BY loginname

And here I saw something unexpected: there were 30,036 connections, where the normal average is 40. All were from the same login and against the same database: it was a 3rd-party provider that, every time it wanted to execute a stored procedure, created a new connection and left it open.
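
    (For reference, a quick way to spot that kind of leak is to group the same sysprocesses data by login and program - a rough sketch:)

    -- Count connections per login/program/database to find who is holding them open
    SELECT loginame, program_name, DB_NAME(dbid) AS database_name, COUNT(*) AS connection_count
    FROM master.dbo.sysprocesses (NOLOCK)
    GROUP BY loginame, program_name, DB_NAME(dbid)
    ORDER BY connection_count DESC;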

The problem is different from the one in the original post in this forum thread, but I think it's also interesting...

    Josep

Andy, I think I would have been on the phone to Microsoft product support well before I hit 3 days trying to resolve this issue. This seems to be deep internals stuff, and hunting and pecking on a forum likely won't be the thing you need (and certainly hasn't been to this point). Add in the fact that relatively few people are on SQL 2008 at this point, so we don't have a large body of knowledge/experience to draw on, and you again get pointed to MS. 🙂

    Best of luck with it. Perhaps you have stumbled onto another bug?

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • We had a similar issue porting our app from 2005 to 2008 - this fix did the trick, thanks for the post!

  • Hello!

I have a similar problem and I want to know if you could solve the memory problem.

If you can send me an email so I can contact you, I would be very thankful. My migration was a success but I'm having many memory problems.

    Thanks!

    Luiz Fernandes

If you have the same problem, did you try the same solution (the -g startup setting)?

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

I experienced this error repeatedly on a Windows 2003 R2 server with 4GB of RAM, running SQL Server 2008 in 64-bit mode. The project needed repeated calls to extended stored procedures and would crash after 2-3 ETL stages with the following error:

    "There is insufficient system memory in resource pool 'internal' to run this query."

    I tried the -g312 (and -g512) memory switch on the SQL startup parameters, but this did not work.

After adding an additional 4GB of RAM, bringing the total on the machine to 8GB, the problem stopped. SQL Server ran using 7GB of RAM on average.

It seems SQL 2008 running in 64-bit mode requires lots of resources. My dev machine (Windows XP, 4GB RAM, SQL 2008 running in 32-bit mode) staged the same ETL process with no memory problems.

  • MemToLeave, -g switches and such are pretty irrelevant on 64-bit SQL Server.

    CSS SQL Server Engineers Blog Post

    64-bit SQL Server has 8TB of VAS (virtual address space) to play with on x64, 7TB on Itanium. Even on a server with 512GB RAM - yes GB - the buffer pool uses 0.5TB of VAS. That leaves at least 6.5TB of VAS for other stuff!

    The point of the -g switch was to reserve a certain amount of VAS, before thread stack space, because VAS was so tight on the 32-bit architecture. The default of 256MB (never been any different) ends up using 384MB of VAS once 0.5MB per thread for stack space has been reserved (this varies a bit depending on configuration, but it's about right).

    So, there's absolutely no need to use -g on 64-bit platforms, and as far as I know, it is ignored.

If you're seeing problems like those reported earlier in this thread, it is most likely because of something outside of SQL Server's control leaking memory. Prime suspects are extended stored procedures - deprecated for good reason! Given effectively unlimited VAS, extended stored procedures with leaks and other memory-related bugs are more likely to cause problems, not less.

    If that seems counter-intuitive, consider that on 32-bit, a rogue XP would fail on a VAS allocation long before it caused server-wide memory problems. Now, with almost unlimited VAS, that safety net is gone...
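
    (If you want to see where the memory is actually going, the memory clerk DMV will break it down - a rough sketch, with column names as they are in SQL 2008, so check them on your build:)

    -- Top memory consumers, inside and outside the buffer pool, by clerk
    SELECT TOP (10)
        type,
        name,
        SUM(single_pages_kb + multi_pages_kb
            + virtual_memory_committed_kb + awe_allocated_kb) AS total_kb
    FROM sys.dm_os_memory_clerks
    GROUP BY type, name
    ORDER BY total_kb DESC;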

    Paul
