September 16, 2010 at 1:00 pm
I am a SQL Server DBA with little to no experience on a cluster or with the 64-bit platform.
Our new hosting company set up the cluster for us, and now that I've restored all of the databases we are running into a problem with memory.
All of our legacy DTS packages are working fine except one, which fails with this error when we try to run a production-size file into it:
DTS processed 744,737 rows and failed on row 744,738 with the following error:
Error string: Error at Source for Row number 744738. Errors encountered so far in this task: 1.
Error source: DTS Data Pump
Error string: Error creating file mapping view: Not enough storage is available to process this command.
I'm thinking my memory is not allocated correctly.
This is a two-node cluster (active/passive); each node has 8 GB of memory. SQL Server 2005 64-bit on 64-bit servers. Min/max server memory is set to 1024 MB min and 5120 MB max. AWE is enabled and Lock Pages in Memory is enabled. When I look in Task Manager, the SQL Server process is using 5,481,592 K of memory.
Something does not seem right. I've been reading up on it, but AWE and Lock Pages in Memory on the 64-bit platform are very confusing.
Any recommendations?
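For reference, here is a sketch of how the configured values can be confirmed straight from the instance (option names are the standard ones in sys.configurations):

```sql
-- Confirm the memory-related settings as SQL Server actually sees them.
-- value = configured value, value_in_use = currently active value.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)',
               'max server memory (MB)',
               'awe enabled')
ORDER BY name;
```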
September 16, 2010 at 1:40 pm
Truncate your transaction log, and in the Data Pump task change "Insert batch size" to 25,000.
You should also set your recovery model to bulk-logged during high-volume inserts.
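A sketch of that recovery-model switch, assuming a database named LegacyDB and a backup path that are placeholders for your own:

```sql
-- Switch to bulk-logged before the high-volume load, then back to full.
-- 'LegacyDB' and the backup path are placeholder names.
ALTER DATABASE LegacyDB SET RECOVERY BULK_LOGGED;
-- ... run the DTS load here ...
ALTER DATABASE LegacyDB SET RECOVERY FULL;
-- Take a log backup afterwards so the log chain stays usable.
BACKUP LOG LegacyDB TO DISK = 'D:\Backups\LegacyDB_log.trn';
```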
September 16, 2010 at 1:42 pm
You are right. The "Not enough storage is available to process this command" error indicates that there was insufficient memory for the query to execute. Please check which component within SQL Server is consuming the memory.
SQL Server is already using more than 5 GB of memory, so also check the overall memory utilization in Task Manager.
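One way to see which memory clerks are the biggest consumers; this is a sketch using the SQL Server 2005 column names of sys.dm_os_memory_clerks:

```sql
-- Top memory consumers by clerk type (SQL Server 2005 column names).
SELECT TOP 10
       type,
       SUM(single_pages_kb + multi_pages_kb
           + virtual_memory_committed_kb + awe_allocated_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_kb DESC;
```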
Pradeep Adiga
Blog: sqldbadiaries.com
Twitter: @pradeepadiga
September 16, 2010 at 2:01 pm
AWE is not required on 64-bit machines.
Lock Pages in Memory is a good setting to use.
Please check whether you are experiencing any memory pressure due to ad hoc queries.
If this box is dedicated to SQL Server and no other apps are running on it, you can consider leaving 2 GB for the OS and giving 6 GB to SQL Server.
But consider all options before doing this.
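For the ad hoc query check, one rough sketch is to look at how much of the plan cache is taken up by ad hoc plans:

```sql
-- Plan cache usage by object type; a large 'Adhoc' share can
-- indicate memory pressure from unparameterized queries.
SELECT objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;
```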
Thank You,
Best Regards,
SQLBuddy
September 16, 2010 at 2:31 pm
Running the other instance on the other node of the cluster would also help free up some memory. If you have both set to use 5 GB max on the same node of an 8 GB box, you're going to have issues.
September 16, 2010 at 4:42 pm
I'm not sure what you mean by "Running the other instance on the other node of the cluster would also help free up some memory. If you have both set to use 5gb max on the same node on an 8gb box..you're going to have issues "
I only have one instance, and that instance can only be active on one node at a time. I could fail it over to the other node, but I'm not sure what good that would do.
It is troubling to me that although the max memory for SQL Server is set to 5 GB, when I look in Task Manager it is using slightly more than 5 GB.
Also, I am not seeing any errors in the SQL Server logs, only in the system event logs. I checked the paging file and it is only 2 GB! I'm thinking this could be a problem. My plan now is to change the page file to 16 GB on the passive node and reboot, then fail the cluster over, make the page file the same on the other node, reboot, and fail the cluster over again. Any comments?
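For comparison with the Task Manager number, here is a sketch of a query that should show SQL Server's own view of its memory use versus its target:

```sql
-- Compare what SQL Server thinks it is using vs. its target.
SELECT counter_name, cntr_value AS kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)',
                       'Target Server Memory (KB)');
```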
September 17, 2010 at 7:25 am
Another thing I have to consider: our legacy DTS packages work fine with production-volume data on our production system, which is SQL Server 2005 on 32-bit (also clustered). This new cluster is 64-bit, and I may need to install the Microsoft SQL Server 2005 Backward Compatibility Components, per the article: "The SQL Server Backward Compatibility package includes the latest versions of Data Transformation Services 2000 runtime (DTS)."
Any comments? Has anyone ever done this?
September 17, 2010 at 10:06 am
Max memory only specifies the upper limit for the SQL Server buffer pool, not for non-buffer-pool memory, hence the utilization above 5 GB.
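A sketch that splits the memory clerks into buffer pool versus everything else (column names as in SQL Server 2005):

```sql
-- Buffer pool (governed by max server memory) vs. everything else.
SELECT CASE WHEN type = 'MEMORYCLERK_SQLBUFFERPOOL'
            THEN 'Buffer pool' ELSE 'Non-buffer pool' END AS area,
       SUM(single_pages_kb + multi_pages_kb
           + virtual_memory_committed_kb + awe_allocated_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY CASE WHEN type = 'MEMORYCLERK_SQLBUFFERPOOL'
              THEN 'Buffer pool' ELSE 'Non-buffer pool' END;
```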
Thank You,
Best Regards,
SQLBuddy
September 17, 2010 at 10:29 am
One problem I found is that SQL Server 2005 Standard Edition on a 64-bit system does not support Lock Pages in Memory without Cumulative Update 4 for Service Pack 3. Here is the article and a quote from it:
http://support.microsoft.com/kb/970070/
Microsoft SQL Server 2008 Standard Edition 64-bit systems or Microsoft SQL Server 2005 Standard Edition 64-bit systems that have the Lock Pages in Memory privilege set for the service account of the SQL Server Engine do not use Address Windowing Extensions (AWE) APIs for buffer pool memory allocation. The use of AWE APIs on 64-bit editions of SQL Server for buffer pool memory allocation is known as Locked Pages. Cumulative Update package 2 for SQL Server 2008 Service Pack 1 and Cumulative Update package 4 for SQL Server 2005 Service Pack 3 provide support for Locked Pages.
I'm not sure if I should install the post-service-pack cumulative update, or whether Lock Pages in Memory not functioning even matters that much. Or maybe throw more memory at SQL Server?
September 17, 2010 at 10:51 am
Lock Pages in Memory matters a lot... at least for this scenario.
1. First, check the page file activity during the package runtime. You can use Perfmon to do this. If you are seeing increased activity, then you should worry about insufficient memory and whether the Lock Pages in Memory setting is working or not.
2. Try to increase the memory for SQL Server. Please check my previous reply.
3. Consider installing CU4 for SQL Server 2005 SP3 only if nothing else works.
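For step 1, instead of the Perfmon GUI you could also log the counters from the command line. A Windows-only sketch; the counter paths are the standard ones for a default instance, so adjust them for a named instance:

```
typeperf "\Paging File(_Total)\% Usage" "\Memory\Pages/sec" "\SQLServer:Memory Manager\Total Server Memory (KB)" -si 15 -o dtsload_memory.csv
```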
Thank You,
Best Regards,
SQLBuddy
September 17, 2010 at 11:08 am
Thank you!! So you think that increasing the max memory from 5 GB to 6 GB might do the trick?
I will use the hotfix if nothing else works.
September 17, 2010 at 11:18 am
Yes. Let's be optimistic. Please try that and we will proceed from there.
Thank You,
Best Regards,
SQLBuddy
September 17, 2010 at 12:34 pm
sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'min server memory', 1024
RECONFIGURE
GO
sp_configure 'max server memory', 6144
RECONFIGURE
GO
Do you know if this will require a restart? I'm assuming I would just apply this change on the active node of the cluster. It looks like it's an online operation.
September 17, 2010 at 12:48 pm
It doesn't need a restart.
Thank You,
Best regards,
SQLBuddy
September 17, 2010 at 12:49 pm
It would be better to change it under Server Properties in SSMS.
Thank You,
Best Regards,
SQLBuddy