out of memory, 8007000e, -2147024882

  • I have a table with 314 fields. The first field is the primary key. For testing purposes I created the following code to see if I can add 1 million records from MS Access 2003 using ADODB. After adding a large number of records, SQL Server gives the error: "out of memory, 8007000e, -2147024882". If I click Debug and then Continue, it does not proceed further. If I exit the program and restart it using a new starting number for the primary key, I am able to add far fewer records before receiving the above error again. Each time, a few more can be added in that way.

    I first tried on Windows 2008 SBS with SQL Express 2005, 8 GB of RAM, 64-bit. Then I tried on Vista Business, 2 GB RAM, 32-bit, with SQL Express 2008. The same problem occurred on both. Windows 2008 SBS actually let me add far fewer records! Both have the latest SP for the OS and SQL Server.

    The code is below; please comment on what is wrong with this picture.

    Dim con As New ADODB.Connection, Rs As New ADODB.Recordset
    Dim strSQL As String, lngField As Long, lngCounter As Long

    strSQL = "SELECT * FROM NPIDATA"

    con.ConnectionString = "Provider=Microsoft.Access.OLEDB.10.0;Persist Security Info=True;Data Source=localhost\SQLEXPRESS;Initial Catalog=NPI;Data Provider=SQLOLEDB.1;Integrated Security='SSPI';"""
    con.Open

    Rs.Open strSQL, con, 0, 3    '0 = adOpenForwardOnly, 3 = adLockOptimistic

    For lngCounter = 2000000 To 3000000    'one million (and one) rows
        Rs.AddNew
        For lngField = 0 To 313            '314 fields; field 0 is the primary key
            If lngField = 0 Then
                Rs(lngField) = lngCounter
            Else
                Rs(lngField) = 1
            End If
        Next
        Rs.Update                          'each row is sent to SQL Server individually
    Next

    Rs.Close
    Set Rs = Nothing
    con.Close
    Set con = Nothing

    MsgBox "done"

  • amin 70510 (1/27/2010)


    I have a table with 314 fields.

    That is a LOT of columns. No wonder you're getting out-of-memory issues.

    Is there ANY way you can break up the table???

    +--------------------------------------------------------------------------------------+
    Check out my blog at https://pianorayk.wordpress.com/

  • SQL Express has hard limits on how much RAM it can access, but that's probably not the actual problem. I would tend to suspect that Access is what's eating up the RAM, and it's more likely to eat it up on the local machine than on the server, unless you're running Access on the server.
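
    If it helps narrow down which side is actually under memory pressure, one rough check (a sketch, assuming SQL Server 2005 or later; the counter object name varies for named instances such as MSSQL$SQLEXPRESS, hence the LIKE) is to ask the instance how much memory it is using while the inserts run, and compare that with what Task Manager shows for msaccess.exe on the client:

    -- How much memory the SQL Server instance is using vs. what it wants
    SELECT [object_name], counter_name, cntr_value AS value_kb
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE '%Memory Manager%'
      AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');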

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • You are aware that the SQL Server 2005/2008 Express editions have a restriction on the size of the database: 4 GB? If each row is approximately 8K, I think I see why you run out of memory.
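
    One quick way to rule the 4 GB data-file limit in or out (a sketch; NPIDATA is the table name from the posted code) is to check the sizes directly:

    -- Overall database size and unallocated space
    EXEC sp_spaceused;

    -- Row count and reserved/data/index size for the table being loaded
    EXEC sp_spaceused 'NPIDATA';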

  • Looks like Gus and Lynn still have that mind meld going. 😀



    Alvin Ramard
    Memphis PASS Chapter

    All my SSC forum answers come with a money back guarantee. If you didn't like the answer then I'll gladly refund what you paid for it.

    For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help

  • GSquared (1/28/2010)


    SQL Express has hard limits on how much RAM it can access, but that's probably not the actual problem. I would tend to suspect that Access is what's eating up the RAM, and it's more likely to eat it up on the local machine than on the server, unless you're running Access on the server.

    I gotta agree with Gus here. It looks like your code opens a recordset, then adds rows one at a time, making the RS bigger and bigger until Access can't deal with it any longer and you run out of memory. But I may be wrong, because I don't deal with ADODB very much.

    Here are a couple of things you might want to take into consideration.

    If you're just creating test data, there are a number of scripts on this site and other places on the web which will do it in a much more efficient manner. You want to do it all in one shot, or at least in larger chunks than one row at a time. You may find a sweet spot where, based on your configuration, chunks of 100,000 or 200,000 rows work better than the whole million at one time, but YMMV.

    Have a look at the INSERT command. Also, why not make your primary key an IDENTITY column if you're just storing consecutive numbers? Let the database handle incrementing the key for you.
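
    For example, a minimal sketch of letting the key generate itself (the table and column names below are hypothetical placeholders, not the real 314-column NPIDATA definition):

    -- Hypothetical illustration only
    CREATE TABLE dbo.NPIDATA_demo
    (
        NPI    int IDENTITY(2000000, 1) PRIMARY KEY,  -- starts at 2,000,000, steps by 1
        Col001 int NOT NULL,
        Col002 int NOT NULL
        -- ...remaining columns...
    );

    -- No key value is supplied; SQL Server increments it for you
    INSERT INTO dbo.NPIDATA_demo (Col001, Col002) VALUES (1, 1);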

    Also check out Jeff's great article on Tally tables, as you can probably extrapolate some of those techniques to help you build your test data.

    There's lots of room for improvement in your code, but the biggest gain will come from doing the insert in a set-based manner.
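
    To make "set based" concrete, here's a hedged sketch that generates all the key values in a single INSERT...SELECT instead of a million round trips. The NPI key column name and Col001/Col002 are placeholders standing in for the real 314 columns:

    -- Sketch only: inserts keys 2,000,000 through 3,000,000 in one statement (SQL 2005+)
    ;WITH Tally (n) AS
    (
        SELECT TOP (1000001)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
        FROM master.sys.all_columns a
        CROSS JOIN master.sys.all_columns b
    )
    INSERT INTO dbo.NPIDATA (NPI, Col001, Col002)
    SELECT 2000000 + n, 1, 1
    FROM Tally;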

    -Luke.

    To help us help you, read this. For better help with performance problems, please read this.

  • 1. Access ran out of memory: No, Access did not run out of memory; SQL Server ran out of memory. The error message is passed by SQL Server to Access, notifying Access that SQL Server is out of memory. Make a note: the out-of-memory message comes after adding fewer records when SQL Server is on a separate machine with 8 GB RAM and 400 GB of FREE hard drive space, running 64-bit Windows SBS. If SQL Express 2008 runs on the same computer (Vista Business, 32-bit, with 2 GB RAM and 200 GB of free space), the error message comes much later, after adding many more records.

    2. There are too many fields: Dear SQL Server, deal with it or give a proper, relevant error message. After all, 314 columns is within the specified limits of SQL Server.

    3. SQL Express has a 4GB limit: The db file grew only to about 500 MB before the above errors started. Note that the above error came while SQL Server was hosting ONE SINGLE DATABASE with ONE SINGLE TABLE DEFINED and nothing else. No relations, queries, functions, stored procedures, reports, nothing at all. Considering this, running out of memory for just this much is quite pathetic.

    4. Can the table be broken down into separate smaller components: Lots of things can be done in life to work your way around a problem rather than take a direct approach. If need be, that can and will be done, but that is not the issue here. The issue here is what is wrong with this picture. If strange, unexpected things can happen with SQL Server like this, then writing programs against it would be very risky in the real world. We need to understand and figure out what is truly wrong with this picture. The above is simply a test scenario before risking a production failure in the real world, as lots of other strange, unexpected things can happen that should not.

    4. Using an AutoNumber field for the primary key: That is not the point/issue here.

    5. Access loads the entire recordset into its own memory from a database that is constantly increasing in size: Per the code, that should not be happening. If you think it does, then why is Access still able to add more records in smaller chunks? Note the initial chunk is much (1000 times) larger before the above issue starts occurring. Thus the above postulate does not properly explain the glitch.

    6. There are better ways of inserting a humongous number of records: Sure, there could be many alternatives for how different things can be done. However, that is not the point I am raising here. The issue is why the above problem is happening (beyond wild guessing). Secondly, it just should not happen unless I am "MISSING/OVERLOOKING SOMETHING".

  • OK, I'm still guessing a bit on a few of these things, but when I am, they are educated guesses.

    1. Access ran out of memory: No, Access did not run out of memory; SQL Server ran out of memory. The error message is passed by SQL Server to Access, notifying Access that SQL Server is out of memory. Make a note: the out-of-memory message comes after adding fewer records when SQL Server is on a separate machine with 8 GB RAM and 400 GB of FREE hard drive space, running 64-bit Windows SBS. If SQL Express 2008 runs on the same computer (Vista Business, 32-bit, with 2 GB RAM and 200 GB of free space), the error message comes much later, after adding many more records.

    How do you know? Did you run any performance counters on the client or the server to tell you exactly what the problem is? Also, you keep stating that you are running this on SBS, 2003 or 2008, you didn't specify. If it's SBS, what else is running? Exchange, ISA, file shares, a domain controller and all of its associated services, a web server? Probably all of these and perhaps more. You're not comparing apples to apples...

    2. There are too many fields: Dear SQL Server, deal with it or give a proper, relevant error message. After all, 314 columns is within the specified limits of SQL Server.

    I actually agree that, while this number of columns should not present any real challenges, you might think about breaking it apart, as there are a variety of limitations when working with rows larger than 8K depending on the version; so if you have 314 varchar(8000) columns you may find yourself with issues later on. Don't fault SQL Server for something that's rather well documented.

    3. SQL Express has a 4GB limit: The db file grew only to about 500 MB before the above errors started. Note that the above error came while SQL Server was hosting ONE SINGLE DATABASE with ONE SINGLE TABLE DEFINED and nothing else. No relations, queries, functions, stored procedures, reports, nothing at all.

    The first part of this is great info; it helps us realize that you didn't hit a file size limit. The rest is, well, unprofessional at best.

    4. Can the table be broken down into separate smaller components: Lots of things can be done in life to work your way around a problem rather than take a direct approach. If need be, that can and will be done, but that is not the issue here. The issue here is what is wrong with this picture. If strange, unexpected things can happen with SQL Server like this, then writing programs against it would be very risky in the real world. We need to understand and figure out what is truly wrong with this picture. The above is simply a test scenario before risking a production failure in the real world, as lots of other strange, unexpected things can happen that should not.

    Again, yes, you can theoretically make a table this wide, and there are cases where it is appropriate; however, it's rather unlikely that the database was properly designed, hence the suggestion to break things up or, perhaps a better way to describe it, normalize your database.

    5. Access loads the entire recordset into its own memory from a database that is constantly increasing in size: Per the code, that should not be happening. If you think it does, then why is Access still able to add more records in smaller chunks? Note the initial chunk is much (1000 times) larger before the above issue starts occurring. Thus the above postulate does not properly explain the glitch.

    Again, not an ADO expert here, so please forgive me; maybe you can explain exactly what your code does and how it interacts with the database. Or perhaps you can use Profiler and see exactly how many statements are being used to execute this. From what I understand of your code, you are doing each row as a separate insert. This is not the best way to accomplish what you are trying to do. You want to do it in sets. A million-row set might cause you problems depending on memory, disk IO, network IO and a variety of other factors. This is where you need to find a sweet spot by testing it in your environment (see the sketch at the end of this post). Sometimes it's at 30,000, sometimes 500,000. It all depends on your environment.

    6. There are better ways of inserting a humongous number of records: Sure, there could be many alternatives for how different things can be done. However, that is not the point I am raising here. The issue is why the above problem is happening (beyond wild guessing). Secondly, it just should not happen unless I am "MISSING/OVERLOOKING SOMETHING".

    I'm glad that you can see there would be better ways of doing it (emphasis is mine). This is the issue. No matter what you do you can always write code that is so inefficient that you'll exhaust all of your resources trying to fit a square peg in a round hole. Remember that databases are meant to deal with data in sets, while it seems you are used to dealing with data one row at a time in your application logic. It's a whole different perspective and once you make that jump you'll see your data access times drop quite substantially.
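
    As mentioned above, here is a rough sketch of the same load done in chunks so each transaction stays small; the 50,000-row batch size and the NPI/Col001/Col002 names are placeholders to experiment with, not a recommendation:

    -- Sketch only: loads keys 2,000,000 through 3,000,000 in 50,000-row batches (SQL 2005+)
    DECLARE @first int, @last int, @batch int;
    SELECT @first = 2000000, @last = 3000000, @batch = 50000;

    WHILE @first <= @last
    BEGIN
        ;WITH Tally (n) AS
        (
            SELECT TOP (@batch)
                   ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
            FROM master.sys.all_columns a
            CROSS JOIN master.sys.all_columns b
        )
        INSERT INTO dbo.NPIDATA (NPI, Col001, Col002)
        SELECT @first + n, 1, 1
        FROM Tally
        WHERE @first + n <= @last;

        SET @first = @first + @batch;
    END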

    -Luke.

    To help us help you, read this. For better help with performance problems, please read this.

  • Sounds to me like you know more about this than everyone else does, so we'll wait with bated breath for your next revelation on it.

    Since I already mentioned the limits on SQL Express (it can only address 1 GB of RAM), and you already dismissed that as a possible issue here, I don't have anything to add to this. By the way, why does it matter if your server has 8 Gig, when Express can only deal with 1 Gig, regardless of how much the server has? Kind of irrelevant, but you keep bringing it up. Not that a RAM limitation could possibly have anything to do with what you're asking about, since you already dismissed that idea, and you are always correct and all-knowing; I'm just curious as to what horrible ignorance I must have that makes me think the 8 Gig on the server doesn't matter. Please reveal to me my mistake in that regard.

    Alternately, actually read what was actually written, get your ego out of the way, and maybe you'll solve your problem.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • 1. I want to express my appreciation for quick and detailed responses.

    2. If I were a "know it all" and had an ego problem, I would not be posting a problem that I am trying to understand. Just because I do not know the answer does not mean "any answer" by "the expert" is a valid answer if my dumb brain cannot make sense out of it.

    3. Sorry if I offended you by trying to use my logic and not simply accepting yours. I do not think it would be very productive for an amateur (me) to simply accept an "expert's" (your) logic as valid without making sense out of it. Since I am the one faced with this problem, I am sorry, but I would need to be convinced by the expert's explanation, even if the expert (you) is correct and I am wrong; and I hope I am wrong, if that makes you feel better.

    4. This is my logic regarding the RAM limitation, and I may be wrong (yes, I have not run any performance counters): The SBS 2008 server is 64-bit with 8 GB RAM and a 500 GB hard drive. Less than 100 GB of the hard drive is used. It has IIS and Exchange Server running, but they are practically not in use and idle. No active file shares. Exchange Server is NOT IN USE. Hardly any user accounts. The only job of the computer is to host one single table of one single database in SQL Express 2005. Based on that, I believe at least 1 full GB is made available to SQL Server! Note that in real life many databases and tables with complex relations and queries would need to be hosted and running simultaneously with multiple users. I connect to this server from a DIFFERENT computer. After about 100,000 record inserts I get the out-of-memory error. However, if I host SQL Server Express 2008 on the same client (Vista Business) computer that has only 2 GB RAM, it is able to insert about 300,000 records before running out of memory.

    Secondly, once the out-of-memory error occurs, if I RESTART the program (MS Access 2003 using ADODB), I can keep on adding approximately 1000 records at a time after restarting the client software repeatedly. Considering the latter situation, how do you explain out of memory when roughly 1000 record inserts are allowed every time the program is restarted? Somehow SQL Server is able to "make more memory" for every installment of 1000 record inserts and restart of the client software. Also, regarding the comment about a sweet spot of about 10,000 to 30,000 records: here the sweet spot is approximately (actually below) the magic number of 1000, and that is too low once the magic limit (100,000 to 300,000) is reached. It would be too low for ADODB to be useful if it cannot even safely insert 1000 records once the magic limit has been reached. Also note the SQL Server is NOT HOSTING any other database and is otherwise not in use; it is simply being tested at this time.

  • Are you running all of your INSERT statements in one batch?

    ---------------------------------------------------------
    How best to post your question[/url]
    How to post performance problems[/url]
    Tally Table:What it is and how it replaces a loop[/url]

    "stewsterl 80804 (10/16/2009)I guess when you stop and try to understand the solution provided you not only learn, but save yourself some headaches when you need to make any slight changes."

  • I'd suggest you run Profiler and find out whether you are doing what you think you are doing with your inserts, i.e. doing them one at a time as I believe you are, vs. doing them all in one chunk as you think you are... Also, you're definitely going to want to set up some performance counters on the OS so that you can find out which application is hogging all of your resources.

    Since I'm not an ADO guru, I decided to do a bit of research to find out how the Recordset.Update command works, and to make it do what I'd prefer to see happen, which is do it in one go... Then you can tune it to your needs. If you're interested, you can find the same document on MSDN by Googling [adodb recordset update] and choosing the first MSDN link...

    try this...

    Dim con As New ADODB.Connection, Rs As New ADODB.Recordset
    Dim strSQL As String, lngField As Long, lngCounter As Long

    strSQL = "SELECT * FROM NPIDATA"

    con.ConnectionString = "Provider=Microsoft.Access.OLEDB.10.0;Persist Security Info=True;Data Source=localhost\SQLEXPRESS;Initial Catalog=NPI;Data Provider=SQLOLEDB.1;Integrated Security='SSPI';"""
    con.Open

    Rs.CursorLocation = adUseClient    'client-side cursor is typically needed for batch updates
    'adLockBatchOptimistic (4) holds changes locally until UpdateBatch; with the original
    'adLockOptimistic (3), each AddNew would implicitly send the previous row immediately
    Rs.Open strSQL, con, adOpenStatic, adLockBatchOptimistic

    For lngCounter = 2000000 To 3000000
        Rs.AddNew
        For lngField = 0 To 313
            If lngField = 0 Then
                Rs(lngField) = lngCounter
            Else
                Rs(lngField) = 1
            End If
        Next
        'Rs.Update
    Next

    Rs.UpdateBatch    'this is the change I made: build your entire recordset first, then issue UpdateBatch

    Rs.Close
    Set Rs = Nothing
    con.Close
    Set con = Nothing

    MsgBox "done"

    Again, since I'm not an ADO expert, I'm not sure whether ADO will treat this as one big insert or a whole lot of little ones, so I'm not sure how this would perform... this is where Profiler or running a trace will help you significantly...

    OR, if you really want an answer as to how this performs strictly from a SQL Server standpoint, you could chuck Access altogether, write this in T-SQL and execute it in SSMS. If you can execute a properly formed T-SQL statement in SSMS and it doesn't run SQL Server out of memory, guess where I'll start saying your memory bottleneck is (again)?
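
    If you go that route, a sketch of an apples-to-apples test (the same one-row-at-a-time pattern as the Access code, but run entirely inside SQL Server; NPI/Col001/Col002 are placeholder column names) might look like this:

    -- Sketch only: row-by-row insert executed in SSMS, no Access/ADO in the picture
    DECLARE @i int;
    SET @i = 2000000;

    WHILE @i <= 3000000
    BEGIN
        INSERT INTO dbo.NPIDATA (NPI, Col001, Col002)
        VALUES (@i, 1, 1);
        SET @i = @i + 1;
    END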

    To help us help you, read this. For better help with performance problems, please read this.

  • A side note, are you allowing tempdb to grow on your instance?

    I don't believe this is a SQL Server out-of-memory condition, but I could be wrong. I would lean towards an ADO or Access memory error, not SQL Server. The code for SQL Server is essentially the same in Express, with limitations.

    I have seen strange out of memory errors with DB2 drivers and linked servers before because there were some limitations in the driver code as to how it used memory. A restart of SQL Server fixed it because the driver was abandoned and reloaded in a new memory space.

    No idea if that's what is happening here, but I think Lynn has a good idea above. Trace the instance and see what ADO might be doing.

  • amin 70510 (2/1/2010)


    1. I want to express my appreciation for quick and detailed responses.

    2. If I were a "know it all" and had an ego problem, I would not be posting a problem that I am trying to understand. Just because I do not know the answer does not mean "any answer" by "the expert" is a valid answer if my dumb brain cannot make sense out of it.

    3. Sorry if I offended you by trying to use my logic and not simply accepting yours. I do not think it would be very productive for an amateur (me) to simply accept an "expert's" (your) logic as valid without making sense out of it. Since I am the one faced with this problem, I am sorry, but I would need to be convinced by the expert's explanation, even if the expert (you) is correct and I am wrong; and I hope I am wrong, if that makes you feel better.

    4. This is my logic regarding the RAM limitation, and I may be wrong (yes, I have not run any performance counters): The SBS 2008 server is 64-bit with 8 GB RAM and a 500 GB hard drive. Less than 100 GB of the hard drive is used. It has IIS and Exchange Server running, but they are practically not in use and idle. No active file shares. Exchange Server is NOT IN USE. Hardly any user accounts. The only job of the computer is to host one single table of one single database in SQL Express 2005. Based on that, I believe at least 1 full GB is made available to SQL Server! Note that in real life many databases and tables with complex relations and queries would need to be hosted and running simultaneously with multiple users. I connect to this server from a DIFFERENT computer. After about 100,000 record inserts I get the out-of-memory error. However, if I host SQL Server Express 2008 on the same client (Vista Business) computer that has only 2 GB RAM, it is able to insert about 300,000 records before running out of memory.

    Secondly, once the out-of-memory error occurs, if I RESTART the program (MS Access 2003 using ADODB), I can keep on adding approximately 1000 records at a time after restarting the client software repeatedly. Considering the latter situation, how do you explain out of memory when roughly 1000 record inserts are allowed every time the program is restarted? Somehow SQL Server is able to "make more memory" for every installment of 1000 record inserts and restart of the client software. Also, regarding the comment about a sweet spot of about 10,000 to 30,000 records: here the sweet spot is approximately (actually below) the magic number of 1000, and that is too low once the magic limit (100,000 to 300,000) is reached. It would be too low for ADODB to be useful if it cannot even safely insert 1000 records once the magic limit has been reached. Also note the SQL Server is NOT HOSTING any other database and is otherwise not in use; it is simply being tested at this time.

    My being offended by your answer has nothing to do with you accepting my logic or not. It has to do with your general tone in your posts since your original question. If you don't understand how your tone could be offensive, then try reading your own posts as if you weren't you. Have a couple of other people in your company read what you posted and see if they agree that the tone is a little "off".

    If you and they think it's okay to accuse people of "wild guessing", or to state that we are "being pathetic", then that's the way you are, and I'll gladly ignore you from here on. (Yes, your post accused all of us of that and more. I'm not "reading between the lines", I'm quoting you.)

    On to technical matters:

    As I already mentioned, the total RAM on the server doesn't even begin to matter in this. SQL Express cannot use more than 1 Gig, no matter how much total there is. If the server has less than 1 Gig, then that would matter. More than 1 Gig is immaterial.

    Let's start out with an assumption that your table has very narrow columns, perhaps they're all bigints or something like that. That's unlikely, but it will work to illustrate this point. Bigints are 8 bytes of data each.

    334 columns (I think that's what you posted) * 8 bytes per column = 2,672 bytes of data per row. That doesn't include any of the overhead, just the actual data, so it's an underestimate. 1 Gig = 1,073,741,824 bytes. That means 401,849 rows is all your SQL instance is going to be able to hold in RAM. That's being incredibly over-optimistic, since it doesn't include ANY of the row overhead, or anything else SQL Server has to hold in RAM in order to do its job.

    Assuming some of the columns are bigger than 8 bytes, the number goes down even further from there.

    So, 334 narrow columns times a few hundred thousand rows = out of memory in SQL Express. (Your post says that assuming anything of that sort is pathetic. So I guess I'm pathetic for actually being able to do basic arithmetic.)

    Now, had your answer to my post been "I understand it has 1 Gig as its upper limit, but I'm not up to calculating how big a transaction that would be. How do I go about doing that?" THAT would have been an intelligent, reasonable response. A multi-paragraph rant about how wrong all of us are, wasn't.

    Edit: After posting, I went back to your original post (couldn't see it while I was posting the above). You mentioned 314 columns, not 334. That makes the upper limit 427,444 rows, in RAM, not including row overhead, et al. Doesn't change anything material in my post, but accuracy matters.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • I am sorry and I apologize for my tone. Thanks for the input; I appreciate it.
