tempdb - unable to allocate page

  • Hi all,

I'm attempting to run an update against a 1m row table. The update is via a join to a 47m row table, on a text column. There is an index on the 1m row table, but none on the 47m row table.

    Attempting to run the query returns the error: Could not allocate a new page for database 'TEMPDB' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

I have 35 GB of free and available disk space on the server. tempdb is split over 8 files (8 cores on the server), all with autogrowth, as per the sp_helpfile output below:

name | fileid | filename | filegroup | size | maxsize | growth | usage
tempdev | 1 | D:\Database\Data\tempdb.mdf | PRIMARY | 5193728 KB | 5193728 KB | 1048576 KB | data only
templog | 2 | L:\Database\Logs\templog.ldf | NULL | 5193728 KB | 20971520 KB | 524288 KB | log only
tempdev_1 | 3 | D:\Database\Data\tempdev_1.ndf | PRIMARY | 5193728 KB | 5193728 KB | 1048576 KB | data only
tempdev_2 | 4 | D:\Database\Data\tempdev_2.ndf | PRIMARY | 5193728 KB | 5193728 KB | 1048576 KB | data only
tempdev_3 | 5 | D:\Database\Data\tempdev_3.ndf | PRIMARY | 5193728 KB | 5193728 KB | 1048576 KB | data only
tempdev_4 | 6 | D:\Database\Data\tempdev_4.ndf | PRIMARY | 5193728 KB | 5193728 KB | 1048576 KB | data only
tempdev_5 | 7 | D:\Database\Data\tempdev_5.ndf | PRIMARY | 5193728 KB | 5193728 KB | 1048576 KB | data only
tempdev_6 | 8 | D:\Database\Data\tempdev_6.ndf | PRIMARY | 5193728 KB | 5193728 KB | 1048576 KB | data only
tempdev_7 | 9 | D:\Database\Data\tempdev_7.ndf | PRIMARY | 5193728 KB | 5193728 KB | 1048576 KB | data only

I've already run a transaction log backup with TRUNCATE_ONLY. sp_spaceused returns:

database_name | database_size | unallocated space
tempdb | 45648.00 MB | 39539.59 MB

reserved | data | index_size | unused
1061280 KB | 865584 KB | 195432 KB | 264 KB

    This seems odd. Could I really be maxing out tempdb with this single query? Or is there something else I should be looking into?

    Regards, Iain

Hi there, I was reading a post about this issue some time ago on the 'sqlauthority' blog, so I looked it up again. The comments have some tips about this exact error. Here are a couple that might help:

Make sure tempdb is on a separate drive; this helps with performance. Also, reducing scan time on tables that aren't frequently used helps reduce execution time. I haven't personally dealt with this, but hopefully it's useful to you.

    Natali 😀

    "Press any key to continue, where's the ANY key?" 😛 Homer Simpson

  • I'd get a look at the execution plan. If the query is against millions of rows without an index, likely it's creating a hash table in order to do the join and that's going to all go into tempdb. Or, if it's doing a merge join, again, without indexes, you're probably getting a data sort, which will occur within the tempdb.
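If you want to confirm that while the query runs, per-session tempdb allocations are visible in the DMVs (internal objects are the hash and sort spills; names here are standard system views, the query itself is just a sketch):

```sql
-- Per-session tempdb usage; pages are 8 KB, so * 8 / 1024 gives MB.
SELECT session_id,
       SUM(internal_objects_alloc_page_count) * 8 / 1024 AS internal_MB, -- hash/sort spills
       SUM(user_objects_alloc_page_count)     * 8 / 1024 AS user_MB      -- temp tables etc.
FROM   sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY internal_MB DESC;
```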

A JOIN on a text field, by which I assume you mean a large object (LOB), not just a VARCHAR or something, is a problematic approach anyway. You may need to break the process down and do it in chunks.
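A rough sketch of what "do it in chunks" could look like, with hypothetical table and column names (the batch size and the "not yet updated" filter are assumptions to adjust for the real schema):

```sql
-- Hypothetical names throughout. Updating in batches keeps each
-- batch's hash/sort footprint in tempdb small.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) t
    SET    t.target_col = s.source_col
    FROM   dbo.SmallTable AS t
    JOIN   dbo.BigTable   AS s
           ON s.join_col = t.join_col
    WHERE  t.target_col IS NULL;   -- assumed marker for rows not yet updated

    SET @rows = @@ROWCOUNT;        -- loop ends when a batch touches no rows
END;
```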

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Hi Grant,

The join is a straight varchar to a substring of a varchar column (apologies for not being explicit about this), zero-padded inline via REPLICATE. There is a hash join and a sort in the plan, but the bit I don't understand is how tempdb can't allocate a new page. With 37 GB of free space in the db itself, plus autogrowth potential, that seems far more space than this query should need.
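Roughly, with the real names swapped out for hypothetical ones, the join predicate is of this shape:

```sql
-- Illustrative only; names and the pad length of 10 are made up.
-- Wrapping the 47m-row table's column in SUBSTRING/REPLICATE makes the
-- predicate non-sargable, so no index on that column could be used anyway.
UPDATE s
SET    s.target_col = b.value_col
FROM   dbo.SmallTable AS s
JOIN   dbo.BigTable   AS b
       ON s.code = RIGHT(REPLICATE('0', 10) + SUBSTRING(b.code, 1, 10), 10);
```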

Interestingly, the query fails almost immediately, rather than after a long period as I'd expect if the server were struggling to allocate space.

    I've worked round the problem, but am still curious as to what the problem may have been and of course keen to ensure it doesn't happen again. Any thoughts?

    Thanks, Iain

  • Yes this query could be maxing out tempdb. A lot depends on the size of the varchar field. I have seen queries (similar to what you have described) fill up 200GB drives with smaller tables than you have cited.

    This typically comes back to query design.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

It's absolutely a problematic approach to a JOIN. Functions like that mean you can only ever scan the table.

The reason it might be erroring out so quickly is that it tries to allocate that space immediately when it starts to run, and if that hash table is as big as I imagine it would be for a query like this...

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Interesting. My curiosity is piqued.

The workaround is to pre-parse the column into a temp table, including the relevant keys for the joins, then index it, then join to that instead. This works fine. My initial thinking was that this approach would be more expensive, given two disk writes of 47m rows, but apparently not.
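Roughly, with hypothetical names again, the workaround looks like this:

```sql
-- Sketch of the pre-parse workaround described above; names are made up.
-- 1. Materialise the parsed key once, for all 47m rows.
SELECT b.id,
       RIGHT(REPLICATE('0', 10) + SUBSTRING(b.code, 1, 10), 10) AS parsed_code,
       b.value_col
INTO   #parsed
FROM   dbo.BigTable AS b;

-- 2. Index the parsed key so the join no longer needs a huge hash table.
CREATE CLUSTERED INDEX ix_parsed_code ON #parsed (parsed_code);

-- 3. Join on the plain column - no functions, so the index is usable.
UPDATE s
SET    s.target_col = p.value_col
FROM   dbo.SmallTable AS s
JOIN   #parsed        AS p
       ON p.parsed_code = s.code;
```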

    So what would be in the hash table that wouldn't be in the temp table created above? How is it that this would max out tempdb?

Not seeing all the code & structures involved... I'm not sure. Just guesses. Maybe moving the data to a temp table creates a more up-to-date set of stats. Maybe moving to tempdb eliminates the functions, so different filtering mechanisms are used within the hash. I can't see what you can see, so I can't know what you can know.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Which is fair enough 🙂

    Thanks for posting, I'll see if I can't do some research myself and update this thread at a later point.

"The bit I don't understand is how tempdb can't allocate a new page. With 37 GB of free space in the db itself, plus autogrowth potential, this seems way too much space to be used?"

    Hi Iain,

By the looks of your original output of the tempdb files, you have a maxsize set on each of the files, and each of your data files is already at that maxsize (i.e. 5193728 KB). Even with autogrowth enabled, you're not going to be able to grow the data files beyond that value.

    Therefore, you're going to have to increase the maxsize of each data file to allow tempdb to grow.
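For example, using the logical file names from the sp_helpfile output above (repeat for each data file; UNLIMITED leaves growth bounded only by the disk):

```sql
-- Raise (or remove) the cap on each tempdb data file.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, MAXSIZE = UNLIMITED);

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev_1, MAXSIZE = UNLIMITED);
-- ...and likewise for tempdev_2 through tempdev_7.
```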

    Hope this helps,

    Phil
