Need to overload tempdb

  • Yes, you read that title correctly.

I have a somewhat hilarious problem. We have some queries that are overloading tempdb, which fills up the drive and brings everything to a halt. This is a reporting server, not OLTP, so we've been given permission from the application owner to kill all queries that fill up the drive.

The hilarity is that I have nothing in non-prod that replicates this issue. I need to deliberately cause tempdb spills or over-allocation in order to test it. Unfortunately, I have no idea how to deliberately force the issue, and Google keeps telling me how to resolve it, not how to automate it or cause it on purpose.

    I don't need code for a solution. I just need code or suggestions for how to deliberately cause tempdb to overload. Does anyone have anything?

     


How about creating a giant temp table from a CROSS JOIN of two fnTally calls? Then join the temp table to itself, forcing a HASH JOIN with a tiny memory grant and no parallelism. If anyone asks where you got this, I'LL DENY IT 🙂

    declare @n1 bigint = 2000000;   -- base rows
    declare @n2 int = 4000; -- widen factor

    select row_number() over (order by n1.n, n2.n) as a,
    replicate('x', @n2) AS b
    from dbo.fntally(1, @n1) n1
    cross join dbo.fntally(1, @n1) n2
    into #t;

    -- hash join with tiny grant -> tempdb spills
    select count_big(*)
    from #t as t1
    join #t as t2 with (hash) on t1.a = t2.a
    option (hash join, maxdop 1, min_grant_percent = 0, max_grant_percent = 0);


  • Heh. No worries. I'm doing this on a sandbox server. I'll give it a try. Thanks.


  • Bah, humbug. It's erroring out.

    FYI: Your "into #t" is in the wrong spot.

And I need to figure out the error below. I think my sandbox server is a lower version than the servers I'm having issues with.

Msg 321, Level 15, State 1, Line 13
"hash" is not a recognized table hints option. If it is intended as a parameter to a table-valued function or to the CHANGETABLE function, ensure that your database compatibility mode is set to 90.

EDIT: Rereading the error, it looks like "hash" as a table hint is old syntax that current versions don't accept. Compatibility level 90 is ancient.
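For reference, on current versions the hash request moves from a table hint to a join hint (or query hint). A hedged rewrite of that join, with grant-hint values that are just a guess at keeping the memory grant tiny:

-- modern spelling: join hint in the FROM clause instead of WITH (hash)
select count_big(*)
from #t as t1
inner hash join #t as t2
    on t1.a = t2.a
option (maxdop 1, min_grant_percent = 0, max_grant_percent = 1);  -- tiny grant to encourage spills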


Ha, yeah, I was reluctant to test it out. Sorry for the syntax issues. Originally I had a table CREATE statement but then switched to SELECT INTO because that's more typically inefficient. The idea was to force tempdb writes with a horribly memory-constrained HASH JOIN. OPTION (HASH JOIN, MAXDOP 1) ought to be enough. The memory grant hints came from my foggy memory, since I'm running on Azure SQL.


This isn't code, but you could also put a max size on tempdb so it takes less usage to run out of space, if you're just trying to detect when a query is filling tempdb and needs to be killed.
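A minimal sketch of what that could look like, assuming the default tempdb logical file names (tempdev/templog; adjust for your instance and for any extra data files):

-- cap tempdb so it hits the ceiling sooner during testing (sizes are placeholders)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, MAXSIZE = 10240MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, MAXSIZE = 5120MB);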

  • Jack Corbett wrote:

This isn't code, but you could also put a max size on tempdb so it takes less usage to run out of space, if you're just trying to detect when a query is filling tempdb and needs to be killed.

I could, but that would generate an emergency call to fix the issue. An all-hands-on-deck, explain-yourselves-to-leadership kind of emergency call. This is a reporting server, not OLTP, so the application team has requested we just kill the SPIDs causing the issue before it gets to an emergency.

I was all excited about Resource Governor's ability to limit tempdb usage, then I realized that functionality isn't available in SQL 2019. So now I have to build a kill-the-SPIDs kind of solution.

Rather than kill all SPIDs, I need to figure out how to kill only the SPIDs that are causing tempdb growth. I've got the default trace for EventClass 92 & 93 (data and log file auto-grow) filtered to tempdb, which shows me this data:

SPID  StartTime                EventName           DatabaseName  Filename  GrowthMB    DurMS
88    2025-11-12 03:39:41.813  Log File Auto Grow  tempdb        templog   500.000000  1097
88    2025-11-12 03:39:32.097  Log File Auto Grow  tempdb        templog   500.000000   717
88    2025-11-12 03:39:22.910  Log File Auto Grow  tempdb        templog   500.000000   796
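For reference, a hedged sketch of the default-trace query behind rows like the ones above (the column math is assumed from the usual trace conventions: IntegerData is growth in 8 KB pages, Duration is in microseconds):

DECLARE @TracePath NVARCHAR(260) =
    (SELECT path FROM sys.traces WHERE is_default = 1);

SELECT t.SPID,
       t.StartTime,
       te.name AS EventName,
       t.DatabaseName,
       t.FileName,
       t.IntegerData * 8 / 1024.0 AS GrowthMB,  -- growth reported in 8 KB pages
       t.Duration / 1000 AS DurMS               -- duration reported in microseconds
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE t.EventClass IN (92, 93)                  -- Data/Log File Auto Grow
  AND t.DatabaseName = N'tempdb'
ORDER BY t.StartTime DESC;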

Now I just need to figure out how to track the date & time, group the growth events together within an X-minute time frame, and use that to kill the SPID.
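Something like this might be a start for the grouping piece (the window length is a placeholder, and it reuses the same trace columns as above):

DECLARE @WindowStart DATETIME = DATEADD(MINUTE, -5, GETDATE());
DECLARE @TracePath NVARCHAR(260) =
    (SELECT path FROM sys.traces WHERE is_default = 1);

-- total recent auto-grow per SPID, so a burst of growth can be pinned on a session
SELECT t.SPID,
       MIN(t.StartTime) AS FirstGrowth,
       MAX(t.StartTime) AS LastGrowth,
       COUNT(*) AS GrowthEvents,
       SUM(t.IntegerData) * 8 / 1024.0 AS TotalGrowthMB
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
WHERE t.EventClass IN (92, 93)
  AND t.DatabaseName = N'tempdb'
  AND t.StartTime >= @WindowStart
GROUP BY t.SPID
ORDER BY TotalGrowthMB DESC;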

So far, I'm thinking of a 5-minute scheduled job that first checks free space:

DECLARE @GDriveFreeSpace DECIMAL(5,2) =   -- percent free on the tempdb volume (DECIMAL so the two decimal places survive)
    (SELECT DISTINCT ROUND(((available_bytes / 1048576 * 1.0) / (total_bytes / 1048576 * 1.0)) * 100, 2) AS FreePercentage
     FROM sys.master_files AS f
     CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id)
     WHERE volume_mount_point = 'G:\');

IF @GDriveFreeSpace <= 57   -- test threshold; tune for production
BEGIN
    SELECT 'Hello, World';  -- placeholder for the SPID-hunting / kill logic
END

And in the BEGIN / END block, I'd write my code for identifying the high-problem SPIDs from the default trace, then kill them. But I also want to capture the running query so we can go back to the app team and say "This query is causing damage. Fix it, please. Or stop running it."
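A hedged sketch of that last piece, given a suspect SPID coming out of the growth analysis (the variable value is hypothetical, and in real life the SELECT would be an INSERT into a logging table):

DECLARE @KillSpid INT = 88;           -- hypothetical: supplied by the growth analysis
DECLARE @KillCmd NVARCHAR(20);

-- capture what the session is running so the app team can be shown the offender
SELECT r.session_id,
       r.start_time,
       r.status,
       st.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id = @KillSpid;

-- KILL won't take a variable directly, so build the command dynamically
SET @KillCmd = N'KILL ' + CAST(@KillSpid AS NVARCHAR(10));
EXEC (@KillCmd);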

     


  • Steve Collins wrote:

Ha, yeah, I was reluctant to test it out. Sorry for the syntax issues. Originally I had a table CREATE statement but then switched to SELECT INTO because that's more typically inefficient. The idea was to force tempdb writes with a horribly memory-constrained HASH JOIN. OPTION (HASH JOIN, MAXDOP 1) ought to be enough. The memory grant hints came from my foggy memory, since I'm running on Azure SQL.

As an FYI, I changed your code by removing the hash hints, and this worked. At least, it increased the size of tempdb as I let it run, which is what I really need.

    declare @n1 bigint = 2000000;   -- base rows
    declare @n2 int = 4000; -- widen factor

    select row_number() over (order by n1.n, n2.n) as a,
    replicate('x', @n2) AS b
    into #t
    from dbo.fntally(1, @n1) n1
    cross join dbo.fntally(1, @n1) n2;

-- self-join the big temp table; even without the hints, the optimizer will most likely
-- hash join it and chew up more tempdb
select count_big(*)
from #t as t1
join #t as t2
    on t1.a = t2.a;
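To watch it grow while it runs, something like this from another session seems to work (a hedged sketch; the page counts come from sys.dm_db_file_space_usage):

USE tempdb;
SELECT SUM(total_page_count) * 8 / 1024 AS TotalMB,
       SUM(unallocated_extent_page_count) * 8 / 1024 AS FreeMB,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS InternalObjectsMB,  -- spills, worktables
       SUM(user_object_reserved_page_count) * 8 / 1024 AS UserObjectsMB           -- #temp tables
FROM sys.dm_db_file_space_usage;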


  • You can use this query to get the sessions using tempdb:

-- roll the tasks up per session: a parallel query has multiple tasks, so aggregate
SELECT   tsu.session_id,
         SUM(tsu.internal_objects_alloc_page_count + tsu.user_objects_alloc_page_count) * 8 AS TempDBSpaceKB
FROM     sys.dm_db_task_space_usage AS tsu
GROUP BY tsu.session_id
HAVING   SUM(tsu.internal_objects_alloc_page_count + tsu.user_objects_alloc_page_count) > 0
ORDER BY TempDBSpaceKB DESC;
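Worth noting (if I'm remembering the DMVs right): sys.dm_db_task_space_usage only covers tasks that are currently running, while sys.dm_db_session_space_usage rolls up what a session has already allocated. A companion query like this may catch sessions whose statement has finished but whose temp objects are still hanging around:

SELECT ssu.session_id,
       (ssu.internal_objects_alloc_page_count + ssu.user_objects_alloc_page_count) * 8 AS SessionTempDBKB
FROM sys.dm_db_session_space_usage AS ssu
WHERE (ssu.internal_objects_alloc_page_count + ssu.user_objects_alloc_page_count) > 0
ORDER BY SessionTempDBKB DESC;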
