How to avoid this T-SQL statement affecting my TempDB?

  • WITH CTE AS (
        SELECT DISTINCT
            LTRIM(RTRIM(P.IncidentIDRef)) AS MODIDREF,
            P.JurisID,
            P.EntDttm AS CREATEDDTTM,
            P.DIDCU AS CREATEDPFIDREF,
            'INC' AS MODCODE,
            'C' AS RECTYPE,
            REPLACE(STUFF(
                (SELECT '</P>' + C.FDNARR
                 FROM PSFDOC C
                 WHERE C.FDDOC# = T.FDDOC#
                 ORDER BY C.FDDOC#, C.FDLINE
                 FOR XML PATH('')), 1, 10, ''), '&lt;/P&gt;', '</P>') AS COMMENT
        FROM PSFDOC T
        JOIN PSDOCI P ON P.DIDOC# = T.FDDOC#
        WHERE P.DIFILE = 'PLINCD'
          AND P.DIDESC NOT IN ('cad system narrative', 'dispatch narrative')
          AND P.DIDESC NOT LIKE 'Converted Cad Comment%'
    )
    SELECT
        *,
        CAST(CTE.COMMENT AS NVARCHAR(MAX)) AS COMMENT1,
        ROW_NUMBER() OVER (PARTITION BY CTE.JURISID, CTE.MODIDREF
                           ORDER BY CTE.JURISID, CTE.MODIDREF) - 1 AS SEQNO
    FROM CTE

    This SQL statement is growing my tempdb by around 11 GB. Help me, friends... I need an alternative SQL statement that doesn't hit tempdb so hard.

    I have attached the estimated execution plan of this statement. If you need anything else, please let me know.

    Please give me suggestions.

    Thanks,
    Charmer

  • The only way I see to possibly avoid tempdb on this one would be to build permanent tables, do the work in them that tempdb is doing in worktables, and use those instead of the CTE. But even that isn't likely to work, because the SQL engine will still need to do sorts on the data, for the distinct and row number operations at the very least, and that's usually done in tempdb.

    If it's blowing tempdb up by 11 Gig, I'm assuming this query is dealing with a huge amount of data. Can you filter the data beforehand, in a separate table or query? That will still use tempdb, probably, but might use less of it.
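    A rough, untested sketch of that idea, using the tables from your query (the staging table and index names are just illustrative):

        -- Persist only the PSDOCI rows the big query actually needs, so the
        -- concatenation, DISTINCT and ROW_NUMBER work run against a smaller set.
        SELECT  P.DIDOC#,
                LTRIM(RTRIM(P.IncidentIDRef)) AS MODIDREF,
                P.JurisID,
                P.EntDttm AS CREATEDDTTM,
                P.DIDCU   AS CREATEDPFIDREF
        INTO    dbo.Stg_PSDOCI            -- illustrative permanent staging table
        FROM    PSDOCI P
        WHERE   P.DIFILE = 'PLINCD'
          AND   P.DIDESC NOT IN ('cad system narrative', 'dispatch narrative')
          AND   P.DIDESC NOT LIKE 'Converted Cad Comment%';

        -- Index the join column so the narrative lookup doesn't need big sorts.
        CREATE CLUSTERED INDEX IX_Stg_PSDOCI_DIDOC ON dbo.Stg_PSDOCI (DIDOC#);

    Then point the CTE at dbo.Stg_PSDOCI instead of PSDOCI, so the filtering and most of the column work are already done before the expensive part runs.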

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (8/28/2012)


    The only way I see to possibly avoid tempdb on this one would be to build permanent tables, do the work in them that tempdb is doing in worktables, and use those instead of the CTE. But even that isn't likely to work, because the SQL engine will still need to do sorts on the data, for the distinct and row number operations at the very least, and that's usually done in tempdb.

    If it's blowing tempdb up by 11 Gig, I'm assuming this query is dealing with a huge amount of data. Can you filter the data beforehand, in a separate table or query? That will still use tempdb, probably, but might use less of it.

    Yes, you are absolutely correct. This table has around 6 million records.

    OK GSquared, as you say, let me filter the records into a separate table. I will work on it and let you know how things go.

    Thanks,
    Charmer

  • Is "ORDER BY C.FDDOC#, C.FDLINE" required?

    What is the type of C.FDLINE?

    Creating an index on C.FDDOC#, C.FDLINE should reduce the tempdb usage and improve execution time.
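    For example, something along these lines (the index name is illustrative, and INCLUDE (FDNARR) is optional if you want the index to cover the narrative column as well):

        -- Supports the WHERE C.FDDOC# = ... lookup and the ORDER BY C.FDDOC#, C.FDLINE,
        -- so the concatenation subquery can read the rows already in order.
        CREATE NONCLUSTERED INDEX IX_PSFDOC_FDDOC_FDLINE
            ON PSFDOC (FDDOC#, FDLINE)
            INCLUDE (FDNARR);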

  • Suresh B. (8/29/2012)


    Is "ORDER BY C.FDDOC#, C.FDLINE" required?

    What is the type of C.FDLINE?

    Creating an index on C.FDDOC#, C.FDLINE should reduce the tempdb usage and improve execution time.

    Yes, the ORDER BY is a must, because the concatenation is based on those order columns.

    I have created a staging table and filtered the records as GSquared said.

    Now the performance is very good, but it still affects tempdb. Anyway, I am happy :-) with the results, since tempdb is now affected by only about 700 MB where it was around 11 GB earlier.

    Thanks,
    Charmer

  • Glad we could help.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (8/29/2012)


    Glad we could help.

    Thank you so much, pal... I really appreciate your suggestions.

    Thanks,
    Charmer

  • Try putting the FOR XML PATH stringinator into a CROSS APPLY, where it can crunch up the rows of PSFDOC separately from the rest of the query. You might not even need the DISTINCT:

    ;WITH CTE AS (
        SELECT --DISTINCT
            LTRIM(RTRIM(P.IncidentIDRef)) AS MODIDREF,
            P.JurisID,
            P.EntDttm AS CREATEDDTTM,
            P.DIDCU AS CREATEDPFIDREF,
            'INC' AS MODCODE,
            'C' AS RECTYPE,
            x.COMMENT
        FROM PSDOCI P
        CROSS APPLY (
            SELECT COMMENT = REPLACE(STUFF(
                (SELECT '</P>' + C.FDNARR
                 FROM PSFDOC C
                 WHERE C.FDDOC# = P.DIDOC#
                 ORDER BY C.FDDOC#, C.FDLINE
                 FOR XML PATH('')), 1, 10, ''), '&lt;/P&gt;', '</P>')
        ) x
        --JOIN PSFDOC T
        --  ON P.DIDOC# = T.FDDOC#
        WHERE P.DIFILE = 'PLINCD'
          AND P.DIDESC NOT IN ('cad system narrative', 'dispatch narrative')
          AND P.DIDESC NOT LIKE 'Converted Cad Comment%'
    )
    SELECT
        *,
        CAST(CTE.COMMENT AS NVARCHAR(MAX)) AS COMMENT1,
        ROW_NUMBER() OVER (PARTITION BY CTE.JURISID, CTE.MODIDREF
                           ORDER BY CTE.JURISID, CTE.MODIDREF) - 1 AS SEQNO
    FROM CTE

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • ChrisM@Work (8/29/2012)


    Try putting the FOR XML PATH stringinator into a CROSS APPLY, where it can crunch up the rows of PSFDOC separately from the rest of the query. You might not even need the DISTINCT.

    This one also works really well. Thank you, Chris.

    Thanks,
    Charmer

    Thanks for the feedback. The original query had an unnecessary join, which the new version eliminates, and if you can remove the DISTINCT now, then you remove a memory-hogging SORT too.

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • ChrisM@Work (8/30/2012)


    Thanks for the feedback. The original table had an unnecessary join which the new query eliminates, and if you can remove the DISTINCT now, then you remove a memory-hogging SORT too.

    Yes, I really noticed that in the performance.

    Thanks,
    Charmer
