How do I write a dynamic SQL statement in a way that makes it run quickly?

  • I am working on SQL Server 2012 and I have an issue: the dynamic SQL I implemented runs very slowly,

    so I need to enhance the statement below to make it run quickly.

    -- @Columns and @Header are assumed to have been built earlier in the
    -- batch (the dynamic pivot column list and the matching header labels).
    DECLARE @SQL NVARCHAR(MAX);

    SELECT @SQL = CONCAT(N'
    SELECT * INTO #NewTable
    FROM #SplitNumberAndUnitsFinal
    PIVOT (MAX(Value) FOR DKFeatureName IN (' + @Columns + N')) AS PVTTable;
    ',
    N'SELECT ''Variant ID'' AS [Variant ID], ''PART_ID'' AS PART_ID, ''Vendor ID'' AS [Vendor ID], ''Manufacturer'' AS Manufacturer, ''Digi-Key Part No.'' AS [Digi-Key Part No.], ''Mfr Part No.'' AS [Mfr Part No.], ''Description'' AS Description, ''Category'' AS Category, ''Family'' AS Family, ''Obsolete'' AS Obsolete, ''Non-Stock'' AS [Non-Stock], ''Part Status'' AS [Part Status], ''PartNumber'' AS [Z2 Part number], ''CompanyName'' AS [Z2 Manufacturer], ' + @Header + N'
    UNION ALL
    SELECT [Variant Number], PART_ID, [Vendor ID], Manufacturer, [Digi-Key Part No.], [Mfr Part No.], [Description], Category, Family, Obsolete, [Non-Stock], [Part_Status], PartNumber, CompanyName, ' + @Columns + N' FROM #NewTable;');

    EXEC (@SQL);

    So please, do you have any advice for the code above, to make it run quickly?

    And do you see anything wrong in the code above?

  • That bit of code looks fine to me.  What I'd recommend is posting an execution plan.  That will likely be the best bet to figure out why it is slow.

    How many rows and columns are you pulling in that query?  It could be slow because you have a lot of data.  If you are pulling 1 billion rows and each row is 1 KB in size, that is 1 billion KB of data... or (roughly) 1 TB of data.  That will take a long time to pull the data.  Now if it is 10 rows of data at 0.01 KB, that should be quick.

    If you cannot provide an execution plan, I am just guessing and offering minor troubleshooting tips.  What I would do is figure out which part of the dynamic SQL is slow and go from there.  For example, run ONLY the SELECT ... INTO portion of the query and see if it is slow.  If it is, you know that bit is the slow portion and you may need to do something to help it, such as filtering out some of the data if you don't need it all.
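    To narrow it down, something like this (a sketch only; it assumes #SplitNumberAndUnitsFinal is already populated and @Columns has already been built) would time the pivot step by itself:

    -- Sketch: run ONLY the pivot step and measure its time and I/O.
    SET STATISTICS TIME, IO ON;

    DECLARE @SQL NVARCHAR(MAX) =
        N'SELECT * INTO #NewTable
          FROM #SplitNumberAndUnitsFinal
          PIVOT (MAX(Value) FOR DKFeatureName IN (' + @Columns + N')) AS PVTTable;';
    EXEC (@SQL);

    SET STATISTICS TIME, IO OFF;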

    But without an execution plan, it is all guesswork.  I doubt it is slow due to it being a dynamic query, it is likely something else such as a lot of data, bad statistics on the source tables, bad estimated rows, etc.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • The best way to get this answered is to post the actual SQL that is dynamically generated so we can see what the SQL is doing. (Just change exec(@SQL) to PRINT @SQL and paste the results here.)
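    One caveat: PRINT truncates an NVARCHAR(MAX) value at 4,000 characters, so if the generated statement is long, a SELECT shows more of it (though SSMS may still clip what the grid displays, depending on settings):

    -- Output the generated SQL for inspection instead of executing it.
    PRINT @SQL;                      -- fine for short statements
    SELECT @SQL AS GeneratedSQL;     -- safer for long statements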

    Then look at optimising that statement, maybe by rewriting it to replace PIVOT with an aggregate of CASE statements or adding one or more indexes to the tables. Also include the current execution plan and ideally some data.
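    As a rough illustration of that rewrite (a sketch only; 'FeatureA' and 'FeatureB' are placeholder feature names, and the GROUP BY list must cover every non-pivoted column of #SplitNumberAndUnitsFinal, which we can't see from here):

    -- Sketch: the PIVOT step expressed as an aggregate of CASE expressions.
    SELECT PART_ID,
           MAX(CASE WHEN DKFeatureName = 'FeatureA' THEN Value END) AS [FeatureA],
           MAX(CASE WHEN DKFeatureName = 'FeatureB' THEN Value END) AS [FeatureB]
    INTO   #NewTable
    FROM   #SplitNumberAndUnitsFinal
    GROUP BY PART_ID;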

  • Not sure that an index will help anything since there is no WHERE clause or ORDER BY.  No matter what index you put on there, it is going to be an index scan, which should be similar performance to scanning the heap, no?  And the INSERT should be faster on the heap than on an index (or indexes).

    I remember reading this query previously and the reason you needed it dynamic is because the column list (and header list) was unknown at the time the query starts and can change over time.  So if we change the pivot to a bunch of case statements, the query will work now, but will fail in the future. And trying to optimize the query for the current execution may work well, but cause slowdowns in the future.

    I mean, one thought I had was to use a table variable or temp table with well-defined data types rather than creating them dynamically, since you could tune them better, but that doesn't work because the column list needs to be dynamic.
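    For what it's worth, the column list itself can still be generated from the data. A sketch (SQL Server 2012 has no STRING_AGG, hence STUFF + FOR XML PATH; it assumes DKFeatureName holds the pivoted column names):

    -- Sketch: build the dynamic column list from the distinct feature names.
    DECLARE @Columns NVARCHAR(MAX);

    SELECT @Columns = STUFF(
        (SELECT DISTINCT ',' + QUOTENAME(DKFeatureName)
         FROM #SplitNumberAndUnitsFinal
         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'),
        1, 1, '');

    QUOTENAME also protects against odd characters (or injection) in the column names.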

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Mr. Brian Gale wrote:

    Not sure that an index will help anything since there is no WHERE clause or ORDER BY.  No matter what index you put on there, it is going to be an index scan, which should be similar performance to scanning the heap, no?  And the INSERT should be faster on the heap than on an index (or indexes).

    I remember reading this query previously and the reason you needed it dynamic is because the column list (and header list) was unknown at the time the query starts and can change over time.  So if we change the pivot to a bunch of case statements, the query will work now, but will fail in the future. And trying to optimize the query for the current execution may work well, but cause slowdowns in the future.

    I mean, one thought I had was to use a table variable or temp table with well-defined data types rather than creating them dynamically, since you could tune them better, but that doesn't work because the column list needs to be dynamic.

    I think the query could be rewritten to produce this:

    SELECT npd.PART_ID,
    npd.CompanyName,
    npd.PartNumber,
    npd.FlagAllow,
    -- One MAX(CASE ...) per pivoted feature; the GROUP BY collapses the rows
    -- so each part comes back on one line, the same way the PIVOT does.
    MAX(CASE npd.DKFeatureName WHEN 'noparametric' THEN npd.Tempvalue END) AS nonparametric,
    MAX(CASE npd.DKFeatureName WHEN 'parametric' THEN npd.Tempvalue END) AS parametric,
    MAX(CASE npd.DKFeatureName WHEN 'family' THEN npd.Tempvalue END) AS family,
    MAX(CASE npd.DKFeatureName WHEN 'source' THEN npd.Tempvalue END) AS source
    FROM #nonparametricdata npd
    GROUP BY npd.PART_ID, npd.CompanyName, npd.PartNumber, npd.FlagAllow

    Which would be more efficient.


  • Yep, I agree with you Jonathan AC Roberts, but what if the number of columns is dynamic?  With version 1 it has 3, version 2 has 2, version 3 has 12 and so on?

    Updating the query every time the columns change would be an annoying task, one better suited to a dynamic query.

    I am not trying to argue that multiple CASE statements wouldn't be faster; I am just thinking that the previous post (which went nowhere) was this one:

    https://www.sqlservercentral.com/forums/topic/how-to-add-column-columnunit-dynamically-when-flagallow-is-1

    Which, from looking at your query, you already found (based on the CASE statements you came up with).  If CASE statements work, then the query doesn't even need to be dynamic.

    That being said, I am not confident that the premise of this post is accurate, i.e. that the dynamic part is what is making the query slow.  I think the query is slow due to something else (bad stats, tempdb spills, implicit conversions, high overall resource utilization (i.e. not enough CPU/memory), high disk I/O, high network I/O, a gigantic data set, a slow network link, etc.).
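    If you want a quick, rough read on which of those it is, the wait statistics are one place to look (a sketch; note this DMV is instance-wide and cumulative since the last restart, not scoped to one query):

    -- Sketch: top waits on the instance; look for I/O, memory, or tempdb waits.
    SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT LIKE 'SLEEP%'
    ORDER BY wait_time_ms DESC;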

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Mr. Brian Gale wrote:

    Yep, I agree with you Jonathan AC Roberts, but what if the number of columns is dynamic?  With version 1 it has 3, version 2 has 2, version 3 has 12 and so on?

    Updating the query every time the columns change would be an annoying task, one better suited to a dynamic query.

    I am not trying to argue that multiple CASE statements wouldn't be faster; I am just thinking that the previous post (which went nowhere) was this one:

    https://www.sqlservercentral.com/forums/topic/how-to-add-column-columnunit-dynamically-when-flagallow-is-1

    Which, from looking at your query, you already found (based on the CASE statements you came up with).  If CASE statements work, then the query doesn't even need to be dynamic.

    That being said, I am not confident that the premise of this post is accurate, i.e. that the dynamic part is what is making the query slow.  I think the query is slow due to something else (bad stats, tempdb spills, implicit conversions, high overall resource utilization (i.e. not enough CPU/memory), high disk I/O, high network I/O, a gigantic data set, a slow network link, etc.).

    I wasn't arguing against the query being written in dynamic SQL, but you need to know what query is going to be generated dynamically before you write the code to generate it. I don't think it would be any harder to write code that generates the CASE query than code that generates the PIVOT query.
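    As a sketch of what generating the CASE version might look like (it reuses the #SplitNumberAndUnitsFinal and DKFeatureName names from the original query; the real grouping columns would need to match the actual table):

    -- Sketch: generate MAX(CASE ...) expressions instead of a PIVOT IN-list.
    DECLARE @CaseList NVARCHAR(MAX), @SQL NVARCHAR(MAX);

    SELECT @CaseList = STUFF(
        (SELECT DISTINCT ', MAX(CASE WHEN DKFeatureName = '
                + QUOTENAME(DKFeatureName, '''')
                + ' THEN Value END) AS ' + QUOTENAME(DKFeatureName)
         FROM #SplitNumberAndUnitsFinal
         FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'),
        1, 2, '');

    SET @SQL = N'SELECT PART_ID, ' + @CaseList + N'
    INTO #NewTable
    FROM #SplitNumberAndUnitsFinal
    GROUP BY PART_ID;';

    EXEC (@SQL);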
