Faster in SSMS, slower in proc

  • If I embed the SQL inside a proc, it takes over an hour, but if I just run it as is, it executes in less than 10 seconds. I don't see how it has anything to do with parameter sniffing, since this proc doesn't even take parameters. I've attached the plan from running it ad hoc. I was trying to get the actual plan for the proc, but it won't even finish in an hour. Can somebody please advise?

  • You're inserting directly INTO the #temp table with SELECT ... INTO, and when that runs inside an SP you can have problems. Try creating the temp table first and then using an INSERT INTO #temp(col1,col2, ...,coln) statement. You should not have the same issue with the second approach, regardless of whether it runs via SSMS or the SP.
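
    Something like this - just a minimal sketch of the pattern; the table name, column names/types, and dbo.SourceTable are placeholders, not your actual objects:

    -- Instead of letting SELECT ... INTO create the table at run time:
    -- SELECT col1, col2 INTO #temp FROM dbo.SourceTable;

    -- Create the temp table up front...
    CREATE TABLE #temp
    (
        col1 int NOT NULL,
        col2 varchar(50) NULL
    );

    -- ...then populate it with an explicit column list.
    INSERT INTO #temp (col1, col2)
    SELECT col1, col2
    FROM dbo.SourceTable;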

    Igor Micev,My blog: www.igormicev.com

  • Thanks for the quick response, appreciate it. Just out of curiosity, why would the proc behave differently when we do SELECT * INTO?

  • With your approach you have a DML statement against DDL that doesn't exist yet - the temp table still has to be created. The SP recompiles, and that can slow down the execution.

    By creating the temp table first, you ensure the DDL is ready to be populated by the DML statement - the query.

    Igor Micev,My blog: www.igormicev.com

  • That didn't help either.

  • Post the new code here so we can see what you've tried, and other suggestions may come.

    Igor Micev,My blog: www.igormicev.com

  • SQL_Surfer (3/24/2015)


    If I embed the SQL inside a proc, it takes over an hour, but if I just run it as is, it executes in less than 10 seconds. I don't see how it has anything to do with parameter sniffing, since this proc doesn't even take parameters. I've attached the plan from running it ad hoc. I was trying to get the actual plan for the proc, but it won't even finish in an hour. Can somebody please advise?

    It really would be more helpful to get that proc's query plan to compare with the plans that produce the 10-second result set.

    In the meantime, try creating a stored proc for each of the batches and executing them individually. That will help us narrow the problem down to which step is slowing you down. In other words, if the insert into #temp_pick_order takes a few seconds, a few seconds for #temp_error, and then #temp_edi... runs forever, then in that scenario we have identified the bottleneck and can take it from there.
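
    For example, just to see where the time goes, you could wrap each batch with a timestamp - a rough sketch; the batch contents are placeholders for your existing INSERTs:

    DECLARE @t datetime2 = SYSDATETIME();

    -- batch 1: load #temp_pick_order (your existing INSERT goes here)
    PRINT 'pick_order: ' + CONVERT(varchar(20), DATEDIFF(ms, @t, SYSDATETIME())) + ' ms';
    SET @t = SYSDATETIME();

    -- batch 2: load #temp_error (your existing INSERT goes here)
    PRINT 'error: ' + CONVERT(varchar(20), DATEDIFF(ms, @t, SYSDATETIME())) + ' ms';
    SET @t = SYSDATETIME();

    -- ...and so on for the remaining batches.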

    A few anecdotal observations based on your query plan:

    1) The actual number of rows is showing as 0... how many rows are you expecting? That's just odd.

    2) We have some table scans going on - that's not good; a clustered index is in order (a quick sketch follows after these observations).

    3) Those SELECT DISTINCT statements are killing you - getting rid of those will help you a great deal. If you can change your logic so that you don't need to de-duplicate your data, that would be very good. If you really must de-duplicate your data, then look into alternatives to DISTINCT (there are good alternatives on SQL Server Central; I'll conclude my comment with a simple method).
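
    On point 2, a minimal sketch of what a clustered index on one of those temp tables might look like - the key column order_id is just a guess; use whatever column the later joins and filters actually touch:

    CREATE CLUSTERED INDEX cx_temp_pick_order
    ON #temp_pick_order (order_id);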

    Again, the most important thing is that second execution plan, but these tips will help you with this query and in the future.

    Example of how to de-duplicate data without using DISTINCT:

    -- Using a table variable for brevity
    DECLARE @temp TABLE (somevalue varchar(10));

    -- sample data with duplicates
    INSERT INTO @temp VALUES ('aa'),('aa'),('bb'),('bb'),('bb'),('cc'),('cc'),('dd');

    -- how to get distinct values while not using DISTINCT
    WITH distinct_values AS
    (
        SELECT pn = ROW_NUMBER() OVER (PARTITION BY somevalue ORDER BY (SELECT NULL)), somevalue
        FROM @temp
    )
    SELECT somevalue
    FROM distinct_values
    WHERE pn = 1;
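
    For what it's worth, a plain GROUP BY returns the same distinct list here and is another common alternative to DISTINCT:

    SELECT somevalue
    FROM @temp
    GROUP BY somevalue;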

    "I cant stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001
