Stored Procedures slower than queries

  • Stijn Vervaet (1/12/2016)


    Apologies Michael, I was trying to get my DSQL-SPs generator to work so I could test it and forgot to provide you with the execution plans. I will provide you with these tomorrow when I'm back in the office.

    Since there are so many different update statements happening in the transaction, does it matter if I pick a large or a short one?

    Appreciate everyone's input thus far and willingness to even read about my issue.

    I guess I would want to see one with the biggest difference between the proc and the ad-hoc query. Also, can you give us the EXACT list of steps you are performing?

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • MadAdmin (1/12/2016)


    Stijn Vervaet (1/11/2016)


    I want to try the Dynamic SQL generation, but something tells me that (especially since some of my tables have a LOT of columns, since this is a pure OLTP database) there will be a considerable overhead in doing so

    Not sure why you would think that.

    Maybe you think it will create a plan per execution?

    That will happen if you do exec(@SQL) and hard-code scalar values into the query. (Are you using sp_executesql and passing parameters to it?)
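
    A minimal sketch of the difference being described here (the table and column names are hypothetical, not from this thread):

    DECLARE @id bigint = 42;
    DECLARE @SQL nvarchar(max);

    -- Hard-coded literal: every distinct @id value produces a different query
    -- string, so the engine compiles and caches a new ad hoc plan each time.
    SET @SQL = N'UPDATE dbo.SomeTable SET some_col = 1 WHERE id = '
             + CAST(@id AS nvarchar(20));
    EXEC (@SQL);

    -- Parameterized: the query text is constant, so one plan is compiled
    -- and reused for every @id value.
    EXEC sp_executesql
        N'UPDATE dbo.SomeTable SET some_col = 1 WHERE id = @p_id',
        N'@p_id bigint',
        @p_id = @id;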

    Gail's article lays this out completely. (Praise Gail, her mercies endure forever, Amen)

    Do you have a test case that proves it gets worse?

    Is your query nonsargeable in both dsql and originally? String concatenation is nothing compared to a table or index scan.

    Perhaps you now have a sargeable query with DSQL, but now have missing indexes?

    I have only ever had far superior results using DSQL compared to catch-all queries.

    Speed improvements were in the area of 99.9%, enough to get me a reasonable bonus as well as reputational gains in the team.

    Hi MadAdmin, I've continued generating my DSQL SPs (you can see an example posted below), but unfortunately I see no such speed gains after testing multiple times against the same workload (vs. not using DSQL SPs).

    If you have any more experiences or tips you could share on how you get such a great performance boost (I'll even take 33% 🙂 ) through the use of DSQL SPs, I'm all ears.

    Thank you!

    PS: what does 'nonsargeable' mean? Or is it a typo? I'm not familiar with the term.

  • Stijn Vervaet (1/12/2016)


    Hi MadAdmin, I've continued generating my DSQL SPs (you can see an example posted below), but unfortunately I see no such speed gains after testing multiple times against the same workload (vs. not using DSQL SPs).

    If you have any more experiences or tips you could share on how you get such a great performance boost (I'll even take 33% 🙂 ) through the use of DSQL SPs, I'm all ears.

    Thank you!

    PS: what does 'nonsargeable' mean? Or is it a typo? I'm not familiar with the term.

    So the original query was sargable. I was confused because you mentioned something like this in a previous post:

    where id=1 OR id=2 OR.....OR ID=x

    Since the query is already sargable (i.e. written in a way that lets the engine either use an index or tell you about a missing one), just create a normal proc without the DSQL and see how that performs.
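
    To illustrate the term for anyone following along (the table is hypothetical, not from this thread): a predicate is sargable when the engine can seek an index on the column, and non-sargable when it must evaluate an expression against every row.

    -- Non-sargable: the function wrapped around the column forces a scan.
    SELECT id FROM dbo.Clients WHERE LEFT(last_name, 3) = N'Ver';

    -- Sargable: an index on last_name can be seeked with a prefix match.
    SELECT id FROM dbo.Clients WHERE last_name LIKE N'Ver%';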

    Catch-all queries done right
    Gail Shaw's Performance Blog

  • Hey Michael, both query plans are the same (see image).

    To be clear, I don't generate the SPs on the fly. I generate them (manually) before starting my test. I've identified a handful of tables that get used a lot in my test workload, and the developers have provided me with a build that (for that handful of tables) calls the SPs I created (generated) instead of ad hoc SQL. They don't yet want to change all the code without me proving that SPs would make a difference.

    Now it turns out that last night my final test - which I thought I was running with the DSQL SPs - was still using my old SPs (which update every field; in case of NULL, they update with the current value). I found a couple of other bugs and finally managed to test with the proper stored procedures.

    I can't say I managed to get a 99% increase like MadAdmin; it looks like a more modest 10-15% increase. However, I have only converted a handful of tables, so there may be more to gain! So at least that's the good news for now.

    If anyone has any specific requests or ideas on how I could squeeze out even more performance (specific settings to check, etc.), I'll be happy to hear them!

    Thanks again!

    You can only get a 99.99% improvement if the original query was doing scans. It seems like yours was not, so 15% on top of an already fast proc is very good.

    Would you ever want to update a value to NULL?

    Catch-all queries done right
    Gail Shaw's Performance Blog

    That is a good question, and one I've sent around to the developers 🙂

    EDIT: of course they do 🙁 ... haha ... OK, that sucks. Does anyone know an alternative to NULL for marking SP parameters as 'not provided'? I'll start my investigation.
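
    One common pattern, sketched here with hypothetical names (an illustration, not something proposed elsewhere in this thread), is to pair each nullable column with an explicit bit flag, so NULL becomes a legitimate value to set:

    CREATE PROCEDURE dbo.sp_ClientNotes_update_v2
        @id uniqueidentifier,
        @title nvarchar(20) = NULL,
        @set_title bit = 0   -- 1 = apply @title (even if NULL), 0 = keep the current value
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE tbl_ClientNotes
        SET title = CASE WHEN @set_title = 1 THEN @title ELSE title END
        WHERE id = @id;
    END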

  • Stijn977 (1/11/2016)


    Hi Sir, talk about a quick turnaround! I was actually already editing my post; please allow me to give some more details:

    Can you further define "big transaction"?

    ==> 500 to 1000 updates plus tens of inserts per transaction. I understand a transaction should be as small as possible; in this case the application code has processed all changes in memory and needs to (at set times) write all changes to the database.

    Local variables and a join with static numbers table may not mean better performance.

    ==> The local variables are what I found (an online best practice) to be a good way to (try and) avoid parameter sniffing and plan bloat. But indeed it doesn't seem to be working, as a bloated cache still gets reported.
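
    As an aside, the same sniffing-avoidance that copying parameters into local variables provides can also be requested with a query hint; a minimal sketch (the hint makes the optimizer use average density, just as local variables do):

    UPDATE tbl_ClientNotes
    SET ordering = ISNULL(@ordering, ordering)
    WHERE id = @id
    OPTION (OPTIMIZE FOR UNKNOWN);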

    ==> The static numbers table is in case multiple rows need to be updated. At the moment this is done by way of:

    UPDATE ... SET ... WHERE id = 1 OR id = 2 or ... id = x

    How exactly are you resetting buffers and updating statistics??

    ==> I restart the service and run sp_updatestats.
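
    For what it's worth, the buffer pool and plan cache can also be cleared between test runs without a full service restart (standard commands, but for test systems only):

    CHECKPOINT;              -- flush dirty pages so the drop below removes everything
    DBCC DROPCLEANBUFFERS;   -- empty the buffer pool (cold-cache reads on the next run)
    DBCC FREEPROCCACHE;      -- empty the plan cache (forces fresh compiles on the next run)
    EXEC sp_updatestats;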

    Can you post some of your ad-hoc code, and the corresponding stored procedure that was created? It's kind of tough to make some sort of analysis and recommendation without seeing the actual code.

    ==> Sure, all these update SPs look like this (for a single row):

    ALTER PROCEDURE [dbo].[sp_ClientNotes_update]
        @id uniqueidentifier,
        @ordering smallint = NULL,
        @title nvarchar(20) = NULL,
        @content text = NULL
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @local_ordering smallint = NULL;
        DECLARE @local_title nvarchar(20);
        DECLARE @local_content text;

        SET @local_ordering = @ordering;
        SET @local_title = @title;
        SET @local_content = @content;

        UPDATE tbl_ClientNotes
        SET ordering = ISNULL(@local_ordering, ordering),
            title = ISNULL(@local_title, title),
            content = ISNULL(@local_content, content)
        WHERE id = @id
    END

    The ones for multiple rows look similar, but they JOIN with a split function (built on the statically generated numbers table) that takes a string of comma-separated ids and returns a table. I'm not at work at the moment, so I'm typing this example from memory. 🙂
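
    A sketch of what that multi-row variant presumably looks like; the split function's name and shape are assumptions, not the poster's actual code:

    -- dbo.fn_SplitIds is hypothetical: it takes N'id1,id2,...' and returns a
    -- single-column table of ids (built on the static numbers table).
    UPDATE t
    SET t.ordering = ISNULL(@ordering, t.ordering),
        t.title = ISNULL(@title, t.title)
    FROM tbl_ClientNotes AS t
    JOIN dbo.fn_SplitIds(@idList) AS s
        ON s.id = t.id;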

    I'm more than willing to change or drop any of these SP designs; I'm merely trying to test and find what works best given my environment. Much appreciate your time and feedback!

    Apologies for my confusion here, but are you updating only one row per call to this procedure (as the @id parameter implies)? Rather than calling this procedure in a loop, why not use a set-based approach? That is, if looping is indeed what is happening.
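
    A minimal sketch of a set-based version using a table-valued parameter; the type and procedure names are assumptions for illustration, and nvarchar(max) stands in for the deprecated text type, which table types do not allow:

    CREATE TYPE dbo.ClientNotesChange AS TABLE
    (
        id uniqueidentifier PRIMARY KEY,
        ordering smallint NULL,
        title nvarchar(20) NULL,
        content nvarchar(max) NULL
    );
    GO
    CREATE PROCEDURE dbo.sp_ClientNotes_update_set
        @rows dbo.ClientNotesChange READONLY
    AS
    BEGIN
        SET NOCOUNT ON;

        -- One statement per batch instead of one call per row;
        -- NULL still means "keep the current value".
        UPDATE t
        SET t.ordering = ISNULL(r.ordering, t.ordering),
            t.title = ISNULL(r.title, t.title),
            t.content = ISNULL(r.content, t.content)
        FROM tbl_ClientNotes AS t
        JOIN @rows AS r
            ON r.id = t.id;
    END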

    ----------------------------------------------------

    Hello SSC-Enthusiastic,

    You might try the following points:

    1. Create a table partition.

    2. Configure the SQL Server instance option (Optimize for Ad hoc workloads).

    3. Check the Max Degree of Parallelism value.

    Hope this will give you a performance gain.

    Regards :-),

    Abdullah. Zarour
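
    For points 2 and 3, minimal sketches of the corresponding commands (the option names are the standard sp_configure names; the values shown are only illustrative):

    -- Point 2: enable the instance option.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- Point 3: check the current MAXDOP setting.
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name = N'max degree of parallelism';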

    Hi Abdullah!

    Thank you for your input. 🙂

    I've already experimented with turning on 'optimize for ad hoc workloads' (as well as altering the db to turn on forced parameterization), unfortunately not with good results: the CPU did go down, but at the same time it took a lot longer to process the test workload.

    MAXDOP is set to 1 (parallelism off) since this is recommended for OLTP environments (and it has shown the best performance during tests).

    Table partitioning is something I have on my list to experiment with. The problem is that my development box doesn't have the same IO performance as the live server, so I'm hoping I will be able to test adequately. The issue will be to find a good partitioning key.

    Appreciate your ideas!

    Heya, I'm trying to update a lot of different rows, each needing different fields updated. So a set-based approach does not qualify, I'm afraid. Unless I am missing something.

    But thanks for joining the discussion! 🙂

    If that is the case, I am wondering how many different permutations of updated columns you need compared to the number of ids... which leads me to wonder whether your database is normalized.

    ----------------------------------------------------

    Just to share with you my way of troubleshooting such issues:

    1. Try to get workload files from production.

    2. Run the DTA (Database Engine Tuning Advisor).

    It will give you recommendations on how you can improve query processing performance by modifying or adding indexes, indexed views, and partitioning.

    Regards,

    Abdullah. Zarour
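
    For reference, DTA can be run from the command line against a workload file; the server, database, and file names here are placeholders:

    dta -S YourServer -E -D YourDatabase -if workload.sql -s TuningSession1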

  • abdullah.zarour (1/13/2016)


    Hello SSC-Enthusiastic,

    You might try the following points:

    1. Create a table partition.

    2. Configure the SQL Server instance option (Optimize for Ad hoc workloads).

    3. Check the Max Degree of Parallelism value.

    Hope this will give you a performance gain.

    Regards :-),

    Abdullah. Zarour

    A table partition is not a performance tool. If anything, it can slow down your queries, because you have to break up the index to represent the different partitions. And point #3 does not say what to do once you have checked the max degree of parallelism. From my reading of many topics, that is not something to mess with, especially if you do not understand what you are doing.

    ----------------------------------------------------

  • Stijn977 (1/12/2016)


    Thanks all who contributed.

    I did continue altering my SP generator (I don't want to create an SP by hand for each table in the database); it now generates the code shown at the end below (which I think is very similar to what you just posted, Michael). This is one of my smallest tables; I have tables with 50-100 columns (again, why I needed a script to generate these SPs).

    I finished my generator and created the stored procedures for the tables used in my test workload.

    Unfortunately I can see I am hitting at best 3000 batches/sec, whereas without SPs I can hit > 3500.

    If you see any improvements I could make (I'll compare to your example shortly Michael) to the DSQL, I'm all ears.

    CREATE PROCEDURE gnr8_upd8_club_sequence_records
        @id bigint,
        @club_overall_match_stats bigint = NULL,
        @type tinyint = NULL,
        @amount int = NULL,
        @start_date datetime2 = NULL,
        @end_date datetime2 = NULL
    AS
    BEGIN
        SET NOCOUNT ON

        DECLARE @SQL_Cmd nvarchar(max);
        DECLARE @SQL_Params nvarchar(max);

        SET @SQL_Cmd = N'UPDATE dbo.fixtures WITH (REPEATABLEREAD) SET ';

        IF (@club_overall_match_stats IS NOT NULL) SET @SQL_Cmd = @SQL_Cmd + N'club_overall_match_stats = @_club_overall_match_stats,';
        IF (@type IS NOT NULL) SET @SQL_Cmd = @SQL_Cmd + N'type = @_type,';
        IF (@amount IS NOT NULL) SET @SQL_Cmd = @SQL_Cmd + N'amount = @_amount,';
        IF (@start_date IS NOT NULL) SET @SQL_Cmd = @SQL_Cmd + N'start_date = @_start_date,';
        IF (@end_date IS NOT NULL) SET @SQL_Cmd = @SQL_Cmd + N'end_date = @_end_date,';

        SET @SQL_Cmd = SUBSTRING(@SQL_Cmd, 1, LEN(@SQL_Cmd) - 1) + N' WHERE id = @_id';

        EXEC sp_executesql @SQL_Cmd,
            N'@_id bigint, @_club_overall_match_stats bigint, @_type tinyint, @_amount int, @_start_date datetime2, @_end_date datetime2',
            @_id = @id, @_club_overall_match_stats = @club_overall_match_stats, @_type = @type,
            @_amount = @amount, @_start_date = @start_date, @_end_date = @end_date
    END
    GO
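
    For illustration, a call that supplies only @amount, like the one below, makes the proc build and execute just "UPDATE dbo.fixtures WITH (REPEATABLEREAD) SET amount = @_amount WHERE id = @_id":

    EXEC gnr8_upd8_club_sequence_records @id = 1, @amount = 5;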

    Can't you just get rid of dynamic SQL altogether?

    UPDATE dbo.fixtures WITH (REPEATABLEREAD)
    SET club_overall_match_stats = ISNULL(@club_overall_match_stats, club_overall_match_stats),
        type = ISNULL(@type, type),
        amount = ISNULL(@amount, amount),
        start_date = ISNULL(@start_date, start_date),
        end_date = ISNULL(@end_date, end_date)
    WHERE id = @id
    -- optional check, to skip the update if none of the parameters contains a value
    AND (@club_overall_match_stats IS NOT NULL
         OR @type IS NOT NULL
         OR @amount IS NOT NULL
         OR @start_date IS NOT NULL
         OR @end_date IS NOT NULL
        )

    _____________
    Code for TallyGenerator

  • abdullah.zarour (1/13/2016)


    Hello SSC-Enthusiastic,

    You might try the following points:

    1. Create a table partition.

    2. Configure the SQL Server instance option (Optimize for Ad hoc workloads).

    3. Check the Max Degree of Parallelism value.

    Hope this will give you a performance gain.

    Regards :-),

    Abdullah. Zarour

    Quick questions. Have you ever partitioned a table? If so, can you provide some actual facts as to what performance gains you realized?

    Do you have your servers set to optimize for ad-hoc workloads? If so, why? Do you have any facts to back up that it will actually help the performance of the OP's system?

    Do you have MAXDOP set to 1? If so, why? What made you make this change to your system?

    abdullah.zarour (1/13/2016)


    Just to share with you my way of troubleshooting such issues:

    1. Try to get workload files from production.

    2. Run the DTA (Database Engine Tuning Advisor).

    It will give you recommendations on how you can improve query processing performance by modifying or adding indexes, indexed views, and partitioning.

    Regards,

    Abdullah. Zarour

    Did you actually read this thread? If so, you would realize that the OP is trying to change from ad hoc queries, probably embedded in code, to using stored procedures. There is nothing in production to actually look at!

    Stijn977 (1/13/2016)


    Hi Abdullah!

    Thank you for your input. 🙂

    I've already experimented with turning on 'optimize for ad hoc workloads' (as well as altering the db to turn on forced parameterization), unfortunately not with good results: the CPU did go down, but at the same time it took a lot longer to process the test workload.

    MAXDOP is set to 1 (parallelism off) since this is recommended for OLTP environments (and it has shown the best performance during tests).

    Table partitioning is something I have on my list to experiment with. The problem is that my development box doesn't have the same IO performance as the live server, so I'm hoping I will be able to test adequately. The issue will be to find a good partitioning key.

    Appreciate your ideas!

    Stijn977, do you realize that you are using the trial and error method to try to determine what the problem may be? Are you sure you have a problem?

    We are still not sure of what you are trying to do. Can you post the actual code, the code for the underlying tables, and if possible, some sample data?

    What are the exact steps you are performing?

    You said:

    Stijn977 (1/13/2016)


    Heya, I'm trying to update a lot of different rows, each needing different fields updated. So a set-based approach does not qualify, I'm afraid. Unless I am missing something.

    But thanks for joining the discussion! 🙂

    I'm willing to bet that you can write this set-based.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/
