Optimizing a Stored Procedure to Go Parallel

  • I have a stored procedure that calls several views that rely on each other. In the past these views used to go parallel and use 100% of the CPU (all 12 cores), but now when the same stored procedure runs it only uses 8% of the CPU (1 core). This stretches the query time from roughly 10-15 seconds to 2-3 minutes. I'm not quite sure why this is happening. Are there some obvious things to look at when optimizing views to utilize all cores/threads? Also, it doesn't matter whether I set Cost Threshold for Parallelism to 1, 5, or 50; the behavior is always the same. I also have Max Degree of Parallelism set to 0, which should mean all cores are used when available.

    Thanks in advance,

    Peter

  • pkapas (1/22/2015)



    Run the query with Show Actual Execution Plan on in SSMS. Compare actual and estimated rows on all branches of the query. My strong suspicion is that your statistics are stale and you are getting estimates of 1 row (or very small numbers) while actuals are MUCH higher. This will lead to index seek/lookup/loop join plans that are HORRIBLE performers at higher row counts!! I see this ALL THE TIME at clients - EVERYONE relies on auto update statistics. A threshold of 20% + 500 rows just does not cut it!!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
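
    A quick way to act on this advice is to check how stale the statistics actually are and then refresh them with a full scan. This is a minimal sketch; `dbo.YourTable` is a placeholder for the tables behind your views:

    ```sql
    -- How stale are the statistics? (requires SQL Server 2008 R2 SP2 / 2012 SP1 or later)
    SELECT s.name                 AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.rows_sampled,
           sp.modification_counter  -- changes since the stats were last updated
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID(N'dbo.YourTable');

    -- If modification_counter is large relative to rows, refresh with a full scan:
    UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
    ```

    Repeat for each base table the views touch; the auto-update threshold Kevin mentions means large tables can drift a long way before SQL Server refreshes stats on its own.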

  • pkapas (1/22/2015)



    Quick question, can you post the execution plan?

    😎

  • Take a look at make_parallel() by Adam Machanic. I have had a lot of success with it, and it may be helpful in this situation.

    http://sqlblog.com/blogs/adam_machanic/archive/2013/07/11/next-level-parallel-plan-porcing.aspx

    "I can't stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001
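
    For anyone trying this, the usage pattern (as I recall it from the post - verify against the article, since the function itself comes from there) is to CROSS APPLY your query against the function so the optimizer is pushed toward a parallel plan. The view name below is a placeholder:

    ```sql
    -- Assumes dbo.make_parallel() has been created from Adam Machanic's linked post.
    -- Wrap the query you want parallelized inside the APPLY.
    SELECT q.*
    FROM dbo.make_parallel() AS mp
    CROSS APPLY
    (
        SELECT v.*            -- placeholder: your original view/query goes here
        FROM dbo.YourView AS v
    ) AS q;
    ```

    The post presents this as an alternative to the undocumented trace flag 8649, so it is a workaround rather than a fix for whatever is suppressing parallelism in the first place.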

  • Thank you for the replies. I have attached the execution plan in different formats, and the message after the execution of the stored procedure.

    It is a stored procedure that calls a view that relies on 5 other views, and within those views other views are called as well. So it's a bit complex.

    If the stats refresh doesn't work,

    do you have a DBA monitoring the servers?

    It's possible that, having seen the processors running hot, someone set Max Degree of Parallelism to 1 to smooth out the processors and keep resources in reserve for other queries. Or has someone added an OPTION (MAXDOP 1) hint somewhere?

    _________________________________________________________________________
    SSC Guide to Posting and Best Practices
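
    Both possibilities are easy to check. This sketch reads the server-wide settings and then searches module definitions for a stray MAXDOP hint:

    ```sql
    -- Server-wide parallelism settings ('max degree of parallelism' = 0 means use all cores)
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name IN (N'max degree of parallelism', N'cost threshold for parallelism');

    -- Look for MAXDOP hints hiding inside procedures, views, or functions
    SELECT OBJECT_NAME(object_id) AS module_name
    FROM sys.sql_modules
    WHERE definition LIKE '%MAXDOP%';
    ```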

  • I'm a one-man team for the whole company, taking care of System Administration, Database Administration, Development, Networking, Implementations, End-User support, etc. Basically everything IT. A jack of all trades in a sense. I'm learning as much as I can. I started off with MS Access and recently moved our queries and tables over to SQL Server, so I'm still fairly new to it, and no one has really touched anything in SQL Server besides me. As far as I know, it's running with default settings, but memory and CPU settings are set to use all resources when needed. We also have SSDs in this system where the DBs reside, and the system is fairly idle unless queries run.

  • pkapas (1/22/2015)



    There are some DISASTROUSLY BAD defaults in SQL Server (and Windows OS and your file system too!!). I have a SQL Saturday session entitled "SQL Server Defaults SUCK!!". 😎

    I can promise you there are a HUGE number of things your team is doing wrong/suboptimally/not doing that are causing untold issues.

    On the flip side, you (like all of the clients I first engage with) prove what I have been saying for a decade+: SQL Server is an AMAZINGLY good product considering how well it does with SOOO many entities mistreating it SOOO badly!! :w00t:

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Haha, well that's a + for Microsoft then. A good product starts with making sure it can adapt to even inexperienced users. I'm planning on getting better with SQL Server in the next couple of years, but I guess I can only do that by researching and trying out new things. Any ideas how I could tweak my SQL Server, or change defaults, to get better performance? I'm not an expert at understanding the execution plan either. Where are you supposed to look for the estimated number of rows? Is it at the last step? For that one it says 3.30018 rows. Is this the issue, that it cannot estimate the proper number of rows? The actual number of rows affected was 12,192. Why am I getting such a bad estimate?
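
    One lightweight way to see estimated vs. actual rows side by side for every operator, without clicking through each node of the graphical plan, is the text-mode profile output. A sketch - the procedure name is a placeholder; run it in SSMS:

    ```sql
    -- Emits one result row per plan operator, with both
    -- Rows (actual) and EstimateRows columns for comparison.
    SET STATISTICS PROFILE ON;
    EXEC dbo.YourStoredProcedure;   -- placeholder: your procedure here
    SET STATISTICS PROFILE OFF;
    ```

    Big gaps between EstimateRows and Rows deep in the plan (not just at the final step) are what point to stale statistics or estimation problems.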

  • TheSQLGuru (1/22/2015)



    Like autogrowing mdf files by 1 MB. That's been going on since the '90s.

    "I can't stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001
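
    If your databases are still on the default growth settings, switching to a fixed-size increment is the usual fix. A sketch - the database and logical file names are placeholders; find yours in the first query:

    ```sql
    -- Current growth settings (is_percent_growth = 1 means percentage growth)
    SELECT DB_NAME(database_id) AS db, name, growth, is_percent_growth
    FROM sys.master_files;

    -- Switch to sensible fixed increments (database and file names are placeholders)
    ALTER DATABASE YourDb
        MODIFY FILE (NAME = YourDb_Data, FILEGROWTH = 256MB);
    ALTER DATABASE YourDb
        MODIFY FILE (NAME = YourDb_Log, FILEGROWTH = 128MB);
    ```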

  • Haha, yeah, I noticed that, and also the growth-by-percentage default for the log files. I recently read the optimization article on this site and tried to stick to most of the points listed in it. It was a good read: http://www.sqlservercentral.com/articles/Performance/119308/

  • pkapas (1/22/2015)



    There are a myriad of reasons for bad estimates.

    Best way to a) get your system in shape and b) learn, is to have your company bring in an expert for a performance review. Significant ROI there on many levels. Stuff like this is shooting fish in a barrel for a qualified consultant!!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
