CPU usage overhead after setting Max DOP

  • Hi All,
    I recently changed MAXDOP on our production server from 0 to 8. The machine has 16 cores in total.
    This was done to prevent a hanging query from consuming all of the machine's CPU (100% CPU until the query is killed).
    However, I have noticed that the overall average CPU consumption has almost doubled.
    Could that be because of the MAXDOP change?
    Your expert opinions are welcome 🙂
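    For reference, a server-wide change like the one described above is usually made with sp_configure. A minimal sketch (8 here is simply the value used in this thread, not a blanket recommendation):

        -- 'max degree of parallelism' is an advanced option, so expose it first.
        EXEC sys.sp_configure N'show advanced options', 1;
        RECONFIGURE;

        -- Limit the degree of parallelism for parallel plans to 8.
        EXEC sys.sp_configure N'max degree of parallelism', 8;
        RECONFIGURE;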

  • I think you should have fixed the one query that was the issue, since you already identified it as the problem.
    Instead, you changed the behavior of the entire server to account for one bad query.

    In my shop, we already have MAXDOP set to 8 instead of the default of zero, because I'm under the impression that merging more than eight parallel streams can often cost more than merging a smaller number of streams, so in general I think 8 is a sweet spot.

    What does "double the CPU" mean for you? From 25 to 50 percent, or from 2 percent to 4 percent?
    I think the details are going to be important here.

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • Hi Lowell,
    You are right about fixing the query. We are already working with the third-party vendor on this; they have to come up with a fix for it.
    But in the meantime, I had to find a way to keep the issue from killing the server's CPU.
    8 for MAXDOP is the sweet spot; it is also what Microsoft recommends.
    As for the details: I am talking about a change in the average from the low 30s to the high 50s (percent CPU).
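    For quantifying a before/after change like that, SQL Server keeps its own recent CPU history in the scheduler monitor ring buffer, which is usually more reliable than eyeballing Task Manager. A minimal sketch of that well-known query (the 60-row window is an arbitrary choice):

        -- One row per minute of recent history: SQL Server CPU, idle, and "other" CPU.
        DECLARE @ts_now BIGINT =
            (SELECT cpu_ticks / (cpu_ticks / ms_ticks) FROM sys.dm_os_sys_info);

        SELECT TOP (60)
               DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS event_time,
               SQLProcessUtilization AS sql_cpu_pct,
               SystemIdle            AS idle_pct,
               100 - SystemIdle - SQLProcessUtilization AS other_cpu_pct
        FROM (
            SELECT record.value('(./Record/@id)[1]', 'int') AS record_id,
                   record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS SystemIdle,
                   record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SQLProcessUtilization,
                   [timestamp]
            FROM (
                SELECT [timestamp], CONVERT(XML, record) AS record
                FROM sys.dm_os_ring_buffers
                WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
                  AND record LIKE N'%<SystemHealth>%'
            ) AS rb
        ) AS evt
        ORDER BY record_id DESC;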

    Salim

  • In addition to MAXDOP, it's a good idea to set the Cost Threshold for Parallelism to a higher value than the out-of-the-box default of 5. You'll see fewer plans going parallel unnecessarily if you make that adjustment.
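    A minimal sketch of that change (50 is only an illustrative value; picking the right number for a given workload is discussed further down the thread):

        -- 'cost threshold for parallelism' is an advanced option as well.
        EXEC sys.sp_configure N'show advanced options', 1;
        RECONFIGURE;

        -- Only plans with an estimated cost above 50 become eligible to go parallel.
        EXEC sys.sp_configure N'cost threshold for parallelism', 50;
        RECONFIGURE;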

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey - Wednesday, August 30, 2017 8:54 AM

    In addition to MAXDOP, it's a good idea to set the Cost Threshold for Parallelism to a higher value than the out-of-the-box default of 5. You'll see fewer plans going parallel unnecessarily if you make that adjustment.

    I don't have much experience there, but I'll do some research on this value. From your experience, though, has setting MAXDOP caused an increase in the average CPU consumption of a SQL Server machine?

  • Grant Fritchey - Wednesday, August 30, 2017 8:54 AM

    In addition to MAXDOP, it's a good idea to set the Cost Threshold for Parallelism to a higher value than the out-of-the-box default of 5. You'll see fewer plans going parallel unnecessarily if you make that adjustment.

    +1 to Grant's suggestion.  I've heard the same thing for years now, and I have set CTFP to 50.  It did nothing but good things, because most queries stopped going parallel when they didn't need to.
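    If you want to confirm what an instance is actually running with before and after changes like these, the current values are visible in sys.configurations. A quick sketch:

        -- 'value' is the configured setting; 'value_in_use' is what is currently active.
        SELECT name, value, value_in_use
        FROM sys.configurations
        WHERE name IN (N'max degree of parallelism',
                       N'cost threshold for parallelism');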

  • salimdallal - Wednesday, August 30, 2017 1:47 PM

    Grant Fritchey - Wednesday, August 30, 2017 8:54 AM

    In addition to MAXDOP, it's a good idea to set the Cost Threshold for Parallelism to a higher value than the out-of-the-box default of 5. You'll see fewer plans going parallel unnecessarily if you make that adjustment.

    I don't have much experience there, but I'll do some research on this value. From your experience, though, has setting MAXDOP caused an increase in the average CPU consumption of a SQL Server machine?

    For your research, here's a page where Jonathan posted how to query the plan cache for this:  https://www.sqlskills.com/blogs/jonathan/tuning-cost-threshold-for-parallelism-from-the-plan-cache/
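    A simplified sketch of that idea (not Jonathan's full query): pull the estimated subtree cost out of the plan cache so you can see where your workload's costs actually sit relative to the threshold. The TOP (20) is an arbitrary cutoff:

        -- Highest estimated costs currently in the plan cache.
        WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
        SELECT TOP (20)
               qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS est_subtree_cost,
               cp.usecounts,
               cp.objtype,
               qp.query_plan
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
        WHERE qp.query_plan.exist('//StmtSimple[@StatementSubTreeCost]') = 1
        ORDER BY est_subtree_cost DESC;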

  • salimdallal - Wednesday, August 30, 2017 1:47 PM

    I don't have much experience there, but I'll do some research on this value. From your experience, though, has setting MAXDOP caused an increase in the average CPU consumption of a SQL Server machine?

    Directly? No, not really. However, changing the value will result in the optimizer making different choices since that value is taken into account. I haven't seen a radical set of changes from changing this value, but I can envision how it might happen. Do you have before & after execution plans?

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey - Wednesday, August 30, 2017 8:54 AM

    In addition to MAXDOP, it's a good idea to set the Cost Threshold for Parallelism to a higher value than the out-of-the-box default of 5. You'll see fewer plans going parallel unnecessarily if you make that adjustment.

    Grant, is changing MAXDOP (with reference to the Microsoft documentation, based on the NUMA configuration) and the Cost Threshold for Parallelism a unanimous recommendation, or are there other areas and dependencies to be considered? Thank you.

  • Arsh - Thursday, August 31, 2017 7:48 AM

    Grant Fritchey - Wednesday, August 30, 2017 8:54 AM

    In addition to MAXDOP, it's a good idea to set the Cost Threshold for Parallelism to a higher value than the out-of-the-box default of 5. You'll see fewer plans going parallel unnecessarily if you make that adjustment.

    Grant, is changing MAXDOP (with reference to the Microsoft documentation, based on the NUMA configuration) and the Cost Threshold for Parallelism a unanimous recommendation, or are there other areas and dependencies to be considered? Thank you.

    Nothing is ever unanimous. Of course there are differences of opinion on all this.

    The general consensus is that you should change these values from the defaults. Exactly what to change them to is open to debate. I defer to others on how to deal with MAXDOP since it gets so much into hardware architecture where I just don't have adequate knowledge. I follow the guidelines offered by the SQLSkills team on this.

    The Cost Threshold for Parallelism value, on the other hand, I can make specific suggestions on. I have a document on the best way to identify a specific value for your system on my blog. If you don't want to do all that work, then I can suggest starting values of 50 for OLTP and 30 for DW.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey - Thursday, August 31, 2017 8:10 AM

    Arsh - Thursday, August 31, 2017 7:48 AM

    Grant Fritchey - Wednesday, August 30, 2017 8:54 AM

    In addition to MAXDOP, it's a good idea to set the Cost Threshold for Parallelism to a higher value than the out-of-the-box default of 5. You'll see fewer plans going parallel unnecessarily if you make that adjustment.

    Grant, is changing MAXDOP (with reference to the Microsoft documentation, based on the NUMA configuration) and the Cost Threshold for Parallelism a unanimous recommendation, or are there other areas and dependencies to be considered? Thank you.

    Nothing is ever unanimous. Of course there are differences of opinion on all this.

    The general consensus is that you should change these values from the defaults. Exactly what to change them to is open to debate. I defer to others on how to deal with MAXDOP since it gets so much into hardware architecture where I just don't have adequate knowledge. I follow the guidelines offered by the SQLSkills team on this.

    The Cost Threshold for Parallelism value, on the other hand, I can make specific suggestions on. I have a document on the best way to identify a specific value for your system on my blog. If you don't want to do all that work, then I can suggest starting values of 50 for OLTP and 30 for DW.

    Thanks Grant. I visited your blog on this, and you are right. In my experience, it becomes very tempting to change MAXDOP and CTFP immediately unless one comes across a knowledgeable person. These two parameters can sometimes feel like alternatives to each other, but I feel they really work together, and so they are crucial to understand properly. With that in place, wouldn't statistics play a major role, given that the optimizer has to choose between a parallel plan and a serial plan and the cost calculation itself depends on the statistics? Also, what is the cost it calculates to determine whether the query crosses the threshold?

  • Arsh - Thursday, September 7, 2017 7:43 AM

    Thanks Grant. I visited your blog on this, and you are right. In my experience, it becomes very tempting to change MAXDOP and CTFP immediately unless one comes across a knowledgeable person. These two parameters can sometimes feel like alternatives to each other, but I feel they really work together, and so they are crucial to understand properly. With that in place, wouldn't statistics play a major role, given that the optimizer has to choose between a parallel plan and a serial plan and the cost calculation itself depends on the statistics? Also, what is the cost it calculates to determine whether the query crosses the threshold?

    Yeah, absolutely: the statistics and other methods of row estimation are the driving factors behind how the optimizer determines costs for execution plans. It's those estimated costs that are compared to the cost threshold to determine whether a plan can go parallel. However, just because a plan passes the cost threshold doesn't mean it will go parallel, just that it can. The optimizer will also cost out a parallel plan (there are VERY grotty, detailed exceptions to this) when one is available, and it may be chosen as the least-cost plan. However, it's all based on those estimated row counts, whether from the statistics, from fixed values, or from the row count calculations. Those costs are visible within execution plans.
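    To see that number for a specific query, you can capture the estimated plan and read the root statement's estimated subtree cost; that is the value compared against the cost threshold. A minimal sketch (the query itself is just a placeholder against system views):

        -- Return the estimated showplan XML instead of executing the query.
        SET SHOWPLAN_XML ON;
        GO
        SELECT o.name AS object_name, c.name AS column_name
        FROM sys.objects AS o
        JOIN sys.columns AS c ON c.object_id = o.object_id
        ORDER BY o.name, c.name;
        GO
        SET SHOWPLAN_XML OFF;
        GO
        -- In the returned XML, StmtSimple/@StatementSubTreeCost is the estimated cost
        -- compared against 'cost threshold for parallelism'. Crossing it only makes a
        -- parallel plan eligible; the optimizer still picks whichever plan costs less.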

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
