January 8, 2016 at 6:44 am
Hi
CXPACKET11143860.7210627025.50516835.2297746866455.110.01140.01090.0005
the values for CXPACKET are very high.
We had max degree of parallelism set to 6, then we changed it to 1. Still no good.
Any ideas?
Thanks
January 8, 2016 at 9:04 am
A couple of big things are worth pointing out.
First, a high percentage of CXPACKET waits doesn't necessarily mean there's a problem. The only thing you know for sure from seeing CXPACKET waits is that some queries are running in parallel.
The control thread accumulates CXPACKET waits for the duration of the parallel portions of the query, and that sort of waiting is benign. CXPACKET waits can indicate a problem when the child threads are given unequal amounts of work to do, and some of them have to wait for the others to finish. This can happen if your data is skewed and the child threads are assigned ranges the optimizer thinks are equal, but that actually contain very different numbers of rows.
You would need to dig in more and actually see whether you're seeing the benign sort of CXPACKET wait or the troublesome sort, and then run some tests to know whether changing MAXDOP is a good idea.
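One way to start that digging is to look at live CXPACKET waits per thread while a suspect query is running. This is a minimal sketch using `sys.dm_os_waiting_tasks`; the interpretation in the comments is the general rule, not something specific to your instance:

```sql
-- Sketch: look at current CXPACKET waits broken down by thread.
-- exec_context_id = 0 is the control (coordinator) thread; its CXPACKET
-- waits are generally the benign kind. Long waits on child threads
-- (exec_context_id > 0) can point to skewed work distribution.
SELECT  wt.session_id,
        wt.exec_context_id,
        wt.wait_duration_ms,
        wt.resource_description
FROM    sys.dm_os_waiting_tasks AS wt
WHERE   wt.wait_type = N'CXPACKET'
ORDER BY wt.session_id, wt.exec_context_id;
```

Run it a few times while the workload is active; one snapshot on its own doesn't tell you much.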
Second, it looks like you're pulling those numbers straight from sys.dm_os_wait_stats? If so, those numbers are cumulative since the instance started (or since someone last cleared them with DBCC SQLPERF).
If the total CXPACKET wait count and duration are already very high, you wouldn't expect those cumulative numbers to change dramatically right after a MAXDOP change.
It's much better to set up a job that captures the information from sys.dm_os_wait_stats at a defined interval and calculates the difference from the previous capture, so you get more granular information. (I default to every 5 minutes unless there's a specific problem that even more granular data would help with.) One simple example of how to do this can be found here: http://simplesqlserver.com/2013/05/20/wait-stats-sys-dm_os_wait_stats-monitoring-and-using/.
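The linked post has a fuller version, but the snapshot-and-diff idea can be sketched like this. The table and column names here are illustrative, not a standard:

```sql
-- Sketch: snapshot wait stats on a schedule, then diff consecutive rows.
CREATE TABLE dbo.WaitStatsSnapshot
(
    capture_time        datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
    wait_type           nvarchar(60) NOT NULL,
    waiting_tasks_count bigint       NOT NULL,
    wait_time_ms        bigint       NOT NULL,
    signal_wait_time_ms bigint       NOT NULL
);

-- Run this from an Agent job every 5 minutes (or your chosen interval).
INSERT dbo.WaitStatsSnapshot (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM   sys.dm_os_wait_stats;

-- Then look at how much wait accrued in each interval rather than the
-- cumulative totals (LAG requires SQL Server 2012 or later).
SELECT  capture_time,
        wait_type,
        wait_time_ms
        - LAG(wait_time_ms) OVER (PARTITION BY wait_type
                                  ORDER BY capture_time) AS wait_ms_in_interval
FROM    dbo.WaitStatsSnapshot
WHERE   wait_type = N'CXPACKET'
ORDER BY capture_time;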
Cheers!
January 8, 2016 at 1:15 pm
river1 (1/8/2016)
Hi
CXPACKET11143860.7210627025.50516835.2297746866455.110.01140.01090.0005
the values for CXPACKET are very high.
We had the parallelism in 6, then we changed to 1. No good still
Any idea ?
Thanks
A lot of times, CXPACKET is just a symptom. The cause frequently turns out to be code that needs a lot of help.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 8, 2016 at 1:21 pm
A couple of things:
The waits are cumulative since SQL Server started. So unless you're taking snapshots and comparing them, a change won't appear to have had any effect, because all of the history is still included in the totals. Don't just look at the raw wait stats; you need to look at how the numbers change over time.
A CXPacket wait tells you that queries are running in parallel. Nothing more. It is not automatically indicative of a problem, and it is definitely *not* a reason to make a knee-jerk change to the maxdop setting. Setting maxdop to 1 will probably cause far more harm than good. If you haven't analysed the parallel queries and determined that there's definitely inappropriate parallelism that nothing else (like raising the cost threshold or tuning the queries) fixes, then you've probably made a bad config change.
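For reference, the cost threshold option mentioned above is an instance-level setting. This is a hedged sketch of how you would change it; the value 50 below is a commonly suggested starting point, not a universal recommendation, so test against your own workload:

```sql
-- Sketch: raise 'cost threshold for parallelism' from its low default of 5
-- so that only genuinely expensive queries are considered for parallel plans.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'cost threshold for parallelism', 50;  -- 50 is illustrative
RECONFIGURE;
```

This is usually a better first lever than forcing MAXDOP 1, because it leaves parallelism available for the big queries that actually benefit from it.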
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 10, 2016 at 12:02 am
river1 (1/8/2016)
Hi
CXPACKET11143860.7210627025.50516835.2297746866455.110.01140.01090.0005
the values for CXPACKET are very high.
We had the parallelism in 6, then we changed to 1. No good still
Any idea ?
Thanks
Piling on: with this little information there is no way anyone can even start to guess whether there is a problem related to parallelism, let alone propose a "solution". Making drastic changes like MAXDOP 1 is therefore not a good idea and, as Gail said, will probably only cause more problems.
😎