A couple of things there...
First, there's that Cost Threshold for Parallelism thing. 5 is way too low. I have a mixed OLTP and batch-processing system, and I've found the sweet spot for the OLTP side of it is right around 25, so that's what I set it to. I did set it to 5 once to see what would happen, and things got really ugly CPU-wise in a hurry. I also tried increments of 5 up to 50, and 25 was the sweet spot.
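If you want to try 25 as a starting point, the change is a one-liner and takes effect immediately (no restart needed). A minimal sketch:

```sql
-- Raise Cost Threshold for Parallelism from the default to 25.
-- "show advanced options" must be on to see/change this setting.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'cost threshold for parallelism', 25;
RECONFIGURE;
```

Remember this is instance-wide, so test it against your whole workload, not just the one query.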
You said this LINQ query is doing a scan even on a good day. That's not a good thing. Can you attach the Actual Execution Plan (the actual saved plan file, not a graphic) so we can take a look for you?
Getting back to the index maintenance you ran that seems to coincide with this incident: Ola's code is fine... the way people use it is not. If you're using the typical settings of REORGANIZE at 10% and REBUILD at 30%, you're actually doing things wrong, despite a whole lot of people claiming that it's a best practice.
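For reference, the "typical" way people call Ola's IndexOptimize with those 10/30 thresholds looks something like this (parameter names are from his published scripts; this is a sketch of the common pattern, not a recommendation):

```sql
-- The widespread (and, in my opinion, wrong) 10/30 pattern:
-- REORGANIZE between 10% and 30% fragmentation, REBUILD above 30%.
EXECUTE dbo.IndexOptimize
    @Databases           = 'USER_DATABASES',
    @FragmentationLow    = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh   = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 10,
    @FragmentationLevel2 = 30;
```

If that's what your job looks like, read on, because the REORGANIZE part in particular is doing you no favors.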
First of all, NEVER (and I don't say that often) REORGANIZE any index that has a "0" or "100" Fill Factor. It doesn't work the way most people think. It would take an hour to explain all the gazintas here, but REORGANIZE removes critical free space from pages right when it's needed most, frequently does nothing to improve average page density because it can't create extra pages, and isn't the tame little kitty that most people claim it is when it comes to resource usage. I also tell people that the only time you should use REORGANIZE is to compact LOBs and, if you ever need to do that, then you need to see part 3 of my series on index maintenance so that you can avoid that in the future.
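For completeness, that one legitimate use of REORGANIZE (LOB compaction) looks like this; the table and index names here are hypothetical placeholders:

```sql
-- Compact LOB data for a single index. LOB_COMPACTION defaults to ON
-- for REORGANIZE; it's spelled out here for clarity.
ALTER INDEX [PK_SomeTable]          -- hypothetical index name
    ON dbo.SomeTable                -- hypothetical table name
    REORGANIZE
    WITH (LOB_COMPACTION = ON);
```

Again, treat this as a one-time cleanup, not a recurring job.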
I'll also tell you to NEVER do index maintenance on anything that has a "0" Fill Factor unless the purpose of the REBUILDs is to assign proper Fill Factors, which is a bit of an art. Even REBUILDing indexes with a "0" or improperly assigned "100" Fill Factor can cause what it caused for me... MASSIVE BLOCKING on the morning after the index maintenance ran, due to all the bad page splits that occur on most of your indexes, even the ones with an ever-increasing key, because no one ever correctly accounts for "ExpAnsive Updates". And, no... simply lowering the Fill Factor does NOT fix that problem. In fact, you may be causing a hell of a lot of wasted memory and disk space with absolutely no benefit.
And, to be sure, Paul Randal is the guy who wrote much of, and was in charge of the people writing, the index maintenance stuff for SQL Server, and he's also the guy who wrote the BOL entry for sys.dm_db_index_physical_stats. In many of his presentations, he says flat out that MS pressured him for a recommendation, and that's when the 10/30 recommendation was born. If you actually do an in-depth study of his BOL write-up, you'll see that there's much more to index maintenance than just the 10/30 thing that most people have mislabeled as "Best Practice". It's actually better to do no index maintenance at all than to do it wrong, and the 10/30 thing is SO WRONG! I'm proof of that... I stopped doing any index maintenance for more than 2 years; CPU usage dropped from an average of 22% across 32 CPUs to 8% in 3 months and stayed there for the duration, and blocking has all but vanished.
To be sure, though, there are some indexes that end up with very low page densities, and you will need to do REBUILDs on those to recover memory and disk space (don't do REORGANIZE on these... it usually won't actually help). I recommend REBUILDing with a Fill Factor of 82 for anything with a page density below 82%, and with 92 for anything between 82% and 92%. To be equally sure, that should only be a stopgap until you can actually determine the correct Fill Factor.
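A sketch of how you might find those low-density candidates and apply the stopgap thresholds above (object names in the ALTER INDEX are hypothetical; adjust the page_count floor to taste):

```sql
-- Find indexes with low average page density.
-- avg_page_space_used_in_percent is only populated in
-- SAMPLED or DETAILED modes, not LIMITED.
SELECT  OBJECT_NAME(ips.object_id)         AS TableName,
        i.name                             AS IndexName,
        ips.page_count                     AS PageCount,
        ips.avg_page_space_used_in_percent AS PageDensity
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
WHERE   ips.avg_page_space_used_in_percent < 92
        AND ips.page_count > 1000          -- skip tiny indexes
        AND ips.index_id   > 0             -- skip heaps
ORDER BY ips.avg_page_space_used_in_percent;

-- Stopgap only, per the thresholds above:
--   density < 82%        -> REBUILD WITH (FILLFACTOR = 82)
--   82% <= density < 92% -> REBUILD WITH (FILLFACTOR = 92)
ALTER INDEX [IX_SomeIndex]             -- hypothetical index name
    ON dbo.SomeTable                   -- hypothetical table name
    REBUILD WITH (FILLFACTOR = 82);
```

Note that SAMPLED mode still reads a fair chunk of the index, so run this off-hours on big tables.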
So, to summarize this post...
- Please attach the saved Actual Execution Plan so we can make a recommendation as to what needs to be done to your LINQ query and...
- Except for the occasional page density repair described above, stop doing index maintenance on things that have a "0" Fill Factor until you can figure out the best Fill Factor, which includes determining how "ExpAnsive Updates" are affecting your indexes, causing usually unnecessary page splits even on indexes with "Ever-Increasing Keys", and causing blocking you might not even be aware of.
- Change the Cost Threshold for Parallelism to 25 to start with and then slowly adjust it to find your "sweet spot". Or just leave it at 25. Some people say to set it to 50... I wouldn't go that high as the first and only guess.
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
________Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
"If you think its expensive to hire a professional to do the job, wait until you hire an amateur."--Red Adair
"Change is inevitable... change for the better is not."
When you put the right degree of spin on it, the number 3|8 is also a glyph that describes the nature of a DBA's job. 😉
How to post code problems
Create a Tally Function (fnTally)