Look for queries which are heavy on BLOB processing.
You mentioned BizTalk - do you have some massive FOR XML queries? It may be either massive XML (or JSON, you name it) objects, or a large number of smaller BLOBs. Or, most likely, it's a bunch of XML documents being transformed for consumption by BizTalk.
The other metrics mentioned in the OP are mainly about writes; they won't show anything when the server is on its knees under pressure from BizTalk. They may actually even improve, as other queries have to wait in line for memory allocation until BizTalk is done with its thing.
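One way to hunt for those: a minimal sketch that scans the plan cache for expensive statements containing FOR XML, ordered by logical reads. The LIKE filter is a rough heuristic (it will miss XML work done other ways and can match comments), so adjust the pattern for your own workload.

```sql
-- Sketch: find cached statements containing FOR XML, heaviest readers first.
SELECT TOP (20)
        qs.total_logical_reads,
        qs.total_worker_time,
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1, 200) AS statement_sample
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE   st.text LIKE N'%FOR XML%'
ORDER BY qs.total_logical_reads DESC;
```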
Hmmm... prior to 2005, LOBs such as TEXT, NTEXT, and IMAGE defaulted to being stored out of row. In 2005, the new MAX datatypes and the bloody XML datatype defaulted to being stored in-row. That's been a huge problem on my machines because of the way SQL Server is compelled to store data in the page the key says it belongs on. I've proven that this can, and frequently does, cause what I call "trapped short rows" (sometimes as little as one INT per page): a page holding a row with no LOB data gets stranded because the adjacent pages are chock-full with a single row or two whose rather large LOBs fill the page. It's a huge waste of memory and disk space, not to mention a huge source of massive page splits if the rows are inserted without the LOBs and then updated later, which is a guaranteed massive "ExpAnsive" update.
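You can spot that symptom by checking page density on the clustered index. A sketch, assuming a hypothetical table name dbo.YourTable; trapped short rows show up as low average page fullness combined with a small record count per page. DETAILED mode reads every page, so run it off-hours on big tables.

```sql
-- Sketch: leaf-level page density of the clustered index (index_id = 1).
SELECT  index_level,
        page_count,
        record_count,
        avg_page_space_used_in_percent,
        avg_record_size_in_bytes
FROM    sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID(N'dbo.YourTable'), 1, NULL, 'DETAILED')
WHERE   index_level = 0;   -- leaf level only
```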
As a result of forcing such LOBs out of row, I've saved a shedload on the number of pages a Clustered Index actually contains. I've also done similar with large VARCHARs (change them to MAX and force them out of row). If a table contains LOBs AND has a bloody Modified_BY varchar column, I've found that changing those to a MAX datatype wedges them nicely in amongst the larger out-of-row values without taking up extra space. It's kind of like getting some free storage because, although the out-of-row data is still stored on pages, the way the pages are used for out-of-row storage is quite different.
Of course, to prevent page splits on the CI with out-of-row LOBs, you have to provide (best done with a default) at least a 1-byte value so that the pointer in the LOB pointer column of the CI materializes during the INSERT instead of as a later "ExpAnsive" UPDATE.
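The two pieces together look roughly like this - a sketch, assuming a hypothetical dbo.EventLog table with a VARCHAR(MAX) column named Payload: the table option pushes MAX/XML values out of row, and the 1-byte default makes the in-row pointer materialize at INSERT time.

```sql
-- Sketch: force large value types out of row for the whole table.
EXEC sys.sp_tableoption
     @TableNamePattern = N'dbo.EventLog',
     @OptionName       = 'large value types out of row',
     @OptionValue      = 1;

-- Sketch: 1-byte default so the LOB pointer is created on INSERT,
-- not during a later "ExpAnsive" UPDATE. Names are hypothetical.
ALTER TABLE dbo.EventLog
  ADD CONSTRAINT DF_EventLog_Payload
      DEFAULT ' ' FOR Payload;
```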
Doing that has cut way down on performance issues due to page splits, which frequently can NOT be controlled just by reducing the FILL FACTOR because, except for one rare instance, INSERTs ignore the FILL FACTOR and fill pages to as close to 100% as possible.
Of course, the best thing is NOT to store XML, JSON, and lengthy comments in your database but, if that's unavoidable, folks should try forcing them out of row (set the table option and then do "in-place" updates to push the existing data out).
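Setting the option only affects values written afterwards; existing rows keep their LOBs in-row until they're rewritten. A sketch of the "in-place" update that pushes them out (dbo.EventLog and Payload are hypothetical names; batch this on big tables to keep the log from ballooning):

```sql
-- Sketch: rewrite each LOB value so it moves out of row under the
-- 'large value types out of row' table option set earlier.
UPDATE  dbo.EventLog
SET     Payload = Payload
WHERE   Payload IS NOT NULL;
```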
As a bit of a sidebar, it's a damned shame that they didn't make it a database option instead of just a table option because there's just no merit to storing LOBs in-row if they happen to fit.
RBAR is pronounced "ree-bar" and is a "Modenism" for "Row By Agonizing Row".
First step towards the paradigm shift of writing Set Based code:
________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
"Change is inevitable... change for the better is not".
"Dear Lord... I'm a DBA so please give me patience because, if you give me strength, I'm going to need bail money too!"
How to post code problems
How to Post Performance Problems
Create a Tally Function (fnTally)