If you haven't heard, PASS is running a contest where you could win conference registration to this year's PASS Community Summit or, if you have that covered, four nights of hotel costs. All you have to do is enter by July 1; check the link for details. Here's the best thing I learned, which came from the 2004 PASS Community Summit:
It was a packed, standing-room-only presentation. The late Ken Henderson was explaining how SQL Server allocates memory, including the MemToLeave region. He discussed how large execution plans can tap memory from this space and how any such allocation requires contiguous memory. So if MemToLeave is fragmented and you need a large amount of memory, a contiguous block may not be available, and you get an error. I filed away this knowledge, never thinking I would use it.
Fast forward a couple of years: one of our teams kept reporting that a nightly process was failing with an error indicating it couldn’t allocate contiguous memory. My mind went back to Henderson’s talk. The error said it couldn’t allocate 229 MB of contiguous memory, and since the default MemToLeave region was only 256 MB total, that wasn’t surprising. Some were blaming SQL Server and asking that a case be opened with Microsoft. I pointed to the amount of memory and asked what needed that much. We looked and found a horrendous query in a looping piece of code. The query was being built dynamically, appending another AND clause to the WHERE each time a record was processed, to exclude that record from the next pass, and the process had tens of thousands of rows to process nightly. I can’t imagine what that execution plan looked like. We had a likely culprit and forced them to rewrite the process. After they did, the process never failed again.
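To make the anti-pattern concrete, here is a hypothetical sketch in Python (the original process was T-SQL, which isn't shown; the table and column names `Queue`, `QueueID`, `Status`, and `Processed` are invented for illustration). It contrasts a query that grows by one AND clause per record with a constant-size, flag-based alternative:

```python
# Hypothetical reconstruction of the anti-pattern described above.
# All table and column names are invented; the original code was T-SQL
# built up as a string inside a loop.

def build_query_per_row(processed_ids):
    """Grow the WHERE clause by one AND per processed record, so the
    statement (and its compiled plan) gets larger on every iteration."""
    query = "SELECT TOP 1 * FROM Queue WHERE Status = 'pending'"
    for pid in processed_ids:
        query += f" AND QueueID <> {pid}"  # one more clause per record
    return query

def build_query_flag_based():
    """A constant-size alternative: mark each row as handled and filter
    on a flag instead of enumerating every processed key in the WHERE."""
    return ("SELECT TOP 1 * FROM Queue "
            "WHERE Status = 'pending' AND Processed = 0")

# After tens of thousands of rows, the per-row statement is enormous,
# while the flag-based statement never changes size.
```

With tens of thousands of rows, the first form produces a statement hundreds of thousands of characters long, which is the kind of query whose plan could plausibly demand a huge contiguous allocation; the second stays the same size no matter how many rows have been processed.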