Back in my Oracle days, one lesson that was always drummed into me was the importance of "instrumenting" the production version of application code – be it database code (PL/SQL), Java, or whatever.
By "instrumenting" I mean liberally sprinkling "debug switches" throughout the code – at the start of any transaction, for example – that, at a later date, can provide useful information about these transactions: the application modules to which they belong, the actions they are performing, the SQL they are executing, and so on. Then, when the code is happily, or unhappily, running in a production environment, a system administrator or DBA can flick these switches, quickly trace the exact location of any "problem" code running outside the database, and pull in the appropriate developer to fix it.
In the Oracle world, we'd instrument the code using a built-in PL/SQL package called DBMS_APPLICATION_INFO. It made me wonder why there is no equivalent for SQL Server (correct me if I'm wrong!).
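For those who haven't used it, the idea is simple: the application registers its current module and action at the start of each unit of work, and those values then show up directly in V$SESSION and in trace data. A minimal sketch – the module and action names here are, of course, hypothetical:

```sql
-- Register the module and action for this session; the values
-- become visible in V$SESSION.MODULE and V$SESSION.ACTION.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'OrderEntry',     -- hypothetical application module
    action_name => 'PostInvoice');   -- hypothetical current action
END;
/

-- Later, when the transaction moves on to its next step:
BEGIN
  DBMS_APPLICATION_INFO.SET_ACTION(action_name => 'ApplyPayment');
END;
/

-- The DBA can then locate the relevant sessions at a glance:
SELECT sid, module, action
FROM   v$session
WHERE  module = 'OrderEntry';
```

The overhead is negligible because the calls simply update session attributes in memory; the payoff comes when something goes wrong and every session is already labelled with what it was doing.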
I brought the topic up with Phil Factor and a few other developers and they looked at me aghast, deeming it safest to assume that I'd momentarily lost my marbles. Why would we ever want to take the time and overhead hit of all this extra instrumentation code? And for what benefit? If the problem is in the app, we can just turn ANTS Profiler loose on it! If it's in the database, the DBA can easily track down the problem using SQL Server Profiler!
Even if the equivalent of DBMS_APPLICATION_INFO were available it may be a forlorn hope that developers would ever use it. However, given the difficulty inherent in tracking down performance issues in complex, many-tiered systems, I assume it's something that many DBAs would like to see? Or are the performance tools supplied with SQL Server so effective that the benefits of instrumented application code would be too marginal? Are any DBAs "taking the lead" and employing instrumentation techniques in their own T-SQL or CLR procedure code? If so, I'd love to hear about them.
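By way of comparison, the closest out-of-the-box approximation I'm aware of in T-SQL is the session's context info: a 128-byte binary value that a procedure can set and that is exposed through the DMVs. It's a far cry from DBMS_APPLICATION_INFO, but a sketch along the same lines (again, the "module:action" label is hypothetical) might look like this:

```sql
-- Tag the current session with a module:action label at the
-- start of a unit of work.
DECLARE @info varbinary(128);
SET @info = CAST('OrderEntry:PostInvoice' AS varbinary(128));
SET CONTEXT_INFO @info;

-- A DBA can then read the tag back for any connected session
-- (sessions that never set it show an empty/zeroed value):
SELECT session_id,
       CAST(context_info AS varchar(128)) AS instrumentation_tag
FROM   sys.dm_exec_sessions;
```

It's crude – one untyped slot per session, which you must overwrite as the work progresses – but it does let a DBA see, from the server side, what each connection claims to be doing.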