• I agree that this is pretty nifty; I use a similar getup for generating reports using SQLMail.  That said, I've found some problems with the idea:

    1. No dependencies -- The SQL statements are not "in" SQL Server's knowledge, so figuring out whether something breaks when you rename a table or column is not as easy as it is with a sproc.

      This one has been a biggie.  We have a report that transfers data from one database to another with some calculations.  There is no way to do "select lhs, measure, scale, offset" when all the calculation code is buried in the query text stored in the table.

    2. Query isn't cached -- The execution plan for a dynamic query isn't reused the way a sproc's is, so dynamic queries are a bad thing (or so I was told... my db isn't big enough to care)
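    For what it's worth, problem 2 can be softened a bit: if the stored query text uses parameter markers and you run it through sp_executesql instead of plain EXEC, the query text stays constant across runs and SQL Server can cache and reuse the plan. A rough sketch (the table and column names here are made up for illustration):

```sql
-- Hypothetical: @sql is pulled from the query table, with a parameter marker
DECLARE @sql nvarchar(4000)
SELECT @sql = QueryText FROM ReportQueries WHERE QueryName = 'DailyTotals'
-- e.g. @sql = N'SELECT SUM(Amount) FROM Orders WHERE OrderDate = @day'

-- Plain EXEC (@sql) would bake the date into the string, so every distinct
-- date is a distinct string and a distinct plan.  sp_executesql passes the
-- value separately, so one cached plan serves all dates:
EXEC sp_executesql @sql, N'@day datetime', @day = '2004-06-01'
```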

    That said, I've mitigated some of the problems by adding Yet-Another-Layer:

    1. There is a table that has queries, descriptions, and I/O parameters
    2. The queries *mostly* point to stored procedures or views.  They sometimes have "formatting" pieces (e.g., casting everything to nvarchar & concatenating it so it can be handed off to SQLMail), but they don't reference anything but the sprocs.
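    A minimal sketch of that layer (the names are my own, not a real schema):

```sql
-- One row per report/query, with its description and I/O parameters
CREATE TABLE QueryCatalog (
    QueryName   sysname        NOT NULL PRIMARY KEY,
    Description nvarchar(255)  NOT NULL,
    Parameters  nvarchar(255)  NULL,      -- e.g. '@start datetime, @end datetime'
    QueryText   nvarchar(4000) NOT NULL
)

-- The stored text mostly just selects from a view (or calls a sproc),
-- plus a formatting piece that flattens rows to text for SQLMail:
INSERT INTO QueryCatalog (QueryName, Description, Parameters, QueryText)
VALUES (
    'WeeklySummary',
    'Totals per region, flattened to one nvarchar column for mailing',
    NULL,
    N'SELECT CAST(Region AS nvarchar(30)) + N'': ''
           + CAST(Total AS nvarchar(20))
      FROM dbo.vw_WeeklySummary'
)
```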

    The upside is that DB dependencies are tracked in SQL Server for you, and you have a "list of queries to do things."

    The downside is that you really should make a new stored procedure for each joining operation -- I don't know how effective joining a bunch of views together would be, so it's not as "reusable/configurable" as the scheme in the article.
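    That upside is concrete: once the real logic lives in sprocs and views, SQL Server's own metadata can answer the "what breaks if I rename this column?" question. For example (object names hypothetical):

```sql
-- Everything that references the Orders table:
EXEC sp_depends 'dbo.Orders'

-- Or query sysdepends directly, joining back to sysobjects for the names
-- of the dependent sprocs/views:
SELECT o.name AS dependent_object
FROM sysdepends d
JOIN sysobjects o ON o.id = d.id
WHERE d.depid = OBJECT_ID('dbo.Orders')
```

    (With the caveat that sysdepends only knows about objects that existed when the sproc was created -- deferred name resolution can leave gaps.)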

    Although all of this is beginning to sound very similar to the master database tables...  I've been thinking that I could create a few simple "filter" tables (e.g., Object X, Column Y is visible as Name) that I would join with sysobjects/syscolumns/whatnot.  I guess this wouldn't be at all portable among databases.
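    A sketch of what that filter table might look like, joined against the system tables (sysobjects/syscolumns column names are SQL Server 2000's; the filter table itself is invented):

```sql
CREATE TABLE ColumnFilter (
    ObjectName  sysname       NOT NULL,  -- e.g. 'Orders'
    ColumnName  sysname       NOT NULL,  -- e.g. 'OrderDate'
    DisplayName nvarchar(100) NOT NULL,  -- e.g. 'Order Date'
    PRIMARY KEY (ObjectName, ColumnName)
)

-- Only columns listed in the filter are "visible", under their display names:
SELECT f.DisplayName, c.name AS real_column
FROM ColumnFilter f
JOIN sysobjects o ON o.name = f.ObjectName
JOIN syscolumns c ON c.id = o.id AND c.name = f.ColumnName
```

    This is exactly the non-portable part: any other database engine names its catalog tables differently, so the join would have to be rewritten per platform.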