Interesting article; thanks for writing it.
I work a lot with dimensional data warehouses and use SPs for all of my ETL work. My logging is very similar to this, but I never thought to log a SQL_VARIANT; that idea will have to go onto my “to review” list.
I end up using a simpler logging structure, but in fact use 3 log tables.
Job: a single record for each SP. The primary key is IDENTITY-based and is used by the other two logs for recording activity.
Job details: contains the “step by step” activity, and any errors encountered, for the job.
Data Integrity: records (at a business key / time grain) any changes to the facts that I encounter; not the initial fact being inserted, but facts that change when the ETL is re-run.
Any “time” fields used in the logging are DATETIMEOFFSET(7). Along with the standard error-collection fields you have (I call them event fields and reuse them for other features), I have a couple of general “information” columns (like “remarks”).
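To give a rough idea of the shape, the three tables look something like this (table and column names are illustrative, not my exact DDL):

CREATE TABLE dbo.JobLog (
    JobLogID   INT IDENTITY(1,1) PRIMARY KEY,   -- referenced by the other two logs
    JobName    SYSNAME           NOT NULL,      -- the ETL SP
    StartTime  DATETIMEOFFSET(7) NOT NULL,
    EndTime    DATETIMEOFFSET(7) NULL,
    JobStatus  VARCHAR(20)       NOT NULL,      -- e.g. RUNNING / SUCCESS / WARNING / ERROR
    Remarks    NVARCHAR(1000)    NULL
);

CREATE TABLE dbo.JobDetailLog (
    JobDetailID  INT IDENTITY(1,1) PRIMARY KEY,
    JobLogID     INT NOT NULL REFERENCES dbo.JobLog (JobLogID),
    StepName     NVARCHAR(128)     NOT NULL,
    EventTime    DATETIMEOFFSET(7) NOT NULL,
    EventNumber  INT               NULL,        -- "event" fields: error number/message, reused for other features
    EventMessage NVARCHAR(2048)    NULL,
    RowsAffected INT               NULL,
    Remarks      NVARCHAR(1000)    NULL
);

CREATE TABLE dbo.DataIntegrityLog (
    DataIntegrityID INT IDENTITY(1,1) PRIMARY KEY,
    JobLogID        INT NOT NULL REFERENCES dbo.JobLog (JobLogID),
    BusinessKey     NVARCHAR(100)     NOT NULL, -- business key of the changed fact
    TimeGrain       DATE              NOT NULL, -- time grain of the changed fact
    ChangeType      NVARCHAR(10)      NOT NULL, -- UPDATE or DELETE from the fact MERGE
    EventTime       DATETIMEOFFSET(7) NOT NULL
);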
One design difference from your setup is that I log the activity into a table variable (with a defined table type, like you did), but then pass that variable to an SP that stores it in the log table (via MERGE). I might have several steps (or records) in the logging variable before I get it into the log. By using an SP to do the logging, I can record detail activity with an INSERT/SELECT statement instead of just the variables.
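Roughly this pattern, with the type, SP, and column names made up to match the sketch above:

CREATE TYPE dbo.JobDetailType AS TABLE (
    StepName     NVARCHAR(128)     NOT NULL,
    EventTime    DATETIMEOFFSET(7) NOT NULL,
    EventNumber  INT               NULL,
    EventMessage NVARCHAR(2048)    NULL,
    RowsAffected INT               NULL,
    Remarks      NVARCHAR(1000)    NULL
);
GO

-- Logging SP: writes the accumulated detail rows to the log table in one go.
CREATE PROCEDURE dbo.LogJobDetail
    @JobLogID INT,
    @Details  dbo.JobDetailType READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- MERGE rather than a plain INSERT so a step that gets logged twice updates its existing row.
    MERGE dbo.JobDetailLog AS tgt
    USING @Details AS src
        ON  tgt.JobLogID = @JobLogID
        AND tgt.StepName = src.StepName
    WHEN MATCHED THEN
        UPDATE SET EventTime    = src.EventTime,
                   EventNumber  = src.EventNumber,
                   EventMessage = src.EventMessage,
                   RowsAffected = src.RowsAffected,
                   Remarks      = src.Remarks
    WHEN NOT MATCHED THEN
        INSERT (JobLogID, StepName, EventTime, EventNumber, EventMessage, RowsAffected, Remarks)
        VALUES (@JobLogID, src.StepName, src.EventTime, src.EventNumber, src.EventMessage, src.RowsAffected, src.Remarks);
END;
GO

In the ETL SP itself the steps just accumulate into a DECLARE @Log dbo.JobDetailType; variable and get flushed with EXEC dbo.LogJobDetail @JobLogID, @Log; whenever convenient.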
For example, with very minor exceptions I use MERGE on my fact tables and capture the OUTPUT into a “data integrity” table variable. I then run a COUNT against that table variable, grouped by the $action field (INSERT, UPDATE, DELETE). That way I record, in the job details, the record count of the ETL, including 0 if no new data arrived.
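A sketch of that step (FactSales, the staging table, and the measure column are placeholders; @Log is the detail table type from the sketch above):

DECLARE @Log     dbo.JobDetailType;      -- detail rows accumulated for the job
DECLARE @Changes TABLE (                 -- "data integrity" capture from the fact MERGE
    MergeAction NVARCHAR(10),
    BusinessKey NVARCHAR(100),
    TimeGrain   DATE
);

MERGE dbo.FactSales AS tgt
USING #StagedSales AS src
    ON  tgt.BusinessKey = src.BusinessKey
    AND tgt.TimeGrain   = src.TimeGrain
WHEN MATCHED AND tgt.Amount <> src.Amount THEN
    UPDATE SET Amount = src.Amount
WHEN NOT MATCHED THEN
    INSERT (BusinessKey, TimeGrain, Amount)
    VALUES (src.BusinessKey, src.TimeGrain, src.Amount)
OUTPUT $action, src.BusinessKey, src.TimeGrain
INTO @Changes (MergeAction, BusinessKey, TimeGrain);

-- Record the row count per action in the details; the LEFT JOIN keeps a 0 row even when nothing arrived.
INSERT @Log (StepName, EventTime, RowsAffected, Remarks)
SELECT 'Load FactSales', SYSDATETIMEOFFSET(), COUNT(c.MergeAction), a.ActionName
FROM (VALUES ('INSERT'), ('UPDATE'), ('DELETE')) AS a(ActionName)
LEFT JOIN @Changes AS c ON c.MergeAction = a.ActionName
GROUP BY a.ActionName;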
As a final step in my ETL code, I check whether any updates (or deletes, depending) occurred in the facts. If so, I log the “business key” level data into the “data integrity” log table and set a WARNING state for the SP.
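Something along these lines, reusing the placeholder names from the sketches above (@JobLogID being the IDENTITY value handed out by the Job log):

-- If anything other than a plain INSERT happened, keep the business-key detail and flag the job.
IF EXISTS (SELECT 1 FROM @Changes WHERE MergeAction <> 'INSERT')
BEGIN
    INSERT dbo.DataIntegrityLog (JobLogID, BusinessKey, TimeGrain, ChangeType, EventTime)
    SELECT @JobLogID, BusinessKey, TimeGrain, MergeAction, SYSDATETIMEOFFSET()
    FROM @Changes
    WHERE MergeAction <> 'INSERT';

    UPDATE dbo.JobLog
    SET JobStatus = 'WARNING'
    WHERE JobLogID = @JobLogID;
END;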
Beer's Law: Absolutum obsoletum
"if it works it's out-of-date"