Shifting gears back to the subject at hand...
One of the things we use custom schemas for is to identify the expendability and life expectancy of a table.
For example, we have a database used mostly for ETL from multiple sources, where a whole lot of work/validation/normalization is done on the imported raw data before it's inserted into whatever final database it needs to go to. In the process of doing so, it's really handy to do such things as leave a "breadcrumb trail" of logs-by-file or save interim data in tables specifically for post-run troubleshooting.
With that thought in mind, we have several schemas to identify the type of data in the table. For example, the SCRATCH schema is for stuff that only needs to be kept for a week or so for "just in case we missed something" post-run troubleshooting. I've built a "sweeper" proc to clean up the proverbial cutting room floor that looks to see if the table has been used in the last week or not and, if not, the table is renamed with a suffix of "_ToBeDeleted". Such renamed tables are allowed to continue to exist for another week and then unceremoniously dropped so no one has to remember to go and clean that junk up. Heh... and yes... there's a fair bit of dynamic SQL that creates those types of tables.
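To make that a bit more concrete, here's a minimal sketch of what such a "sweeper" pass might look like. All of the names here (the SCRATCH schema, the 7-day window) are just illustrative, and this leans on `sys.dm_db_index_usage_stats` to decide whether a table has been "used" recently; note that DMV's counters reset when the instance restarts, so a production version would need to account for that.

```sql
-- Hypothetical sweeper sketch (pass 1 of 2): flag SCRATCH tables that
-- haven't been read or written in the last 7 days by renaming them with
-- a "_ToBeDeleted" suffix.  A second pass, a week later, drops them.
DECLARE @Sql nvarchar(max) = N'';

SELECT @Sql += N'EXEC sp_rename ''SCRATCH.' + QUOTENAME(t.name) + ''', '''
             + t.name + N'_ToBeDeleted'';' + NCHAR(10)
  FROM sys.tables t
 WHERE SCHEMA_NAME(t.schema_id) = N'SCRATCH'
   AND t.name NOT LIKE N'%[_]ToBeDeleted'          -- skip already-flagged tables
   AND NOT EXISTS (SELECT 1                         -- no recorded use in 7 days
                     FROM sys.dm_db_index_usage_stats u
                    WHERE u.database_id = DB_ID()
                      AND u.object_id  = t.object_id
                      AND (u.last_user_seek   >= DATEADD(dd, -7, GETDATE())
                        OR u.last_user_scan   >= DATEADD(dd, -7, GETDATE())
                        OR u.last_user_lookup >= DATEADD(dd, -7, GETDATE())
                        OR u.last_user_update >= DATEADD(dd, -7, GETDATE())));

EXEC (@Sql);  -- the dynamic SQL mentioned above: one sp_rename per stale table
```

The two-pass rename-then-drop design is the safety net: a rename is instantly reversible if someone yells within the grace week, whereas a drop is not.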
We do something similar as the data progresses through the necessary steps, because we need to either drop columns from the original data file or add columns. There's just no sense in working with a 200 column table if you only need 40 of those columns to get to the final storage. The tables for those schemas are named according to the step and have similar "sweeper" processes applied. The reason we don't just have the processing code drop the tables itself is that the processes can be quite long, and it's real handy to "pick up where you left off" if something goes wrong.
The "Sweeper" processes identify by schema name whether a table can go away in 24 hours or after a given period of time up to 2 years depending on our audit requirements.
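One way to drive that schema-to-retention rule is a small control table the sweeper joins against. This is purely an illustrative sketch; the table, schema names, and retention values below are assumptions, not the actual implementation.

```sql
-- Hypothetical control table: each schema maps to its retention period,
-- so the sweeper has one place to look instead of hard-coded rules.
CREATE TABLE dbo.SchemaRetention
(
     SchemaName    sysname NOT NULL PRIMARY KEY
    ,RetentionDays int     NOT NULL  -- anywhere from 1 day up to ~730 (2 years)
);

INSERT INTO dbo.SchemaRetention (SchemaName, RetentionDays)
VALUES  (N'SCRATCH', 7)     -- "just in case" post-run troubleshooting
       ,(N'STAGE',   1)     -- illustrative interim-step schema: gone in 24 hours
       ,(N'AUDITED', 730);  -- illustrative: kept 2 years per audit requirements
```

Keeping the retention policy in data rather than in code means adding a new step schema never requires touching the sweeper proc itself.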
Why not use separate databases for each major step? Because there's a whole lot of DRI (Declarative Referential Integrity, i.e., FKs) involved, and the only way you can really do DRI across databases is through the use of triggers, and we just don't want the overhead of doing that.