I use schemas frequently.
For example, in a current ETL database, I have core functionality for all files that will be imported into the database, and then there are peripheral objects to support different data-consumers and types of files. (All data is file-based in this system. That's what the database is for.)
Core objects are in a Files schema. This is locked down pretty heavily, since any change made there, even in a dev environment, will affect everything in the datastreams. Changes to objects in the Files schema are, by policy, subject to a level of QA, review, regression testing, etc., that is extremely detailed. Code in there has to be approved unanimously by a fairly large committee of stakeholders.
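Assuming SQL Server, a minimal sketch of what locking down a core schema can look like at the permission level (all object, role, and schema names here are hypothetical, not from the actual system):

```sql
-- Hypothetical sketch: core schema that consumers can read and execute against,
-- but cannot alter outside the gated deployment process.
CREATE SCHEMA Files AUTHORIZATION dbo;
GO

CREATE ROLE FilesConsumers;
GRANT SELECT, EXECUTE ON SCHEMA::Files TO FilesConsumers;

-- Explicitly block schema changes from everyone but the deployment principal.
DENY ALTER ON SCHEMA::Files TO FilesConsumers;
```

The permission layer only enforces the boundary; the unanimous-approval review policy is what actually governs changes.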
Then there are a number of datastream-specific schemas that feed off of the Files schema. Changes in those affect only that specific datastream. Isolating these objects, even when the same objects exist in multiple schemas, means there isn't cross-contamination. Projects to enhance, improve, or refactor in those spaces are subject to a smaller level of scrutiny and a smaller number of stakeholders.
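A datastream-specific schema feeding off the core schema might look like this sketch (again assuming SQL Server; `Claims`, `ImportLog`, and the column names are invented for illustration):

```sql
-- Hypothetical sketch: a datastream schema that owns its own objects
-- but reads from the locked-down core Files schema.
CREATE SCHEMA Claims AUTHORIZATION dbo;
GO

CREATE VIEW Claims.ImportedFiles
AS
SELECT f.FileID, f.FileName, f.ImportedOn
FROM Files.ImportLog AS f
WHERE f.DataStream = 'Claims';
GO
```

Because the view lives in `Claims`, it can be changed, refactored, or dropped under that datastream's lighter review process without touching anything in `Files`.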
That isolation has kept a fair number of minor issues contained instead of spreading business-wide. It makes troubleshooting easier, too: if you know which services or applications are showing problems, you know which datastream they belong to, and you can narrow down the issue for resolution much faster.
It also makes it really easy to find objects for a specific project. They will be in the schema for that particular datastream.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon