Master database

  • I've got a server with 175 databases on it, all (theoretically) identical copies of each other from a metadata point of view. If I change a stored procedure I have to deploy it to 175 databases. I was thinking that it'd be nice if I could put the stored procedures in the master database and mark them as system objects so I would only have to deploy to one database.

    Is this a "bad" practice. If so, why? Is it just a bad idea or not feasible for some reason I'm not aware of?

    Despite our best efforts, the metadata sometimes ends up out of sync between one database and another, which can be a bitch to identify. Somebody will think, "I'll just tweak this procedure for this one client..."

    My objective is to take that out of the equation. I'm talking specifically about user defined stored procedures and functions only.

    Any thoughts or suggestions would be appreciated.

    "Beliefs" get in the way of learning.

  • I wouldn't store it in the master database, no. Leave that database as it is.

    Without really knowing what the SP does, it's a bit of guesswork. You could create a database which hosts your shared objects, such as this SP. Then you could either reference the SP from the other databases using three-part naming (for example EXEC [SharedObjects].[dbo].[YourSP];) or create a synonym in all your databases, and then reference that:
    CREATE SYNONYM [dbo].[SharedSP] FOR [SharedObjects].[dbo].[YourSP];
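
    A rough end-to-end sketch of that approach (all database and object names here are just placeholders):

    CREATE DATABASE SharedObjects;
    GO
    USE SharedObjects;
    GO
    -- The single, central copy of the procedure.
    CREATE PROCEDURE dbo.YourSP
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT GETDATE() AS RanAt; -- stand-in body
    END;
    GO
    -- In each client database, point a synonym at the shared copy.
    USE ClientDb001;
    GO
    CREATE SYNONYM dbo.SharedSP FOR SharedObjects.dbo.YourSP;
    GO
    EXEC dbo.SharedSP; -- callers never need the three-part name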

    If, however, your SP is going to need to reference objects in the calling database, without qualifying them with a database name, then you're going to need to create the SP in each database.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • I'd put the stored procs in the master db.  You have no choice if you want a "shared" proc.  The name must start with "sp_".

    I'd put the functions in a user "utility" db or whatever you want to call it.  Functions can inherently work locally even if called from another db.
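
    For example (names made up), a function kept once in a utility db can be called from any other db with a three-part name:

    USE UtilityDb;
    GO
    -- A simple scalar function held once in the utility database.
    CREATE FUNCTION dbo.TrimAndUpper (@s nvarchar(4000))
    RETURNS nvarchar(4000)
    AS
    BEGIN
        RETURN UPPER(LTRIM(RTRIM(@s)));
    END;
    GO
    -- Any db on the instance can call it with a three-part name.
    USE SomeClientDb;
    GO
    SELECT UtilityDb.dbo.TrimAndUpper(N'  hello  ') AS result; -- N'HELLO'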

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher - Thursday, January 11, 2018 8:31 AM

    I'd put the stored procs in the master db.  You have no choice if you want a "shared" proc.  The name must start with "sp_".

    Really? Why? If you have to restore the databases on another server, you'd also have to include a script file for all your shared SPs in master, as you shouldn't be restoring the master db. And starting with "sp_", that's a Microsoft thing, and should really not be done, because of the risk of name collisions.

    This advice is really a bad idea.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Thom A - Thursday, January 11, 2018 9:00 AM

    ScottPletcher - Thursday, January 11, 2018 8:31 AM

    I'd put the stored procs in the master db.  You have no choice if you want a "shared" proc.  The name must start with "sp_".

    Really? Why? If you have to restore the databases on another server, you'd also have to include a script file for all your shared SPs in master, as you shouldn't be restoring the master db. And starting with "sp_", that's a Microsoft thing, and should really not be done, because of the risk of name collisions.

    This advice is really a bad idea.

    But that's the only way it works.  You can't make a proc "special", i.e. auto-context to the current db, if the name doesn't begin with "sp_".  That's a vastly better option to me than having to create the same proc in every user db.  We have hundreds of dbs on our servers.  I'd rather create my utility-type procs -- such as sp_foreachdb (sic -- this is my version of MS's sp_MSforeachdb), sp_findcolumns, sp_listcolumns, and other custom utility procs -- once in master than repeat them hundreds of times and have to try to keep all of that in sync.

    I get the general rule -- "Never create user objects in master" -- but, as an expert, I accept exceptions if it's necessary for best productivity and performance.

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher - Thursday, January 11, 2018 10:05 AM

    But that's the only way it works.  You can't make a proc "special", i.e. auto-context to the current db, if the name doesn't begin with "sp_".  That's a vastly better option to me than having to create the same proc in every user db.  We have hundreds of dbs on our servers.  I'd rather create my utility-type procs -- such as sp_foreachdb (sic -- this is my version of MS's sp_MSforeachdb), sp_findcolumns, sp_listcolumns, and other custom utility procs -- once in master than repeat them hundreds of times and have to try to keep all of that in sync.

    I get the general rule -- "Never create user objects in master" -- but, as an expert, I accept exceptions if it's necessary for best productivity and performance.

    How would creating a user Utility database, however, impair productivity and performance? If they're objects that everyone needs access to, you can grant the necessary permissions to the public role, giving everyone those permissions.

    Using master for user objects, and then creating those objects in there with names starting with "sp_", sounds more like a problem waiting to happen.

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Thom A - Thursday, January 11, 2018 10:13 AM

    ScottPletcher - Thursday, January 11, 2018 10:05 AM

    But that's the only way it works.  You can't make a proc "special", i.e. auto-context to the current db, if the name doesn't begin with "sp_".  That's a vastly better option to me than having to create the same proc in every user db.  We have hundreds of dbs on our servers.  I'd rather create my utility-type procs -- such as sp_foreachdb (sic -- this is my version of MS's sp_MSforeachdb), sp_findcolumns, sp_listcolumns, and other custom utility procs -- once in master than repeat them hundreds of times and have to try to keep all of that in sync.

    I get the general rule -- "Never create user objects in master" -- but, as an expert, I accept exceptions if it's necessary for best productivity and performance.

    How would creating a user Utility database, however, impair productivity and performance? If they're objects that everyone needs access to, you can grant the necessary permissions to the public role, giving everyone those permissions.

    Using master for user objects, and then creating those objects in there with names starting with "sp_", sounds more like a problem waiting to happen.

    A user db CANNOT provide auto-context, ONLY the master db can, and ONLY if the proc name starts with sp_. [Checking every name in master first on every call would obviously be too expensive, so MS chose to use "sp_" as a "flag" to cause a check first in the master db.]

    Keep in mind that MS's own "sp_" procs -- such as sp_help, sp_helpfile, etc. -- work the same way.
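
    As a quick illustration (names made up; note that sp_MS_marksystemobject is undocumented and unsupported, so treat this as use-at-your-own-risk):

    USE master;
    GO
    -- The name must start with sp_ for master's special resolution to apply.
    CREATE PROCEDURE dbo.sp_tablecount
    AS
        SELECT DB_NAME() AS db, COUNT(*) AS table_count
        FROM sys.tables;
    GO
    -- Undocumented/unsupported, but this is the usual way to make the proc
    -- resolve catalog views in the *calling* db's context.
    EXEC sys.sp_MS_marksystemobject N'sp_tablecount';
    GO
    USE SomeUserDb;
    GO
    EXEC sp_tablecount; -- now reports SomeUserDb's tables, not master's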

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • I get why you would want to use the master database and mark them as system objects. However, I believe this is a bad idea, as marking objects as system objects should be the exception, not the main tool.
    I can't see a problem with deploying to 175 databases as long as you have a proper CI/CD process in place. The number shouldn't matter, as you would do the same work for one or one thousand. You also gain the ability to deploy once to all the databases without worrying about whether they're on the same instance or not.

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • Automate with Powershell?
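
    Or even plain T-SQL. Something like this rough loop (an untested sketch; it assumes CREATE OR ALTER, i.e. SQL 2016 SP1 or later, and the proc body is a placeholder) would push one script to every online user database:

    DECLARE @script nvarchar(max) = N'
    CREATE OR ALTER PROCEDURE dbo.YourSP
    AS
        SELECT DB_NAME() AS deployed_in;'; -- placeholder body

    DECLARE @db sysname, @exec nvarchar(300);

    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT name
        FROM sys.databases
        WHERE database_id > 4 -- skip master, tempdb, model, msdb
          AND state_desc = 'ONLINE';

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Calling sp_executesql by its three-part name runs the script
        -- in that database's context.
        SET @exec = QUOTENAME(@db) + N'.sys.sp_executesql';
        EXEC @exec @script;
        FETCH NEXT FROM dbs INTO @db;
    END;
    CLOSE dbs;
    DEALLOCATE dbs;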

  • Luis Cazares - Thursday, January 11, 2018 10:42 AM

    I get why you would want to use the master database and mark them as system objects. However, I believe this is a bad idea, as marking objects as system objects should be the exception, not the main tool.
    I can't see a problem with deploying to 175 databases as long as you have a proper CI/CD process in place. The number shouldn't matter, as you would do the same work for one or one thousand. You also gain the ability to deploy once to all the databases without worrying about whether they're on the same instance or not.

    Doesn't deal with the issue stated in the original question:

    Despite our best efforts, the metadata sometimes ends up out of sync between one database and another, which can be a bitch to identify.

    It's easy to say you can deploy it to all, much harder to enforce, especially over time, and to handle all the related issues.

    What happens if you want *all* dbs to have the same version at the same time?  No real way to do that with multi-deployment -- it's going to take some time to create/recreate several procs in hundreds of dbs, period, no way around that.  And as it rolls out, different dbs will have different versions.  That might matter on certain occasions.  

    Also, how do you prevent customized changes to, say, 10 or 12 specific procs in every user db?  We have such a system in place here, but it's not trivial to set up and make work properly.  And all such names have to be "registered" ahead of time so the system "knows" that no changes to those procs are allowed in user dbs but other procs can be changed.
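
    A bare-bones sketch of the idea (ours is far more involved; all names here are made up):

    -- Table of "registered" proc names that must not be changed locally.
    CREATE TABLE dbo.RegisteredProcs (proc_name sysname PRIMARY KEY);
    INSERT INTO dbo.RegisteredProcs VALUES (N'sp_findcolumns'), (N'sp_listcolumns');
    GO
    CREATE TRIGGER trg_block_registered_proc_changes
    ON DATABASE
    FOR ALTER_PROCEDURE, DROP_PROCEDURE
    AS
    BEGIN
        DECLARE @ed xml = EVENTDATA();
        DECLARE @obj sysname =
            @ed.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)');
        IF EXISTS (SELECT 1 FROM dbo.RegisteredProcs WHERE proc_name = @obj)
        BEGIN
            RAISERROR(N'%s is centrally managed; deploy changes through the release process.', 16, 1, @obj);
            ROLLBACK;
        END;
    END;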

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher - Thursday, January 11, 2018 10:53 AM

    Luis Cazares - Thursday, January 11, 2018 10:42 AM

    I get why you would want to use the master database and mark them as system objects. However, I believe this is a bad idea, as marking objects as system objects should be the exception, not the main tool.
    I can't see a problem with deploying to 175 databases as long as you have a proper CI/CD process in place. The number shouldn't matter, as you would do the same work for one or one thousand. You also gain the ability to deploy once to all the databases without worrying about whether they're on the same instance or not.

    Doesn't deal with the issue stated in the original question:

    Despite our best efforts, the metadata sometimes ends up out of sync between one database and another, which can be a bitch to identify.

    It's easy to say you can deploy it to all, much harder to enforce, especially over time, and to handle all the related issues.

    Why would it be difficult to enforce? You just need to adjust the deployment environment any time you create a new database.

    What happens if you want *all* dbs to have the same version at the same time?  No real way to do that with multi-deployment -- it's going to take some time to create/recreate several procs in hundreds of dbs, period, no way around that.  And as it rolls out, different dbs will have different versions.  That might matter on certain occasions.  

    No, the point of having source control and CI/CD is that all dbs will have the same version at the same time. Any deployment would take time, and trying to speed up deployments by taking shortcuts is a recipe for future disaster.

    Also, how do you prevent customized changes to, say, 10 or 12 specific procs in every user db?  We have such a system in place here, but it's not trivial to set up and make work properly.  And all such names have to be "registered" ahead of time so the system "knows" that no changes to those procs are allowed in user dbs but other procs can be changed.


    How to prevent changes? That's why we have permissions available to control who can do what. Also, any custom changes made to procedures in production could be overwritten by the deployments. This is just like the possibility of creating a system object that could be replaced by Microsoft with any update, without notice.
    By the way, permissions are also part of the code in source control.
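
    For instance, something as simple as this can live in source control right next to the procedures (the role name is just an example):

    -- Developers can read and execute, but cannot alter deployed objects.
    CREATE ROLE db_app_devs;
    GRANT EXECUTE ON SCHEMA::dbo TO db_app_devs;
    DENY ALTER ON SCHEMA::dbo TO db_app_devs;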

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • But the OP doesn't have all that in place, and, as I said before, it's NOT trivial to do.

    Sometimes the most practical solution is the best. By way of example, if relational DBMS vendors had waited until they could meet all of Codd's rules to release a product, we still wouldn't have any!

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.

  • ScottPletcher - Thursday, January 11, 2018 12:46 PM

    But the OP doesn't have all that in place, and, as I said before, it's NOT trivial to do.

    I agree. It's not trivial, but it's worth it.

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • ScottPletcher - Thursday, January 11, 2018 12:46 PM

    But the OP doesn't have all that in place, and, as I said before, it's NOT trivial to do.

    Sometimes the most practical solution is the best. By way of example, if relational DBMS vendors had waited until they could meet all of Codd's rules to release a product, we still wouldn't have any!

    @scottpletcher and @luis, you both make excellent points. However, like Scott, I accept that there are exceptions, and I and my team have the skills necessary. As for safeguards, I will script all of this and store it in our code repository in case Microsoft overwrites the master database during an upgrade, or in case of a catastrophic failure that requires re-installing the product.

    I haven't tried this yet but probably will later today or tomorrow. I could create a separate schema for my user-defined procs, something like udef.sp_MyProc. This would immediately allow me to separate my stuff from the system stuff. If that doesn't work, I'll use some naming convention that will be unique to the user-defined stuff.
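
    Something like this quick test should tell me whether it flies (untested; all the names are placeholders):

    USE master;
    GO
    CREATE SCHEMA udef;
    GO
    CREATE PROCEDURE udef.sp_MyProc
    AS
        SELECT DB_NAME() AS db;
    GO
    USE SomeClientDb;
    GO
    -- My understanding is that the implicit sp_ lookup only checks master's
    -- dbo schema, so test 1 may fail and force the qualified call in test 2
    -- (which would defeat the auto-context behavior).
    EXEC sp_MyProc;             -- test 1: implicit resolution
    EXEC master.udef.sp_MyProc; -- test 2: fully qualified fallback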

    I also agree that this is not a trivial undertaking. We'll have to scour every bit of T-SQL and C# we have to identify where one of these procs or functions is called, change them, and then perform extensive regression testing. One of the reasons I want to do this is that we're migrating from on-premises into the cloud, and I have a brief window of opportunity to re-architect some of the crap in our product; simplifying deployment is high on the list.

    Thanks for your thoughts and ideas.

    "Beliefs" get in the way of learning.

  • The hard task, the one that's not trivial, is moving to a sophisticated, automated scripting mechanism that can push the same script out to hundreds of databases, or to a selected subset of those databases.  People keep saying "that's what you should do," but that's far, far easier said than done.  In the meantime, you have actual code to maintain.

    That's why I support putting common procs into the master db if necessary.  It's perhaps not ideal, but it beats the immediate alternatives, and it addresses the problems you currently have with "quickie" changes to selected procs in selected dbs only.

    SQL DBA, SQL Server MVP (07, 08, 09) A socialist is someone who will give you the shirt off *someone else's* back.
