Scripting Objects with PowerShell

  • Comments posted to this topic are about the item Scripting Objects with PowerShell
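
    (For context: the technique under discussion, scripting database objects out to files with PowerShell, looks roughly like the minimal SMO sketch below. The server, database, and repo path are placeholders, and the SqlServer module is assumed to be installed.)

        # Minimal sketch: script each stored procedure to its own file via SMO.
        # 'localhost', 'MyDb', and the repo path are placeholder names.
        Import-Module SqlServer
        $server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
        $db = $server.Databases['MyDb']
        foreach ($proc in $db.StoredProcedures | Where-Object { -not $_.IsSystemObject }) {
            $file = "C:\Repos\MyDb\StoredProcedures\$($proc.Schema).$($proc.Name).sql"
            $proc.Script() | Out-File -FilePath $file -Encoding utf8
        }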

  • How would it work for 2 or more developers modifying objects in the same database?

    _____________
    Code for TallyGenerator

  • It depends. If they edit the same object, it's just as things are in SSMS: the latest version is what gets scripted out. If they edit different objects, then when you script things out, you'll see the changes in your VCS client, and you can pick and choose what to commit.

    For example, if I'd changed a stored proc but another developer had changed a table in a shared database, I'd see both changes when I script things out. I can choose to stage and commit only my proc, discard the table change from my local repo, and let the other developer commit that change (see the sketch after this post).

    If I had changed the proc and scripted my change, and then another developer changed the proc (building on my work) and scripted it, the commits would be in the order we chronologically committed them. It's entirely possible mine would be the later one. With a VCS, both versions are stored. Without one, if I recompiled my version because it was still open in SSMS, the other developer's changes would be lost.

    This is why I like developers making changes in separate databases and using a VCS to coordinate and share changes. I know that isn't always easy or possible, but regularly using a VCS for your work at least lets you keep track of what's been changed and ensures you don't lose work. If you're constantly stepping on each other's toes, then you need verbal/written communication outside of the code.
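
    A minimal sketch of that pick-and-choose flow, assuming the scripted objects live one file per object in a git working folder (the file paths are placeholders):

        # After scripting the database out, see what changed.
        git status

        # Stage and commit only my proc change...
        git add StoredProcedures/dbo.MyProc.sql
        git commit -m "Tune dbo.MyProc"

        # ...and discard the other developer's table change from my working
        # copy, leaving it for them to commit from their side.
        git checkout -- Tables/dbo.MyTable.sql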

  • This is a great intro to this workflow. And I agree that each developer should (strong should) have their own local database to work against, using Git as the collaboration tool, via pull requests, to push changes to a shared database for testing/integration, etc. The part that needs further write-ups is the data management in all of this, and the reason database devops is so much trickier than application devops.

    Keeping track of database objects in separate files requires a desired state configuration (DSC) approach to publishing changes. I use that approach via the Microsoft-provided SQL Server Data Tools (SSDT) and dacpacs. It works well and is my preferred approach over managing change scripts.
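
    For readers new to the dacpac flow: a minimal sketch of a state-based publish with SqlPackage, assuming SSDT has already built MyDb.dacpac (the file path, server, and database names are placeholders):

        # The dacpac holds the desired state; SqlPackage computes and applies the diff.
        SqlPackage /Action:Publish `
            /SourceFile:"C:\Build\MyDb.dacpac" `
            /TargetServerName:"localhost" `
            /TargetDatabaseName:"MyDb"

        # (/Action:Script generates the change script for review instead of applying it.)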

  • There is definitely a lot more to this. Many of the specifics of using scripting require cultural/communication changes, as well as thinking ahead to how you will deploy changes.

  • It looks like an awful amount of manual work / human analysis is involved for every object in cooperative development.

    And it's not just about the same object. As Steve pointed out, if you changed a procedure, then my changes to a table queried by that procedure will effectively invalidate your work.

    Having a separate DEV DB for each developer is always a "bug factory".

    I've always considered using DB Compare a lousy way to control deployments, and this scripting approach is even worse.

    _____________
    Code for TallyGenerator

  • qbrt - Thursday, May 3, 2018 9:17 AM

    This is a great intro to this workflow. And I agree that each developer should (strong should) have their own local database to work against, using Git as the collaboration tool, via pull requests, to push changes to a shared database for testing/integration, etc. The part that needs further write-ups is the data management in all of this, and the reason database devops is so much trickier than application devops.

    Keeping track of database objects in separate files requires a desired state configuration (DSC) approach to publishing changes. I use that approach via the Microsoft-provided SQL Server Data Tools (SSDT) and dacpacs. It works well and is my preferred approach over managing change scripts.

    I don't have huge databases by a lot of standards, but I suspect that providing each developer with 3TB worth of databases from one instance is going to be a little bit tough, especially when a lot of it contains encrypted data.

    --Jeff Moden



  • Sergiy - Thursday, May 3, 2018 5:02 PM

    It looks like an awful amount of manual work / human analysis is involved for every object in cooperative development. And it's not just about the same object. As Steve pointed out, if you changed a procedure, then my changes to a table queried by that procedure will effectively invalidate your work.

    Having a separate DEV DB for each developer is always a "bug factory". I've always considered using DB Compare a lousy way to control deployments, and this scripting approach is even worse.

    Been there, done that, never want to see any of it happen ever again.  "Easy, frequent, automatic, rapid database deployments" lead to the incredibly fast deployment of serious mistakes and the promotion of "what if" code because someone didn't want to take the time to do the manual work/human analysis.  They just wanted to "get (or Git :D) it off their plates".

    --Jeff Moden



  • I'll say this. Tracking changes in db development has worked very well for me. There can be collisions, but in small teams, we can usually work things out. However, I also see delays because I need to experiment with changes. Often in the past I've branched by creating a copy of a db to try something.

    Modern, rapid development does require separation, but it also requires coordination and lots of integration. You cannot, meaning CANNOT, have your own database environment that you control for any length of time. I'd argue that anything beyond 2-3 days is too long. All changes need to be synced within that time.

    Jeff, certainly giving everyone 3TB is hard, but with data virtualization (SQL Clone from Redgate, or other products), we can use a little over 3TB, usually less than 3.1TB, and still let 3, 4, 5, or 10 developers all appear to have their own separate 3TB database (see the sketch after this post). If you haven't looked at data virtualization, then you should.

    Sergiy, what is your way to do development and track your changes? Or do you not track changes? Having worked in teams of 10, this type of daily change tracking works well. There shouldn't be so many changes that we have an issue. We don't commit every change; we commit every working change, meaning the developer thinks something is done.

    What is your alternative?
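
    A rough sketch of what that can look like with Redgate's SQL Clone PowerShell cmdlets. The cmdlet names below follow Redgate's module, but treat the exact parameters as assumptions and check the current documentation; the server URL, image, machine, and instance names are placeholders:

        # Assumed cmdlet signatures; verify against Redgate's SQL Clone docs.
        Connect-SqlClone -ServerUrl 'http://sqlclone.example.com:14145'

        # One ~3TB image is built once from the production backup...
        $image = Get-SqlCloneImage -Name 'MyDb-Prod'

        # ...then each developer gets a lightweight clone that behaves like
        # a full copy while sharing the image's storage.
        $instance = Get-SqlCloneSqlServerInstance -MachineName 'DEVBOX1' -InstanceName 'SQL2017'
        $image | New-SqlClone -Name 'MyDb_Steve' -Location $instance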

  • Steve,

    My way is to save a script for an object which I create/modify/drop into CVS/VSS or whatever is in use, as soon as it's open in my SSMS.

    If anybody else is making changes to the same object, I'll know about it immediately, not after investigating where my changes went. Don Corleone prefers to hear bad news right away.

    All the scripts developed by any developer go to the same repository.

    If anybody makes a mistake and screws up the DEV database, he/she can simply start the job which restores the DEV DB from backup and re-runs the scripts in the repository (see the sketch after this post).

    If somebody failed to save some script(s) in VC, it's up to him/her to do some soul digging.

    If any change is not finished prior to the time of the next deployment, it's simply moved to the next version folder. Any traces of the unfinished job will disappear from the DEV DB with the next restore/roll cycle.

    The database is then ready for a unit test. It has all the finalised changes intended for the current deployment, and no unfinished work.

    That's what works smoothly and reliably, without endless stand-ups discussing who is working on what and who is not touching what today. Even with a lead developer working remotely from Nepal over a dial-up connection, when it's available.

    I wonder, with object scripting, how do you roll back bad changes you've made in the database without affecting good ones?

    _____________
    Code for TallyGenerator
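
    A minimal sketch of the restore-and-replay job Sergiy describes, assuming the SqlServer module and a repository folder of numbered scripts (the server, database, backup, and repo paths are placeholders):

        Import-Module SqlServer

        # Put the DEV database back to the last known-good backup.
        Restore-SqlDatabase -ServerInstance 'DEVSQL01' -Database 'MyDb' `
            -BackupFile '\\backups\MyDb\MyDb_full.bak' -ReplaceDatabase

        # Re-run every script saved for the current version, in name order;
        # numbered file names keep dependent changes applied in sequence.
        Get-ChildItem 'C:\Repos\MyDb\v2.4\*.sql' | Sort-Object Name | ForEach-Object {
            Invoke-Sqlcmd -ServerInstance 'DEVSQL01' -Database 'MyDb' -InputFile $_.FullName
        }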

  • Jeff Moden - Thursday, May 3, 2018 5:26 PM

    Been there, done that, never want to see any of it happen ever again.  "Easy, frequent, automatic, rapid database deployments" lead to the incredibly fast deployment of serious mistakes and the promotion of "what if" code because someone didn't want to take the time to do the manual work/human analysis.  They just wanted to "get (or Git :D) it off their plates".

    Totally agree with this point of view. The issue with a lot of 'Agile' teams is that they never look at the bigger picture. I spent two years trying to get my last team to see this, and had to reject code because it didn't allow us to x. Probably the most frustrating way of working I have seen in over 20 years.

  • Very interesting to read the last several posts arguing against database development on the local dev box. I still, wholeheartedly and respectfully, disagree that development should be done against a centralized database. There are many benefits to having a local database, a deploy pipeline in place, and database development treated like any other development code. Yes, it takes a bit of time to get there, but the final product has much better visibility and goes through much better review and testing before it winds up in a production environment.

    Having a developer change tables, sprocs, etc. against a centralized database that many other developers code against is disruptive to everyone. Many times a table and sproc go through many iterations before a coder is happy with the result (i.e., the column name and/or type is changed 10 times before the coder is happy with it :D). The rest of the team should not suffer through that process. With distributed code sharing, the changes can be deployed and made visible to others when the coder is ready, and the rest of the team can grab them when they are ready to integrate them into their own work.

    That includes DBAs, who can grab the same changes and start analyzing them once the developer deems the new changes ready for others to see. The DBAs can do their review and provide input by rejecting/accepting pull requests, etc. Yes, DBAs who can analyze code are also involved and need to learn how to use these tools (heh, yep), and they get involved before the changes go to production. Also, rarely does a developer require the full production data set to develop against. Being able to rebuild a new database from source, add type data to it, and then load very specific test data to run automated tests provides a more testable, repeatable, and reliable process.

    In my experience, having been part of large teams that worked both with centralized databases and with dev silos, the dev silo environment was always a better and more enjoyable process for everyone involved. The key is implementing the pipeline and making sure that everyone on the team is very comfortable using the tools in it. I find that the initial learning phase is the most frustrating for the team, until that hurdle is overcome.

    There are other benefits, and the internet is filled with articles on them. It's a huge benefit when working with contractors who do not have access to corporate networks, and with teams whose members are dispersed around the globe. The collaboration is invaluable.

  • Jeff Moden - Thursday, May 3, 2018 5:26 PM

    Sergiy - Thursday, May 3, 2018 5:02 PM

    It looks like an awful amount of manual work / human analysis is involved for every object in cooperative development. And it's not just about the same object. As Steve pointed out, if you changed a procedure, then my changes to a table queried by that procedure will effectively invalidate your work.

    Having a separate DEV DB for each developer is always a "bug factory". I've always considered using DB Compare a lousy way to control deployments, and this scripting approach is even worse.

    Been there, done that, never want to see any of it happen ever again.  "Easy, frequent, automatic, rapid database deployments" lead to the incredibly fast deployment of serious mistakes and the promotion of "what if" code because someone didn't want to take the time to do the manual work/human analysis.  They just wanted to "get (or Git :D) it off their plates".

    Yes, fast database deployments need to go through a vetting process. Proper source control can enforce that. People who just want to (Git) stuff off their plates, keep pushing bad code, and don't learn from those mistakes eventually go looking for other teams to work on. 🙂 And coders like that will push changes to centralized databases just as easily, or even more easily.

    I also find that once the initial database (objects/code) has been deployed to production, changes to it don't come that rapidly, and any changes to the database go through a lot more code review and analysis when the devs work in their siloed dev environments. They don't just throw changes out there in the middle and use the other developers as the testing ground.

  • Sergiy - Thursday, May 3, 2018 10:39 PM

    Steve,

    My way is to save a script for an object which I create/modify/drop into CVS/VSS or whatever is in use, as soon as it's open in my SSMS. If anybody else is making changes to the same object, I'll know about it immediately, not after investigating where my changes went. Don Corleone prefers to hear bad news right away. All the scripts developed by any developer go to the same repository.

    If anybody makes a mistake and screws up the DEV database, he/she can simply start the job which restores the DEV DB from backup and re-runs the scripts in the repository. If somebody failed to save some script(s) in VC, it's up to him/her to do some soul digging. If any change is not finished prior to the time of the next deployment, it's simply moved to the next version folder. Any traces of the unfinished job will disappear from the DEV DB with the next restore/roll cycle. The database is then ready for a unit test. It has all the finalised changes intended for the current deployment, and no unfinished work.

    That's what works smoothly and reliably, without endless stand-ups discussing who is working on what and who is not touching what today. Even with a lead developer working remotely from Nepal over a dial-up connection, when it's available.

    I wonder, with object scripting, how do you roll back bad changes you've made in the database without affecting good ones?

    That's not terribly dissimilar from this. I can script and capture changes as soon as I've made them, and certainly catch any conflicting changes right away. The scripting doesn't have to be rare; it can be ad hoc.

    However, if you have something open in SSMS, you are not necessarily saving every version immediately. After all, you're trying things out and experimenting in development, which by definition means you'll write and save code that isn't going to production. You'll try something and undo it. There is always a potential for collisions.

    Saving your scripts means that you have a way to re-run them. That's a migration-based, script-by-script execution: a fundamentally different way of proceeding with database development. Here, I'd unwind the specific change; after all, I have the previous version of the code in my VCS (see the sketch after this post). I don't need to restore a dev database and apply all the changes. That's a disruptive operation. It works for some people, but it is disruptive.

    Here I am definitely using compare technology to migrate databases forward. There are holes and problems with that, for sure, but it is also what works well for many people. I wouldn't say there is anything wrong with your method, but it isn't necessarily the best method. There are friction points and limitations with it. Scripts need ordering, since changes to a database require state to be maintained. If you push off some scripts, there can be problems with other scripts, unless you're resetting the dev database often, which again disrupts a shared development database. You also don't necessarily capture all the interactions between scripts, unless you spend time ensuring that any scripts you move don't affect others.

    That being said, I prefer tracking each script as it's written.

    If you have a good development team and separation of development, then you have no issues either way: compare or migrations, shared or separate databases. If you don't, nothing is going to prevent issues.
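
    A minimal sketch of unwinding one bad change without touching the good ones, assuming one file per object in git (the object name, paths, and commit SHA are placeholders):

        # Find the last good version of just this one object.
        git log --oneline -- StoredProcedures/dbo.MyProc.sql

        # Restore that single file into the working copy (abc1234 is a placeholder SHA).
        git checkout abc1234 -- StoredProcedures/dbo.MyProc.sql

        # Recompile only that object; nothing else in the database is affected.
        Invoke-Sqlcmd -ServerInstance 'DEVSQL01' -Database 'MyDb' `
            -InputFile 'StoredProcedures/dbo.MyProc.sql'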

  • Sergiy - Thursday, May 3, 2018 10:39 PM

    Steve,

    My way is to save a script for an object which I create/modify/drop into CVS/VSS or whatever is in use, as soon as it's open in my SSMS. If anybody else is making changes to the same object, I'll know about it immediately, not after investigating where my changes went. Don Corleone prefers to hear bad news right away. All the scripts developed by any developer go to the same repository.

    If anybody makes a mistake and screws up the DEV database, he/she can simply start the job which restores the DEV DB from backup and re-runs the scripts in the repository. If somebody failed to save some script(s) in VC, it's up to him/her to do some soul digging. If any change is not finished prior to the time of the next deployment, it's simply moved to the next version folder. Any traces of the unfinished job will disappear from the DEV DB with the next restore/roll cycle. The database is then ready for a unit test. It has all the finalised changes intended for the current deployment, and no unfinished work.

    That's what works smoothly and reliably, without endless stand-ups discussing who is working on what and who is not touching what today. Even with a lead developer working remotely from Nepal over a dial-up connection, when it's available.

    I wonder, with object scripting, how do you roll back bad changes you've made in the database without affecting good ones?

    For the way we work, the databases are built from source control in a build environment, where they are also unit tested; then a versioned package becomes available for deploy to a test environment. This package (we use dacpacs - DSC) is deployed to test and/or integration environments. If things break in test, the developer goes and fixes them in dev, and a new deploy cycle is started after the changes go through proper vetting via code review and, if needed, analysis by a DBA. There is no rollback. There is no centralized database to worry about. If someone wants a previous version, they can grab the published versioned package from a package repo like NuGet, or an S3 bucket, etc. Or they can pull a specific commit from source control and build it themselves.

    Devs can also grab the latest published, data-cleansed production database from a package repo at any time, into their local dev system. This is done via a PoSH script (see the sketch after this post). Once they have that database, they can do integration testing on their local box as well, before publishing their work for everyone else to see. The coders can develop their own analysis code, run it on their local systems, and make that analysis code available to everyone else when ready. This code can become part of a unit, integration, or validation/verification test, etc., and others can contribute to that test as well, including DBAs. I've seen junior developers learn many db skills from DBAs through this process, and that made them better database developers.

    I'm not even going to get into dev environment provisioning via Docker/VirtualBox and developing that way. Lots of benefits.

    I understand that not everyone has the resources to work like this, and so we have other options. Options are good. But whenever possible, with the tools available these days, I would advise that development of any code be done on a local development system.
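
    A rough sketch of the kind of PoSH script described above, assuming the cleansed backup is published to an internal package endpoint (the URL, server, database, and paths are hypothetical):

        Import-Module SqlServer

        # Download the latest published, data-cleansed backup package
        # (packages.example.com is a hypothetical internal package store).
        $uri = 'https://packages.example.com/db/MyDb_cleansed_latest.bak'
        Invoke-WebRequest -Uri $uri -OutFile 'C:\Temp\MyDb_cleansed.bak'

        # Restore it over the local dev copy for integration testing.
        Restore-SqlDatabase -ServerInstance 'localhost' -Database 'MyDb' `
            -BackupFile 'C:\Temp\MyDb_cleansed.bak' -ReplaceDatabase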
