Production Scripts

  • Comments posted to this topic are about the item Production Scripts

  • I read the article about the need for source control for SQL Agent jobs, and I fully agree. At my current shop, we're using SQL Server Data Tools and SQL Projects in Visual Studio to check in our database objects (tables, indexes, procs, etc.), and our source control choice is Team Foundation Server... however, we have yet to find a good strategy for source control for Agent jobs. While we could always just script them out to files via SQL Server Management Studio and then check those into TFS, we've held off for now: we're hoping to find a better way. Source control alone would be good, but the maintenance and deployment problems include:

    1) SSDT's import tool won't pull current definitions of Agent jobs...
    2) ...and even if it did, the job IDs themselves change from environment to environment (we have a few development and test environments on the road to Production)...
    3) ...which would be confusing unless we manually edited the scripts to be "environment aware" (some kind of IF...ELSE block using @@SERVERNAME, for instance; see the sketch after this list), or pulled the "full stack" (Dev all the way to Prod) and source-controlled all jobs from each deployment environment.
    4) And lastly, there is no automatic deployment option (via SQL Project Publish / SQLPACKAGE.EXE) for these objects, to our knowledge. We'd have to write something custom in PowerShell and then stitch that into our deployment scripts.
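
    Something like this is what we mean by "environment aware" in point 3 (a rough sketch only; the server and database names are invented placeholders):

    -- One checked-in script that branches on @@SERVERNAME, so the same
    -- file could be deployed to Dev, Test, and Prod unchanged.
    DECLARE @TargetDb sysname =
        CASE @@SERVERNAME
            WHEN N'DEVSQL01'  THEN N'Sales_Dev'
            WHEN N'TESTSQL01' THEN N'Sales_Test'
            WHEN N'PRODSQL01' THEN N'Sales'
            ELSE N'Sales_Dev'  -- unknown server: fall back to Dev settings
        END;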

    So, I would like to know: for those of you out in the community placing SQL Agent jobs under source control, what's your environment/method? Do you just store source, or do you have a solution for upkeep and deployment?

    Many thanks.

  • The people I know doing this with a VCS write their code in an idempotent way: there is a wrapper that checks for the job and creates or alters it. They often have hand-written scripts that execute fully from SQLCMD. The ideal way is to ensure you have a process that can build these, so that people know that if they drop a set of files in a folder, the process will execute them on the correct server.
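
    A minimal sketch of that wrapper pattern (the job, step, and procedure names are placeholders, not anyone's real setup):

    -- Idempotent job script: safe to rerun from SQLCMD.
    IF NOT EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE name = N'Sales_NightlyLoad')
    BEGIN
        EXEC msdb.dbo.sp_add_job @job_name = N'Sales_NightlyLoad';
        EXEC msdb.dbo.sp_add_jobstep
            @job_name = N'Sales_NightlyLoad', @step_name = N'Run load',
            @subsystem = N'TSQL', @database_name = N'Sales',
            @command = N'EXEC dbo.NightlyLoad;';
        EXEC msdb.dbo.sp_add_jobserver @job_name = N'Sales_NightlyLoad';
    END
    ELSE
    BEGIN
        -- The job already exists: alter the step in place so history and
        -- schedules are preserved instead of being dropped and recreated.
        EXEC msdb.dbo.sp_update_jobstep
            @job_name = N'Sales_NightlyLoad', @step_id = 1,
            @command = N'EXEC dbo.NightlyLoad;';
    END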

    Depending on how environment-specific you have to be, perhaps you want to try storing that environment stuff on the instance itself, rather than worrying about changing scripts.
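
    One illustrative way to do that (table and key names invented) is a small per-instance configuration table that job steps read at run time:

    -- Each environment stores its own values, so the job scripts
    -- themselves stay identical across Dev, Test, and Prod.
    CREATE TABLE dbo.EnvironmentConfig
    (
        ConfigKey   sysname       NOT NULL PRIMARY KEY,
        ConfigValue nvarchar(256) NOT NULL
    );

    INSERT dbo.EnvironmentConfig (ConfigKey, ConfigValue)
    VALUES (N'Environment', N'Dev');  -- 'Test' or 'Prod' on other instances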

  • I have not really thought about source-controlling SQL Agent jobs with a VCS as such.

    What we tend to do is to keep the SQL jobs very simple - a single step that calls a stored procedure in a database.
    The jobs are prefixed with the database name so that one can easily see where they belong.
    General jobs, like backups, either don't have a prefix or get something like 'General_' or 'Maintenance_'.

    So most SQL jobs really live in a stored procedure inside the database, which would by definition be the correct version.
    Jobs are scripted to file for backup purposes with separate files for 'General' and database-specific jobs.

    I am interested to see what other people are doing!

  • Jobs are something that tends to get modified in production without going through the normal change control process. For example, at 2:00 AM the DBA might disable the job, alter the execution steps, or tinker with the proxy account to work around a production issue, perhaps with the intention that it's a temporary fix that will be reverted to its original state later. I have a Git repository just for jobs and a PowerShell script that iterates across all (100+) servers, scripting out jobs to the working folder and then committing whatever changed. Eventually, I may do the same with all user-defined objects like stored procedures and views. This would be an automated Git repository intended for disaster recovery, operating in parallel to the official repository that the developers are supposed to be maintaining themselves.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Stefan LG - Monday, February 12, 2018 9:32 PM

    I have not really thought about source-controlling SQL Agent jobs with a VCS as such.

    What we tend to do is to keep the SQL jobs very simple - a single step that calls a stored procedure in a database.
    The jobs are prefixed with the database name so that one can easily see where they belong.
    General jobs, like backups, either don't have a prefix or get something like 'General_' or 'Maintenance_'.

    So most SQL jobs really live in a stored procedure inside the database, which would by definition be the correct version.
    Jobs are scripted to file for backup purposes with separate files for 'General' and database-specific jobs.

    I am interested to see what other people are doing!

    That would be the way that keeps logic out of the jobs, if you can do it. If that's the case, then I think you're mostly OK from a VCS standpoint, except for scheduling. I'd consider that part of the job. Maybe include the schedule as a comment in the proc?
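
    A rough illustration of that idea (names and times invented; CREATE OR ALTER needs SQL Server 2016 SP1 or later):

    -- SQL Agent job 'Sales_NightlyLoad' runs this proc as its single T-SQL
    -- step, daily at 02:00. If the schedule changes, update this comment too.
    CREATE OR ALTER PROCEDURE dbo.NightlyLoad
    AS
    BEGIN
        SET NOCOUNT ON;
        -- ... the actual job logic, versioned with the database ...
    END;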

  • Eric M Russell - Tuesday, February 13, 2018 7:59 AM

    Jobs are something that tends to get modified in production without going through the normal change control process. For example, at 2:00 AM the DBA might disable the job, alter the execution steps, or tinker with the proxy account to work around a production issue, perhaps with the intention that it's a temporary fix that will be reverted to its original state later. I have a Git repository just for jobs and a PowerShell script that iterates across all (100+) servers, scripting out jobs to the working folder and then committing whatever changed. Eventually, I may do the same with all user-defined objects like stored procedures and views. This would be an automated Git repository intended for disaster recovery, operating in parallel to the official repository that the developers are supposed to be maintaining themselves.

    I've seen something similar in the past. Same thing. More a DR thing than watching code.

  • Eric,

    Do you track things like logins and linked servers with this as well? Those server-level objects are often maintained only by DBAs, and it's easy to forget that audits, XE sessions, DDL triggers, etc. need to be captured, if for no other reason than DR.

  • Steve Jones - SSC Editor - Tuesday, February 13, 2018 9:25 AM

    Eric,

    Do you track things like logins and linked servers with this as well? Those server-level objects are often maintained only by DBAs, and it's easy to forget that audits, XE sessions, DDL triggers, etc. need to be captured, if for no other reason than DR.

    I'm referencing objects collected under the .NET SMO class: Microsoft.SqlServer.Management.Smo.Server

    https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.server.aspx

    Currently: databases, backup devices, server-level triggers, endpoints, SQL Agent properties, jobs, linked servers, logins, alerts, and operators. There is much more it could script to be a complete solution, and I need to go back and flesh it out further. The Git repository contains folders based on server name, and beneath each server are folders for each object type, with one .sql script per object.
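
    For instance (server and object names invented), the layout looks something like:

    MYSQLSERVER01\
        Jobs\Sales_NightlyLoad.sql
        LinkedServers\ReportServer.sql
        Logins\AppLogin.sql
        ...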

    In addition to being a comprehensive versioned archive of what's currently (or has ever been) in production, from a monitoring and auditing perspective it can also be used to keep tabs on changes (planned or unplanned), because each commit represents some modification to an object since the last run of the PowerShell job, which is scheduled nightly.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
