Basic Git for DBAs: Getting Started with Git

  • Comments posted to this topic are about the item Basic Git for DBAs: Getting Started with Git

  • Thank you for this, Steve. I have just started using Git, with Git Bash as my client at the moment, so it was useful to get some independent feedback on clients.

  • Thanks Steve.

    How would you sync the changes with a SQL instance? E.g. imagine you have a DEV environment and would like to have a single master version of procedures etc. that will be synced automatically.

    For download of changes from SQL Server into .sql files in Git, I would think of a DB trigger that snapshots the objects, dumps the files into the Git folder, and runs the Git command line.

    For upload of changes to SQL Server from Git, maybe a callback from Git to SQLCMD as soon as the file is updated?

    Your ideas?

  • Thank you, Steve, for this tutorial! I've known how to create the global settings for user and email, but never knew how to check what they are. I've tried searching, but couldn't find an answer. Your tutorial, showing git config, is exactly what I've wanted to know for a long time.

    I'm going to be following this series closely!

    Rod

  • iz wrote:

    Thanks Steve.

    How would you sync the changes with a SQL instance? E.g. imagine you have a DEV environment and would like to have a single master version of procedures etc. that will be synced automatically.

    For download of changes from SQL Server into .sql files in Git, I would think of a DB trigger that snapshots the objects, dumps the files into the Git folder, and runs the Git command line.

    For upload of changes to SQL Server from Git, maybe a callback from Git to SQLCMD as soon as the file is updated?

    Your ideas?

    This is a much more complex problem. For me, as a Redgate Advocate, I'd use our tools to make these changes. We have specifically built solutions around making changes and then moving them through a process that evaluates and deploys them.

    For what you're asking here, I think you are essentially creating a backup and restore of code, potentially with a circular, endless loop. The capture of code in a VCS isn't designed to be done automatically. It's designed to happen at points in time, when a human thinks the code is working.

    It would be better to think through what you want to accomplish with this, and be careful of automation here. Humans often still need to be in the loop for a VCS to be effective.

  • Doctor Who 2 wrote:

    Thank you, Steve, for this tutorial! I've known how to create the global settings for user and email, but never knew how to check what they are. I've tried searching, but couldn't find an answer. Your tutorial, showing git config, is exactly what I've wanted to know for a long time.

    I'm going to be following this series closely!


    Thanks. I'm working on more articles that address simple things to get you moving. I hope to expand this series into the test/deployment area later.
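    For anyone else searching, setting and then reading back the global identity can be sketched like this (the name and email here are placeholder values):

```shell
# Set the global identity (placeholder values -- use your own)
git config --global user.name "Ann Example"
git config --global user.email "ann@example.com"

# Read individual settings back
git config --global user.name
git config --global user.email

# Or list everything stored in the global config file
git config --global --list
```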

  • Thanks for the article.  It was very informative, and you demoed the tool we actually use at work (Sourcetree).

    There are some things that I think are not clear in Sourcetree that I still do from the terminal (that terminal button in the top right corner of Sourcetree is awesome), like rebasing.  I'm still not 100% sure which way Sourcetree does the rebase.  A nicer interface, where you pick the head and branch from dropdowns, is the way I'd have designed it, but the command-line version works pretty well too.

    One question I do have is: what code do you store in source control from SQL?  That is something I have been thinking about a lot recently.  Do you store ALL object-creation code in source control, or just a subset of it (like just the stored procedures), or only your administrative scripts that do not live on the SQL instances at all (like diagnostic scripts or user/logon cleanup scripts)?

    Visual Studio (for example) lets you export all objects into individual scripts, but doing comparisons between systems that way is a slow and painful process.  Redgate SQL Compare can do the same thing, and is faster, but you still need to dump the output to a file before you can put it into Git.  I can see the benefits of putting all of the code into Git (if you need to spin up a clone of a system without actual data, you have all of the scripts to do so), but I also see some pain points (one dev forgets to merge their code to master, and future devs work off an old code base unless they pull from live before they begin, which makes for a slow start to any change).

    Thoughts?

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum, you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code, OR you don't care if the code trashes your system.

  • Version control is important.  Not using it is unprofessional.

    Git is everywhere.  It is a good version control system for people working on open-source projects, or where multiple companies are co-operating on a project.

    However, distributed version control systems (like Git) are over-complicated for teams that are co-located, or working in a company big enough to run common infrastructure on a fast intranet.  In that situation, use something like Apache Subversion with a TortoiseSVN front end.  It is much more intuitive than Git and is less likely to get into a tangle.

  • Mr. Brian Gale wrote:

    Thanks for the article.  It was very informative, and you demoed the tool we actually use at work (Sourcetree).

    There are some things that I think are not clear ...

    On rebasing. I'll cover that later, though I'm a little concerned. I am not entirely sure I understand this well enough to explain it. I'm going to reach out to some friends and get some help here and experiment a bit in git and ST to see how things work.

    For code from SQL Server, it depends. I'm sorry, I hate saying it, but it does. For instance-level stuff, script it out manually (or with a script doing the scripting, a la dbatools) and store it in a repo.

    For databases, all the code, but I'd use tooling here. I work for Redgate and we make tools to help, but there are others. I don't know that it's worth doing this on your own, as how the code is used matters. Since it's often used to deploy elsewhere, the deployment schema matters. I'll expand this series and get into DevOps and deploying code, but for now, you really start with how you deploy code, then work backwards until you get to "how do I get the stuff into the VCS?" Format/structure depends on how you deploy.

    git is file management. That's it. It's the equivalent of the file system capturing versions and keeping them; Dropbox does this for you automatically. There isn't a good way to do this without a tool, and a tool will have its own protocols.

    Does that help?

  • john.denniston wrote:

    Version control is important.  Not using it is unprofessional.

    Git is everywhere.  It is a good version control system for people working on open-source projects, or where multiple companies are co-operating on a project.

    However, distributed version control systems (like Git) are over-complicated for teams that are co-located, or working in a company big enough to run common infrastructure on a fast intranet.  In that situation, use something like Apache Subversion with a TortoiseSVN front end.  It is much more intuitive than Git and is less likely to get into a tangle.

    Maybe. I think the issues of diverging from the happy path, like being disconnected, quickly become a problem. Honestly, I think using Git is as simple as SVN if you limit yourself to simple flows. You can have everyone work on one branch and just do commits, which is how SVN is often used. You can branch if need be and merge back, but I do think TortoiseGit is as easy to use as TortoiseSVN.
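    To illustrate the simple-flow point, a single shared branch with plain commits can be sketched like this. The bare repository here is a local stand-in for a central server, and all paths, names, and the sample file are invented:

```shell
# A local bare repository stands in for the central server
central=$(mktemp -d)/central.git
git init --bare "$central"

# Clone it, as each team member would
work=$(mktemp -d)
git clone "$central" "$work"
cd "$work"
git config user.name "Dev One"
git config user.email "dev1@example.com"

# Everyone works on the one default branch: edit, commit, push
echo "SELECT 1;" > check.sql
git add check.sql
git commit -m "Add connectivity check script"
git push origin HEAD
```

    Day to day, this reduces to `git add`, `git commit`, `git pull`, and `git push`, which is about the same surface area as an SVN checkout/commit cycle.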

  • Steve Jones - SSC Editor wrote:

    Mr. Brian Gale wrote:

    Thanks for the article.  It was very informative, and you demoed the tool we actually use at work (Sourcetree).

    There are some things that I think are not clear ...

    On rebasing. I'll cover that later ...

    Yep, that helps!

    Rebasing is an interesting beast.  I have not found a tool that works well with it from an ease-of-use perspective, and I find doing it from the terminal/command prompt is quick, easy, and mostly painless.  Painless, that is, until you hit a merge conflict.  SOME merge conflicts are easy to fix; others are painful.
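    For reference, the terminal version of a rebase can be sketched in a throwaway repo like this (branch names, file names, and identity are all made up; the two branches touch different files, so this run has no conflict):

```shell
# Build a tiny repo where main and a feature branch have diverged
repo=$(mktemp -d)
cd "$repo"
git init .
git config user.name "Dev"
git config user.email "dev@example.com"

git checkout -b main
echo base > base.txt
git add . && git commit -m "base"

git checkout -b feature
echo feature > feature.txt
git add . && git commit -m "feature work"

git checkout main
echo more > more.txt
git add . && git commit -m "main moved on"

# Replay feature's commits on top of main's current tip
git checkout feature
git rebase main
```

    After this, `git log` on feature shows "feature work" sitting on top of "main moved on". If a conflict did occur, the rebase would pause so you could resolve it, then continue with `git rebase --continue`.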

    Thanks for the input on what to put in source control, too.  That was my thought, and then you just need a good process to ensure it remains up to date.  It is very quick and easy for a Git repo to get out of date by a few missed commits/merges, but having a tool (like Redgate's) fixes that problem.  Doing database coding in Visual Studio helps too, as you can dump the whole database into one script per object pretty easily.  Not always quickly, if you have a LOT of objects, but it is pretty easy to do.


  • Thank you for a precise and to-the-point article :).  A follow-up question: let's say I have Git installed on my home computer and a repository on my hard drive. At work, I obviously wouldn't be able to download the software or any tools on my laptop; can I access the entire folder and download it over the internet or through a browser? Like many others, I work across many different computers and have always struggled to maintain a single source of truth. I think this solves my problem, I just don't know exactly how. Any tips highly appreciated. TIA

  • You can't download git to a work laptop? As a technical person, I can't begin to understand that. This is highly stable, free, well-tested software that millions, likely hundreds of millions, of people use every day.

    Assuming you install git, then you do what I did in the article. You make an https connection to GitHub (or Azure DevOps, Bitbucket, etc.) and git will copy your files down. The git protocols and software handle downloading the versions to your machine, and uploading changes.

    This is similar to how OneDrive, Dropbox, etc. work, with file-based sync. The difference here is that versions are tracked and visible from your local repo, merge conflicts are detected and displayed to you, and you can control which changes to send, which to abandon, and easily return to previous versions.

    On GitHub, you can see the files, you can certainly cut/paste from a browser to a text file, and you can edit things online. That's actually a good article to add to this series.


  • Curious_sqldba:

    I can see why your IT team may not want you installing software on your PC without their approval, but something like Git (or SVN, or CVS) should be deployed for all developers at every company (my opinion).  Having no source control system at work while developing code is just asking for trouble, especially when there are free, cloud-based options (GitHub) and inexpensive on-premise options.  There are probably even some free on-prem options too; I've never looked.

    By not having any source control system at your workplace, you run the risk of a change breaking a critical tool with no easy way to undo it.  And in the event of a disaster, how do you recover your code?  Let's say your desktop hard disk (or SSD) bites the dust; how do you recover your source code changes?

    If you are talking about developing code on your home PC and using Git to pull it across the internet to your work PC, I think you can do this, but it will be complicated to set up: you'd need port forwarding on your router, various ports open on your home network, and tools installed, though there are plenty of tutorials a quick Google away.  A MUCH easier option is to take the source code from your home PC, push it to a cloud Git server (GitHub, Bitbucket, Azure DevOps, GitLab, etc.), and then pull it down from the cloud to your work PC.  Just make sure to PUSH from both your home PC and your work PC before you leave, or you won't see your changes on the other computer.

    I would work with your IT team to get git installed on your machine if you cannot install it yourself.

    To solve your "single source of truth" problem, Git can definitely handle that, but you need a Git server to push your changes to.  That server holds the source of truth, and every client machine will NEED to have Git installed AND be able to reach the server so it can pull the data down.
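    The home/work round trip can be sketched with three local directories; the bare repository plays the part of the cloud host (GitHub, Bitbucket, etc.), and every path, name, and file here is invented for illustration:

```shell
# A bare repo stands in for the cloud host (GitHub, Azure DevOps, etc.)
cloud=$(mktemp -d)/repo.git
git init --bare "$cloud"

# Home PC: clone, commit, and push BEFORE you leave
home_pc=$(mktemp -d)
git clone "$cloud" "$home_pc"
cd "$home_pc"
git config user.name "Me"
git config user.email "me@example.com"
echo "-- diagnostics sketch" > diag.sql
git add diag.sql
git commit -m "Add diagnostics script"
git push origin HEAD

# Work PC: clone the first time; after that, a plain 'git pull' fetches
# whatever was pushed from home
work_pc=$(mktemp -d)
git clone "$cloud" "$work_pc"
```

    With a real hosted service, only the clone URL changes; the push-before-you-leave discipline is what keeps both machines in sync.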

