It’s T-SQL Tuesday time again, and this is a good one. The host this month is Kennie Pontoppidan, who I had the pleasure of running with one morning in Copenhagen. His theme is The daily (database-related) WTF.
You can read about T-SQL Tuesday at tsqltuesday.com and see all the archived invitations. Join in the fun by watching the site, or the #tsql2sday hashtag on Twitter. If you want to catch up on old invites, feel free.
I’m not sure if I have a great story, but I certainly have had a few “WTF” items in my career. Some were caused by me, some by others, and rarely were they a good thing.
The one item that I’ll relate here goes back quite some years in history, but it was one that affected me for many years in my career. I do quite a bit of work with DevOps and software development these days, trying to help people build better software.
Part of that is ensuring your code, including your database code, is in a version control system (VCS). I’ve worked in a variety of environments, using different source code control methods, and I have to relate a story that taught me how important it can be to manage code and track the versions in production.
A long time ago I went to work for a small startup. The company had been in business for 5 or 6 years, and had a number of customers that depended on our service. However, our developers and admins were a mess. I was brought in to try and straighten out the database and infrastructure side of our codebase.
One of the first things I found was that all our stored procedures on the production servers were encrypted. I wasn’t sure why, since we hosted our own machines, but that wasn’t a big deal.
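For readers who haven’t run into this: SQL Server supports a WITH ENCRYPTION option on CREATE PROCEDURE that obfuscates the stored definition, so sp_helptext and the system tables no longer show you the source. A minimal sketch (the procedure and table names here are made up for illustration):

```sql
-- Hypothetical example: a procedure created WITH ENCRYPTION.
-- The definition is obfuscated in the system tables, so
-- sp_helptext 'dbo.GetCustomerOrders' will no longer display the source.
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerID int
WITH ENCRYPTION
AS
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
GO
```

If the only readable copy of that source lives outside the database, you had better know exactly where it is, which is where this story goes wrong.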
Until it was.
One day we had an issue on one of our SQL Server 2000 servers (we had two, supposedly identical). In troubleshooting and putting some sample data in both systems for a fake customer, we got different results. Hmmm, not what I wanted to see.
I checked the VCS (SourceSafe at the time) and checked out the code. I then loaded my test data and … got a third, different result. Now I was concerned as this was a production bug that was delaying work for a customer.
This was early in my tenure at the company, so I asked the other DBA for some information. He had been passed over in favor of me by management and wasn’t a great deal of help. He told me that he likely had the source code for server 1 on his machine.
You might start to guess why this is a WTF post. Rather than get upset, I wanted to see the code, which he produced. His version was on one of the servers, but not the other. Where was that code? He thought it was on the lead developer’s machine, who, conveniently, was on vacation. We checked, and it wasn’t in his checked-out folder structure.
What about our former developers, who had moved out of state? We still had their machines, which were supposedly going to be shipped to them so they could continue working for us. The checkout folder wasn’t right, but we found 2 other folder structures, copies of the source code tree, with different code. At this point, I just needed to fix the issue, and managed to find a copy of the stored procedure that matched my issue, built a correction, and deployed it to both servers. Unencrypted.
At this point, I paused some work that the DBAs were assigned. We spent a day or so poring over all the developers’ machines, finding 6 or 7 different copies of source code (plus our SourceSafe install). Multiple files were different, and we essentially had no idea what was running in production.
Apparently our developers had decided that when they checked in code, they would leave it on their machines. If they wanted a branch, in essence, they’d copy this folder over, make changes, and sometimes copy things back, but mostly just assume they knew what was good, and leave the updates on disk, uncommitted to the VCS.
I couldn’t come up with a rational reason for this. Back then, we didn’t have any sort of integration like SQL Source Control, and we checked code in and out of the VCS manually. I had previously set all my machines to remove the code from my local disk, ensuring I’d go to the VCS to get code when I needed to change something. I suspected this was one way of creating “job security” by a few developers and DBAs. Hint: it didn’t work with me.
Solving the Issues
This necessitated a complete reboot of our system. Together with a new lead developer (you can guess where the old one went), we paused development for 3-4 days. I found a routine to decrypt SQL Server 2000 stored procedures, and we all spent time decrypting the stored procedures from both instances and committing them back to a new tree in SourceSafe. If we had any discrepancies, then we had at least two developers examine the code, merge the differences, and commit the result.
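For anyone facing a similar cleanup on SQL Server 2000, the syscomments system table flags obfuscated objects, so an inventory query along these lines (a sketch using the pre-2005 system tables) can show how much of the database is locked away before you start decrypting:

```sql
-- Sketch: list stored procedures whose definitions are encrypted,
-- using the SQL Server 2000-era system tables.
SELECT DISTINCT o.name
FROM sysobjects o
JOIN syscomments c ON c.id = o.id
WHERE o.type = 'P'      -- stored procedures
  AND c.encrypted = 1   -- created WITH ENCRYPTION
ORDER BY o.name;
```

Running this against both instances first gives you a checklist to work through, and a way to confirm nothing was missed when the new source tree is committed.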
After a week of lost work, we had a clean source code tree of what was running in production, and both database servers synchronized. From there, we could start new development. Learning from the issues, I would also deploy changes only from the VCS (manually back then), which gave the developers an incentive to commit their changes rather than just alter the database objects on the development server.
Of course they still did, and we had numerous deployment issues. If only we’d had a few better tools back then for CI and CD in the database world.