Early in my career as a developer, I was required to follow a formal process to get the code I'd written deployed to production environments. Each time I'd written code, I had to document what I'd tested and then print out the relevant sections of the codebase. I needed to find two other developers to spend 10-15 minutes with me in a meeting, going over my code, each of us with our own paper copy. Almost like defending a dissertation, I had to answer questions and justify my work, with problems being marked on paper for me to go fix.
Over time, I learned that different developers reviewed code in different ways. Some spent more time on standards and formatting. Naming and visual structure mattered most to them, so if I wasn't confident in my work, I'd pick them and spend my time ensuring the formatting was correct. Sometimes I'd even mis-format the code deliberately, so they'd send me off to fix that rather than look at what the code actually did. Others were better at examining algorithms, and I often used them to help me learn, with them digging into my logic and helping me understand whether I'd included enough error handling, considered edge cases, or written code that performed well.
Often the mood I was in, or the pressure of a deadline, would have me leaning one way or the other. Of course, there were plenty of times I just had to go with whichever two developers had time to review the code.
Code reviews were inconsistent, and I breathed a sigh of relief in later jobs where we didn't formally review code. In fact, in quite a few positions where I wrote C++, VB, or FoxPro/Clipper, my code was never reviewed, nor was there formal testing. Other developers and I had to rework sections of code regularly, which led me to test my own code more rigorously. I didn't adopt formal frameworks for some time, but I did save test scripts in our Visual SourceSafe repository so I could re-test code later.
These days pull requests and code reviews are commonplace, at least among many software developers. Not so much in the database world, but I do find customers who believe in testing, and I regularly preach this to others. I'm also glad that Redgate has built static code analysis and linting into its products, though I wish we had better (and easier) unit testing support for database code.
For those of you out there writing code, do you go through any sort of code review process? Is it consistent? Is there a checklist of sorts? I have found that different people have their own internal checklists, but I rarely see anyone with a more formal checklist, or even a set of lists for what to check in different types of code. Even in unit testing, I don't often see people approaching their tests in a methodical manner.
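Even a tiny checklist can make tests methodical. As an illustration only (the function and names here are hypothetical, not drawn from any particular codebase), every routine might get the same three checks in the same order: happy path, edge cases, error handling.

```python
def safe_divide(a, b):
    """Hypothetical routine under test: divide, rejecting a zero divisor."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Checklist item 1: the happy path behaves as documented.
def test_happy_path():
    assert safe_divide(10, 2) == 5

# Checklist item 2: edge cases (zero numerator, negative values).
def test_edge_cases():
    assert safe_divide(0, 5) == 0
    assert safe_divide(-9, 3) == -3

# Checklist item 3: error handling for invalid input.
def test_error_handling():
    try:
        safe_divide(1, 0)
    except ValueError:
        return
    raise AssertionError("expected ValueError for a zero divisor")

# Run the checklist in order, so every routine gets the same treatment.
for check in (test_happy_path, test_edge_cases, test_error_handling):
    check()
```

The point isn't the framework; it's that the same short list gets applied to every piece of code, the way a reviewer with a formal checklist would.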
Checklists have been shown to be beneficial in the healthcare field, where staffers are overworked and handoffs are frequent; using a set of checklists can improve patient outcomes. I suspect that a good set of checks for code might do the same thing. However, I find that a lot of database developers are reluctant to adopt any formal testing practices, a view that reminds me of .NET and Java developers in the early 2000s.
Do you have a checklist (actual or mental) that you go through for your own code? For anyone else's code? Or do you think formal testing of SQL code is even worth the effort? Let me know today.