How Much Code Can You Review?

  • Comments posted to this topic are about the item How Much Code Can You Review?

  • Unless you are a proofreader, the answer is "not very much".

    Git diff and the GitHub UI do at least tell us what has changed, but it is hard to get the bigger picture or context for those changes.
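A couple of git flags go some way toward restoring that context. This is a minimal sketch in a throwaway repository (file names and contents are invented for illustration):

```shell
# Build a throwaway repo so the commands below are runnable anywhere.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email reviewer@example.com
git config user.name Reviewer
printf 'line1\nline2\nline3\nline4\nline5\n' > app.txt
git add app.txt
git commit -qm 'initial version'
# Simulate an edit, then diff with more surrounding context than the default 3 lines.
printf 'line1\nline2\nline3-changed\nline4\nline5\n' > app.txt
git diff -U5 app.txt    # -U<n> widens the context window around each hunk
git diff --stat         # one-line-per-file summary for the bigger picture
```

`git log -p -- <file>` is another way to read a change against the file's whole history rather than a single hunk.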

    The key thing is to ask "what is a code review supposed to achieve?"
    Next, take a step back and ask "is a code review the best way of achieving this?" You might find that certain things a code review is intended to achieve can be done in better ways, with less human intervention.

    The next step, having boiled away some of the tasks, is to ask "What would make the peer review process more effective?" One answer is to have smaller incremental releases. That does mean designing apps and planning for small incremental releases. It's unlikely that you can just jump to small releases without improving other processes.

  • It is deceptively simple: every line of code is reviewed to at least a minimal level and there is no way of circumventing that.

    "What?" I hear some cry out. "How can you allocate the people to achieve that?"

    As many are likely to have guessed, the minimal level I am referring to is static code analysis, which is run over absolutely all of the code and forms part of the Continuous Integration (CI)/Continuous Delivery (CD) pipeline that is the only route to live. You can also get statistics on the code coverage of your automated tests. That said, there is still value in manual peer reviews: they can keep track of best practices, sensible commenting, use of design practices, meaningful identifiers, and adherence to the architecture and design of the piece of work. The combination of the two means that the manual peer review process can be targeted at the most sensitive areas of the system and at the developers who need the most guidance (reviews are not there to attack anyone but to ensure high quality both now and in the future, i.e. to improve individuals, not to knock their coding).
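As a sketch of that gating idea, the script below uses two grep rules as a stand-in for a real static analyser. The rules, file name, and messages are all invented; a real pipeline would run an actual analyser and fail the build on its warnings:

```shell
# Stand-in static gate: every file goes through it, no exceptions.
set -u
cd "$(mktemp -d)"
check_file() {
    fail=0
    # Rule 1: no leftover TODO markers in code headed for live.
    grep -n 'TODO' "$1" && { echo "unresolved TODO in $1"; fail=1; }
    # Rule 2: no tab indentation (pretend the house style forbids it).
    grep -n "$(printf '\t')" "$1" && { echo "tab indentation in $1"; fail=1; }
    return "$fail"
}
printf 'SELECT 1 -- TODO: replace stub\n' > sample.sql
if check_file sample.sql; then
    echo "gate: PASS"
else
    echo "gate: FAIL - pipeline stops here"   # CI would block the route to live
fi
```

The point of the sketch is structural: the check runs on every file unconditionally, so the "minimal level of review" cannot be circumvented.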

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • We have a review process here that goes something like this:

    * Issue raised by the client
    * Incident log is then created
    * Development item created and assigned to a developer
    * When the development is completed, the changes are committed to our global tool with an incident number tag in the filename (to distinguish it from the live version of the script)
    * A full test of the script is then done against at least 2 different databases/websites
    * If the code works as expected, the developer then assigns it to one of their peers for review.
    * The reviewing developer will then run their own test to make sure it works
    * It is then passed back to the original developer, who commits the changes to our live tool of scripts
    * A new live tool of scripts is then compiled and released.

    We comment the code with the incident number so it's easy to see what changes were made. Our repository also contains logs of what was committed, when, by whom, and the incident/development item number, so changes can easily be tracked back.
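If the repository were git, that audit trail maps onto built-in commands. A hedged sketch follows; the `INC-` tag format, file name, and commit messages are invented for illustration:

```shell
# Throwaway repo demonstrating incident-number traceability via commit messages.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo 'v1' > fix_script.sql
git add fix_script.sql
git commit -qm 'INC-1001: initial version of script'
echo 'v2' > fix_script.sql
git commit -aqm 'INC-1002: correct defect reported by client'
# Who changed what, when, and under which incident number:
git log --grep='INC-1002' --date=short --pretty='%h %ad %an %s'
git log --oneline -- fix_script.sql   # full change history of one script
```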

    When I joined the team I'm in, the code was all over the place. It worked, but I could tell it was put together by various people with varying ways of writing the code.  I have been trying to clean it up and format it all in the same way, and am slowly getting everyone else to do the same.  It's a work in progress 🙂

    Regards

    Steve

  • In another role, the shop I worked in required line-by-line code review of all scripts before something could be submitted for release to the client. Sometimes it was valuable, but that was the exception rather than the norm. The majority of the time the original developer would finish something late in the morning that was due to the client by close of business, and they would go around the office to find someone, ANYONE, to look at it. Often this person had no familiarity with the project or the deliverable, the code was not commented, and the developer would just try to rush them to rubber-stamp it with repeated "this needs to go out today" statements.

    As you can probably guess from my tone, I was not a fan. Quality of review also varied wildly depending on who was doing it. My favorite example: someone had developed an interactive Excel dashboard. The main page filtered and updated subtotals, and then there was basic drill-through-to-detail capability. Nothing too fancy, but lots of data, lots of notes, and definitions. The developer had two spreadsheets that were identical in functionality but covered different markets and products. I reviewed one and someone else reviewed the other. The other guy came back in about 45 minutes and said "looks good". I took 2-3 hours and gave a list of approximately 70 issues ranging from spelling errors to functionality issues.

    I think review should be done, but someone should be a designated secondary from the outset where possible and appropriate time allowed for review and testing.

  • I am a database analyst and review T-SQL. In our shop we check every bit of T-SQL and the developer's code. We use TFS for all code with no exceptions. Our check-in is gated so if the code doesn't build after a review it still doesn't get checked in. We deploy out of TFS every 3 weeks.

  • As the DBA where I work, I had to introduce my own code review step into the developer's process, because I got sick of seeing bad queries and patterns at release time when it was too late to change them.  I do my review after the developers do their peer review, and I still often find things like incomplete joins, bad use of subqueries, or complex queries either broken down too much or not enough.
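One low-effort complement to that manual pass is a mechanical sweep that flags the usual suspects before the DBA ever opens the script. This is only a sketch: the patterns below are crude grep approximations of the smells mentioned above, not a real T-SQL analyser, and the sample query is invented:

```shell
set -u
cd "$(mktemp -d)"
# Invented sample exhibiting the smells a reviewer keeps finding.
cat > suspect.sql <<'EOF'
SELECT *
FROM Orders o, Customers c
WHERE o.Total > (SELECT MAX(Total) FROM Orders)
EOF
# Crude pattern sweep; each hit is a line for the human review to start from.
for pattern in 'SELECT \*' 'FROM .*,' '\(SELECT'; do
    echo "== pattern: $pattern =="
    grep -inE "$pattern" suspect.sql || echo "(no hits)"
done
```

A real tool understands SQL syntax; the value of even this crude version is that the human review starts from a short list instead of a cold read.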

  • Chris Harshman - Friday, January 27, 2017 6:56 AM


    May I suggest a proactive approach such as using a free add-in like SQL Code Guard to help developers find 90% of common errors.  Such a tool allows a developer to learn while working and prevents embarrassing moments at code review time.  This also liberates the reviewer to focus on finding critical errors.

  • One of my previous jobs had a very rigorous code review process. It was only half a joke that if you made it through code review without crying, it was a good review.

    The main focus of the review was HOW things were being done. Yes, it would be sent back for formatting (since we had a standard and a couple of people were lazy coders in some respects), but mostly the review looked at functionality and performance, as well as whether it gave the desired results.

    Was it worth it? I would say yes. The code was at least superficially known by more than the person who wrote it and when it was complex the coder had to explain why they chose the approach they did.

  • Steve Jones - SSC Editor - Thursday, January 26, 2017 9:48 PM


    We review every line of code submitted to the build.  Period.  The code review is explicitly linked to stories in TFS and the code reviews implicitly state which story they are associated with.

    Typically we spend 1 to 4 hours reviewing code per story.

    A few more details: we use StyleCop and the build will fail if there are StyleCop warnings, so we don't spend any time in a code review on how the code is laid out. I personally put detailed comments when working on a bug if the basis for the solution is not obvious, and I will include the change request number. We don't shed too many tears in our code reviews either, although that depends on the context of the code.

  • I can review every single line of code and every configuration change made, but I think the better question is how much code should I review, or, to put it another way, how much code do I need to review.

    And the answer to that depends on the scope of the deployment and the developer in question. If I have to review every minute detail of everything a developer does before deploying it, there's a problem with that developer. For a larger deployment, a high-level overview of the changes should be enough; for a small deployment, there shouldn't be much more needed than knowing that X is changing.

  • Diane Davis - Friday, January 27, 2017 6:20 AM


    Is this automated, or is the review assigned to a human? How are you checking in database code?

  • j_e_o - Friday, January 27, 2017 8:30 AM


    Does this include database code?

  • Steve Jones - SSC Editor - Friday, January 27, 2017 8:51 AM

    j_e_o - Friday, January 27, 2017 8:30 AM


    Does this include database code?

    Yes. All database code, including scripts and DBFit tests, is reviewed.

  • Assuming you're using a version control system, a visual difference comparison across commits helps tremendously. It also helps when each commit has a descriptive comment, because (at least in TFS, and I think in Git/SourceTree as well) the difference comparison highlights each committed change block along with tags displaying the commit date and comment.
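`git blame` gives the same per-line view from the command line; a minimal runnable sketch (repo contents and file names invented):

```shell
# Each output line carries the commit, author, and date that last touched it,
# which is where descriptive commit comments pay off at review time.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
printf 'first note\nsecond note\n' > notes.txt
git add notes.txt
git commit -qm 'add initial notes'
printf 'first note\nsecond note, clarified\n' > notes.txt
git commit -aqm 'clarify the second note'
git blame --date=short notes.txt   # per-line: commit, author, date, content
git log --oneline --stat           # per-commit: descriptive comment + changed files
```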

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
