
Code Review Tips

I've been participating in code review as an author and a reviewer for quite some time now. These are some things I think about from both the author's and the reviewer's perspectives.

Note: I use GitHub-specific terminology and features throughout, but all of this applies when using GitLab, Bitbucket, etc.

As an author

As software developers, our actions outside the scope of the review itself still have code review implications. It's important to consider what we do before, during, and after the code review process.

While coding

Be mindful of how your changes are organized. Small, atomic commits are best.

When I worked with MojoTech, our standard practice was to craft branches in such a way that each commit could be deployed independently without breaking the app. Furthermore, the commits told a story. Rarely would we add code in one commit just to remove it in the next. This made PR reviews very efficient and also made the commit history extremely easy to understand.
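If a branch didn't come out that clean on the first pass, interactive rebase is one way to reshape it before asking for review. A minimal sketch, assuming the feature branch was cut from main:

$ git rebase -i main
# In the editor, reorder the `pick` lines so each commit tells one step of the
# story, and mark "oops" commits with `fixup` or `squash` to fold them into
# the change they belong to.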

It's not reasonable to expect this level of discipline on every team. At the consultancy we had teams composed entirely of senior-level developers. Rarely does that team composition happen at product, contracting and enterprise companies. For those teams I'd say, "Please keep the reviewer in mind." This might mean splitting the work for a ticket into multiple PRs to make the review easier.

Before opening a PR

Once I've finished development, I go through a mental checklist before opening a PR:

[ ] manually test the branch

[ ] review the code changes as if they're being scrutinized by the team

[ ] ensure automated tests are written and properly cover the changes

[ ] run all CI checks

  • locally or by opening a draft PR before marking it as "ready for review" (see the sketch after this list)

[ ] write an adequate PR description

  • describe the changes, any dependent PRs in other repositories, whether there will be follow-up PRs to finish the work for the ticket, screenshots when applicable, etc.
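On the draft PR item above: assuming the GitHub CLI (`gh`) is installed and authenticated, a sketch of that flow looks like this (the title and body here are just placeholders):

$ gh pr create --draft --title "Add request retries" --body "WIP, waiting on CI"
# CI runs against the draft; once the checks pass and the checklist is done:
$ gh pr ready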

After opening a PR

The job isn't done once the PR is open for review.

Be open-minded to feedback. There seems to be a natural human tendency to get defensive when someone offers constructive criticism of our code changes. Give reviewers the benefit of the doubt. Tone is hard to infer in text-based mediums, so intent can read as ambiguous, but try not to assume that the folks reviewing your code have malicious intent.

In addition to being open-minded, it's important to address feedback promptly. Some items are better addressed via a call or Slack thread. For situations where PR comments suffice, include all supporting details in your replies to speed up the code review cycle. Overall, just respect your reviewer's time.

Adapt to the feedback when applicable to maximize productivity. There are times when a PR cannot be merged because a blocking issue was identified during review. In that case, look for opportunities to extract the unblocked work into a separate PR.
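A minimal sketch of that extraction, assuming the unblocked work already lives in its own commits (the branch name and SHA are placeholders):

$ git checkout -b unblocked-work main
$ git cherry-pick <sha-of-unblocked-commit>
$ git push -u origin unblocked-work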

Learn from feedback and try not to make the same mistake twice. Early in my career I would write a list of PR feedback that I'd received. I would refer to the list before opening PRs to ensure I didn't overlook anything. Eventually, I didn't need the list anymore.

As a reviewer

Reviewing code is essential for building quality software. How we review code can be the difference between a clean codebase and a mess.

Review timing

Code may need to be reviewed multiple times. Don't feel pressure to merge problematic changes just because you missed the issue(s) the first time you reviewed the code.

Provide feedback and respond to PRs in a timely manner. Just as authors have a responsibility to respect the reviewer's time, the same goes for reviewers.

Understand the intent of the changes

The most important aspect of code review is understanding what the desired outcome of the changes is. Be sure to review JIRA tickets or any other sources of background information about the problem being solved.

Understand the changes

I like to pull the changes locally when reviewing large code updates. Sometimes reviewing in the comfort of my editor is more efficient than trying to navigate the codebase on GitHub.
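Assuming the GitHub CLI is available, pulling a PR branch down is a one-liner (the PR number is a placeholder):

$ gh pr checkout 1234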

Unless the commit history is atomic, I'll run the following command to reset the branch to its merge base, leaving every change uncommitted so I can review the entire git diff at once.

$ git reset $(git merge-base main $(git branch --show-current))
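After that, all of the branch's changes show up as uncommitted work. Assuming the branch tracks its remote, viewing the full diff and getting back to the original tip look like this:

$ git diff                # the branch's entire change set in one diff
$ git reset --hard @{u}   # restore the original commits when finished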

Another advantage of having the code changes on my machine is that I can verify my code suggestions locally. This saves the author from spending time fixing my broken code snippets.

Once the code changes are pulled locally there is a great opportunity to run the code for manual testing. When I worked with Credit Karma we had many contributors per repository. There would be at least two reviewers for every PR: one to confirm that manual testing passes and one to confirm that the code is adequate.

Provide helpful feedback

Feedback should be constructive. If you're going to criticize, don't just complain. Offer an idea for a solution or links to references.

If you're willing, use emojis to better communicate tone. As stated earlier, it's hard to infer tone in text-based mediums. Some folks might take code criticism personally. Emojis can help soften the blow a bit.

Look to help create and maintain consistency throughout the codebase. Try to recognize when there are opportunities for new patterns¹ to be introduced and when established patterns erroneously diverge.

Types of feedback

I've found it's best to be as objective in my reviews as possible. Do the changes introduce bugs? Do the changes have a high potential to introduce bugs in the future? Are we being consistent? Are the changes covered by automated tests that avoid testing implementation details?

In addition to looking for objective downsides, we should be curious when reviewing code. If something isn't clear to you, ask about it. Chances are you aren't the only person who will be confused.

Clarification feedback gives the team an opportunity to get on the same page with low overhead. Answering the question often doesn't require the author to spend time revisiting the code. As a bonus, the Q&A leaves great historical context for future contributors.

Not all feedback is created equal. Some items are more important than others. It's often beneficial to indicate feedback significance. I use the "nit:" prefix for take-it-or-leave-it items. For example, "nit: we could probably make this logic simpler by replacing the scattered if statements with a map or switch statement."

While nitpicks can help improve the quality of the codebase, more time should be spent on the higher severity concerns. I've seen folks use "low:", "medium:", or "high:" prefixes to indicate the severity of feedback items.

Last but certainly not least, giving praise when something is done well is just as important as providing constructive criticism.


  1. "Patterns" could mean algorithmic patterns, code organization (e.g. directory structure / file naming), code structure, etc.↩