This post is adapted from a GitLab Unfiltered blog post written by David O'Regan.
Feedback matters in our personal and professional lives, and software is no different. At GitLab, we deliver most, if not all, of our feedback to one another through code reviews. We're sharing some of the tools we use so you can add them to your code review toolbelt. In this post (the first of a three-part series), we share communication strategies for authors and reviewers.
Remember: Details matter for self-reviews
At GitLab, the responsibility for the code lies with the merge request author. We suggest code authors create a checklist to ensure their i’s are dotted and their t’s are crossed before requesting a review. Here is an example MR checklist:
Before every feedback cycle:
- Re-read every line.
- Test your code locally.
- Write a test for every change (or as many as you can).
- Write a clear description and update it after each feedback cycle.
- Include at least one screenshot per change. More is better.
- Check and re-check your labels. Then check them again.
- Consider using a ~"workflow::refinement" label for issues ahead of time as we do in the Monitor:Health team. Read the documentation to learn more about scoped labels.
- Review the code as if you were the reviewer. Be proactive, answer the likely questions, and open follow-up issues ahead of time.
To see that last (and most important) item in action, look at how one of our frontend maintainers, Natalia Tepluhina, pre-answered a question she knew would be asked in one of her merge requests.
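To make a few of these checklist items concrete, here is a minimal sketch of what a pre-review merge request description might look like. The section headings, issue number, and details are illustrative, not an official GitLab template:

```
## What does this MR do?

Adds client-side validation to the signup form so empty email addresses are
rejected before the request is sent. Closes #1234 (illustrative issue number).

## Screenshots

| Before | After |
| ------ | ----- |
| (screenshot of the old form) | (screenshot of the inline error) |

## How to test locally

1. Check out this branch and start the app.
2. Open the signup form and submit it with the email field empty.
3. Confirm the inline error message appears and no request is sent.

## Notes for the reviewer

The `setTimeout` in the submit handler works around a focus bug; see the
inline comment. A follow-up issue has been opened to refactor it.
```

A description along these lines answers most of the questions a reviewer would otherwise have to ask during the first feedback cycle.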
Communicate with good intentions
One of the hardest parts of getting a code review right is communicating the human touch. When we give and receive feedback, human habit can create cognitive distortions by defaulting to the most negative interpretation of that feedback. At GitLab, we try to highlight the importance of assuming positive intent by incorporating it into our value system.
How the conventional comments system can help in code review
To give feedback more effectively, try the conventional comments system, which was developed by senior frontend engineer Paul Slaughter. The conventional comments system calls for writing comments in a way that is useful to both the reviewer and the author of the merge request. It's so popular that one person made a browser extension (Chrome, Firefox) for it.
The conventional comments system calls for starting a comment with a single, eye-catching word that defines the intent and tone of the comment. This method gives the reader a chance to understand where your comment is coming from.
Let's try an experiment. If you submitted code for review, which comment would you prefer to read?
Option one: What do you think about X instead?
Option two: suggestion (non-blocking) What do you think about X instead?
You likely chose option two because it provided context for the comment, communicated empathy, and was framed as an invitation to try a different approach, instead of being written as a command or mandatory change.
The magic of this comment is its first line: suggestion (non-blocking). Straight away, before you even read the rest of the comment, you know the two most important things about it:
- It's a suggestion from the reviewer
- It's non-blocking, which means it's more of a friendly suggestion than a hard change that's necessary for the stability of the code.
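The same pattern extends to other kinds of feedback. Here are a few illustrative examples of how labels and decorations might read in practice (the labels come from the conventional comments format; the code details are made up):

```
praise: Really clean use of the computed property here. It makes the component much easier to follow.
nitpick (non-blocking): We usually keep imports alphabetized in this directory.
suggestion (blocking): This query runs once per item in the loop. Batching it into a single request would avoid an N+1 problem.
question (non-blocking): Is there a reason we skip validation when the list is empty?
```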
Another advantage of this style of commenting is that it lets merge request authors know when the reviewer is not blocking their work. By highlighting what counts as a blocking comment and what counts as a non-blocking one, merge request authors get the full context of what the reviewer is trying to communicate.
For example, say you have submitted a merge request for review and it comes back with eight comments.
In the first scenario, the comments carry no context. All eight are treated equally because nothing signals what counts as a blocker and what doesn't.
In the second scenario, the comments are written using the conventional comments system, so they can be triaged by priority:
- Blockers: What needs to change to get the merge request over the line.
- Non-blockers: What can be deferred to a separate merge request or might simply spark a discussion.
Next time you're reviewing code, try using the conventional comments approach. Pay attention not only to how the merge request author responds to the review, but also to how you, the reviewer, feel leaving it. We are considering integrating this feature directly into GitLab because we believe in making GitLab the best possible place for code reviews.
If you want to see a real-life example of some of Paul's work using conventional comments, check out his reviews of my community contributions here at GitLab – you’ll see his empathy really shines through.
The role of "fairness" in code review
In many ways, code review is a form of negotiation, where the outcome of the negotiation is code that's valuable and held to a high standard. Central to being a good code reviewer (and a good negotiator) is fairness. In fact, being a fair negotiator is often the most useful skill a code author or code reviewer can have.
Fairness is actually mentioned twice in the permission to play guidelines at GitLab:
- "Be dependable, reliable, fair, and respectful."
- "Seek out ways to be fair to everyone."
How to be a fair author
In many ways, being a fair author is the easier of the two roles. Here are a few simple dos and don'ts to remember:
Do:
- Write a proper description with screenshots (can't stress this one enough!)
- Understand a reviewer’s point of view when they make suggestions
- Address any strange parts of your merge request up front (we all have them)
- Be open to collaboration on your work
Don't:
- "plz merge"
- Be closed off or take offense to suggestions
- Forget to include any steps necessary for the build to run (in other words, reduce the reviewer's burden where possible)
Honestly, it's pretty simple to be a fair author of a merge request. Even the smallest amount of empathy goes a long way, particularly when you remember that the person reviewing your code gets nothing extra for their efforts. They just want to help take your merge request to the next level.
How to be a fair reviewer
Being fair as a reviewer is a bit more challenging because every individual has opinions or biases about how a piece of code should be written. Bias is something we all deal with when it comes to how we want things to be because we all have our own styles, preferences, and ideas about how software should be written.
Bias can create problems when it comes to code reviews because it's common for personal preferences to emerge when reviewing someone else's code. The typical reviewer might catch themselves thinking in absolutes, and the number of unresolved comments grows.
Have you ever reviewed a merge request and found yourself thinking things like:
- "It should be written like this"
- "Why would they do it like that?"
- "I would have done it this way"
- "That's not how that should be done!"
If this sounds familiar, then you may have fallen victim to a common cognitive distortion: Should/must statements.
It is important for any reviewer to remember that just because a code author wrote code in a different style or manner than you would have doesn't mean the code is written incorrectly. If you catch yourself writing a review comment that includes the words "should" or "must," take a step back and consider whether your suggestion is coming from a place of fairness or a place of bias. Ask yourself: Is the use of absolutes warranted here? Sometimes it will be both fair and warranted, for example, when your company follows a set of coding conventions, as we do at GitLab. Stay vigilant for times when those statements are a thin veil for a personal preference.
If you do need to use a should/must statement, be sure to back up your assertions with documentation to help the code author understand why a change must be made.
Typically, the fair response to something you don't agree with is to ask why the author wrote the code that way, instead of insisting it must be written another way.
This is part one of a three-part series on code review. Up next we will be explaining why patch files are a useful tool for reviewers.
If you have questions or comments about code reviews, creating smaller MRs, or iteration, leave us a comment on this blog post!
Sara Kassabian contributed to this blog post.
Cover image by Jackson Simmer on Unsplash.