Best Practices for Peer Code Review


Postby emence » Mon Mar 22, 2010 4:15 pm

This topic summarizes the case study 'The World’s Largest Code Review Study at Cisco Systems®'.

This is the world’s largest-ever published study on code review, encompassing 2500 code reviews, 50 programmers, and 3.2 million lines of code at Cisco Systems®. For ten months, the study tracked the MeetingPlace® product team, distributed across Bangalore, Budapest, and San José.

At EMence, we use these best practices as part of the foundation for our Premier Peer Code Review Service.

We'll cover just the highlights below, with excerpts taken directly from the study. Source: SmartBear Software.
1. Review fewer than 200-400 lines of code at a time.
The Cisco code review study showed that for optimal effectiveness, developers should review fewer than 200-400 lines of code (LOC) at a time. Beyond that, the ability to find defects diminishes. At this rate, with the review spread over no more than 60-90 minutes, you should get a 70-90% yield; in other words, if 10 defects existed, you’d find 7-9 of them.
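
To make the batching concrete, here is a minimal Python sketch (my own illustration, not from the study; the file names and the batch_for_review helper are hypothetical) that splits a large change into review batches of at most 400 LOC:

[code]
# A minimal sketch (not from the study): group changed files into
# review batches so no single review exceeds the 400 LOC ceiling.
MAX_LOC = 400

def batch_for_review(changed_files):
    """changed_files: list of (filename, lines_changed) pairs."""
    batches, current, current_loc = [], [], 0
    for name, loc in changed_files:
        if current and current_loc + loc > MAX_LOC:
            batches.append(current)
            current, current_loc = [], 0
        current.append(name)
        current_loc += loc
    if current:
        batches.append(current)
    return batches

# A 950-LOC change becomes three reviewable batches:
print(batch_for_review([("parser.c", 350), ("lexer.c", 300), ("main.c", 300)]))
[/code]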

2. Aim for an inspection rate of less than 300-500 LOC/hour.
Take your time with code review. Faster is not better. The study shows that you’ll achieve optimal results at an inspection rate of less than 300-500 LOC per hour. Left to their own devices, reviewers’ inspection rates will vary widely, even with similar authors, reviewers, files, and review size.
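
As a quick sanity check (my own arithmetic, not an excerpt from the study), the rate ceiling translates directly into a minimum review time:

[code]
# At the recommended ceiling of 500 LOC/hour, a review of `loc` lines
# should take at least this many minutes (a back-of-the-envelope sketch).
def min_review_minutes(loc, max_rate=500):
    return loc / max_rate * 60

print(min_review_minutes(400))  # 48.0, so budget ~50 minutes for 400 LOC
[/code]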

3. Take enough time for a proper, slow review, but not more than 60-90 minutes.
After about 60 minutes, reviewers simply wear out and stop finding additional defects. This conclusion is well-supported by evidence from many other studies besides our own. In fact, it’s generally known that when people engage in any activity requiring concentrated effort, performance starts dropping off after 60-90 minutes.

4. Authors should annotate source code before the review begins.
The premise: if authors annotate their code and double-check their work before the review begins, they might eliminate most defects on their own, letting reviews complete faster without compromising code quality. This idea specifically had not been studied before, so the Cisco study tested it, and the results supported it.
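
For illustration (this snippet is mine, not Cisco's), an annotation is simply a guiding note the author attaches to the code before reviewers see it:

[code]
# Author annotation (hypothetical pre-review note for the reviewers):
# "Replaced the linear scan with a dict lookup. Please double-check
#  that a missing user id still returns None, as the old code did."
def find_user(users_by_id, user_id):
    return users_by_id.get(user_id)  # .get() returns None when absent
[/code]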

5. Establish quantifiable goals for code review and capture metrics so you can improve your processes.
As with any project, you should decide in advance on the goals of the code review process and how you will measure its effectiveness. Once you’ve defined specific goals, you will be able to judge whether peer review is really achieving the results you require. It’s best to start with external metrics, such as “reduce support calls by 20%,” or “halve the percentage of defects injected by development.” This information gives you a clear picture of how your code is doing from the outside perspective, and it should have a quantifiable measure – not just a vague “fix more bugs.”
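
Alongside those external goals, you can capture simple internal metrics from the reviews themselves. A hedged sketch (the function names are mine, not the study's):

[code]
# Two common internal review metrics (an illustrative sketch):
# defect density (defects per 1000 LOC reviewed) and defect rate
# (defects found per hour of review time).
def defect_density(defects_found, loc_reviewed):
    return defects_found / loc_reviewed * 1000

def defect_rate(defects_found, review_hours):
    return defects_found / review_hours

print(defect_density(8, 400))  # 20.0 defects per KLOC
print(defect_rate(8, 1.0))     # 8.0 defects found per hour
[/code]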

6. Checklists substantially improve results for both authors and reviewers.
Checklists are a highly recommended way to find the things you forget to do, and are useful for both authors and reviewers. Omissions are the hardest defects to find – after all, it’s hard to review something that’s not there. A checklist is the single best way to combat the problem, as it reminds the reviewer or author to take the time to look for something that might be missing. A checklist will remind authors and reviewers to confirm that all errors are handled, that function arguments are tested for invalid values, and that unit tests have been created.
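
As a starting point, a reviewer checklist built from the items named above might look like this (illustrative, not copied from the study):

[code]
# A starter reviewer checklist (illustrative, derived from the items
# named in the practice above, not copied from the study):
REVIEW_CHECKLIST = [
    "All errors are handled (no silently ignored failure paths).",
    "Function arguments are tested for invalid values.",
    "Unit tests have been created for the new or changed behavior.",
]

for item in REVIEW_CHECKLIST:
    print("[ ]", item)
[/code]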

7. Verify that defects are actually fixed.
Keep in mind that these bugs aren’t usually logged in the team’s usual defect tracking system, because they are bugs found before code is released to QA, often before it’s even checked into version control. So, what’s a good way to ensure that defects are fixed before the code is given the All Clear sign? The study suggests using good collaborative review software to track defects found in review. With the right tool, reviewers can log bugs and discuss them with the author. Authors then fix the problems and notify reviewers, and reviewers must verify that the issue is resolved.
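
To sketch the tracking idea (purely illustrative, not any particular tool's API), each review finding moves through an explicit lifecycle, and the review gets the All Clear only when every finding is verified:

[code]
# Illustrative defect lifecycle for review findings (not a real tool's API):
OPEN, FIXED, VERIFIED = "open", "fixed", "verified"

class ReviewDefect:
    def __init__(self, description):
        self.description = description
        self.state = OPEN

    def mark_fixed(self):     # author fixed it and notified the reviewers
        self.state = FIXED

    def mark_verified(self):  # a reviewer confirmed the fix actually works
        if self.state == FIXED:
            self.state = VERIFIED

def all_clear(defects):
    return all(d.state == VERIFIED for d in defects)

d = ReviewDefect("return code ignored in the retry loop")
d.mark_fixed()
d.mark_verified()
print(all_clear([d]))  # True only after fix and verification
[/code]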

8. Managers must foster a good code review culture in which finding defects is viewed positively.
Code review can do more for true team building than almost any other technique we’ve seen – but only if managers promote it as a means for learning, growing, and communication. It’s easy to see defects as a bad thing – after all, they are mistakes in the code – but fostering a negative attitude towards defects found can sour a whole team, not to mention sabotage the bug-finding process. Managers must promote the viewpoint that defects are positive. After all, each one is an opportunity to improve the code, and the goal of the code review process is to make the code as good as possible. Every defect found and fixed in peer review is a defect a customer never saw, another problem QA didn’t have to spend time tracking down.

9. Beware the “Big Brother” effect.
“Big Brother is watching you.” As a developer, you automatically assume it’s true, especially if your review metrics are measured automatically by review-supporting tools. Did you take too long to review some changes? Are your peers finding too many bugs in your code? How will this affect your next performance evaluation? Metrics are vital for process measurement, which in turn provides the basis for process improvement. But metrics can be used for good or evil. If developers believe that metrics will be used against them, not only will they be hostile to the process, but they will probably focus on improving their metrics rather than truly writing better code and being more productive.

10. The Ego Effect: Do at least some code review, even if you don’t have time to review it all.
The “Ego Effect” drives developers to write better code because they know that others will be looking at their code and their metrics. And no one wants to be known as the guy who makes all those junior-level mistakes. The Ego Effect drives developers to review their own work carefully before passing it on to others.

11. Lightweight-style code reviews are efficient, practical, and effective at finding bugs.
There are several main types, and countless variations, of code review, and the best practices you’ve just learned will work with any of them. However, to fully optimize the time your team spends in review, the study recommends a tool-assisted lightweight review process.


As you can see, there is a lot involved in conducting good code reviews. With over 15 years of corporate project experience, EMence can deliver a quality code review solution for your project.

Feel free to comment or ask questions. :geek:
EMence eCommerce Architect
Contact me: services@emence.com