Code Reviews

From Apache OpenOffice Wiki

Varieties of Code Reviews

In a Code Review, one or more people read source code and try to find pieces that could be improved. Depending on goal, topic, means, frequency, and the number of people involved, there are various types of Code Reviews. A few examples are:

Fagan Inspections
This process is very formal and requires a lot of time and several people. Fagan Inspections try to find as many defects as possible and to correct them immediately.
Complete Code Reviews, covering all code of a project
This is practiced, for example, by the Mozilla community. All code is reviewed by another developer before it is committed into the source code repository. There can be one or a few reviewers. The goal is still to find as many defects as possible immediately.
Lightweight Code Reviews
These need only one reviewer. The main goal is learning through the process of communication and of revisiting code consciously. Finding all defects at the time of review is not a priority.
Self Reviews
The author herself reviews her own code a while after writing it; preferably, at least a night should pass in between. This needs no other person. While this kind of review finds fewer defects and triggers less learning than reviews where other people are involved, it can still be quite valuable. Some defects are found only by the author, because she understands best what is going on.

Means and Terms


Rules

A Code Review needs rules to decide what is undesirable, what should be improved, and what is okay. Rules can be quite general (like "The code has to be understandable."), but they need to be specific enough that a reviewer can decide whether a rule is met or not. If there are no rules and reviewers decide only by personal intuition, the outcome of a review is too unclear.

Rules regarding source code may cover various aspects:

  • correctness
  • avoidance of dangerous constructs
  • maintainability (readability, clarity, ease of change)
  • compliance with local conventions of a project
  • portability
  • ...

An example rule set is the Cpp Coding Standards of OOo.


Defect

Definition: A defect is a violation of a rule the participants of a review have agreed on.

This does not necessarily mean it is a functional bug; it could also be, e.g., a matter of clarity or conventions. If code that is obviously quite good shows a defect, this can be a hint that the related rule needs to change. But it would still be a "defect" as long as the rule stays.


Author

The person who wrote or changed the code. It makes sense for the author to review her code herself simultaneously with the reviewer(s), because often just by re-reading the author gains new insight. The author shall never be criticized in reviews. It is the code that is reviewed, not the author!


Reviewer

A person who reviews code; typically not the author. Reviewers need the skill not to get into discussion with the author while reviewing, but to just observe the code.

The reviewer can be a person with or without problem-domain expertise; both have advantages. A domain expert is more likely to spot domain-related bugs. But a person from outside may have an even clearer view of general problems or questionable habits that have been introduced into a team.

Example: Lightweight Code Reviews within OOo

Since the beginning of 2007, some OOo developers (located at Sun in Hamburg, Germany) have practiced code reviews. Among the varieties described above, these are Lightweight Code Reviews.


Goals

  • Bringing the OOo Coding Standards to life in daily work.
  • Triggering a continuous information exchange between the participating developers.

So, the purpose is learning and information exchange. It is not intended to cover all code and find all defects, but to prevent future defects by continuous training.


Procedure

  • Each week the developers meet in pairs, reviewing each other's code.
  • One is "author" for one hour and "reviewer" for another hour per week.
  • Partners are chosen across borders of projects/problem-domains.
  • For each review 1-3 topics from the OOo Coding Standards are chosen, which sum up to about 10 rules. Normally the author decides which topics/rules are used for reviewing her code.
  • Part of the review checks the code for the chosen standard's rules, part is for spotting bugs or finding other issues.
  • The reviewer writes down her observations. The author may accept them or decide that the code is better left the way it is.


Phases

Most reviews have these phases:

  1. The author tells which parts of which files to review, and explains generally what the code is about.
  2. The reviewer takes about 20-25 minutes to check the code against the chosen rules. She tells what she observes, but generally there are no discussions here, just clarifying questions.
  3. The reviewer takes about another 20-25 minutes to spot bugs, check algorithms, and look for anything she would do differently or has questions about - whatever catches her eye. Here discussions may arise.
  4. Author and reviewer look back on the review and note whether they have remarks about the rules or the reviewing process that should be communicated to the other review participants.
  5. The author checks whether the found defects are, in her view, really defects, and gathers the results. If appropriate (priority, severity), the defects are also fixed soon, because doing something triggers deeper learning than just talking about it.


Experiences

  • In the first six weeks, nine participants found about 300 defects.
  • Participants liked the information exchange. Several useful tips and tricks were learned that had nothing to do with the rules.
  • After the first six weeks, experience caused nine rules to be changed or removed.
  • After six weeks of testing these reviews, seven of nine people wished to continue.
  • Apart from those rules that have been changed, the acceptance rate (author agreed) of found defects was about 80%.
  • Some topics are checked more often than others, for example "Security" and "General Coding".
  • Rules in some topics need more time to review, especially in the topic "Design".

This kind of Code Review is now being adopted by more developers in Hamburg. For further information contact Nikolai (np AT openoffice DOT org).

Developers in Different Locations

When practicing code reviews like in this example in an open source community, the procedure may have to change a bit, because the two partners might reside at different locations. While in the example the reviews were done face to face, it seems possible to work with IRC instead, or even to do some of the phases by mail.
