2. Inspiration for this talk
[Slide images: ... + ... = JRB]
Competitive advantage today can only be achieved through
better man/machine cooperation.
No huge surprise to Jenkins users.
Race *with* the machine.
3. What we’ll cover
➢ Benefits of peer code review
➢ Our experiences
➢ Others’ experiences
➢ Real-world code review
➢ Causes of noise
➢ Jenkins to the rescue!
➢ Demo / screenshots
➢ Further development
➢ Call for sponsors & test pilots
4. Key Benefits of Peer Code Review
➢ 90% reduction in shipped defects often
reported in industry studies
➢ 25% net productivity increase often reported
in industry studies
➢ Knowledge sharing
➢ Silo reduction
➢ Training new employees
➢ Best practice propagation
8. So, what do we know?
➢ Average defect detection rate is only 25
percent for unit testing, 35 percent for
function testing, and 45 percent for
integration testing. In contrast, the average
effectiveness of design and code inspections
is 55 and 60 percent, respectively. [McConnell93]
9. So, what do we know?
➢ Basic code reading is ~96% as effective at
finding defects as holding a formal
heavyweight inspection meeting [Votta1993]
➢ Technical code review checklists are a
powerful help (especially against omissions)
[Dunsmore2000]
➢ Defect detection drops dramatically after ~60
minutes, to zero after ~90 minutes [Dunsmore2000]
10. So, what do we know?
➢ The longer a reviewer spends on the initial
read-through, the more defects will ultimately
be found [Uwano2006]
➢ Long methods are very time-consuming to
understand [Uwano2006]
➢ Loops are very time-consuming to
understand [Uwano2006]
➢ Reading time has ~3x higher correlation with
defects found than number of lines under
review [Laitenberger1999]
11. So, what do we know?
➢ Maximum effective review rate ~400 lines
per hour [Cohen2006]
➢ Disproportionately more defects are found
when code changes are under 200 lines
[Cohen2006]
➢ Beneficial for “quality of review” if the author
pre-reviews and leaves comments for
subsequent reviewers [Cohen2006]
12. So, what do we know?
➢ Review 100 to 300 LOC at a time, in 30-60
minute chunks, with a break between each
sitting [Cohen2006]
➢ Spend at least 5 minutes on a review, even
if the change is a single line of code [Cohen2006]
➢ Limit reviewing to 1 hour per day [Ganssle2009]
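These numbers are concrete enough to turn into tooling. A minimal Python sketch, assuming we simply encode the guideline figures cited above (this is not part of the talk's plugin; the constants just restate the bullets):

import math

# Guideline figures from the studies cited above; purely illustrative.
LINES_PER_SITTING = 300   # 100-300 LOC per 30-60 minute sitting [Cohen2006]
SITTINGS_PER_DAY = 1      # limit reviewing to 1 hour per day [Ganssle2009]

def review_budget(lines_changed):
    """Estimate how many 1-hour sittings, and thus calendar days,
    a change of the given size needs under the cited guidelines."""
    sittings = max(1, math.ceil(lines_changed / LINES_PER_SITTING))
    days = math.ceil(sittings / SITTINGS_PER_DAY)
    return sittings, days

print(review_budget(80))     # (1, 1)  -> one short sitting
print(review_budget(1000))   # (4, 4)  -> several sittings over several days

Under these assumptions a 1000-line change needs three to four one-hour sittings, which at one hour of reviewing per day stretches over several calendar days.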
13. Real world
➢ Is messy...
➢ No clear pattern among time spent in the
review state, the total number of reworks
required, and the total number of lines changed
➢ Some studies compensate by dropping
unreasonable data points and assuming
informal out-of-tool review
15. Why no clear pattern?
➢ “Production pressure”
➢ “Lack of review guidance”
➢ Hard to justify time
➢ Just give up on large reviews
➢ Hard to delay 1000 lines of new code for the
~3 days (~300 lines/h, 3 x 1-hour sittings,
max 1 hour/day) it should take to review them effectively
16. What outcome would we like?
➢ Smaller reviews should merge faster
➢ Larger reviews should merge slower
17. Race *with* the machine
➢ Guidance on how much time to spend reviewing
➢ Remind author to do pre-review
➢ Up-front determination of reviewers
➢ Links to review checklists
➢ Load balance reviewers
➢ Help developers justify time investment
➢ Automatically add reviewers
➢ Automatically add developer as pre-reviewer
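As one concrete illustration of the "automatically add reviewers" item, a minimal Python sketch against Gerrit's standard REST endpoint for adding reviewers (this is not the talk's plugin code; the server URL, credentials, change ID and file path are placeholders, and authentication details depend on your Gerrit setup):

import subprocess
import requests

GERRIT_URL = "https://gerrit.example.com"   # placeholder
AUTH = ("jenkins-bot", "http-password")     # placeholder Gerrit HTTP credentials

def recent_authors(path, limit=20):
    """Reviewer candidates: authors who recently touched the file, via git log."""
    out = subprocess.run(
        ["git", "log", "-%d" % limit, "--format=%ae", "--", path],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())

def add_reviewer(change_id, email):
    """Add a reviewer to a change via POST /a/changes/{id}/reviewers."""
    resp = requests.post(
        "%s/a/changes/%s/reviewers" % (GERRIT_URL, change_id),
        json={"reviewer": email},
        auth=AUTH,
    )
    resp.raise_for_status()

# Placeholder usage: invite whoever last worked on each touched file.
change_id = "myproject~master~I8473b95934b5732ac55d26311a706c9c2bde9940"
for path in ["src/main/java/Foo.java"]:
    for email in recent_authors(path):
        add_reviewer(change_id, email)

Load balancing and pre-review reminders would sit on top of the same API calls.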
22. Further work
➢ Also consider previous reviewers, not only
previous developers
➢ Make language and domain specific review
checklists easily accessible
➢ Weigh and score commit message size
against the number of files/lines changed
➢ Inline comments from warnings and static
checkers
➢ OO review expansion, i.e. “you also need to
look at these 3 unchanged files”
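For the commit-message weighting item above, a possible shape of the heuristic in Python (the formula and weights are invented for illustration; nothing here is measured or prescribed by the talk):

def commit_message_score(message, files_changed, lines_changed):
    """Hypothetical ratio of message size to change size; values far below
    1.0 flag large changes with terse messages for reviewer attention."""
    words = len(message.split())
    expected = 10 + 2 * files_changed + 0.05 * lines_changed
    return words / expected

# A 3-word message on a 15-file, 800-line change scores ~0.04;
# an 80-word message on the same change scores ~1.0.
print(round(commit_message_score("Fix the bug", 15, 800), 2))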
23. Call for Sponsors and Test Pilots
➢ Try it out in your own Gerrit Review / Jenkins
environment
➢ Sponsor development into a full-blown,
feature-rich, configurable “Jenkins
ReviewBuddy” plugin
➢ Catch me in the break