Developers and testers will often be on the same team, but accomplish drastically different tasks. A developer will typically write code, test the code, and document the feature. A tester will verify that the feature conforms to its requirements and find any bugs. Some of these tasks can happen at the same time whilst others can’t.
Does code review happen before or after QA?
This question inevitably comes up after a team has been working together for a few weeks. There are arguments for both sides, and an alternative is proposed at the end of this section.
Code Review Before QA ensures the code base is as solid as possible before it reaches a tester. Once QA is done, we can therefore say with a degree of certainty that the feature is stable. There’s no need for a final verification pass, since no further changes need to be made to the code.
QA Before Code Review gets the code into a tester’s hands as soon as possible. Early feedback from testers allows developers to track down bugs earlier in the process. Early involvement from testers can also help detect missing functionality or incorrect assumptions. Testers must do a final pass after code review fixes have been applied since the code will have changed, resulting in some duplication of work.
Teams should experiment with both methods and choose what works best for them. As a potential alternative, considering code review as part of the QA process presents some interesting benefits. It implies that code review and QA can be done at the same time. You’re looking for defects when reviewing code, which happens to also be the goal of quality assurance. Code defects can be treated like any other bug, increasing the visibility of required changes. The code review and QA are considered complete when all issues are triaged.
Should code reviewers test?
The answer to this question depends on the thoroughness of the test suite that is running as part of your Continuous Integration process.
It’s unlikely that manual testing will uncover any issues if you have good coverage from unit, integration, and functional tests. Each of those layers addresses a different point of failure: the individual method, the interaction between components, and the behaviour of the deployed system end to end. Put together, they limit the chances of a major issue slipping through the cracks.
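As a rough sketch of how the first two layers differ, consider a discount rule and a cart that composes it. All names here are hypothetical, and the functional layer is only described, since it would need a deployed system to drive:

```python
# A sketch of the testing layers. All names are hypothetical examples.

def apply_discount(price: float, percent: float) -> float:
    """Business rule under test."""
    return round(price * (1 - percent / 100), 2)

class Cart:
    """A second component that composes the rule above."""
    def __init__(self):
        self.items = []

    def add(self, price: float) -> None:
        self.items.append(price)

    def total(self, discount_percent: float = 0.0) -> float:
        return round(sum(apply_discount(p, discount_percent) for p in self.items), 2)

# Unit layer: one method, in isolation.
assert apply_discount(100.0, 10.0) == 90.0

# Integration layer: components working together.
cart = Cart()
cart.add(50.0)
cart.add(50.0)
assert cart.total(discount_percent=10.0) == 90.0

# The functional layer would exercise the deployed system end to end
# (e.g. through its HTTP API); it is omitted here because it needs a
# running service.
```

The value of the layering is that a failure points you to the right place: a broken unit assertion implicates one method, while a broken integration assertion implicates the seam between components.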
A quick sanity check is more important when code coverage is low. The tests should be limited to verifying that the happiest of paths work. The sanity check avoids a situation where the system is handed over to QA and the feature does not work at all. A quality assurance specialist’s job is to find defects within the feature, not to confirm that the feature works in the first place.
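A sanity check of this kind can be as small as a single straight-line pass through the feature. The sketch below uses hypothetical names; it checks only that the main flow works, not that it is free of defects:

```python
# Minimal happy-path smoke test. All names are hypothetical.

def create_user(username: str) -> dict:
    """Stand-in for the feature under test."""
    if not username:
        raise ValueError("username required")
    return {"username": username, "active": True}

def smoke_test() -> bool:
    """One straight-line pass through the feature's main flow.
    No edge cases, no error paths: those are QA's territory."""
    user = create_user("alice")
    return user["active"] and user["username"] == "alice"

assert smoke_test()  # hand the feature over to QA only if this passes
```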
All automated tests (unit, integration, functional) must be reviewed in detail during the code review. The tests should be green, cover all important test cases, and be easy to understand.
Should tests written by Test Engineers be code reviewed?
In many organizations, Test Engineers write their own suite of tests. These are generally functional tests that are not part of the build pipeline; they are either run on demand or as part of a build that is triggered once the “real” build completes.
These tests should be code reviewed as well. Functional tests are often much flakier and benefit from having an extra set of eyes on them. The review is also a good time to validate the assumptions made in the tests. Breakdowns in communication can cause requirements to be misunderstood and implemented differently from the expected behaviour.
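One common source of that flakiness, and exactly the kind of thing a reviewer can catch, is a fixed sleep where the test should instead poll for a condition. A sketch of the polling pattern, with hypothetical names:

```python
import time

def wait_for(condition, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Poll until condition() is truthy or the timeout elapses.
    More robust than a fixed time.sleep(), which a reviewer should flag:
    a sleep that is long enough on one machine may be too short on another."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: a fake asynchronous job that completes after a short delay,
# standing in for whatever the functional test is waiting on.
class FakeJob:
    def __init__(self, done_at: float):
        self._done_at = done_at

    def is_done(self) -> bool:
        return time.monotonic() >= self._done_at

job = FakeJob(done_at=time.monotonic() + 0.3)
assert wait_for(job.is_done, timeout=2.0)  # succeeds once the job completes
```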
Bringing it all together
There is no single answer to the above questions, but a few guidelines have become apparent:
- Have a complete suite of automated tests (unit, integration, functional) that run on every commit.
- Get developers and testers involved in the code review process so that everyone has the same understanding of the code.
- Experiment with the timing of the code review to find what works best for your team.