I mention static code analysis tools on a regular basis. They’re an integral part of a well-oiled code review process and ultimately bring value to the product. Let’s do a deep dive to understand these tools better: what they offer, where they fall short, and how to work around those limits.
What’s so great about them?
The layout and format of the code are kept consistent across developers. Whether the team is working on a small or a large product, consistency across the codebase makes the application feel like a whole rather than the sum of its parts.
The number of raised defects related to coding standards drops significantly. Formatting and style comments generate so much noise that they make it difficult to spot other bugs. Eliminating these comments goes a long way towards helping us focus on the highest-value defects, not just the most obvious ones.
Enforceable via the build process. Gone are the days when a single developer could wreak havoc on the team’s coding standards. If a rule is broken, the build fails, alarms go off, automated emails get delivered, and everyone immediately knows a fix is needed to uphold the standards.
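As a sketch of what this looks like in practice, a hypothetical CI pipeline might gate the build on the analyzer’s exit code. The step names and commands here are illustrative and not tied to any particular CI product:

```yaml
# Hypothetical pipeline definition: the lint step runs first, and any
# violation makes the analyzer exit non-zero, failing the whole job.
steps:
  - name: lint
    run: npx tslint --project .   # or StyleCop, Checkstyle, etc.
  - name: build
    run: npm run build            # never runs if linting failed
```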
What’s not so great about them?
False positives stick out like a sore thumb. We’ve all seen code peppered with ‘exception rules’ so that a validation rule is ignored for a particular case. Always take the time to adjust the rules engine so that exceptions throughout the codebase are kept to a minimum.
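Most analyzers let you silence a single occurrence instead of weakening the rule globally. As an illustration, here is TSLint-style comment suppression; the rule (`no-bitwise`) is a standard TSLint rule, while the variable and flag values are made up for the example:

```javascript
// Targeted suppression: the no-bitwise rule stays enabled project-wide,
// and only the line below is exempted, with a comment documenting why.

// tslint:disable-next-line:no-bitwise
const permissions = 0x1 | 0x4; // read + execute flags; bitwise OR is intentional

console.log(permissions); // prints 5
```

Keeping suppressions inline and rule-specific makes each exception visible during review, which is what keeps them rare.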
Treating the rules as errors is painful. There’s a delicate balance to strike here. If you don’t treat them as errors and let the build succeed, no one will ever pay attention to the warnings. If you do treat them as errors, you’re saying the slightest broken rule must be fixed immediately. There is nothing more frustrating than pushing your changes and immediately realizing you forgot to run the analyzer locally, breaking the build.
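Many toolchains expose this choice as a single switch. For example, in a .NET project file this is a standard MSBuild property, shown here as a sketch:

```xml
<!-- Promote all warnings, including analyzer warnings, to build-breaking
     errors. Set to false while the team is still ramping up. -->
<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
```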
It takes time to configure properly. The rules need to be configured and constantly adjusted to your product’s needs. If they don’t evolve with the codebase, at some point you’ll just turn it all off and lose the benefits altogether.
They’re not perfect. I’ve seen cases where the analyzer missed a whole chunk of code because of a mistake in the project configuration. They can only do what you tell them to, and they can’t replace a real human looking at the code.
For .NET languages, the de facto standard has long been StyleCop, often combined with FxCop. They do their job effectively and can be used reliably in old and new projects alike, from ‘traditional’ .NET Framework through to .NET Core.
Pre-Visual Studio 2015: https://github.com/StyleCop/StyleCop
Post-Visual Studio 2015: https://github.com/DotNetAnalyzers/StyleCopAnalyzers
An extremely popular Visual Studio extension, ReSharper, can also be configured locally and on a build server to analyze your sources. Although ReSharper costs more, it also feels like a more polished and more easily configurable product. Here’s the link to configuring it to run on TeamCity.
For JavaScript, a good place to start is JSLint. It does not allow configuration, so you’re at the mercy of the author’s ruleset. A similar product is StandardJS, which also prevents you from playing with its pre-defined rules. Start with one of these two, and move on to something more advanced once you’ve proven they don’t do the job.
For TypeScript, the only real option is TSLint. There are a few pre-configured rulesets on GitHub to get you started. The default rules can be found here. There’s also a fairly popular independent alternative, which you can find here.
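To make the ‘pre-configured ruleset plus tweaks’ idea concrete, a minimal tslint.json might extend the recommended set and override a few rules. The rule names below are standard TSLint rules, but the specific overrides are illustrative:

```json
{
  "extends": "tslint:recommended",
  "rules": {
    "quotemark": [true, "single"],
    "max-line-length": [true, 120],
    "no-console": false
  }
}
```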
For those of you tied to the lingua franca of the enterprise development space, the unchallenged leader in static analysis for Java is Checkstyle. It supports the Google and Sun coding standards out of the box, which should be more than sufficient as a starting point for any development team. As with all things in the Java world, it’s also highly configurable should you need to tweak its behaviour.
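A Checkstyle configuration is an XML file of nested modules. As a minimal sketch, both checks shown below are standard Checkstyle modules, though the limit chosen is illustrative:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
  "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
  "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <!-- Flag methods longer than 60 lines -->
    <module name="MethodLength">
      <property name="max" value="60"/>
    </module>
    <!-- Enforce UPPER_CASE naming for constants -->
    <module name="ConstantName"/>
  </module>
</module>
```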
What makes a good style checker?
The following guidelines will help you choose a code analyzer if your language of choice isn’t listed above.
It should come with a sensible set of defaults, sufficient to get your team started. You’ll likely get 80% of the rules you want just by adopting the tool, leaving 20% that can (hopefully) be tweaked to your specifications.
It should be highly configurable. It won’t take long before you notice a rule that keeps getting in the way, or that isn’t relevant to your team. Being able to tweak the configuration goes a long way towards keeping the tool useful and productive in your code review process.
It’s easy to integrate into the build pipeline. This is crucial to ensure everyone follows the rules set forth by the team. It’s too easy to forget to run the tool locally, and it’s extremely frustrating for another team member to have to go through the codebase fixing the resulting errors. Having it as part of the build saves you from all of this annoyance.
It runs quickly. A long build is an unhappy build, so make sure the analysis increases build time by less than 10%. It should also run quickly locally, to encourage developers to run it often (possibly on every file save).
Using a static code analysis tool isn’t without its downsides, but the benefits of a properly configured tool outweigh them, provided you put in the ongoing effort to keep it up to date. Give it a try on your app and let me know in the comments how it goes for your team!