This post explores the ins and outs of building AWS Lambda serverless functions with .NET Core. The focus is on the programming model, the particularities of the .NET runtime, and tooling.
I mention static code analysis tools on a regular basis. They're an integral part of a well-oiled code review process that ultimately brings value to the product. Let's take a deep dive to understand these tools better, explore their limitations, and see what we can do to work around them.
You've just finished putting the final touches on the feature you've been working on for the last day, and you're ready to send a code review off to your colleagues. The next question you may ask yourself is who to include on that list of reviewers. The first instinct is to go with people you're familiar with, who may be less critical of your code. Unfortunately, that's not in the best interest of the code, of the organization that owns it, or of you. A code review needs to find defects to bring value, and it won't do that if you don't involve the best defect finders on your team.
About three months ago I was looking to expand my current playlist of software development podcasts. A quick search turned up a few lists of top development podcasts. One on the Simple Programmer website caught my eye; it was put together better than the others. I tried a few different podcasts from that list, but the one that stuck was from the Simple Programmer himself, John Sonmez. I'd heard John's name come up on a few other podcasts and had been meaning to check out his YouTube channel and podcast for a while.
An issue that I haven't seen addressed in depth is whether you should review every commit to your code base or only aim to cover the critical parts of your application. As usual, there isn't a single answer to the question; both approaches have advantages and disadvantages. The answer will often depend on your team's context, so let's dive right in and see when to prefer one approach over the other.
There are two major schools of thought on how best to perform a code review. The first and simplest strategy is to sit down with a colleague, go through the code together, and discuss the finer points of programming. The other, less interactive, way is to use one of the many tools available on the market to facilitate the discussion. But what is the best way to maximize value, improve the development skills of your team, and promote collaboration? As with most things, the answer lies somewhere in the middle.
We've all had to do code reviews on bits of code we weren't very familiar with. Whether it's a part of the application you haven't worked on before or you're a new hire looking at the code base for the first time, it's never obvious what to look for, how to determine whether the implementation details are bug-free, or how in-depth you should go. The natural instinct, at least for me, is to gloss over the higher-level concepts and look only for obvious mistakes in the implementation details. So how do you get up to speed quickly, figure out what needs attention and what doesn't, and find potential issues with the implementation? As it turns out, a lot of it depends on what the person creating the code review makes available to you as the reviewer.