Principles of High-Quality .NET Cloud Apps (Part 2)

This post is part of a series on building high-quality .NET cloud apps.

Delivering a cloud application on time while keeping a high standard of quality is a classic problem in software engineering. There’s no one-size-fits-all solution, but there are steps that can be taken to make sure that every cloud application is designed, built, tested, and delivered on time, to a standard of quality that the development team can be proud of.

This is the second post exploring the principles that I’ve found help achieve those goals. The principles are as follows:

  • Planning Principles
    • Product Vision
    • Technology Vision
    • Developers and Processes
  • Development Principles
    • Code Patterns and Practices
    • Effective use of data storage
    • A thorough testing strategy
  • Operational Principles
    • Frequent delivery
    • Monitoring and logging
    • Cost efficiency

The first article in the series described how the product vision, technology vision, and developers and processes form the guiding principles upon which the development and operational goals are modeled. This post focuses on the Development Principles and the final post in the series addresses Operational Principles.

Keep in mind that these are guiding principles, and not rules that must be followed to the letter. They provide direction to a project, ensure it stays on a path to sustainability, and provide business value as frequently as possible. If at this point you’re wondering why these types of principles are needed, I encourage you to read my article explaining why quality is important in a software development project.

Patterns and Practices

Patterns and practices go far beyond a coding standard on how to name variables, methods, and classes. They extend to all sorts of things: application structure and layout, .NET version and tooling upgrades, retry logic on failures, error handling, security, and much more. Patterns and practices can be seen as the implementation details of the technology vision that was set out.

Here are some things that should be looked at from an application structure and layout perspective:

  • Every cloud application should have a standard layout for its projects, folders, and namespaces, based on the software architecture of choice. Two popular architectures in .NET are n-tier and onion (also known as ports and adapters, hexagonal, or clean architecture, depending on who you talk to).
  • The SOLID principles should be applied to all software architectures so that the application can be built upon and extended as needed.
  • Design patterns should be used to solve common problems. Be careful here, it’s easy to go overboard applying design patterns; let them express themselves naturally instead.
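As a small illustration of the SOLID principles in an onion-style layout, consider dependency inversion: the application core defines an abstraction, and the outer layer supplies the implementation. All the names here (`INotificationSender`, `EmailSender`, `OrderService`) are hypothetical, chosen only for the sketch:

```csharp
using System;
using System.Threading.Tasks;

// Abstraction defined in the application core.
public interface INotificationSender
{
    Task SendAsync(string recipient, string message);
}

// Outer-layer adapter; the core never references it directly.
public class EmailSender : INotificationSender
{
    public Task SendAsync(string recipient, string message)
    {
        Console.WriteLine($"Emailing {recipient}: {message}");
        return Task.CompletedTask;
    }
}

// The core service depends only on the abstraction, so the transport
// can be swapped (SMS, push, a test double) without touching it.
public class OrderService
{
    private readonly INotificationSender _notifications;
    public OrderService(INotificationSender notifications) => _notifications = notifications;

    public Task ConfirmOrderAsync(string customer) =>
        _notifications.SendAsync(customer, "Your order is confirmed.");
}
```

Because `OrderService` never names a concrete sender, the dependency arrow points inward, which is the defining trait of the onion architecture.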

With the basics covered, it’s time to look at putting in place a proper version control system and branching strategy, whether it be GitFlow, GitHub Flow, Trunk-Based Development, or anything else. In a similar vein, it’s important to have a strategy for handling the development of multiple concurrent features, such as feature branching, feature flags, or Trunk-Based Development.
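To make the feature-flag option concrete, here is a minimal in-memory flag store. It is a sketch only; a real project would more likely use a library or service such as Microsoft.FeatureManagement or LaunchDarkly, and the flag names below are invented for the example:

```csharp
using System.Collections.Generic;

// Minimal in-memory feature-flag store, for illustration only.
public class FeatureFlags
{
    private readonly Dictionary<string, bool> _flags;
    public FeatureFlags(Dictionary<string, bool> flags) => _flags = flags;

    // Unknown flags default to off, so half-finished features stay hidden
    // even when their code has already been merged to the main branch.
    public bool IsEnabled(string name) =>
        _flags.TryGetValue(name, out var enabled) && enabled;
}
```

With a store like this, an unfinished feature can live behind `if (flags.IsEnabled("new-checkout"))` on the main branch, which is what makes Trunk-Based Development workable without long-lived feature branches.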

There are a few system-wide elements that should be looked at to keep all components aligned on a core set of ideas:

  • Decide on the version of .NET that will be used and when upgrades should be done. How many major/minor versions are you willing to wait before upgrading? The longer you wait, the more painful the upgrade will be.
  • Is it worthwhile to build a common library that is re-used across each cloud application to implement shared code?
  • Is there a need for a global error handler that can be applied to all applications?
  • Decide if NuGet dependencies should be updated as often as possible or only when a full regression test of the system is planned.
  • When should retry, circuit-breaker and other similar patterns be implemented?
  • Are there aspects of the system that need to be optimized for performance, security, or support reasons? Should performance be taken into account now or later?

This list is meant as a guide only and isn’t exhaustive by any means. These guidelines align the team on its core principles so that every part of the system, while unique, has a common base that makes it feel similar to the other components.

Effective use of data storage

Not so long ago, almost every data storage decision came down to a choice between Oracle or SQL Server. But the last ten years have seen a proliferation of data storage options. Document databases, blob storage, and graph databases are all on their way to becoming commonplace, even in the Microsoft stack, where SQL Server is traditionally king.

Four of the major types of data storage in use today are:

  • Relational databases, the de facto standard. This is the type of storage most developers are familiar with. It’s highly versatile and can be used in a wide variety of use cases. There are many RDBMSs out there, but the most popular with .NET are SQL Server, PostgreSQL, and MySQL.
  • NoSQL and non-relational databases like Azure Cosmos DB or AWS DynamoDB are used to store documents that have few relationships between them. The popular search engine Elasticsearch is also a non-relational database at its core and can act as a normal data store.
  • Storage of binary or “flat” data should be left to AWS S3 or Azure Blob Storage. It’s also possible to use this type of storage for reading and writing the simplest of JSON documents.
  • Graph databases are best at modeling highly associative data between objects. Such a database is structured much like a graph in graph theory, with vertices and edges.

Use a hosted data storage service to get going more quickly. Products like Cosmos DB, DynamoDB, and Azure SQL Database let a development team focus on the business problems at hand instead of building the infrastructure needed to host the data. Look at self-hosting only when the needs grow beyond what is feasible or affordable with a managed service.

Finally, a team needs to decide how they will access the data. Should direct queries to a database engine be written into a data layer of the application, or should an ORM or fluent library be used to make interfacing with the data simpler? Regardless of the choice, it’s best to stick to a single approach throughout. It standardizes how data is accessed for anyone working on the code base, not to mention that mixing different approaches is a recipe for race conditions and deadlocks.
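One common way to enforce a single data-access approach is to put a narrow repository abstraction in front of the store, so every caller uses the same pattern regardless of what sits behind it. The `Customer` type and repository names below are hypothetical:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public record Customer(int Id, string Name);

// The one way the rest of the code base reads and writes customers.
public interface ICustomerRepository
{
    Task<Customer?> GetByIdAsync(int id);
    Task AddAsync(Customer customer);
}

// In-memory implementation for the sketch; a real one might wrap
// EF Core, Dapper, or hand-written SQL, but callers never see which.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<int, Customer> _store = new();

    public Task<Customer?> GetByIdAsync(int id) =>
        Task.FromResult(_store.TryGetValue(id, out var c) ? c : null);

    public Task AddAsync(Customer customer)
    {
        _store[customer.Id] = customer;
        return Task.CompletedTask;
    }
}
```

Whether the team picks raw queries, an ORM, or a micro-ORM, funneling access through one interface like this keeps the choice consistent and swappable.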

Testing Strategies

An automated testing strategy is a necessity to deliver quickly, reliably, and frequently. Unit tests lay the groundwork for such a testing strategy by running each part of the code in isolation. But unit tests aren’t sufficient to say with certainty that the application, once deployed to production, is running properly. You need functional tests for that, and that’s where the layers of testing a cloud application come in.

I’ve written an entire series of articles on layered testing that explains in detail how to write each type of test for a .NET cloud application:

  • Unit Tests validate that each class runs as expected in isolation from all of its dependencies.
  • Acceptance Tests verify that the requirements of the application are satisfied, validate that the application fulfills those requirements once deployed, and ensure that no breaking changes are introduced.
  • External Contract Tests act as an early warning system for breaking changes to a dependency that you don’t control.
  • Security Tests check that no obvious code exploits are present within the application.
  • Load and Performance Tests ensure that the application can keep up under heavy load.
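At the base of that pyramid, a unit test exercises one behavior of one class in isolation. Here is what that shape looks like with xUnit (assuming the xUnit NuGet package; the `PriceCalculator` under test is invented for the example):

```csharp
using Xunit;

// Hypothetical class under test.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent) =>
        price - price * percent / 100m;
}

public class PriceCalculatorTests
{
    // Each test is isolated, fast, and checks a single behavior.
    [Fact]
    public void ApplyDiscount_ReducesPriceByPercentage()
    {
        var calc = new PriceCalculator();
        Assert.Equal(90m, calc.ApplyDiscount(100m, 10m));
    }

    [Fact]
    public void ApplyDiscount_ZeroPercent_LeavesPriceUnchanged()
    {
        var calc = new PriceCalculator();
        Assert.Equal(100m, calc.ApplyDiscount(100m, 0m));
    }
}
```

The higher layers (acceptance, contract, security, load) keep this same "arrange, act, assert" shape but swap the class under test for a deployed endpoint or dependency.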

A majority of the above tests should be run every time the solution is built and deployed. A mix of happy paths and common failure paths tells you with much greater certainty that the cloud application is deployed, runs correctly, and is protected against contract changes, traffic spikes, and basic security flaws.

Writing these types of tests is easy, but getting them to run without false positives is much harder. It’s not uncommon to spend 80% of your time getting the last 20% of your tests to run reliably. Aggressively eliminate random test failures; otherwise they will spread quickly and make the entire test suite brittle.

Looking To Operational Principles

These principles were focused mostly on the developers and test engineers who build the product itself on a daily basis. The next set of principles is instead focused on delivering and monitoring a cloud application on a continuous basis, and is more applicable to Cloud and DevOps Engineers.
