Building a cloud application has never been easier. Delivering it on time while keeping high standards of quality, on the other hand, has never been easy. The concepts that help us achieve those goals are grounded in nine core principles that I’ve been exploring over the last couple of posts.
The first article in the series describes how the product vision, the technology vision, and the developers and processes form the three guiding principles upon which the remaining six are based.
The second article focuses on development principles that form the core of the application’s implementation through code. They include, but are not limited to, patterns and practices, effective use of data storage, and a thorough testing strategy.
Finally, this article focuses on the operational principles that make an application usable and financially viable in the long term. Those principles are:
- Frequent delivery
- Monitoring and logging
- Cost efficiency
Let’s dig in to gain a better understanding of this last set of principles.
Frequent Delivery
Today’s fast-moving world makes delivering a working product as quickly and frequently as possible a necessity. That holds whether you’re shipping cloud, web, mobile, or desktop applications.
And that’s where Continuous Deployment and Continuous Delivery come into play. The distinction between the two is subtle. Both build upon Continuous Integration by deploying a build artifact onto a development, then staging, then production environment. That’s why it’s often referred to as a CI/CD pipeline: it’s quite literally a pipeline from development all the way to production.
Continuous Deployment has no human intervention. It deploys the build and then runs a series of tests to ensure everything is operating as expected. The artifact is deployed to each subsequent environment through to production unless there is a failure along the way. Continuous Delivery, on the other hand, has a person approving the release before it’s promoted to the next environment in the pipeline.
Achieving Continuous Delivery and Deployment isn’t easy, especially if an application was not originally designed with it in mind. It requires both an investment in automated deployment, and, most importantly, it implies that every cloud application is independent of all others.
The creation of the infrastructure and deployment of the application must be a process that is repeatable once, twice, or a hundred times a day. Both tasks are easy to automate. The creation of the infrastructure, though, is often seen as a nice-to-have. This attitude is starting to change with the rise of Serverless and Containers, but there’s still a long way to go in many organizations.
Keeping every cloud application independent of the others is crucial for a CI/CD pipeline. A lot goes into keeping applications independent from a deployment perspective, too much to cover here; it will be the topic of a separate article.
It almost goes without saying that all infrastructure creation and deployment code is checked in to version control alongside the code it’s meant to deploy. That makes it easy for anyone to find the deployment template, as well as promote the idea that the deployment code and application code are part of the same whole.
Tools of the trade
There are many tools that can help you on the road to CD in .NET Core and .NET Framework. The top choices boil down to Visual Studio Team Services (VSTS), Jenkins, TeamCity and Octopus Deploy. VSTS is a source control, build server, release manager and project management tool all wrapped into one. Alternatively, a combination of GitHub Enterprise, Jenkins or TeamCity, and Octopus Deploy can be used to accomplish the same tasks as VSTS. Keeping tooling to a minimum is always preferable, so VSTS is my one-stop-shop of choice.
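Whichever tool you pick, the pipeline itself boils down to the same stages: restore, build, test, package, publish. The following is a minimal sketch in the VSTS YAML build format; the task names come from the standard VSTS task catalog, but treat the exact inputs as illustrative and check them against your version of the tooling.

```yaml
# Minimal CI sketch: build and test once, then hand a single artifact
# to the release pipeline, which promotes it through each environment.
queue: Hosted VS2017

steps:
- task: DotNetCoreCLI@2          # restore, build, and run the test suite
  inputs:
    command: test
    projects: '**/*Tests.csproj'
- task: DotNetCoreCLI@2          # produce the deployable artifact
  inputs:
    command: publish
    publishWebProjects: true
- task: PublishBuildArtifacts@1  # make the artifact available to releases
  inputs:
    artifactName: drop
```

The important property is that the artifact is built exactly once; every environment in the pipeline receives the same bits.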
Don’t build your own deployment tools unless you’ve reached a point where you have highly specific needs. Products like VSTS and Octopus Deploy can deploy applications to pretty much any kind of hosting or cloud provider. Their extension marketplaces are very active, and in the worst of cases, you can write your own extension to fill any additional needs.
Monitoring and Logging
Log All The Things
It’s practically impossible to write error-free code all of the time. Every application is bound to have bugs, unexpected behavior, and random failures that need to be understood and fixed. Monitoring and logging work together to make diagnosing those issues easier.
Logging consists of writing informational, warning, and error messages that help developers understand what is happening within an application. Err on the side of too much information when writing logs. The cost of writing a message is minimal compared to the time it might take to pinpoint a bug deep within the source code.
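To make that concrete, here is a minimal sketch of erring on the side of too much information, using only the BCL’s `TraceSource` as a stand-in for a full logging library. The source name, event IDs, and order details are made up for illustration.

```csharp
using System;
using System.Diagnostics;

public static class OrderLogging
{
    static readonly TraceSource Log = new TraceSource("OrderService", SourceLevels.All);

    // Build a message that carries the inputs, not just "something happened".
    public static string Describe(int orderId, int itemCount, decimal total) =>
        string.Format("Placing order {0} with {1} items, total {2}", orderId, itemCount, total);

    public static void Main()
    {
        Log.Listeners.Add(new ConsoleTraceListener());

        // Informational: record what the application is about to do, with context.
        Log.TraceEvent(TraceEventType.Information, 1001, Describe(42, 3, 25.50m));
        try
        {
            throw new InvalidOperationException("payment gateway timed out");
        }
        catch (Exception ex)
        {
            // Error: include the full exception, stack trace and all, not a summary of it.
            Log.TraceEvent(TraceEventType.Error, 1002, "Order {0} failed: {1}", 42, ex);
        }
    }
}
```

In a real application you’d swap `TraceSource` for whatever logging library feeds your log store; the habit of capturing the inputs alongside the event is what matters.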
One of the tool sets best suited to this type of work is the Elastic Stack. Elasticsearch’s document-based indexes are flexible enough to log anything imaginable. Kibana, the visualizer for Elasticsearch data, makes it easy to explore and search through the logs in a structured way.
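A structured log entry indexed into Elasticsearch might look like the following. The field names are illustrative rather than required; Elasticsearch will index whatever shape you choose, and `@timestamp` is simply the conventional field Kibana uses for its time filter.

```json
{
  "@timestamp": "2018-05-14T09:30:00Z",
  "level": "Error",
  "application": "orders-api",
  "environment": "production",
  "message": "Order 42 failed: payment gateway timed out",
  "orderId": 42,
  "durationMs": 3012
}
```

Because each entry is a document, filtering in Kibana by `application`, `level`, or even a specific `orderId` becomes a structured query instead of a text search through flat log files.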
Keeping An Eye Out with APMs
Application Performance Monitoring is the process of tracking performance metrics related to the application at a given point in time. APMs track metrics such as response time, throughput, error rate and much more. The data is available in near real-time and lets you see the historical performance of the application as well.
Monitoring helps operations teams detect when something is behaving abnormally within a cloud application. They display what’s going on from the point of entry right down to the database and back up the stack, which makes finding the culprit that much easier.
The development team gains the most from insights provided by APMs. Traced requests can be used to pinpoint bottlenecks within the system well before they become a large enough problem to cause failures or downtime.
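At its simplest, the response-time metric an APM records is a timed wrapper around each request. The `Measure` helper below is a hypothetical, hand-rolled sketch of that idea, not an API from any APM product.

```csharp
using System;
using System.Diagnostics;

public static class Timing
{
    // Run an operation and capture its elapsed time, the way an APM
    // wraps each incoming request to record response time.
    public static (T Result, long ElapsedMs) Measure<T>(Func<T> operation)
    {
        var stopwatch = Stopwatch.StartNew();
        var result = operation();
        stopwatch.Stop();
        return (result, stopwatch.ElapsedMilliseconds);
    }

    public static void Main()
    {
        var (result, elapsedMs) = Measure(() => 21 * 2);
        Console.WriteLine($"Handled request in {elapsedMs} ms, result {result}");
    }
}
```

A real APM goes much further, correlating these timings across every layer of the stack, but each traced request starts from this same simple measurement.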
An APM can also reveal which parts of an application are or aren’t being used. That information can tell you that a deprecated feature is safe to remove from the codebase, or that a feature needs more marketing effort behind it.
Azure and AWS both have their own monitoring tools in AppInsights and X-Ray. AppInsights is a more polished product than X-Ray, most likely because it’s been around for much longer. NewRelic and DataDog are both highly popular options in the APM market for monitoring .NET applications. Each has its pros and cons, and the best fit depends heavily on your needs. My article on monitoring .NET APIs outlines some of the key factors in choosing an APM.
Cost Efficiency
Keeping cloud bills in check is a great way to increase the ROI of a cloud application. Unfortunately, because every project is different, there’s no magic formula for keeping costs down.
Compute Is A Big Spender
The largest chunk of cloud costs comes from compute. Choosing the right hosting for a cloud app, be it Virtual Machines, PaaS, Containers, or Serverless, is the first step to keeping costs under control.
- Don’t run multiple instances of an application unless there is a need for it.
- Don’t run an application across regions unless there is a specific reason to do so.
- Set up auto-scaling, both up and down; it goes a long way toward reducing compute costs.
- Use the smallest machine size possible and up-size only when needed.
- Consider serverless (AWS Lambda or Azure Functions) for very low traffic situations.
Evaluate on a regular basis whether the choices made are still the right ones and adjust accordingly.
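To illustrate the auto-scaling point, a scale rule usually boils down to a metric threshold, an instance range, and a cooldown. The sketch below shows that shape in a simplified, hypothetical format; it is not the exact ARM or CloudFormation schema, so translate it into your provider’s own configuration.

```json
{
  "minInstances": 1,
  "maxInstances": 4,
  "scaleOut": { "metric": "CpuPercentage", "threshold": 70, "addInstances": 1 },
  "scaleIn":  { "metric": "CpuPercentage", "threshold": 30, "removeInstances": 1 },
  "cooldownMinutes": 10
}
```

The scale-in rule matters as much as the scale-out rule: without it, a traffic spike permanently ratchets your instance count, and your bill, upward.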
A Couple Other Cost Hogs To Watch For
Some deployment strategies negatively affect monthly costs if not managed properly. Blue/Green deployment in particular hurts ROI if all instances, even those not serving traffic, stay running at all times.
Keep a close eye on the cost of non-production environments. There should be at least one environment that mimics production, but it should use much smaller instances to reduce costs. Non-production environments also don’t need the same number of instances or the same scaling rules as production; aim for as few redundant instances as possible.
Feature branch environments can also quickly increase monthly cloud costs. There’s little gain in having multiple machines running development builds that barely do any work. Find a way to consolidate feature environments and keep costs low.
Wrapping it all up
And that’s it. I hope these principles can help others guide their .NET cloud apps down a path to higher quality whilst also delivering on time. I’ll continue to write more supporting articles for these nine principles and continue to refine them as time goes on.