This article continues the testing layers series, covering in detail a proposal on how to test a modern cloud application. The previous post in the series focused on the first and largest of the testing layers: unit tests. The next layer up, acceptance tests, validates the cloud app in a way that is just as important as unit tests.
These types of tests don’t get nearly the attention that unit tests do and often end up as afterthoughts. But they should be part of any project that wants to achieve Continuous Delivery or Deployment. Without further ado, here’s the what and how of acceptance tests in .NET Core 2.
All code samples below can be found in more detail here.
What Are Acceptance Tests?
We’ll start by defining what an Acceptance Test is because there is inevitably more than one term and definition for everything in software development. You’ve probably seen these types of tests called integration tests. Although they do test the application in an integrated way, the requirements they satisfy go well beyond a normal integration test. They are also commonly known as Contract Tests.
The purpose of Acceptance Tests is to:
- Verify the requirements of the system are satisfied, typically based on acceptance criteria in an Agile environment.
- Ensure that the application fulfills those requirements once deployed, whether that's to development, staging, or production.
- Ensure that no breaking changes are introduced to the customer-facing contract.
There should be as many acceptance tests as are needed to fully validate that the deployed cloud application is running as expected.
Most cloud applications today get deployed in a semi-automated way (typically triggered via a tool like Octopus Deploy) followed by a team member performing sanity checks to ensure that everything is working correctly. This approach is slowly changing as more organizations adopt Continuous Delivery and Deployment. Deployments in both these models happen on a highly frequent basis (sometimes even on every commit), so you need a quick, thorough, and efficient way to validate the deployments from a business and operational perspective. Acceptance tests provide you with all this.
In other words, automated acceptance tests enable us to build faster and safer with the confidence that our deployed application is running as expected.
How To Write Acceptance Tests
You can find the full source code for the examples below in the Supermarket application source code, which can be found here.
The first step to writing good acceptance tests is to have them live alongside the production code they are meant to test. To that end, the Supermarket sample splits the production code from the test code via the src and test folders. The test folder is then further subdivided into unit and deployment folders to make the delineation as clear as possible.
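Concretely, the layout looks something like this (every project name other than Supermarket.Deployment.Tests is illustrative, not taken from the sample):

```
src/
  Supermarket.API/                  # production WebAPI code
test/
  unit/
    Supermarket.Tests/              # unit tests
  deployment/
    Supermarket.Deployment.Tests/   # acceptance tests
```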
Acceptance tests should test the behaviours that the cloud application exposes. That means testing happy paths and the most common edge cases. The tests should be exhaustive enough that a successful run makes you confident the code is running as expected.
Let’s look at what an acceptance test looks like. There are a total of 5 tests in the Acceptance folder of the Supermarket.Deployment.Tests project. These tests validate that:
- A request with missing parameters is not accepted
- A valid postal code results in a successful response
- The edge cases of tax calculation are handled correctly
Each test has the same basic structure, similar to that of a unit test. The test is set up by retrieving an access token for the environment on which it is running. The request is then sent to the server and the response is checked for the correct result. The needed details, such as the URL and credentials to use, should be provided to the tests via environment variables.
```csharp
[Theory]
[InlineData("J4Z 2B9")]
[InlineData("H3Y 0J3")]
public async Task GivenPostalCode_ThenHasCorrectResponseItems(string postalCode)
{
    var token = AuthenticationHelper.GetToken(_clientApiUrl);
    var response = await HttpRequestFactory.Post(
        _clientApiUrl, _checkoutResourceUrlSegment, new { postalCode });

    dynamic responseContent = response.ContentAsDynamic();
    Assert.NotNull(responseContent.preTax);
    Assert.NotNull(responseContent.postTax);
}
```
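As a minimal sketch of the environment-variable wiring, a test fixture might read its target URL and credentials like this (the variable and property names here are assumptions for illustration, not the sample's actual code):

```csharp
using System;

// Hypothetical fixture that pulls deployment-specific settings from
// environment variables set by the CI/CD server before the test run.
public class AcceptanceTestFixture
{
    public string ClientApiUrl { get; }
    public string ClientId { get; }
    public string ClientSecret { get; }

    public AcceptanceTestFixture()
    {
        ClientApiUrl = GetRequired("CLIENT_API_URL");
        ClientId = GetRequired("API_CLIENT_ID");
        ClientSecret = GetRequired("API_CLIENT_SECRET");
    }

    // Fail fast with a clear message if the pipeline forgot to set a variable.
    private static string GetRequired(string name) =>
        Environment.GetEnvironmentVariable(name)
        ?? throw new InvalidOperationException(
            $"Environment variable '{name}' is not set.");
}
```

Failing fast on missing configuration keeps a misconfigured pipeline from masquerading as a failed deployment.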
And that’s all there is to it. Having these tests run successfully on every deployment tells us that the application is in an expected state and can act as a “first alert” when something goes wrong with a deployment.
Some of the code in the acceptance tests layer was sourced from elsewhere on the web. I used Tahir Naushad’s HttpClientFactory implementation to perform the API calls and the implementation of token-based authentication in .NET Core is based on a blog post by Anuraj from dotnetthoughts.net. Thanks to both of you!
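For readers without the sample handy, a request-building helper in the spirit of that HttpClientFactory approach might look roughly like this (the class name and signature are assumptions, not the sample's actual code):

```csharp
using System.Net.Http;
using System.Text;

// Hypothetical helper that assembles a JSON POST request; sending it
// (and attaching the bearer token) is left to the caller.
public static class RequestBuilder
{
    public static HttpRequestMessage CreateJsonPost(
        string baseUrl, string resourceSegment, string jsonBody)
    {
        return new HttpRequestMessage(HttpMethod.Post, baseUrl + resourceSegment)
        {
            Content = new StringContent(jsonBody, Encoding.UTF8, "application/json")
        };
    }
}
```

Keeping request construction in one place is what lets each acceptance test stay a few lines long.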
Using Visual Studio or Code To Write Acceptance Tests
A cloud application should be in a working state by the time you’re writing acceptance tests. It should already be thoroughly unit tested, and some manual testing of the application should have taken place.
You can run acceptance tests locally, albeit with a small caveat: you can’t debug both the tests and the production code because Visual Studio doesn’t support launching the WebApi and running the tests simultaneously. Follow these steps to run the tests locally:
- Build the WebAPI project in Visual Studio
- Open a command prompt and dotnet run the WebAPI project
- Execute the acceptance tests from Visual Studio
Another approach could be to have two separate SLN files at the root of the project: one for the production code, the other for the acceptance tests project. This would allow you to have two separate instances of Visual Studio running, letting you debug both the acceptance tests and the WebApi project simultaneously.
Use test playlists in Visual Studio to configure which tests to run together. You can set up “Unit Tests” and “Acceptance Tests” playlists, which allow you to easily run one or the other set of tests. For those who are Visual Studio Code inclined, I’d suggest creating some tasks that run dotnet test with filter attributes to run the test sets.
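For example, assuming the tests are tagged with categories (MSTest's [TestCategory] attribute, filtered via TestCategory=…; xUnit traits would use Category=… instead), a tasks.json along these lines gives you one task per test set (labels and category names are illustrative):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Run unit tests",
      "type": "process",
      "command": "dotnet",
      "args": ["test", "--filter", "TestCategory=Unit"]
    },
    {
      "label": "Run acceptance tests",
      "type": "process",
      "command": "dotnet",
      "args": ["test", "--filter", "TestCategory=Acceptance"]
    }
  ]
}
```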
The real power of acceptance tests is running them against deployed environments. The image below shows how a CI/CD server executes the tests as the application is deployed across environments, ensuring that everything is running as expected from feature development to production.
The Big Picture
Acceptance tests need to run on every environment to which you deploy the application. The best way to ensure that happens is to integrate them as part of your CI/CD pipeline. When a developer commits code, it gets compiled, unit tested, deployed, and then acceptance tested. Rinse and repeat for every environment that the code gets deployed to.
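As a sketch, that compile, unit test, deploy, acceptance test sequence might look like this in a pipeline definition (Azure Pipelines syntax here; the stage names, deploy script, URL, and category filters are all illustrative assumptions):

```yaml
stages:
  - stage: Build
    jobs:
      - job: BuildAndUnitTest
        steps:
          - script: dotnet build
          - script: dotnet test --filter "TestCategory=Unit"

  - stage: DeployStaging
    dependsOn: Build
    jobs:
      - job: DeployAndVerify
        steps:
          # Stand-in for your deployment tool (e.g. Octopus Deploy)
          - script: ./deploy.sh staging
          # Acceptance tests pick up their target from environment variables
          - script: dotnet test test/deployment --filter "TestCategory=Acceptance"
            env:
              CLIENT_API_URL: https://staging.example.test
```

Repeating the deploy-then-verify pair per environment is what turns the acceptance suite into the “first alert” described above.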
Above all else, the tests need to be reliable. They need to run consistently, over and over, with few false positives. This may seem obvious but is really hard to achieve. Network issues, downstream service failures, and configuration problems can result in all sorts of false positives. Spend the time needed to fix any brittle tests or you’ll quickly lose the value that they provide.
Acceptance tests should not change once they are stable. It’s possible to add on to them but a change to an existing acceptance test implies introducing a breaking change. Without even trying, they protect you from breaking downstream clients.
Writing acceptance tests in this way ensures that your application is deployed successfully, fulfills its requirements, and that you don’t accidentally break a contract. Most importantly, it will enable you to build higher quality cloud applications.
What library do you use to write tests?
I try to keep the tests as plain as possible. MsTest gives me all that I need from a testing perspective. I haven’t used any frameworks for the acceptance tests themselves, but I have created a few custom helper classes to simplify API calls.
Nice post, have seen this approach before – the issue is, those are pretty weak assertions, i.e. you’re not doing much verification there, and that’s an indicator that you’ve not got much *controllability* over the back end. “False negatives” is an issue here. What will make the test fail? Only a complete mis-mapping. If you swap pre- and post- tax values around accidentally (a common mistake), those assertions will still pass, and so it doesn’t give any confidence that the application is working correctly.
In general, I’d suggest that an acceptance test should be pinned back to acceptance criteria and examples in user stories, see techniques like Example Mapping https://cucumber.io/blog/example-mapping-introduction/
Doing that will cause you to develop the appropriate control / observe functions – this might be something like see a __testrequest__ header and then pass that value to a stub repo (Netflix do something along those lines, e.g. to inject canned responses at any point down their call graph). So, you might add a non-existent postcode to a DB, perhaps, that always returns a known value, e.g. to an expected level of rounding, etc.
Also, are you missing a layer of integration tests? See L2 tests here – https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/shift-left-make-testing-fast-reliable. If you are doing verification there, then sure, on a post-deploy test you’d do a litmus test that checks the values are not null
End-to-end tests (e.g. verifying user journeys) would also happen post-deploy – so you’d do that probably before you would run performance / security tests. You then have synthetic transactions and journeys in production – also dependent upon the ability to seed repeatable data.
More types of testing discussed here – https://queue.acm.org/detail.cfm?id=2889274
Hi Ken, thanks for the in-depth response! You’re totally right, the assertions are fairly weak. My goal was to demonstrate the technique rather than the exact content of the tests. Mapping the assertions to the acceptance criteria is definitely one of the best ways to approach it, and had I spent a bit more time on the post, that’s what I would have done. I’ll add a note in the post for anyone else who happens to come across it.
Cheers,
Marc