Every developer should have a basic knowledge of the networking concepts an application uses to interact with the outside world. Back in the heyday of IIS, a developer had to understand how its components worked together to expose an application to the Internet. Today's modern apps use Docker to run ASP.NET Core applications, and while there are differences, the underlying networking paradigms aren't all that foreign.
This post explores how ASP.NET Core and Docker work together to enable a containerized application to receive and respond to requests. The main concepts are summarized in the diagram below:
The left-most box in the diagram represents the Dockerfile that tells Docker how to build an image that contains the ASP.NET Core application. Visual Studio’s default Dockerfile prepares a few network-related pieces that may not be obvious at first glance. Here are the first two lines of the Dockerfile:
```dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
EXPOSE 80
...
```
The first line hides a detail a few levels deep. Every Dockerfile inherits from a parent image, until the top level of the hierarchy is reached. Each of those parent images contains instructions that together provide the application everything it needs to run, from runtimes, to operating systems, and even environment variables. Speaking of which, one of those parent images has the following instruction:
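For the 3.1 images, that instruction sets the ASPNETCORE_URLS environment variable. It looks something like this (the exact layer it lives in may differ, but the effect is the same):

```dockerfile
# Set in one of the parent images of mcr.microsoft.com/dotnet/core/aspnet:3.1:
# tells ASP.NET Core to bind HTTP on port 80, on any network interface
ENV ASPNETCORE_URLS=http://+:80
```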
By default, ASP.NET Core listens on port 5000 (for now, let’s ignore HTTPS, I’ll cover that in a separate post). That behaviour can be overridden by setting the ASPNETCORE_URLS environment variable, which is what the ENV instruction does. So the ASP.NET Core application will instead listen on port 80 for incoming requests when it starts running.
That somewhat explains the second line of our application's Dockerfile. The EXPOSE instruction tells anyone looking at the Dockerfile that the application expects to listen on port 80. The instruction doesn't open any ports itself; it's there as metadata to indicate that the application is somehow configured to listen on that port.
You can run the application on some other port by overriding the environment variable with your own ENV instruction. In 99% of cases though, the default of port 80 will work fine.
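If you do need a different port, the override is a one-line change in your own Dockerfile. A minimal sketch (the port 8080 choice here is just an example):

```dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
# Override the inherited ASPNETCORE_URLS so the app listens on 8080 instead of 80
ENV ASPNETCORE_URLS=http://+:8080
# Update the metadata to match the new listening port
EXPOSE 8080
```

Just remember that any port mappings you create later need to reference the new container port.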
Build & Run The Image
The Dockerfile tells the docker build command how to create an image of the application we want to run. For more detail on that step, I encourage you to read my previous post, which walks through the entire build process. With the image built, it's time to run it in a container.

The docker run command is where we pick up the networking story. The most basic execution of an ASP.NET Core container looks something like this:
```shell
docker run -p 32767:80 imagetorun
```
-p 32767:80 is what's called a port mapping. It maps port 80 in the container to port 32767 on the host, the machine running Docker. On a Windows machine (pre-WSL 2), that's the VM that gets created in Hyper-V. On Linux, it's the physical machine itself. In essence, the port mapping allows traffic to flow from the host into, and back out of, the container.
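You can confirm the mappings of a running container with the docker port command. For a container started with the mapping above, the output looks something like this (the container name is hypothetical):

```shell
# List the port mappings for a running container
docker port mycontainer
# 80/tcp -> 0.0.0.0:32767
```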
The host port number is assigned by Visual Studio when it starts a container from the Debug→Start Debugging menu. Ports assigned this way are fairly easy to identify, since they always seem to fall in the thirty-two-thousand range.
As we saw, the ASP.NET Core application is listening on port 80 in the container, and can process any requests that are sent to it. Here is what the typical request-response flow looks like:
- Send a request to host port 32767 from a tool like cURL or Swagger UI.
- The host receives the request on port 32767 and forwards it to port 80 inside the container.
- The controller route is matched as normal, any business logic is executed, and a response is sent back over port 80.
- Docker takes the response from the container and sends it back out through port 32767 on the host to the caller.
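The steps above can be exercised from the host's command line. A sketch, assuming the container was started with the mapping shown earlier and the app is the default Web API template (whose WeatherForecast controller is the assumption here):

```shell
# Request hits host port 32767, Docker forwards it to port 80 in the container,
# and the JSON response comes back the same way
curl http://localhost:32767/weatherforecast
```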
By now you should have an understanding of how the traffic from your local machine gets to your application and back when running inside a Docker container.
What we looked at was focused on running containers locally, but it applies just as well to containers running in the cloud. There, containers typically run in some kind of orchestrator, like Azure Container Instances or Kubernetes, which bring a whole other set of networking concepts to master.