12 April 2017

Using Docker to build and deploy .Net Core console applications

Using Docker with .Net Core is initially straightforward, but to get beyond basic image building you will need to handle more than the simple scenarios demonstrated in the quick-start guides.

You also need to bear in mind that in using Docker for Windows you are some way from being a first-class citizen. This is a Linux ecosystem that is being adapted for Windows. That much becomes obvious from the glitches, undocumented workarounds and breaking changes that are part and parcel of building and running Windows containers at the moment.

The basics: building an image

Microsoft provide different types of base image that are optimised for build and production scenarios. This allows you to use Docker as part of your build process so any dependencies are installed in container images rather than a build agent. It also helps to ensure that your production images are as small as possible by excluding any source code or build tools. This makes them easier to manage and quicker to spin up.

This implies a two-stage process to building an image: the first stage builds the application in a specific build image while the second copies the build output to a separate run-time image. This is shown in the example Dockerfile below:

# Create the build environment image
FROM microsoft/dotnet:1.1-sdk as build-env
WORKDIR /app
 
# Copy the project file and restore the dependencies
COPY *.csproj ./
RUN dotnet restore
 
# Copy the remaining source files and build the application
COPY . ./
RUN dotnet publish -c Release -o out
 
# Build the runtime image
FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "ExampleApp.dll"]

This file does several things:

  • It creates a build environment based on Microsoft’s .Net Core SDK image
  • It copies the project file to this environment and restores the packages via a dotnet restore command
  • The remaining project files are copied over and the application is built using a dotnet publish command
  • Finally, it adds the built application to a more compact .Net Core run-time image and sets the entry point for running a container based on this image.

In the interests of keeping all your images as small as possible you’ll want to avoid copying any cruft onto the build image. This can be done by adding a .dockerignore file to the solution folder and copying in the following lines to ensure that your build outputs are ignored:

bin/
obj/

You can create your docker image using the build command as shown below:

docker build . -t example

This will create an image with the name “example” based on the Dockerfile in the current directory (specified by the period at the end of the command). It will take a while to build, particularly on the first run as the base images are downloaded for the first time.

Layered builds and image caching

Once the build is complete you can list all the current images using the following command:

docker image ls

You’ll notice that a lot of intermediate images are created as part of the build process. This is because Docker caches a layer for every COPY and RUN instruction in the Dockerfile to speed up successive builds. If you repeat the build without making any changes you should see “Using cache” messages appearing for the majority of operations.

This is why you only copy the project file before restoring dependencies. If you copy all the files over then external dependencies will be downloaded every time you make a change to any source code files. By layering your build you speed things up so the restore only happens if there are changes to the project file.
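Occasionally you will want to bypass this caching entirely, for example after a base image tag has been updated upstream. Docker’s build command supports a --no-cache flag for exactly this:

```shell
# Rebuild every layer from scratch, ignoring any cached layers
docker build --no-cache -t example .
```

Expect this to be much slower than a cached build, since every restore and publish step runs again.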

Running a container in Docker for Windows

To run the latest version of your image in a container you just invoke a run command, i.e.

docker run example:latest

If you’re working on Docker for Windows then you’ll need to find out the container’s IP address before you can access it directly. The easiest way of doing this is to use the docker inspect command with the container’s identifier as shown below:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' [CONTAINER ID]
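The container identifier can be found by listing the running containers with docker ps. Alternatively, you can sidestep the lookup by naming the container when you start it — in the sketch below example-app is just an illustrative name:

```shell
# Start the container with a known name...
docker run --name example-app example:latest

# ...then inspect it by that name rather than a generated ID
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' example-app
```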

Specifying configuration for applications in containers

If you are building an application to run in a container then you should be taking a twelve-factor approach to configuration items such as connection strings and host names by storing them in the environment. This means that everything that is likely to change between deployments is defined outside of the actual container as an environment variable.

The configuration system in .Net Core makes this relatively straightforward by allowing you to define configuration settings as environment variables using the extension methods provided by the Microsoft.Extensions.Configuration.EnvironmentVariables package. The configuration builder statement below shows how to set up a bunch of defaults in a local JSON file that can be overridden by environment variables.

var configuration = new ConfigurationBuilder()
  .AddJsonFile("appsettings.json")
  .AddEnvironmentVariables()
  .Build();

Docker provides you with several different ways to set environment variables. They can be added to the Dockerfile using the ENV instruction, specified as part of a docker run command using the -e flag or declared in Docker Compose configurations.
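As a sketch, assuming a hypothetical ServiceHost setting that the application reads from configuration: the command docker run -e "ServiceHost=db.internal" example:latest overrides it for a single run, while a Docker Compose file declares it for every run:

```yaml
version: '3'
services:
  example:
    image: example:latest
    environment:
      # Hypothetical setting that overrides the appsettings.json default
      - ServiceHost=db.internal
```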

Building multiple projects

While Docker samples tend to focus on a single project, most real-life solutions compile multiple projects. This is straightforward to accommodate by making sure that the Dockerfile lives in the solution directory.

You will want to optimise your image caching so your build doesn’t attempt to restore dependencies for multiple projects every time it runs. This means copying and restoring each individual project file in turn before copying the source files and building the main output. The example below demonstrates this for a two project solution where “Contracts” is a dependency of “ServerApp”.

# Copy the project files
COPY Contracts/*.csproj Contracts/
COPY ServerApp/*.csproj ServerApp/

# Restore all the dependencies
WORKDIR /app/Contracts
RUN dotnet restore
WORKDIR /app/ServerApp
RUN dotnet restore

# Now copy all the remaining files
WORKDIR /app
COPY Contracts/ Contracts/
COPY ServerApp/ ServerApp/

# Publish the app
WORKDIR /app/ServerApp
RUN dotnet publish -c Release -o out
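To complete the picture, the fragment above assumes an SDK build stage declared with as build-env, as in the single-project Dockerfile shown earlier. The runtime stage then copies the published ServerApp output — the assembly name ServerApp.dll below is assumed from the project name:

```dockerfile
# Build the runtime image from the published output
FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY --from=build-env /app/ServerApp/out .
ENTRYPOINT ["dotnet", "ServerApp.dll"]
```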

Running unit tests as part of the image build

You can make unit test execution part of your image build so that any failing tests cancel the build. To do this you need to make sure that you are using a .Net Core SDK image for your build stage. You can then use the dotnet test command to execute any unit tests as shown below. If any test fails the command returns a non-zero exit code and the build stops at that step.

# Restore and run the unit tests
WORKDIR /app/ServerApp.Tests
RUN dotnet restore
RUN dotnet test

Managing the application life-cycle

If you’re running a service-based application in a container you’ll need to ensure it is kept alive so that it is able to receive requests. This is taken care of for you in an ASP.Net application by the web host, but for console applications (e.g. a gRPC service) you will need to implement this yourself. The most straightforward (and thread efficient) way of doing this is through a ManualResetEvent instance.

You may also want to consider how to handle a graceful shutdown so you can implement some kind of tear-down logic for any non-managed resources. If your container is running in Linux then you can trap the SIGTERM signal using the AssemblyLoadContext. The only equivalent mechanism for containers running in Windows is to capture the CTRL+C keypress, though this will only work if you’re running the container in interactive mode.

The sample below shows how two reset events can be used to manage the application life-cycle for a console application. The first stops the application from exiting while it runs, while the second allows any resource clean-up to complete before the application ends:

using System;
using System.Runtime.Loader;
using System.Threading;

public static class Program
{
    private static readonly ManualResetEvent _Shutdown = new ManualResetEvent(false);
    private static readonly ManualResetEventSlim _Complete = new ManualResetEventSlim();

    public static int Main(string[] args)
    {
        try
        {
            Console.WriteLine("Starting application...");

            // Capture SIGTERM (Linux containers)
            AssemblyLoadContext.Default.Unloading += Default_Unloading;

            // Capture CTRL+C (Windows containers in interactive mode)
            Console.CancelKeyPress += (sender, e) =>
            {
                e.Cancel = true;
                _Shutdown.Set();
            };

            // Wait for a shutdown signal
            _Shutdown.WaitOne();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        finally
        {
            Console.WriteLine("Cleaning up resources");
        }

        Console.WriteLine("Exiting...");
        _Complete.Set();

        return 0;
    }

    private static void Default_Unloading(AssemblyLoadContext obj)
    {
        Console.WriteLine("Shutting down in response to SIGTERM.");
        _Shutdown.Set();
        _Complete.Wait();
    }
}

Filed under .Net Core, Development process, Microservices.