16 September 2018
Building Twelve Factor Apps with .Net Core
Twelve factor apps provide a methodology for building apps that are optimised for modern cloud environments. The intention is to provide autonomous, stateless components that can be easily deployed and freely scaled up. This means designing for easier automation and maximising portability between different environments by removing any dependencies on external frameworks or tools.
The methodology is based on twelve tenets that provide a mixture of process guidelines and implementation detail. Although the methodology has been around for years, it’s only really been achievable in the Microsoft world since the advent of .Net Core.
I. Codebase
One codebase tracked in revision control, many deploys
This defines the basic unit of organisation for an app, i.e. a single code base that can be deployed to multiple environments. This should mean hosting the app in a single repository, though this is not enough on its own. An app needs to be a cohesive unit of code rather than several different applications hosted in the same repository.
This might seem rather basic, but it does help to define the scale and scope of an application. It encourages smaller, more manageable units that have clearer responsibilities and are easier to automate.
II. Dependencies
Explicitly declare and isolate dependencies
You should be able to define all the dependencies that an application uses in a manifest that lives within the application. There’s no room for system-wide frameworks or external tools. This makes it easier to set up new environments, particularly for developers as any dependencies can be automatically acquired through a packaging system such as NuGet.
This used to be all but impossible with .Net as you relied on the existence of a system-wide framework. This created a prerequisite for an environment to be configured with a specific set of framework dependencies. Applications written in .Net Core, on the other hand, can be packaged with all their dependencies, including any underlying framework.
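As a sketch, a .Net Core project file acts as this manifest: the target framework and every dependency are declared explicitly and restored through NuGet (the package names and versions here are illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- All dependencies, including the framework itself, are declared here -->
    <PackageReference Include="Microsoft.AspNetCore.App" />
    <PackageReference Include="Serilog" Version="2.7.1" />
  </ItemGroup>
</Project>
```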
III. Config
Store config in the environment
In this context, configuration is defined as anything that is likely to vary between deployments and environments. There should be explicit separation between any configuration settings and code. A good litmus test for this is whether you can open source the code base at any moment without compromising any credentials.
Ideally, configuration should be factored into environment variables, where it is easier for operational teams to manage. Above all, you need to avoid relying on JSON-based configuration files that ship with code or storing multiple configurations for different environments.
This approach is supported by the configuration APIs in .Net Core as they provide a hierarchy of configuration sources. This allows you to define a baseline configuration in an appsettings.json file and override any setting with environment variables and an external secrets store. This is demonstrated in the bootstrap code below:
WebHost.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration((context, config) =>
    {
        config.AddJsonFile("appsettings.json", optional: true)
              .AddEnvironmentVariables()
              .AddUserSecrets<Startup>();
    })
IV. Backing services
Treat backing services as attached resources
A backing service is anything that is consumed over a network, such as databases or external APIs. There is no distinction between services you manage locally and those that belong to third parties – they should all be treated as resources that can be attached and detached without requiring code changes.
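A minimal sketch of this idea, assuming a connection string named "Orders" supplied through configuration (the class and names are hypothetical):

```csharp
using System.Data.SqlClient;
using Microsoft.Extensions.Configuration;

public class OrderStore
{
    private readonly string _connectionString;

    public OrderStore(IConfiguration configuration)
    {
        // The connection string is resolved from configuration (e.g. the
        // ConnectionStrings__Orders environment variable), so the database
        // can be swapped or re-attached without a code change.
        _connectionString = configuration.GetConnectionString("Orders");
    }

    public SqlConnection OpenConnection()
    {
        var connection = new SqlConnection(_connectionString);
        connection.Open();
        return connection;
    }
}
```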
V. Build, release, run
Strictly separate build and run stages
This requires an explicit separation between the processes that build, release and run software.
A build process should create an executable bundle and execute any tests you have along the way. A release is an immutable combination of the built artefact and configuration. The process of running an application should be handled by a process orchestrator and have as few moving parts as possible. Suffice to say, each of these stages should be automated rather than relying on human intervention.
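As a sketch, the three stages might map onto commands like these (project and path names are assumptions):

```shell
# Build stage: compile, run tests and produce an immutable artefact
dotnet restore
dotnet test ./MyApp.Tests
dotnet publish ./MyApp -c Release -o ./artefact

# Release stage: combine the artefact with environment-specific config,
# e.g. by baking it into a versioned container image.
# Run stage: the orchestrator launches the released process.
dotnet ./artefact/MyApp.dll
```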
VI. Processes
Execute the app as one or more stateless processes
A “process” is defined as a single execution of an application. An application should run in a separate process that does not share information with other running processes. This means not using techniques such as sticky sessions or in-memory caches to share state between processes.
VII. Port binding
Export services via port binding
A twelve-factor app should be completely self-contained and not rely on the run-time injection of an external host or server. If you want to expose an API over HTTP then an app should export the service and bind it to a specific port rather than having it implemented via a web server.
In .Net Core you expose an HTTP service using Kestrel, a self-contained HTTP server embedded within the application. You set up the service and bind it to an external port using the bootstrap code below:
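A minimal sketch, assuming ASP.Net Core 2.x, a hypothetical Startup class and an illustrative port number:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Kestrel is embedded in the process: the app exports HTTP by
        // binding directly to a port rather than relying on an external host
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel()
            .UseUrls("http://0.0.0.0:5000")   // port 5000 is illustrative
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}
```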
It then becomes the responsibility of the environment to route requests to the port that the application is bound to. This is a very different approach to using a web server such as IIS to create an HTTP-based service.
VIII. Concurrency
Scale out via the process model
This follows on from the notion of treating running applications as stateless processes. It gives you the freedom to increase throughput by adding more of these processes in response to load. There is some nuance in here, as ultimately your scaling will be affected by other parts of your infrastructure, particularly any data storage infrastructure. That said, the running process should provide a basic unit of scale that enables more elastic scaling.
IX. Disposability
Maximize robustness with fast startup and graceful shutdown
Genuinely responsive elastic scaling requires that you can spin up instances quickly and kill them off without much ceremony.
You will need to take the time to ensure that applications shut down gracefully. Once they receive a command to close – e.g. a SIGTERM signal from a process manager – they should stop accepting new requests and return any unprocessed items back to wherever they came from.
In .Net Core you can use an IApplicationLifetime instance to implement a graceful shutdown by hooking into events such as ApplicationStopping and ApplicationStopped. Note that these will not necessarily fire in a debug environment but they will fire when being managed by an orchestrator. You can also use CancellationToken in your processing to check that a shutdown has not been signalled before doing any work.
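A sketch of these hooks, assuming an ASP.Net Core 2.x Startup class:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

public class Startup
{
    public void Configure(IApplicationBuilder app, IApplicationLifetime lifetime)
    {
        lifetime.ApplicationStopping.Register(() =>
        {
            // SIGTERM received: stop accepting new work and return any
            // unprocessed items to wherever they came from
        });

        lifetime.ApplicationStopped.Register(() =>
        {
            // All requests have drained: perform any final cleanup
        });
    }
}
```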
Consideration should also be given to “sudden death” scenarios where there’s no opportunity for graceful shutdown. This is more to do with the way that workloads are organised, such as using transactional messaging that will return messages to a queue if they are not explicitly acknowledged.
X. Dev/prod parity
Keep development, staging, and production as similar as possible
There are several different ways in which environments diverge. Code can get stuck in “testing hell” so the version in test environments starts to diverge from production. It can be difficult to keep tools and frameworks in sync. There’s also a difference in personnel between each environment, which can give rise to subtle differences in configuration.
Twelve factor applications help to reduce this divergence by producing stateless, self-contained applications that depend less on external frameworks. Containerisation can help further by creating more certainty over run-time environments. Ultimately, you need to ensure that environments have a similar topology and only really vary in size.
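For example, a multi-stage Dockerfile can produce a single image that runs unchanged in every environment (the image tags and application name here are illustrative):

```dockerfile
# Build stage: compile and publish using the full SDK image
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Run stage: the same lightweight runtime image is used in every environment
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```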
XI. Logs
Treat logs as event streams
A twelve-factor app should treat logs as a time-ordered sequence of events and shouldn’t worry about how they are routed or stored. Log entries should be written directly to stdout. The aggregation, storage and processing of these entries is a separate concern that should be taken care of by the environment. This approach tends to be associated with tools such as the ELK stack or Splunk that capture and process log streams.
This can be difficult for application developers who are used to being able to configure the shape and destination of log files, including rotation and rollover policies. This decoupling gives you more freedom to change the way that logs are processed without having to make corresponding changes in application implementations. It also helps with elastic scalability, as manual log aggregation can quickly become a real pain when you scale up to dozens of instances.
The .Net Core logging extensions support this abstraction by providing a standard interface for writing log statements and a hierarchy of providers that can route these statements to the console output stream. The example below demonstrates setting up a log instance that writes to stdout using the .Net Core console logging provider:
var factory = new LoggerFactory();
factory.AddConsole(LogLevel.Debug);
var log = factory.CreateLogger<MyClass>();
Many implementations of the .Net Core logging abstraction involve more explicit routing to a log store. For example, libraries such as Serilog have sinks that explicitly route output to Azure’s AppInsights. This is not in keeping with a twelve-factor approach as it couples the application more explicitly to an external service.
XII. Admin processes
Run admin/management tasks as one-off processes
This refers to upgrades or one-off configuration tasks, and it implies they should be bundled with a release. A good example is using code migrations with Entity Framework to make changes to a target database. One thing that is missing from .Net Core in this respect is a REPL shell that allows you to run arbitrary code, such as the rails console command or irb in Ruby.
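For example, an Entity Framework Core migration can be run as a one-off process alongside a release, assuming the EF Core CLI tools are installed and the connection string is supplied through the environment:

```shell
# Apply any pending migrations to the target database as a one-off process
dotnet ef database update
```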