Hosting .NET Core containers in Service Fabric
You can host containers in Service Fabric, but it is first and foremost an application server. Service Fabric projects typically use services hosted by the Service Fabric runtime and are dependent on APIs defined in the Service Fabric SDK.
This can give rise to a very specific style of application. Base classes such as StatelessService and StatefulServiceBase tend to undermine portability, while patterns such as reliable actors tie services irrevocably to a runtime application server. The SDK also encourages application code to be combined with deployment details in a single Visual Studio solution. This is some way from the kind of cloud-native, twelve-factor applications typically associated with container services.
Given the recent rise of services such as Azure Kubernetes Service, the container support in Service Fabric seems to be targeted more towards lifting and shifting existing .NET applications. You can use it as an orchestrator for cloud-native services, but you are inevitably made to feel like a second-class citizen in doing so. The process of configuring and deploying container-based applications to Service Fabric does not compare well with a “pure” orchestrator like Kubernetes.
Using Visual Studio tooling. Or rather not.
Visual Studio provides a tool for adding container orchestration support for Service Fabric, though it may do more harm than good. If you right-click on the project and select Add -> Container Orchestrator Support the following changes will be made to your project:
- A “PackageRoot” directory will be added containing the service manifest and configuration file
- A (pretty basic) Dockerfile will be added that attempts to push the build artefacts into a nanoserver-based ASP.NET Core image
- An IsServiceFabricServiceProject element will be added to the project file
This has the effect of embedding configuration detail about the orchestrator into your service code. Ideally you should avoid adding this kind of runtime implementation detail as it undermines the portability of the service.
The good news is that you don’t need this guff and can develop containerised services in .NET Core that have absolutely no knowledge of Service Fabric. You will need to take care over the runtime images that you use though, as you may find that your cluster only supports nanoserver-based images as opposed to the more generic core ones (i.e. dotnet:2.1-aspnetcore-runtime-nanoserver-sac2016 rather than dotnet:2.1-aspnetcore-runtime).
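As a sketch, a standalone Dockerfile for such a service might look like the following. The base image tag is the nanoserver one mentioned above; the ServiceExample.dll assembly name and out/ publish directory are illustrative:

```dockerfile
# Runtime image tag chosen to match what the cluster supports (see above)
FROM microsoft/dotnet:2.1-aspnetcore-runtime-nanoserver-sac2016
WORKDIR /app

# Copy the published output (produced by: dotnet publish -c Release -o out)
COPY out/ ./

# ASP.NET Core listens on port 80 inside the container by default
ENTRYPOINT ["dotnet", "ServiceExample.dll"]
```

Note that there is nothing Service Fabric-specific here: the same image can be pushed to any registry and run on any orchestrator.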
Creating Service Fabric applications for containers
This is where things start to get hair-raising. A Service Fabric application is analogous to the Kubernetes pod in that it is the main unit of scaling that can host one or more containers. You use the SDK templates to create a project that deploys one or more containers to a cluster.
This template boasts nine XML files, a PowerShell script and a configuration file. This configuration system has been adapted to support containers, so details will be scattered across the two main XML-based manifest files: a separate ServiceManifest.xml for each individual running image and an ApplicationManifest.xml for the application.
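To give a sense of how the pieces described below hang together, a minimal ServiceManifest.xml for a container might look something like this (all names and versions are illustrative):

```xml
<ServiceManifest Name="ServiceExamplePkg" Version="1.0.0"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceTypes>
    <!-- UseImplicitHost tells the runtime the code package is self-hosting -->
    <StatelessServiceType ServiceTypeName="ServiceExampleType" UseImplicitHost="true" />
  </ServiceTypes>
  <CodePackage Name="Code" Version="1.0.0">
    <EntryPoint>
      <ContainerHost>
        <ImageName>mycontainers.azurecr.io/mycontainer.serviceexample:1.4.1</ImageName>
      </ContainerHost>
    </EntryPoint>
  </CodePackage>
  <Resources>
    <Endpoints>
      <Endpoint Name="ServiceFabricContainerTypeEndpoint" Port="8081" />
    </Endpoints>
  </Resources>
</ServiceManifest>
```

The sections that follow pick apart the individual elements.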
Setting the container image and registry
The CodePackage element in the ServiceManifest.xml file defines the actual service executable. This contains a ContainerHost element where you specify an image that is hosted in a container registry, e.g.
```xml
<ContainerHost>
  <ImageName>mycontainers.azurecr.io/mycontainer.serviceexample:1.4.1</ImageName>
</ContainerHost>
```
To define the credentials for a custom container registry you’ll need to add them in a RepositoryCredentials element in the ApplicationManifest.xml file. You should encrypt any passwords using the Service Fabric SDK, though this is dependent on the specific certificate that you install on the target cluster. The example below shows an unencrypted password, which is not a recommended approach outside of a development environment:
```xml
<RepositoryCredentials AccountName="mycontainers"
                       Password="1s4lWCsq9bWfnskQDlqGwE="
                       PasswordEncrypted="false" />
```
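For context, the element sits inside the ContainerHostPolicies section of the ApplicationManifest.xml file, under the ServiceManifestImport for the service in question. A sketch of the placement (manifest names are illustrative):

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ServiceExamplePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <RepositoryCredentials AccountName="mycontainers"
                             Password="1s4lWCsq9bWfnskQDlqGwE="
                             PasswordEncrypted="false" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>
```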
Ports and binding
The external port that a running application uses is defined in an Endpoint element in the ServiceManifest.xml file. The example below creates an endpoint on port 8081 that the Service Fabric runtime will treat as an entry point for requests to the application:
```xml
<Endpoints>
  <Endpoint Name="ServiceFabricContainerTypeEndpoint" Port="8081" />
</Endpoints>
```
This external port must be bound to the port that the running container will be listening on (e.g. port 80). This is done through a PortBinding element in the ApplicationManifest.xml file:
```xml
<PortBinding ContainerPort="80" EndpointRef="ServiceFabricContainerTypeEndpoint" />
```
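The EndpointRef attribute must match the Name of the endpoint declared in the ServiceManifest.xml file, and the binding itself lives inside the ContainerHostPolicies section — roughly as follows:

```xml
<ContainerHostPolicies CodePackageRef="Code">
  <PortBinding ContainerPort="80" EndpointRef="ServiceFabricContainerTypeEndpoint" />
</ContainerHostPolicies>
```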
Assuming you are developing a well-behaved twelve-factor app and defining configuration via environment variables, you can set these in the CodePackage section in the ServiceManifest.xml file:
```xml
<EnvironmentVariables>
  <EnvironmentVariable Name="VariableName" Value="VariableValue" />
</EnvironmentVariables>
```
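Values set in the service manifest can be overridden per environment from the ApplicationManifest.xml file through an EnvironmentOverrides section. A sketch, assuming a VariableName_Override parameter has been declared in the application manifest’s Parameters section:

```xml
<!-- In ApplicationManifest.xml, under the relevant ServiceManifestImport -->
<EnvironmentOverrides CodePackageRef="Code">
  <EnvironmentVariable Name="VariableName" Value="[VariableName_Override]" />
</EnvironmentOverrides>
```

This lets the per-environment parameter files (e.g. Cloud.xml) supply different values without touching the service manifest.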
On Windows Server hosts, containers running on the same machine share the host’s kernel by default. You can choose to run them in isolated kernels by specifying the isolation mode in the ContainerHostPolicies section in ApplicationManifest.xml as shown below:
```xml
<ContainerHostPolicies CodePackageRef="Code" Isolation="hyperv">
```
Resource governance in Service Fabric is used to control the “noisy neighbour” problem of services that consume too many resources and starve other services. You can set policies that restrict the resources that a container can use, mainly around CPU and memory usage. This is similar to the resource limits that you can request for containers in Kubernetes.
This is defined in the ApplicationManifest.xml file using a ResourceGovernancePolicy element:
```xml
<ResourceGovernancePolicy MemoryInMB="512" CpuPercent="25" />
```
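In context, the policy also carries a CodePackageRef attribute to tie it to the container’s code package, and it sits alongside the other policies under the ServiceManifestImport (names are illustrative):

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="ServiceExamplePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <!-- Cap the container at 512MB of memory and a quarter of the CPU -->
    <ResourceGovernancePolicy CodePackageRef="Code" MemoryInMB="512" CpuPercent="25" />
  </Policies>
</ServiceManifestImport>
```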
Supporting Docker HEALTHCHECK
The Docker HEALTHCHECK directive allows you to bake health check monitoring into your container image definition. You can wire this up to Service Fabric’s health check reporting by adding a HealthConfig section to the ApplicationManifest.xml file:
```xml
<HealthConfig IncludeDockerHealthStatusInSystemHealthReport="true"
              RestartContainerOnUnhealthyDockerHealthStatus="false" />
```
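On the image side, the directive itself lives in the Dockerfile. A sketch, assuming the service exposes a /health endpoint and the image contains curl — both assumptions, and nanoserver-based images may well not include curl:

```dockerfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime-nanoserver-sac2016
WORKDIR /app
COPY out/ ./

# Probe an assumed /health endpoint; requires curl to be present in the image
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl --fail http://localhost:80/health || exit 1

ENTRYPOINT ["dotnet", "ServiceExample.dll"]
```

With IncludeDockerHealthStatusInSystemHealthReport set to true, a failing probe will surface as a health warning in Service Fabric Explorer.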
Running the container application
The most difficult aspect of getting a container application to work is standing up a Service Fabric cluster in the first place. Setting up a local cluster can be infuriating. The tooling does not feel particularly mature and troubleshooting it can be a frustrating experience.
One option is using the Azure-based “party clusters” that Microsoft maintain so you can experiment with Service Fabric. These are free clusters that are provisioned to you for an hour. Alas, this is unlikely to be enough time to provision anything meaningful without prior experience of managing a cluster.
[Feb 2019 Update: It appears that party clusters are being discontinued by Microsoft]
You can provision a Service Fabric cluster in Azure, but be aware that you will be charged by the hour for all the VMs, storage and network resources that you use. The cheapest test cluster will still require three VMs to be running in a virtual machine scale set. Cost management is tricky. De-allocating the set of VMs stops the clock ticking on VM billing, but it effectively resets the cluster (and the public IP address), forcing you to redeploy everything when it comes back up.
The act of publishing your application is relatively simple due to the magic of PowerShell. You will be expected to have a certificate installed locally, and the details of this credential need to be specified in the Cloud.xml settings file in the Service Fabric project.
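For reference, a manual deployment with the Service Fabric PowerShell module has roughly the following shape — the endpoint, thumbprint, package path and application names are all illustrative, and Visual Studio’s publish script wraps much the same calls:

```powershell
# Connect using the certificate installed in the local store (values illustrative)
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.northeurope.cloudapp.azure.com:19000" `
    -X509Credential -ServerCertThumbprint $thumbprint `
    -FindType FindByThumbprint -FindValue $thumbprint `
    -StoreLocation CurrentUser -StoreName My

# Upload the application package, register the type and create an instance
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath .\pkg\Release `
    -ImageStoreConnectionString "fabric:ImageStore" `
    -ApplicationPackagePathInImageStore ServiceFabricContainer
Register-ServiceFabricApplicationType -ApplicationPathInImageStore ServiceFabricContainer
New-ServiceFabricApplication -ApplicationName fabric:/ServiceFabricContainer `
    -ApplicationTypeName ServiceFabricContainerType -ApplicationTypeVersion 1.0.0
```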
When you first deploy an application, be prepared for Service Fabric to report that the service is unhealthy while it downloads and installs the container image. Once the service is running you can use rolling upgrades to push new versions without downtime, though be aware that upgrading requires you to change the version number on your container image tag. If you tag your images with “latest” then Service Fabric will not download them when you run an update.
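A rolling upgrade can be driven from PowerShell: after uploading and registering the new application version (which references the retagged container image), you start a monitored upgrade — names and versions below are illustrative:

```powershell
# Assumes version 2.0.0 of the application package has already been
# copied to the image store and registered
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/ServiceFabricContainer `
    -ApplicationTypeVersion 2.0.0 -Monitored -FailureAction Rollback
```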
Be warned that Service Fabric can be mysterious to the uninitiated and it takes time to decipher some of the error messages. For instance, the frequently encountered “Partition is below target replica or instance count” error can mean an unhandled exception in service start-up, a configuration error, or an environment problem such as a lack of disc space. You need to play detective to find out what might be wrong.
Is it worth the effort?
Let’s face it – you wouldn’t choose Service Fabric as a container orchestrator unless you had to. It feels like you are running applications in an environment that was not explicitly designed to host containers. Sure, it works, but getting there takes more effort than it ought to.
Perhaps Service Fabric’s support for containers could be seen in the context of supporting a longer-term migration strategy. If you’ve already made a significant investment in Service Fabric, then you can start to migrate towards a more “cloud native” style of service without having to replace your runtime infrastructure.