Comparing serverless C# and .Net development using Azure Functions and AWS Lambda

Recently released toolsets for AWS Lambda and Azure Functions are finally making “serverless” application development available to C# developers. They also open up the entire Visual Studio ecosystem to serverless development, including code analysis, NuGet packages, IntelliSense and unit testing.

Although neither toolset could be described as mature, there is enough meat on the bones to get an idea of what might be involved in developing large-scale serverless applications using C# and .Net.

What they have in common

Both platforms allow you to create applications composed of functional, stateless, event-based services. Servers are still involved, of course, but developers don’t have to worry too much about them. These services run on environments that provide fault tolerance, automated scaling, logging and monitoring “by right” rather than requiring you to build these concerns in yourself.

They also have similar limitations. They are designed for short-lived processes and both impose a hard five-minute limit on function execution. If you want to execute any long-running processes then you need to host the functions in a different context – for Lambda this can mean a container-based workaround, while for Azure Functions you need to host them on a dedicated virtual machine.

Transparent scaling can also be problematic for downstream resources such as database servers unless they have similar scaling characteristics. You cannot apply patterns such as circuit breakers to protect against cascading failure and shed load. This tends to tie serverless applications into an entire ecosystem, as you have to use services that are designed to scale in sympathy with serverless functions.

An important thing to understand is that you're not just adopting a service runtime - you are selecting an entire ecosystem. Both platforms are designed to extend an existing set of cloud services. There's no point adopting AWS Lambda unless you are also happy to use S3 for file storage, SNS or SQS for messaging and DynamoDB for data persistence. Likewise, you'll want to be content to use Azure's storage and messaging implementations if you adopt Azure Functions.

Under the hood

Perversely, the AWS Lambda .Net implementation is based around .Net Core (albeit an old version) while Azure Functions require .Net Framework 4.6.1. This makes sense, as under the hood Amazon are leveraging the cross-platform capabilities of .Net Core to execute code in a Linux environment. Azure Functions, meanwhile, have evolved out of the WebJobs functionality that has been part of the App Service platform for a while. There are plans to port Azure Functions to .Net Core, though this is unlikely to happen until after the release of .Net Standard 2.0.

AWS Lambda provisions a container and deploys code on a first request which makes for a relatively long cold start-up time (normally more than five seconds in my experience). After this it normally keeps the container available for subsequent invocations, but container re-use and provisioning is at the sole discretion of the Lambda environment.

Azure Functions are a different beast and are managed by an open source host that can be run on premise or in the cloud. These hosts are part of Azure's App Service infrastructure and are provisioned as required, but it means that functions are deployed as a group rather than individually. This can make functions a little less prone to a cold start-up penalty once the first host is up and running.

Programming model

If you’re working with Node.js then AWS Lambda provides a neat separation between the function implementation and the way it is invoked. This separation is lost in the C# implementation, as the method signatures vary depending on the type of trigger being used. For example, the following is a simple handler for an S3 event:

public async Task<string> FunctionHandler(S3Event evnt, ILambdaContext context)
{
    context.Logger.Log("C# S3 trigger function processed S3 event");
    return "Processed";
}

The functions themselves have to be mapped to event sources that publish events which can be picked up by the Lambda runtime. These event sources include Amazon services such as S3 buckets, DynamoDB streams or HTTP requests made through the API Gateway. Configuring these is where the going gets tricky as any function code tends to be accompanied by complex JSON-driven configuration.
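To give a flavour of that configuration, an AWS Serverless Application Model (SAM) template wiring a handler like the one above to an S3 bucket looks roughly like this. The handler string, resource names and bucket are all illustrative, but the overall shape follows the SAM event-source format:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Resources": {
    "S3EventFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "MyAssembly::MyNamespace.Function::FunctionHandler",
        "Runtime": "dotnetcore1.0",
        "Events": {
          "FileUploaded": {
            "Type": "S3",
            "Properties": {
              "Bucket": { "Ref": "UploadBucket" },
              "Events": "s3:ObjectCreated:*"
            }
          }
        }
      }
    },
    "UploadBucket": { "Type": "AWS::S3::Bucket" }
  }
}
```

Every event source a function responds to needs an entry like this, which is where the JSON starts to pile up.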

Azure Functions use a more rigid structure and the SDK automatically looks after the underlying configuration. Each function is a static class with a Run() method providing the entry point. Attributes are used to specify the configuration of the function and input parameters. This declarative approach does save you from getting bogged down in configuration hell, even though it still involves a modest amount of JSON fiddling.

The following example returns a BrokeredMessage object from an Azure service bus queue:

public static void Run([ServiceBusTrigger("MyQueueName", AccessRights.Manage)] BrokeredMessage myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem.MessageId}");
}
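At build time the SDK compiles attributes like these down to the function.json file that the host actually reads. For the trigger above, the generated binding looks broadly like the following (the connection setting name is illustrative):

```json
{
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "name": "myQueueItem",
      "direction": "in",
      "queueName": "MyQueueName",
      "accessRights": "manage",
      "connection": "ServiceBusConnection"
    }
  ],
  "disabled": false
}
```

This is the "modest amount of JSON" you are being saved from writing by hand.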

The trade-off here is that the configuration-heavy approach of AWS can give rise to a more flexible development model.

The “AWS serverless application” template is a good example of this flexibility. It allows you to run an Asp.Net WebApi project using Lambda as a host, with the AWS API Gateway exposing the HTTP endpoints. The catch is that it requires every controller action to be explicitly linked up to an API Gateway endpoint using JSON configuration. This generates a pretty meaty stack of JSON on larger applications.
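A single endpoint mapping in the serverless.template looks something like this (the handler and resource names are made up). An Events entry along these lines has to be repeated for every route you want to expose, which is what makes the file grow so quickly:

```json
"GetValues": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "MyApi::MyApi.LambdaEntryPoint::FunctionHandlerAsync",
    "Runtime": "dotnetcore1.0",
    "Events": {
      "GetValues": {
        "Type": "Api",
        "Properties": { "Path": "/api/values", "Method": "GET" }
      }
    }
  }
}
```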

Debugging and local execution

AWS Lambda does not have any facilities to run and debug functions locally. There are numerous attempts out there at providing runtime tooling, but nothing that executes .Net Core functions built for Lambda as yet. It wouldn't be too much of a stretch to build a locally hosted harness into your project, but you really shouldn't have to provide this tooling yourself.

By contrast the Azure Functions host has an on premise implementation and can provide a full local development environment. The Azure Functions CLI allows developers to develop, test, run and debug functions locally while integrating with Azure services hosted either on premise or in the cloud. This does make for a much more convenient development experience where functions can be fully tested in a locally-hosted runtime before being deployed to an Azure-based host.
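A typical local development loop with the CLI looks something like this. These commands reflect the tooling at the time of writing and the template choices are illustrative:

```shell
# Scaffold a new function app project in the current directory
func init

# Add a function from one of the built-in templates
func new

# Start the local Functions host; you can then attach the
# Visual Studio debugger to the host process
func host start
```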

Unit test and deployment automation

One advantage of using compiled C# is that you can integrate your preferred unit test framework into function code. It is pretty straightforward to mock out the various dependencies in Lambda, though it is much more difficult in Azure Functions due to the insistence on using static classes and methods to define functions.
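As a sketch of what this looks like on the Lambda side, the S3 handler shown earlier can be exercised with xUnit and the Amazon.Lambda.TestUtilities package, which supplies an in-memory ILambdaContext. The event payload here is a minimal stand-in and the Function class is assumed to match the handler above:

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.S3Events;
using Amazon.Lambda.TestUtilities;
using Xunit;

public class FunctionTests
{
    [Fact]
    public async Task Handler_logs_the_event()
    {
        var function = new Function();
        var context = new TestLambdaContext();

        await function.FunctionHandler(new S3Event(), context);

        // TestLambdaContext captures log output in an in-memory buffer
        var logger = (TestLambdaLogger)context.Logger;
        Assert.Contains("processed S3 event", logger.Buffer.ToString());
    }
}
```

The equivalent in Azure Functions is harder to arrange, as a static Run() method gives you nowhere to inject test doubles for its dependencies.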

You can integrate both platforms into a full production pipeline using your build server of choice. AWS provides command line tooling to deploy compiled functions, but you can also use their CodeBuild service to manage the entire pipeline within the Amazon ecosystem. Likewise, Azure has its equivalent services: you can push compiled functions directly via source code integration, or use Visual Studio Team Services to manage the entire pipeline.

There is also support for a “configuration as code” approach to provisioning. Azure Functions lets you create and deploy an Azure App from the ground up via an ARM template, while Amazon's AWS Serverless Application Model allows all the services required by an application to be described in one place.
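On the Azure side, the core of an ARM template that provisions a function app looks broadly like the following. The names are illustrative and the template is abridged (a real one would also declare the hosting plan and a storage account):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "kind": "functionapp",
      "name": "my-function-app",
      "apiVersion": "2016-08-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'my-consumption-plan')]"
      }
    }
  ]
}
```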

Pricing and costs

AWS Lambda and Azure Functions both offer a consumption-based pricing model where you pay for what you use.

Azure Functions also offers an App Service plan where you can provision dedicated virtual machines to run the host. This may seem a little counter-intuitive for a “serverless” application, but it can make costs more predictable by decoupling them from running time and memory used. You can still auto-scale your instances for elastic load, but not to the same fine-grained extent as with consumption-based pricing.

Where the real choice lies

One of the more interesting aspects of running C# in Lambda is that it demonstrates .Net Core's cross-platform capability. AWS Lambda is a Linux-based runtime where you can develop equivalent functionality using Node.js, Python or C#. It is an example of how C# code can now be used to leverage a much wider range of platforms and ecosystems.

It's got to be said, it can be confusing talking about Lambda functions as opposed to lambda expressions. That's part of the problem with developing AWS serverless applications in the Microsoft ecosystem. It's some way from being a native experience and is likely to compare poorly with Azure Functions over time as tooling for the latter continues to evolve.

A choice between the two really comes down to something much wider than your preferred function runtime or development tooling. The real decision here is about the surrounding ecosystem that your functions interact with. If you have made a significant investment in either AWS or Azure, then it makes sense to go with that platform's serverless implementation.