How big is a microservice?
We know that microservices are small and focused by design, but just how small should they be in practice?
Microservices are often described in terms of the single responsibility principle: they should do one thing and do it really well. That’s fair enough, but precise explanations of what this “one thing” amounts to tend to be thin on the ground. The answer always seems to be “it depends”.
Some ambiguity may be inevitable here, as it’s difficult to size anything as abstract as a service. For example, Sam Newman described the ideal size as “small enough and no smaller” in his recent book on microservices, while Michael Feathers has suggested that they should be “created by no more than a handful of people”. Stefan Tilkov advises that you should not be seeking to make services as small as possible, while Udi Dahan has suggested that services will be larger if you align service boundaries with problem domains.
The risks of premature optimisation
Why does this matter? After all, microservices are supposed to support a more agile approach to service development where you refactor as your understanding of the system evolves. The problem is that if you go too far down the wrong road it can be difficult to turn back.
In the desire to create small and focused services there is a risk of decomposing things before you fully understand them. This can give rise to services that are not fully autonomous: they end up sharing data across service boundaries or have to be updated en masse in response to a change in requirements. The cost of change starts to escalate and the “distributed ball of mud” is born.
Services inevitably involve some degree of overhead, and the fallacies of distributed computing still apply here. Each service carries a maintenance cost as well as performance penalties in areas such as serialisation and security. If services are too fine-grained then there is a risk of tipping over into a “nanoservice” anti-pattern, where the cost of a service outweighs its utility.
In service development, autonomy is much more important than size. It is much easier to break a monolithic service down into autonomous components than it is to unpick a web of complex service integrations.
It’s worth stressing how important refactoring is here. You have to start somewhere, and the more you build, the more obvious it becomes where the boundaries between your services should be. Whenever you encounter something that looks like it should be separate, you can spin it out as a new service.
Identifying bounded contexts
Domain Driven Design (DDD) provides a number of helpful tools for grappling with the kind of complexity inherent in designing distributed systems. It is almost impossible to describe a large and complex domain in a single model, so DDD breaks this down into a series of bounded contexts. Each bounded context has an explicit boundary and contains a unified and consistent internal model.
A bounded context is characterised by a ubiquitous language in which every term has a precise and unambiguous definition. This tends to set a limit on the size of a bounded context: the more you enlarge it, the more ambiguity creeps in.
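To make the idea more concrete, here is a minimal sketch assuming a hypothetical retail domain (the class names and fields below are my own illustrative inventions, not from any particular system). The same term, “customer”, carries a different but internally consistent meaning in each bounded context:

```java
// Hypothetical sketch: the term "customer" modelled differently in two
// bounded contexts. All names are illustrative inventions.

// Sales context: a customer is someone with a credit limit that governs
// whether they can place an order.
class SalesCustomer {
    private final String customerId;
    private final long creditLimitPence;

    SalesCustomer(String customerId, long creditLimitPence) {
        this.customerId = customerId;
        this.creditLimitPence = creditLimitPence;
    }

    boolean canPlaceOrder(long orderTotalPence) {
        return orderTotalPence <= creditLimitPence;
    }
}

// Shipping context: a "customer" is little more than a delivery address.
// The two models share an identifier but nothing else, so neither
// context forces ambiguity onto the other.
class ShippingCustomer {
    private final String customerId;
    private final String deliveryAddress;

    ShippingCustomer(String customerId, String deliveryAddress) {
        this.customerId = customerId;
        this.deliveryAddress = deliveryAddress;
    }

    String shippingLabelFor(String parcelId) {
        return parcelId + " -> " + deliveryAddress;
    }
}
```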
This kind of definition lends itself to service modelling, as you are identifying manageable, discrete collections of data and behaviour that have a clear purpose and explicit boundaries. The focus on capabilities is particularly useful, as services should be more than just collections of data entities that expose CRUD-style methods.
Broadly speaking, service boundaries can be aligned with these bounded contexts. This helps to provide the kind of autonomy and cohesion that we need from loosely-coupled services.
Hold on… how “micro” can bounded contexts be?
This doesn’t necessarily tally with a vision of lots of fine-grained services collaborating to provide business services. Bounded contexts contain clusters of different data entities and processes that can control a significant area of functionality, such as order fulfilment or inventory. They can actually be quite large.
A more fine-grained DDD unit is the aggregate, which describes a cluster of objects and behaviours that can be treated as a single cohesive unit. An aggregate is treated as a single unit for the purpose of data changes, with an aggregate root keeping the whole consistent – a classic example is an order that contains a collection of line items.
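As a sketch of what this means in code (the class names and invariants below are assumptions for illustration, not a reference implementation), the order acts as the aggregate root: line items have no identity of their own and are only created and changed through the root, which is what makes the cluster a single cohesive unit:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Order aggregate. The root is the only entry point, so
// invariants such as "no line item has a non-positive quantity" are
// enforced in one place.
class Order {
    private final String orderId;
    private final List<LineItem> lineItems = new ArrayList<>();

    Order(String orderId) {
        this.orderId = orderId;
    }

    // All changes to line items go through the root, keeping the
    // aggregate internally consistent.
    void addLineItem(String sku, int quantity, long unitPricePence) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        lineItems.add(new LineItem(sku, quantity, unitPricePence));
    }

    long totalPence() {
        return lineItems.stream().mapToLong(LineItem::subtotalPence).sum();
    }
}

// Line items have no meaning outside their owning order.
class LineItem {
    private final String sku;
    private final int quantity;
    private final long unitPricePence;

    LineItem(String sku, int quantity, long unitPricePence) {
        this.sku = sku;
        this.quantity = quantity;
        this.unitPricePence = unitPricePence;
    }

    long subtotalPence() {
        return (long) quantity * unitPricePence;
    }
}
```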
Given that an aggregate is a cohesive unit, perhaps it could serve as a reasonable indicator of the smallest meaningful scope for a microservice? After all, it would be difficult to split an aggregate across service boundaries without compromising service autonomy.
The problem with this level of granularity is that it tends to give rise to anaemic services that do little more than expose CRUD-style methods. You’re grouping data entities together rather than encapsulating capabilities. The end result can be a very “chatty” service infrastructure where the majority of processing time is spent on remote calls between services.
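To illustrate the difference (both service contracts below are invented for the sake of the example), compare a data-centric interface, which forces callers to orchestrate several remote calls, with a capability-centric one, where a single call expresses a business intention:

```java
// Hypothetical contrast between a CRUD-style contract and a
// capability-style contract. Both interfaces are illustrative only.

// Minimal data carriers for the example.
record OrderDto(String orderId, long totalPence) {}
record FulfilmentResult(boolean dispatched, String message) {}

// Anaemic, data-centric contract: fulfilment logic leaks out to the
// callers, who must fetch, mutate and write back over the network.
interface OrderDataService {
    OrderDto getOrder(String orderId);
    void updateOrder(OrderDto order);
    void deleteOrder(String orderId);
}

// Capability-centric contract: one remote call expresses the intention
// and the service enforces its own rules behind the boundary.
interface OrderFulfilmentService {
    FulfilmentResult fulfilOrder(String orderId);
}
```

With the first contract, fulfilling an order might take three or four network round trips; with the second it takes one, and the business rules stay inside the service boundary.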
Size doesn’t actually matter
In my view, a single deployable service should be no bigger than a bounded context, but no smaller than an aggregate. I would also suggest that it’s better to start with relatively broad service boundaries and refactor to smaller ones as time goes on.
Perhaps “micro” is a misleading prefix here. These are not necessarily “small” as in “little”. These are services in the classic SOA mould, except created with more of an agile mindset that includes lightweight infrastructure, decentralised governance and a greater emphasis on automation. Perhaps it’s these more agile aspects of microservice design that distinguish them, rather than their size.