Entity services: when microservices are worse than monoliths

Entity services are finely-grained services that look after a single data entity, normally exposing nothing more than simple CRUD methods. They are the end result of a trend that has seen services get smaller as the single responsibility principle gets interpreted in ever more literal terms. We now have services that literally do one thing and one thing only.
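To make the pattern concrete, here is a minimal sketch of what such a service amounts to once the framework boilerplate is stripped away. The `CustomerService` name and in-memory store are hypothetical; the point is that the entire surface area is four CRUD methods on one entity.

```python
# Illustrative sketch of a single-entity service: one entity, four CRUD
# methods, and nothing else. All names here are hypothetical; a real
# implementation would sit behind an HTTP endpoint and a database.
import itertools


class CustomerService:
    """A finely-grained entity service for a single 'customer' entity."""

    def __init__(self):
        self._store = {}
        self._ids = itertools.count(1)

    def create(self, data):
        customer_id = next(self._ids)
        self._store[customer_id] = dict(data, id=customer_id)
        return customer_id

    def read(self, customer_id):
        return self._store[customer_id]

    def update(self, customer_id, data):
        self._store[customer_id].update(data)

    def delete(self, customer_id):
        del self._store[customer_id]
```

Note what is missing: there is no business behaviour here at all. Any process that needs to *do* something with a customer has to live somewhere else.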

They seem to pop up everywhere and seem to be advocated by some pretty authoritative sources. Microsoft have published guidance for creating single entity microservices using ASP.NET Core, albeit with a fair amount of boilerplate. A similar tutorial has been created for Spring while Red Hat have produced a reference architecture based on this style of CRUD service.

It may be that the intention of these tutorials is to get people up and running rather than grapple with the subtleties of service design. However, they do help to propagate the idea that “micro” means entity-sized in the context of a service.

This kind of entity service has been described as an anti-pattern. In this style of architecture any business process involves a collaboration between numerous very small service implementations. This creates a level of coupling between services that can undermine many of the intended benefits of the architecture.
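The collaboration problem can be seen in a sketch of a single business process built on top of entity services. The function and service names below are hypothetical; what matters is that every line is a potential remote hop, so the process inherits the latency and failure modes of every service it touches.

```python
# Hypothetical sketch: one business process stitched together from calls
# to several entity services. In a real deployment each call crosses a
# network boundary, so the process is directly coupled to the
# availability and performance of all three services.

def place_order(customer_service, product_service, order_service,
                customer_id, product_id, quantity):
    customer = customer_service.read(customer_id)   # remote call 1
    product = product_service.read(product_id)      # remote call 2
    order = {
        "customer_id": customer["id"],
        "product_id": product["id"],
        "total": product["price"] * quantity,
    }
    return order_service.create(order)              # remote call 3
```

If any one of those calls fails or slows down, the whole process fails or slows down with it.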

This is a design problem related to how we organise data and behaviour in a system. It’s best summed up by a frequently quoted tweet from a couple of years ago:

https://twitter.com/architectclippy/status/570025079825764352?lang=en

What’s so bad about monoliths anyway?

Monoliths are accused of a number of crimes, most of them involving some combination of the following:

  • It’s hard to maintain any discipline over their internal organisation, so they inevitably degenerate into a tangled mess that is hard to maintain
  • They provide a single point of failure where if a system fails it takes every business process down with it
  • They are hard to scale as it tends to be more difficult to distribute processing between different nodes or spin up more processing units in response to peaks in load
  • Keeping components, frameworks and operating systems up-to-date tends to be more difficult on large systems, so they have a natural tendency towards obsolescence
  • A “one size fits all” approach to problem solving tends to dominate where new features are adapted to the limitations of the existing monolith.

Services in general and microservices in particular are often proposed as the solution to these problems. In many respects they are, if only because you can reduce the scope of the problem. The impact of bad code, service failure, performance bottlenecks and technical obsolescence can be contained behind a single service boundary.

Why autonomy matters

This is only possible if your services are autonomous so these problems do not infect the wider system. The more finely grained your services are, the more they have to collaborate directly to fulfil any basic functionality. The more they collaborate directly, the more exposed each service is to problems of resilience and performance in other services.

This can give rise to a level of coupling between services that undermines many of the supposed benefits of decomposing a monolith. You end up swapping one set of problems for another with similar effects on maintainability, scalability and resilience:

  • Unless you have a mature infrastructure with excellent log collation, monitoring and deployment pipelines then you will struggle to diagnose and fix problems
  • It becomes difficult to make changes to processes as they touch numerous different services that are looked after by different development teams
  • A single service failure can have a cascading effect that brings down numerous different processes
  • The overhead of handling numerous remote requests between services undermines performance by adding unnecessary latency
  • Some services will be involved in almost every process, leading to challenges of load distribution
  • Let’s not get started on how versioning works in a highly distributed and chatty service environment…
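The resilience point above can be illustrated with some back-of-the-envelope arithmetic. Assuming failures are independent (a simplification, but a useful one), a process that synchronously depends on n services each available with probability p only succeeds when all n of them do:

```python
# Rough illustration of how synchronous fan-out erodes availability.
# Assumes independent failures, which understates the real-world problem
# since cascading failures are correlated.

def process_availability(p, n):
    """Availability of a process that needs all n services, each at p."""
    return p ** n

# A process touching ten services at "three nines" each is left with
# roughly 99.0% availability, i.e. nearly ten times the downtime.
print(round(process_availability(0.999, 10), 4))
```

The finer-grained the services, the larger n becomes for any given business process, and the faster this compounds.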

In many ways, entity services can resemble the distributed objects that old-timers like me wrestled with in the late 1990s on platforms like DCOM and CORBA. The endpoints may be smarter, but you still end up spending too much time on out-of-process calls that should be happening within an application boundary.

Neither monoliths nor microservices but… something in between?

Many of the problems associated with entity services can be avoided through sensible design decisions.

Techniques such as Domain-Driven Design can help to identify how data and behaviour can be separated out between service implementations. Direct coupling can be reduced by enforcing event-based collaboration rather than creating a mesh of real-time request/response interactions. A mature infrastructure will support a level of resilience and manageability that service implementations cannot provide on their own.
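The event-based style can be sketched in a few lines. The `EventBus`, event names and handlers below are all hypothetical stand-ins for a real message broker; the point is the direction of the dependency — the publisher has no knowledge of its consumers.

```python
# Minimal sketch of event-based collaboration. In production this would
# be a message broker (e.g. a queue or log), not an in-process object;
# the names here are purely illustrative.

class EventBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_type, handler):
        self._subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers.get(event_type, []):
            handler(payload)


# An ordering service publishes "order_placed"; a shipping service reacts
# in its own time, without the ordering service knowing it exists.
bus = EventBus()
shipments = []
bus.subscribe("order_placed", lambda event: shipments.append(event["order_id"]))
bus.publish("order_placed", {"order_id": 42})
```

Contrast this with the request/response mesh: if the shipping service is down, the ordering service still completes its work, and the event can be replayed later.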

The end result is likely to be a smaller set of larger services that are, first and foremost, autonomous. Processes are owned by specific service implementations rather than being shared out between entity services. Implementations are not so large and unwieldy that they become difficult to change or fall into legacy disrepair. Above all, problems are more easily contained within a single service boundary rather than leaking across an entire domain.