3 June 2018
Layers, onions, hexagons and the folly of application-wide abstractions
There is a well-established set of enterprise patterns that encourages us to think in terms of shared abstractions for data processing, such as domain models and service layers. These tend to give rise to applications organised in layers, where data and behaviour are arranged together according to these abstractions.
There are numerous variations on this theme. It is often described as an onion, presumably in response to the tears that are shed a few years down the line. “Clean” architecture extends this metaphor with a rule stating that dependencies can only point inwards, i.e. from the UI towards a central layer of data entities. Hexagonal architecture prefers a clean separation between the “inside” and “outside” parts of an application.
The central concern of these architectures is the separation of concerns. The architecture should not depend on any specific framework or technology, data processing should be kept separate from the UI, and any significant piece of functionality should be testable in isolation.
This kind of separation is a noble pursuit but establishing a common set of abstractions across an application can be very dangerous. Not only is it difficult to maintain over time, but it tends to give rise to inflexible applications that have serious scalability issues. It plays to the myth that a single universal architectural framework can be devised that will solve every problem.
An inflexible approach
Generic design approaches tend to be optimised for a small subset of the use cases in a system.
For example, establishing a common domain model to encapsulate all the business rules in one place sounds like a convenient way of organising processing logic. Over time, any single implementation will struggle to meet the scale requirements of large-scale data ingestion, tactical reporting and responsive interfaces all at once.
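The tension can be sketched with a hypothetical shared `Order` entity (all names here are invented for illustration): the same class is asked to carry rich per-order behaviour for the UI, a streaming-friendly constructor for bulk ingestion, and flat aggregates for reporting, and it cannot be optimal for all three at once.

```python
# Hypothetical sketch: one shared domain entity pulled in three directions.

class Order:
    def __init__(self, order_id, lines):
        self.order_id = order_id
        self.lines = lines  # list of (sku, quantity, unit_price)

    # A responsive UI wants rich per-order behaviour on an object graph...
    def total(self):
        return sum(qty * price for _, qty, price in self.lines)

    # ...bulk ingestion wants to bypass that and stream raw rows cheaply...
    @staticmethod
    def from_raw_row(row):
        order_id, sku, qty, price = row
        return Order(order_id, [(sku, int(qty), float(price))])

    # ...and reporting wants flat aggregates, not per-object navigation.
    @staticmethod
    def revenue_report(orders):
        return sum(o.total() for o in orders)
```

Each workload would be better served by its own representation: raw rows for ingestion, a pre-aggregated projection for reporting, a view model for the UI.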
Layered applications tend to be based on a very rigid set of common abstractions, e.g. a “controller” must talk to a “service” that must talk to a “repository”. This creates a mock-heavy code base that slows down development by insisting on applying the same solution to every problem.
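A minimal sketch of that rigid chain, with hypothetical names, shows the problem: each layer mostly forwards the call to the one below it, yet testing the top of the stack still forces a mock for every layer underneath.

```python
from unittest.mock import Mock

# Hypothetical three-layer chain: controller -> service -> repository.

class UserRepository:
    """Data access layer."""
    def __init__(self):
        self._users = {}

    def find(self, user_id):
        return self._users.get(user_id)

    def save(self, user_id, name):
        self._users[user_id] = {"id": user_id, "name": name}

class UserService:
    """Service layer: here it adds nothing but indirection."""
    def __init__(self, repository):
        self._repository = repository

    def get_user(self, user_id):
        return self._repository.find(user_id)

class UserController:
    """Controller layer: another pass-through."""
    def __init__(self, service):
        self._service = service

    def handle_get(self, user_id):
        user = self._service.get_user(user_id)
        return user if user else {"error": "not found"}

# Testing the controller "in isolation" means mocking the service,
# even though neither layer contains any real logic worth isolating.
service = Mock()
service.get_user.return_value = {"id": 1, "name": "Ada"}
controller = UserController(service)
assert controller.handle_get(1) == {"id": 1, "name": "Ada"}
```

Multiply this by every feature in the application and most of the test suite ends up verifying that calls are forwarded, not that the business behaves correctly.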
These abstractions tend not to be aligned with the business-orientated features that systems actually implement. Feature implementations get scattered across numerous layers of code, making them hard to understand, slow to implement and difficult to change.
Worse still, it can be difficult to protect and maintain these abstractions over time. Developers tend to implement features in the part of the code base that is most familiar to them. They don’t always understand the nuances of the abstractions that they are supposed to protect. Without rigorous policing you can find feature logic scattered incoherently around various layers as the abstractions are gradually eroded.
We need to solve a different problem now
There’s also something old-fashioned about these layers and system-wide abstractions. Perhaps they belong to a different era, when we were trying to scale client-server applications to thousands of users. Layering a load-balanced presentation tier on top of data processing logic made a lot of sense then, as it helped to make applications more scalable and resilient.
Alas, this is not enough for the demands of modern, high-volume applications. Layers tend to give rise to inefficient processing that is heavily dependent on a centralised database server, where most of the work involves moving and translating data between layers rather than doing useful business processing.
The idea that design can be separated from deployment is a fallacy. Layers say nothing about how processing should be distributed. You need to consider how to handle exponential data growth, service peak loads and provide genuine resilience. None of these concerns are addressed by layered architecture.
How do autonomous services solve the problem?
Breaking an application down into smaller, autonomous service implementations addresses these challenges in two ways.
Firstly, it gives you much greater freedom in the technology and design decisions you make when implementing features. You can adopt a solution that is tailored to the processing and scaling needs of each individual use case.
Secondly, it contains mess. This is one of the more darkly pragmatic benefits of service-based development. If you struggle to maintain the abstractions in any one implementation at least the impact is contained within a relatively small boundary. It does not create an enterprise-scale jumble of code.
It’s worth stressing that these patterns are still useful within individual service implementations. The repository is a great abstraction that separates data processing code from the underlying data access technology. A service layer can provide clarity over the available operations and the way they are co-ordinated.
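Inside a single service boundary, the pattern earns its keep. A hypothetical sketch (the `OrderRepository` names are invented for illustration): the interface hides the storage technology, and an in-memory implementation makes tests trivial, with no mocking framework required.

```python
from abc import ABC, abstractmethod

# Hypothetical repository within one service boundary: callers depend on
# the interface, not on the data access technology behind it.

class OrderRepository(ABC):
    @abstractmethod
    def save(self, order_id, payload):
        ...

    @abstractmethod
    def find(self, order_id):
        ...

# A production implementation might wrap a database driver; for tests,
# an in-memory version is usually enough.
class InMemoryOrderRepository(OrderRepository):
    def __init__(self):
        self._store = {}

    def save(self, order_id, payload):
        self._store[order_id] = payload

    def find(self, order_id):
        return self._store.get(order_id)
```

The difference from the layered approach is scope: this abstraction serves one service’s needs, rather than being imposed uniformly across the whole application.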
The point is that you should have the freedom to pick and choose your abstractions, frameworks and technologies to suit each use case. A “clean” architecture based on layers tends to preclude this.