3 October 2016
What’s so bad about monoliths anyway…?!
Advocates of microservices often use a direct comparison with monolithic architectures to illustrate the benefits. This is a false dichotomy as there are many ways of designing a system to provide greater resilience, easier scaling and flexibility of implementation.
In this context “monolith” is often used as a pejorative term. This is unfair, as a well-written monolithic “core” can provide a perfectly scalable architecture. It doesn’t have to wind up as a big ball of mud that is impossible to scale or maintain.
Don’t get me wrong – I am an advocate of decomposing functionality into autonomous services. My reservation is that you need to have a lot of prerequisites in place before you can start leveraging microservices. For starters you need a mature deployment pipeline, rapid provisioning and reliable monitoring to be in place and that’s before you’ve even considered architectural issues around resilience, scale, traffic routing and data persistence.
In many cases it can be more effective to choose other, less glamorous solutions to your problems.
When you design a monolith you still have to consider how responsibilities are allocated between modules. Code has to be cohesive in that related functionality is grouped together. The difference with microservices is that they apply this approach to independent, autonomous services.
This can have numerous advantages. You can scale aspects of the system differently and adopt technology solutions that are highly specialised for a specific task. It’s easy to make changes as a single service can be refactored or replaced more easily than a monolith.
However, once you start to distribute these responsibilities extra overhead comes into play, mainly related to the fallacies of distributed computing i.e. failed communications, network latency, security implications, and so on. In this sense, microservices don’t eliminate the complexity inherent in systems, they merely distribute it.
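To make that overhead concrete, here's a minimal sketch of what a formerly in-process call starts to look like once it crosses a network boundary. The service, the flaky client and all the names are invented for illustration; the point is simply that the caller must now budget for latency and failure with timeouts, bounded retries and backoff:

```python
import time


class TransientNetworkError(Exception):
    """Stands in for a timeout or dropped connection."""


def make_flaky_client(failures: int):
    # Hypothetical stand-in for a remote service that fails
    # `failures` times before answering, as real networks do.
    state = {"calls": 0}

    def call(user_id: int) -> dict:
        state["calls"] += 1
        if state["calls"] <= failures:
            raise TransientNetworkError
        return {"id": user_id, "name": "example"}

    return call


def fetch_profile(client, user_id: int, retries: int = 3,
                  base_delay: float = 0.01) -> dict:
    # In a monolith this would be a plain function call. Across a
    # network the caller needs bounded retries with exponential
    # backoff -- and still has to handle eventual failure.
    for attempt in range(retries):
        try:
            return client(user_id)
        except TransientNetworkError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

None of this logic adds any business value; it exists only to cope with the distribution itself.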
Any architecture decision involves a trade-off and there is no “free lunch” with microservices. They come with significant operational overheads that require relatively sophisticated infrastructure automation and monitoring. Interfaces between collaborating components need to be managed, whether you do so explicitly or not. You may have to accept a higher level of duplication as your system boundaries evolve.
The microservice cargo cult
For early adopters such as Netflix, microservices emerged from their specific development cultures. A heavily decomposed architecture offered relatively young and innovative businesses a means to encourage autonomy and facilitate faster delivery.
Now that adoption is beginning to accelerate, the term has spread through the industry to represent something desirable, modern and agile. Many of the companies adopting microservices now are trying to impose them on very different environments. There are shades of cargo cultism here as companies start aping the organisational and architectural styles of the likes of Spotify in the hope of fixing their legacy technology problems.
Even vendors of large middleware platforms now have a microservice story as part of their sales literature. It’s difficult to tell what the likes of MuleSoft and WebSphere have to offer a microservice ecosystem apart from centralisation, but they are claiming microservice enablement regardless.
There is a danger of latching onto “microservices” without necessarily understanding what they mean. Everybody wants to be seen as innovative and encourage teams to think for themselves. This doesn’t necessarily mean you have to break apart applications into small services.
Microservices are not the only solution
As with any development, it’s important to focus on what business outcomes you are actually trying to achieve. For example, if you want to provide safer and more rapid changes to systems then you may be better off investing in continuous delivery rather than racing to decompose a monolithic platform.
Phil Calcado’s account of the move to microservices at SoundCloud illustrates this. After analysing the process of taking ideas into production, they found that features tended to spend a couple of months in purgatory, bouncing back and forth between different development teams.
Techniques such as continuous deployment helped to speed things up, as did pairing developers from different teams to facilitate more direct collaboration. Eventually they reached a point where the practical limitations of their monolithic architecture stood in the way of any further reduction in time to value.
What’s interesting about this case study is that a) it was driven by genuine business need and b) the problem could be partially addressed by organisational and infrastructure solutions rather than radical architectural change.
Scaling a monolith
Scalability, resilience and agility can be achieved by monolithic architectures too.
A “cookie cutter” approach to scaling can be a simpler way of dealing with load in a system without all the complexity of distribution. High-volume services such as Etsy and Flickr have had success with architectures based on horizontally-scaled monoliths.
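The essence of cookie-cutter scaling is that every instance is an identical copy of the whole application, so a load balancer can spread requests without knowing anything about what the application does. A minimal sketch of that routing idea, with invented hostnames:

```python
from itertools import cycle

# Identical copies of the monolith ("cookie cutter" scaling).
# Because any instance can serve any request, the balancer needs
# no knowledge of the application at all. Hostnames are invented.
INSTANCES = ["app-1:8080", "app-2:8080", "app-3:8080"]
_rotation = cycle(INSTANCES)


def route(request_path: str) -> str:
    """Pick the next identical instance, round-robin."""
    return next(_rotation)
```

Adding capacity means stamping out another copy and appending it to the list; no service boundaries, contracts or network hops inside the application are needed.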
These organisations also put considerable faith in continuous deployment where small change sets are being pushed out all the time. This kind of agile, devops-orientated approach is generally associated with microservices but it is by no means exclusive to them.
Many of the problems associated with monoliths have more to do with the kind of bad design and inappropriate process that can also afflict microservices. If you’re not careful, microservices can serve as a “Trojan horse” for the re-emergence of a monolith, where problems of agility and scale are exacerbated by the fog of distribution. As Simon Brown memorably pointed out:
If you can’t build a monolith, what makes you think microservices are the answer?
A “monolith-first” strategy?
The hype surrounding microservices can be unhelpful as no pattern is a panacea. Caution is the better part of valour here as microservice architectures can only flourish in a particular kind of environment.
You need strong, confident agile teams with the right blend of development skill and operational awareness. You need management that is confident enough to allow a decentralised approach to decision making. You also need a fair grasp of the domain you are working in so that you can decompose services sensibly.
Even with this in place there is the danger of premature decomposition to consider. In the rush to create small and focused services you may decompose a system before you fully understand it. This can give rise to services that are not properly autonomous or loosely coupled.
It could be that a carefully-built monolith is the best way to start a system. If close attention is paid to modular design then specialised services can gradually be refactored out as the boundaries between them become clearer. After all, it’s always easier to decompose a monolith than it is to make sense of a tangled mess of prematurely optimised services.
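What "close attention to modular design" can look like in practice is a narrow seam inside the monolith: callers depend on an interface rather than an implementation, so the module behind it can later be extracted into a service without touching them. A minimal sketch, with invented module names:

```python
from abc import ABC, abstractmethod


class BillingService(ABC):
    # A deliberately narrow seam inside the monolith. The name and
    # methods are invented for illustration.
    @abstractmethod
    def charge(self, customer_id: str, pence: int) -> str:
        """Charge the customer and return a receipt reference."""


class InProcessBilling(BillingService):
    # Today: an ordinary in-process implementation.
    def __init__(self):
        self.ledger = []

    def charge(self, customer_id: str, pence: int) -> str:
        self.ledger.append((customer_id, pence))
        return f"receipt-{len(self.ledger)}"


def checkout(billing: BillingService, customer_id: str) -> str:
    # Callers see only the interface, never the implementation.
    return billing.charge(customer_id, 1999)
```

If the boundary proves itself, a hypothetical `RemoteBilling` implementing the same interface could call a separate service, and code like `checkout` would never change; if it doesn't, nothing has been distributed prematurely.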