How to decompose that monolith into microservices. Gently does it…

Microservices may sound like an obvious solution for the problems that typically bedevil legacy monoliths. After all, who doesn’t want to reduce the cost of change while improving resilience and scalability?

The catch is that decomposition is a slow and complex process. It can be difficult to know where to begin. Microservices can help to simplify change over the long term, but they don't necessarily make change simple. You are at risk of losing momentum and getting stuck in a brand new, distributed quagmire.

Making peace with your monolith

You are unlikely to decouple a large and long-lived monolithic system in its entirety. These systems are often based on a Gordian knot of database tables, stored procedures and spaghetti code that has been built up over years. This is not going to be re-organised very quickly, particularly when there is ongoing demand for new features and support incidents to deal with.

You are rarely given the opportunity to focus on transitioning an architecture to the exclusion of all else. You may have to get used to the idea that decomposing a monolith is a direction of travel rather than a clear destination. Your monolith is unlikely to ever disappear entirely. What you are really doing here is reducing the size of the problem, providing a means of delivering new features and solving old problems as you go.

It's worth bearing in mind that there are other ways to tackle the problems of a monolith. You can take a horizontal approach by extracting concerns such as public-facing APIs or reporting into separate implementations. You can make a system more malleable by focusing on build, test and deployment automation, no matter how dire the legacy stack. Investing in the internal organisation of monolithic code can also pay dividends without the overhead of a microservice infrastructure. Microservices are not always the most sensible place to start.

Be aware of the pre-requisites

If you want to establish an ecosystem based on autonomous, collaborating services, there are a host of technical, practical and cultural factors that need to be considered before you can write any code.

From a technical point of view this means more than just ensuring you have functional deployment pipelines in place. You'll need to make decisions about concerns that include orchestration, instrumentation, service discovery, routing, failure management, load balancing and security. Beyond the technology, you'll also need to consider how you will manage support, cross-team dependencies and governance in a distributed environment.

It’s surprising how quickly you can be overwhelmed by naïve service implementations. It only takes a few services to be in place before problems associated with monitoring and debugging a distributed environment can become unmanageable.
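As a small taste of the plumbing involved, the sketch below shows one way to propagate a correlation ID between services so that a single request can be traced across logs. It assumes a Node.js runtime; the header name and downstream URL are illustrative rather than prescribed.

```typescript
// Minimal sketch of correlation ID propagation between services (assumes Node.js 18+).
// The header name and the downstream URL are illustrative only.
import { createServer, IncomingMessage, ServerResponse } from "node:http";
import { randomUUID } from "node:crypto";

const CORRELATION_HEADER = "x-correlation-id";

async function handle(req: IncomingMessage, res: ServerResponse): Promise<void> {
  // Reuse the caller's correlation ID, or mint one at the edge of the system.
  const correlationId =
    (req.headers[CORRELATION_HEADER] as string | undefined) ?? randomUUID();

  console.log(`[${correlationId}] ${req.method} ${req.url}`);

  // Forward the same ID on any outbound call so downstream logs can be joined up.
  const downstream = await fetch("http://pricing.internal/quotes", {
    headers: { [CORRELATION_HEADER]: correlationId },
  });

  res.setHeader(CORRELATION_HEADER, correlationId);
  res.end(JSON.stringify(await downstream.json()));
}

createServer((req, res) => {
  handle(req, res).catch((err) => {
    console.error(err);
    res.statusCode = 500;
    res.end();
  });
}).listen(8080);
```

Without something like this in place from the start, piecing together what happened to a single request across several services quickly becomes guesswork.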

Starting small and safe

You should use the first few service implementations to validate your infrastructure. This is an opportunity to check that you are equipped to deliver services at pace, monitor them in production and deal with problems efficiently. It’s also a chance to negotiate any learning curves and get engineers used to any new ideas or technologies.

With this in mind you should avoid jumping into the most difficult logic to begin with. The first few services should be the easiest. They will be small, their coupling to the rest of the system should be minimal and they may not even store any data. It doesn’t matter if they are scarcely used features, as the point is to establish a beachhead and get people used to delivering services.
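To make that concrete, here is a sketch of the kind of first service this suggests: stateless, trivially small and free of persistence. The VAT calculation feature, the rate and the port are purely illustrative.

```typescript
// Sketch of a deliberately small first service: stateless, no database, minimal coupling.
// The VAT calculation capability is invented purely for illustration.
import { createServer } from "node:http";

const VAT_RATE = 0.2; // Hard-coded for the sketch; a real rate would come from configuration.

createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");

  if (req.method === "GET" && url.pathname === "/vat") {
    const raw = url.searchParams.get("net");
    const net = raw === null ? NaN : Number(raw);

    if (Number.isNaN(net)) {
      res.statusCode = 400;
      res.end(JSON.stringify({ error: "net must be a number" }));
      return;
    }

    res.setHeader("content-type", "application/json");
    res.end(JSON.stringify({ net, vat: net * VAT_RATE, gross: net * (1 + VAT_RATE) }));
    return;
  }

  res.statusCode = 404;
  res.end();
}).listen(8081);
```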

Note that you can't put off the more difficult services forever. The disadvantage of starting with easier areas is that you aren't really delivering meaningful value. You cannot nibble away at the fringes of a problem forever in the hope that the core problem will somehow open itself up.

Managing dependencies with the monolith

You want to slowly pick off areas of functionality and reduce the size of the core application. This won’t happen if your new services have any lingering dependencies back to the monolith.

You should establish some hard and fast rules for the kinds of dependencies that you are prepared to support. If you really cannot avoid referring back to a feature in the monolith, do it through a service façade. This can provide an architectural placeholder for a future service implementation or at the very least act as an anti-corruption layer.
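A rough sketch of what such a façade might look like is shown below; the legacy endpoint and field names are invented for illustration. The point is that the façade is the only place that knows about the monolith's model, and it can later be replaced by a genuine service without disturbing its callers.

```typescript
// Sketch of a service façade acting as an anti-corruption layer in front of the monolith.
// The legacy endpoint, field names and types are illustrative assumptions.

// The model that new services want to work with.
interface Customer {
  id: string;
  displayName: string;
  email: string;
}

// The capability new services depend on, expressed in their own terms.
interface CustomerDirectory {
  findCustomer(id: string): Promise<Customer | undefined>;
}

// The legacy shape as the monolith happens to expose it.
interface LegacyCustomerRecord {
  CUST_ID: number;
  CUST_NM: string;
  EMAIL_ADDR: string | null;
}

// Façade: the only place that knows how to talk to the monolith. It can be swapped
// for a real customer service later without touching its callers.
class MonolithCustomerDirectory implements CustomerDirectory {
  constructor(private readonly baseUrl: string) {}

  async findCustomer(id: string): Promise<Customer | undefined> {
    const response = await fetch(`${this.baseUrl}/legacy/customers/${id}`);
    if (response.status === 404) return undefined;

    const record = (await response.json()) as LegacyCustomerRecord;

    // Translate the legacy model so its quirks do not leak into new services.
    return {
      id: String(record.CUST_ID),
      displayName: record.CUST_NM.trim(),
      email: record.EMAIL_ADDR ?? "",
    };
  }
}
```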

Splitting up the data

Many monolithic systems have a gigantic shared database at their core. I have seen systems that contain hundreds of tables and thousands of stored procedures, not to mention the triggers and functions that help to bind them all together. The application database is generally the single biggest part of any decomposition challenge.

Getting round the problem by developing services on top of this shared database will not give rise to the resilience and flexibility that you would expect. You’re just re-arranging processing into modules rather than decomposing it. It's not even a valid interim solution while you ease into decoupling. Everything is still coupled together via the monolithic data store in the middle of your application.

You have to bite the bullet from the start and decompose the data. Services should be responsible for persisting and managing their own data, even if this requires a painful and drawn-out migration process.
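The sketch below illustrates what data ownership means in practice, using an invented invoicing service: its store is private, other services and the monolith see invoices only through its API, and the in-memory repository is just a stand-in for whatever database the service would actually own.

```typescript
// Sketch of data ownership: the invoicing service persists to its own store and exposes
// data only through its API, never via shared tables. All names here are illustrative.

interface Invoice {
  id: string;
  customerId: string;
  totalPence: number;
  raisedAt: Date;
}

// Repository owned entirely by the invoicing service; neither the monolith nor any
// other service reads or writes this store directly.
interface InvoiceRepository {
  save(invoice: Invoice): Promise<void>;
  findByCustomer(customerId: string): Promise<Invoice[]>;
}

// In-memory stand-in for the service's private database, to keep the sketch runnable.
class InMemoryInvoiceRepository implements InvoiceRepository {
  private readonly invoices = new Map<string, Invoice>();

  async save(invoice: Invoice): Promise<void> {
    this.invoices.set(invoice.id, invoice);
  }

  async findByCustomer(customerId: string): Promise<Invoice[]> {
    return [...this.invoices.values()].filter((i) => i.customerId === customerId);
  }
}
```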

Take a tactical approach

Bear in mind that extracting capabilities from a monolith is difficult. It’s time-consuming, complex work that is laden with risk. It can also be difficult to sell to commercial stakeholders who tend not to be sympathetic to work that does not yield any visible functional improvements.

You can overcome this to an extent by taking a tactical approach and wrapping decomposition into the delivery of new features. The order in which you decompose services can largely reflect the functional roadmap agreed with stakeholders. If you maintain a strategic view of how your service landscape should play out then you will be in a better position to seize decomposition opportunities when the roadmap allows.

Understand the domain

You will need a strategic plan that maps out the services you want to build. This will always be a work in progress, but taking an overly tactical and reactive approach will give rise to an incoherent set of services. You need to invest time in understanding your overall problem domain. Some areas will present themselves as obvious candidates for immediate decomposition, while others will need more time to come into focus.

Adopting a common understanding of the domain is essential, preferably one that is allowed to evolve and mature. Some of the more strategic concepts in Domain Driven Design (DDD) can be useful here, particularly an understanding of sub-domains and bounded contexts. These provide a mechanism for identifying your service boundaries and building a map of how decomposition could play out.
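The sketch below hints at what a bounded context buys you, using invented Billing and Shipping contexts: the same real-world customer is modelled differently in each, and the only thing the two contexts agree on is an identifier used to correlate them.

```typescript
// Sketch of bounded contexts: the same real-world "customer" is modelled differently in
// two contexts, and neither model is shared. Context names and fields are illustrative.

// In the Billing context, a customer is mostly a payment relationship.
namespace Billing {
  export interface Customer {
    accountNumber: string;
    paymentTerms: "immediate" | "net30";
    outstandingBalancePence: number;
  }
}

// In the Shipping context, the same person is an address and delivery preferences.
namespace Shipping {
  export interface Customer {
    customerId: string;
    deliveryAddress: string;
    leaveWithNeighbour: boolean;
  }
}

// Each context (and eventually each service) evolves its own model independently;
// the only agreement between them is the identifier used to correlate the two views.
const billingView: Billing.Customer = {
  accountNumber: "AC-1001",
  paymentTerms: "net30",
  outstandingBalancePence: 12500,
};

const shippingView: Shipping.Customer = {
  customerId: "AC-1001",
  deliveryAddress: "1 Example Street",
  leaveWithNeighbour: true,
};
```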

Another advantage of DDD is that it can force you to consider decomposition in terms of data and behaviour rather than code. It’s the capabilities that you are trying to extract and implement here, not the old code. This can help to protect you from “lift and shift” style service implementations where old mistakes are ported directly over to new implementations.

Big can be beautiful, particularly to start with

The DDD concept of bounded contexts is a good tool for considering service boundaries. It also tends to imply that services are going to be reasonably chunky. After all, they describe clusters of data and behaviour that have an internal consistency to them.

The phrase “microservice” can be unhelpful as it implies that service implementations should be very small. The misguided idea that a service should reflect the single responsibility principle has given rise to a lot of anaemic entity services that offer little more than basic CRUD functions.

The single most important feature of a service is autonomy. A service should be completely independent in terms of release and execution. Too many small services tend to undermine this by creating a system that looks more like a set of distributed data objects. If you’re not sure where to draw a service boundary then make it big. You can always decompose it later.
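The contrast is easier to see in code. The sketch below sets an anaemic, CRUD-style entity service against a capability-oriented one; both interfaces and their operations are invented for illustration.

```typescript
// Sketch contrasting an anaemic entity service with a capability-oriented one.
// Interface names and operations are illustrative assumptions.

// Anaemic: little more than remote CRUD over an "order" record. Every caller has to
// know the rules for manipulating it, and the service has no behaviour of its own.
interface OrderEntityService {
  create(order: { id: string; lines: unknown[] }): Promise<void>;
  read(id: string): Promise<unknown>;
  update(id: string, patch: unknown): Promise<void>;
  delete(id: string): Promise<void>;
}

// Capability-oriented: the service owns the behaviour and invariants of ordering,
// and callers express intent rather than poking at data.
interface OrderingService {
  placeOrder(customerId: string, lines: { sku: string; quantity: number }[]): Promise<string>;
  cancelOrder(orderId: string, reason: string): Promise<void>;
  applyDiscountCode(orderId: string, code: string): Promise<void>;
}
```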

Avoiding the feature parity trap

Service-based development should allow for incremental delivery where you can gradually migrate customers from the old to the new. This doesn't always happen in practice and new services often end up running side-by-side with legacy platforms. Newer customers use the new service, while legacy customers somehow never quite make it off the monolith.
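One way to keep migration genuinely incremental is a routing layer that sits in front of both implementations and moves a growing subset of customers onto the new service, in the style of a strangler fig. The sketch below is illustrative only: the header used to identify customers, the hostnames and the allow-list rule are all assumptions.

```typescript
// Sketch of incremental migration: a routing layer sends a growing subset of customers
// to the new service while everyone else stays on the monolith. Hostnames, the customer
// header and the allow-list rule are illustrative assumptions.
import { createServer, request } from "node:http";

const MONOLITH = { host: "legacy.internal", port: 80 };
const NEW_SERVICE = { host: "orders.internal", port: 8080 };

// Start with an explicit allow-list of migrated customers; widen it as confidence grows.
const migratedCustomers = new Set(["cust-42", "cust-77"]);

createServer((req, res) => {
  const customerId = (req.headers["x-customer-id"] as string | undefined) ?? "";
  const target = migratedCustomers.has(customerId) ? NEW_SERVICE : MONOLITH;

  // Proxy the request to whichever implementation owns this customer today.
  const upstream = request(
    { ...target, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  upstream.on("error", () => {
    res.statusCode = 502;
    res.end();
  });
  req.pipe(upstream);
}).listen(8000);
```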

A common problem with legacy migration is the perception that customers cannot be migrated until feature parity has been achieved between old and new. This is a trap. Legacy systems are often stuffed full of rarely-used edge-case features that will never be realistically migrated. To wait upon feature parity is to guarantee that your legacy platforms will remain in service indefinitely.

You will need a clear migration strategy to accompany any decomposition strategy. This is hard to achieve as it takes a brave business to migrate customers to a platform that appears to deliver less. Your customers may be unmoved by talk of loosely-coupled architecture and may even take the opportunity to look at competitors.