24 May 2019

Architecture patterns to support incremental change

People tend to see architectural change as one big transactional effort, so any mistakes that are made tend to be huge ones. It is better to focus on delivering smaller changes that demonstrate incremental value, but it’s not always obvious how to do this.

A good architectural roadmap should describe a direction of travel. An ideal end-state is all very well, but like all predictions it will probably be wrong. You’ll need to change course along the way as you come up against evolving requirements, technical challenges and commercial reality.

Development teams that live and breathe legacy code often struggle to see any options beyond a “big bang” rewrite. There are patterns that can help them to find a way forwards and take a more pragmatic approach to freeing themselves from legacy obligations.

Decomposition

Organising an application into smaller, autonomous modules lowers the cost of change by reducing complexity and preventing unexpected side effects. One way of realising this is to break chunks off larger applications and spin them out as autonomous microservices.

Microservices are no panacea. There are a whole host of technical, cultural and operational pre-requisites that need to be in place before you can make a decent fist of them. It’s surprising how quickly you can become overwhelmed with only a few services unless they are supported by comprehensive monitoring and pipeline automation.

Microservice decomposition can also be undermined by the Gordian knot of complexity that is often found at the heart of a legacy system. It’s not realistic to imagine that you can decompose a complex database with hundreds of tables or pull apart thousands of cascading stored procedures.

Starting with the easier, outlying services can help to demonstrate progress and road test infrastructure, but this doesn’t really address the underlying problem. Unless you have a clear sense of how you will attack the dark heart of the monolith, you can be left with a few inconsequential services orbiting a legacy stack.

Refactoring

You don’t have to build a suite of microservices to reduce the cost of change in an application. Distributed systems are hard to build and are best suited to mature development operations that need to address genuine scaling challenges.

A gradual yet determined process of refactoring can also make a code base more manageable over time. In its strictest sense, refactoring means making improvements to a code base without changing its external behaviour. It involves a large number of small changes that have the cumulative effect of making the code base easier to test.
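To make that concrete, here is a minimal sketch of a behaviour-preserving refactoring (the domain types and values are invented for illustration): an “extract method” step that leaves the pricing result unchanged while naming the discount rule that was buried in the totalling logic.

```java
import java.util.List;

// Hypothetical domain types, just enough to make the sketch compile.
record Line(double price) {}
record Customer(boolean tradeAccount) {}
record Order(Customer customer, List<Line> lines) {}

class PricingBefore {
    // Before: the discount rule is buried inside the totalling logic.
    double totalFor(Order order) {
        double total = order.lines().stream().mapToDouble(Line::price).sum();
        if (order.customer().tradeAccount() && total > 500.0) {
            total *= 0.95;
        }
        return total;
    }
}

class PricingAfter {
    // After: the same external behaviour, with each rule named and testable.
    double totalFor(Order order) {
        return applyTradeDiscount(order, grossTotal(order));
    }

    private double grossTotal(Order order) {
        return order.lines().stream().mapToDouble(Line::price).sum();
    }

    private double applyTradeDiscount(Order order, double total) {
        boolean qualifies = order.customer().tradeAccount() && total > 500.0;
        return qualifies ? total * 0.95 : total;
    }
}
```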

Refactoring does depend on being able to write tests that baseline current behaviour, so that changes can be made safely. This is often easier said than done with legacy systems. Systems based on older procedural code or even database stored procedures tend to resist the kind of dependency isolation you need to create meaningful tests. In this case, your options for safe refactoring tend to be quite limited.
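Where the code can be exercised at all, a characterisation test (sometimes called a “golden master”) can pin that baseline down before anything is changed. A rough sketch, assuming JUnit 5, with a stand-in for the legacy class under test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Stand-in for the real legacy class whose rules nobody fully remembers.
class LegacyTaxCalculator {
    double taxFor(double net, String region) {
        if ("UK".equals(region)) return net * 0.20;
        if ("US-TX".equals(region)) return net * 0.0825;
        return 0.0;
    }
}

class LegacyTaxCalculatorCharacterisationTest {

    // We don't assert what the output *should* be, only what it *is* today.
    // The expected values were captured by running the legacy code once.
    @Test
    void baselinesCurrentBehaviour() {
        LegacyTaxCalculator calculator = new LegacyTaxCalculator();
        assertEquals(20.00, calculator.taxFor(100.00, "UK"), 0.001);
        assertEquals(8.25, calculator.taxFor(100.00, "US-TX"), 0.001);
        assertEquals(0.00, calculator.taxFor(100.00, "JE"), 0.001);
    }
}
```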

Modularisation

There’s still much that can be achieved even if unit tests are impractical. A more pragmatic starting point can be to gradually re-organise code to improve separation. You can enforce interfaces or create shared libraries that define clearer responsibilities. You can establish continuous delivery to promote rapid change. Detailed logging and instrumentation can make diagnosis and recovery easier.
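For example, a boundary interface can be introduced so the rest of the system depends on a contract rather than on a concrete legacy class. A sketch, with invented names:

```java
// The boundary the rest of the system is allowed to depend on.
interface InvoiceGateway {
    void raiseInvoice(String accountId, double amount);
}

// The legacy implementation hides behind the interface; callers no
// longer reach into its internals directly.
class LegacyInvoiceGateway implements InvoiceGateway {
    @Override
    public void raiseInvoice(String accountId, double amount) {
        // ...would call into the existing legacy billing code...
        System.out.printf("Raising invoice for %s: %.2f%n", accountId, amount);
    }
}

// New or refactored code is written against the interface, so the
// implementation can later be swapped for a service without changing callers.
class OrderProcessor {
    private final InvoiceGateway invoices;

    OrderProcessor(InvoiceGateway invoices) {
        this.invoices = invoices;
    }

    void complete(String accountId, double amount) {
        invoices.raiseInvoice(accountId, amount);
    }
}
```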

You don’t need to abandon a monolith until challenges of velocity and scale render it untenable. In the meantime, refactoring can be regarded as a staging post that allows you to deliver change while preparing the way for a more decomposed approach.

Strangler applications

Another approach to decomposition involves building a new system slowly around the edges of the old. Eventually the new system can take over specific features until it strangles the older system. This has been likened to the behaviour of strangler vines that grow around a tree, take root and eventually kill their host.
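In practice the “strangling” tends to happen at a routing layer that sends migrated features to the new system and everything else to the legacy application. A minimal sketch, with illustrative paths and hosts:

```java
import java.net.URI;

// Routes each request to the new system if its feature has been migrated,
// otherwise falls through to the legacy application.
class StranglerRouter {
    private static final URI NEW_SYSTEM = URI.create("https://new.example.internal");
    private static final URI LEGACY_SYSTEM = URI.create("https://legacy.example.internal");

    URI resolve(String path) {
        // Grow this list feature by feature as the new system takes over.
        if (path.startsWith("/accounts") || path.startsWith("/search")) {
            return NEW_SYSTEM.resolve(path);
        }
        return LEGACY_SYSTEM.resolve(path);
    }
}
```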

This may sound like a neat metaphor, but the reality will require enormous long-term discipline to pull off. You are more likely to wind up with a partially completed mess. Long-lived code bases often have discernible geologic layers of failed rewrites that never quite mature sufficiently to replace their predecessors.

The autonomous bubble

A more pragmatic solution accepts that the legacy stack cannot be reformed that easily. Eric Evans described creating a “bubble” for new development that can sit alongside a legacy platform without being directly dependent on it. An anti-corruption layer separates the new development so that it is not confined by the legacy code or technology.

In practical terms this could mean using a message bus to regulate communication between the legacy world and the new world of the “autonomous bubble”. This approach doesn’t do anything to address the problems of legacy platforms, but it does at least allow new development to take place quickly and safely.
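A sketch of what that anti-corruption layer might look like (the message format and domain types are invented for illustration): it is the only code that understands the legacy payloads arriving on the bus, translating them into the bubble’s own model so nothing inside the bubble ever sees legacy structures.

```java
import java.util.Map;

// The bubble's own, clean domain model.
record CustomerMoved(String customerId, String newPostcode) {}

// The anti-corruption layer: the only place that understands the
// legacy message format arriving on the bus.
class LegacyMessageTranslator {
    // Legacy messages arrive as loosely typed key/value payloads.
    CustomerMoved translate(Map<String, String> legacyPayload) {
        return new CustomerMoved(
            legacyPayload.get("CUST_NO"),
            legacyPayload.get("ADDR_PCODE"));
    }
}
```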

Sacrificial architecture

Sacrificial architecture implies that you can design solutions with the assumption that they will be replaced if they are successful. An initial implementation can focus on delivering value early and be re-worked once the system is better understood.

This is a genuinely incremental approach that can free teams from the burden of having to build enduring replacements for legacy systems. It’s also worked for many of the larger companies out there, as the likes of eBay, Twitter and Amazon have all been through many iterations of their architectures as they have grown.

The catch here is that teams may not be given the time and space to redevelop solutions. Good intentions can often melt away under commercial pressure. Minimum viable products and proofs of concept have a nasty habit of turning into troubled production systems.

Using fitness functions to guide change

How do you know that you’re making progress? An evolutionary approach to architecture suggests that you can assess technical solutions using objective and repeatable tests. This borrows from the idea of “fitness functions” in genetic programming, where they are used to assess how closely a solution meets a set of aims.
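An architectural rule expressed as a repeatable test is one concrete form of fitness function. The sketch below uses the ArchUnit library (one possible tool, not something the pattern prescribes) to fail the build whenever domain code acquires a dependency on the web layer; the package names are illustrative.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class ArchitectureFitnessTest {

    @Test
    void domainDoesNotDependOnWeb() {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example");

        // The fitness function: an objective, repeatable check that fails
        // as soon as the dependency direction is violated.
        ArchRule rule = noClasses().that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..web..");

        rule.check(classes);
    }
}
```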

This approach does risk collapsing into metric-driven development where priorities are distorted by a narrow set of criteria. The challenge is to come up with a set of objective and repeatable tests that measure the “right” things.

Architectural fitness isn’t something that can be measured by directly observing code, though it can be tempting to use technical attributes such as unit test coverage. A more meaningful approach may be to consider derived outcomes such as deployment frequency or the lead time required to create a new feature. These measures of the cost and duration of change are a better guide to the effectiveness of architectural refactoring.
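As a rough illustration of such a derived measure, deployment frequency can be computed from nothing more than a log of deployment timestamps (the data source is hypothetical, and the timestamps are assumed to be in order):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// A crude measure of deployment frequency over a window of deploy timestamps.
class DeploymentFrequency {
    static double perWeek(List<Instant> deployTimes) {
        if (deployTimes.size() < 2) return deployTimes.size();
        Instant first = deployTimes.get(0);
        Instant last = deployTimes.get(deployTimes.size() - 1);
        double weeks = Duration.between(first, last).toDays() / 7.0;
        return weeks == 0 ? deployTimes.size() : deployTimes.size() / weeks;
    }
}
```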

Filed under Architecture, Design patterns, Microservices, Strategy.