22 May 2018
What do we mean when we talk about “legacy” software…?
There’s no universal definition of what constitutes “legacy” software. Some developers seem to regard it as anything that they did not personally write in the last six months. More seasoned executives will flatly deny use of the term even when gazing out across their estate of mainframe servers.
If legacy means anything at all, it's a system that is sufficiently outdated to undermine development velocity. Such a system is inherently unstable and difficult to change safely. Some features cannot be delivered at all, while others become unnecessarily expensive due to the amount of effort required to work around the inadequacies of the application and its environment.
Legacy doesn’t have to involve obsolete technology either. Obsolete architecture can have a similar impact on development velocity no matter what the underlying technology. Long-lived code bases tend to suffer from entropy over time, even if the underlying frameworks are kept up to date. Lifting and shifting a tangled code base to a more recent technology won’t suddenly make it more malleable.
Why does this matter?
Legacy systems don’t collapse overnight. There’s no sudden, insurmountable crisis, more a long, slow decline of development velocity.
A large, long-lived system inevitably loses its shape over time and becomes more difficult to work with. This accumulation of quick fixes, technical debt and evolving complexity is not restricted to older code bases. It’s surprising how quickly a badly-managed code base can sink into disrepair.
Bigger problems caused by technical obsolescence start to creep in if a system is based on platforms that are not under active development. Vendors may claim ongoing support for a platform, but that’s not the same as actively maintaining it. The underlying run-times are not kept up to date in response to evolving security threats or emerging protocols.
This also means that there is no wider ecosystem of components and tooling. The techniques and frameworks that developers take for granted on modern ecosystems are not available. Agile technical practices such as test-driven development or continuous integration are effectively closed off, trapping a platform in more uncertain and manually-intensive delivery.
Added to this are growing problems of recruitment and retention. You might get lucky and find great developers who are prepared to work on legacy systems, but they are few and far between. It becomes very difficult to maintain a stable team of developers who understand the system in any depth.
Dealing with legacy systems. Or not.
Many developers immediately lobby for a re-write when faced with legacy code. In most cases this is an expensive folly: you are left with a new mess in a different technical context, but without the years of hardening that the legacy code has accumulated.
Some systems can be gradually improved over time as code is gradually re-organised into a more manageable structure. Much has been written about decomposing systems into microservice architectures, but this isn’t as simple as it looks.
It’s easy enough to peel off a few outlying services but it’s more difficult to confront the Gordian knot of data and code at a legacy system’s core. In most cases, any decomposition reduces the size of the problem without eliminating it. Comprehensive microservice transitions tend to be based either on small domains or have a legacy core lurking behind a curtain somewhere.
Eric Evans suggested side-stepping legacy code with what he calls the autonomous bubble pattern. A new context is created for new application development, away from all the problems of your legacy stack. Technologies such as event streaming can be used to distribute data and behaviour between the two contexts.
Establishing autonomy between “old” and “new” does require some work but it can quickly establish a path towards a more modernised architecture. That said, this is merely side-stepping the problem rather than confronting it directly.
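The shape of that arrangement can be sketched in a few lines. This is a minimal, illustrative sketch only: the event bus stands in for a real streaming platform such as Kafka, and all names here (`EventBus`, the `legacy.customer_updated` topic, the `CUST_NO`/`CUST_NM` record fields) are hypothetical, not taken from any real system. The point is that the bubble maintains its own model, translated from legacy events at the boundary, so new development never reads legacy data directly.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


class EventBus:
    """A toy in-memory event bus standing in for a streaming platform."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers.get(topic, []):
            handler(event)


@dataclass
class Customer:
    """The bubble's own clean model, independent of legacy record shapes."""
    customer_id: str
    name: str


class BubbleCustomerStore:
    """Builds the bubble's model by translating legacy events at the boundary."""

    def __init__(self, bus: EventBus) -> None:
        self.customers: Dict[str, Customer] = {}
        bus.subscribe("legacy.customer_updated", self._on_legacy_event)

    def _on_legacy_event(self, event: dict) -> None:
        # Translation layer: map the legacy record's shape and quirks
        # into the bubble's own representation.
        self.customers[event["CUST_NO"]] = Customer(
            customer_id=event["CUST_NO"],
            name=event["CUST_NM"].strip().title(),
        )


bus = EventBus()
store = BubbleCustomerStore(bus)

# The legacy system emits an event in its own awkward format...
bus.publish("legacy.customer_updated", {"CUST_NO": "42", "CUST_NM": "  ADA LOVELACE "})

# ...and the bubble only ever sees its own clean model.
print(store.customers["42"].name)  # prints "Ada Lovelace"
```

In a real deployment the translation would run as a consumer on the streaming platform, but the essential property is the same: the legacy system's data model stops at the boundary.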
Legacy code is often an unsolvable problem. In many cases a legacy system remains the most commercially viable way of delivering functionality. The sheer cost of replacement outweighs any cost associated with keeping the wheezing old rig on the road.
In this case, the commercially optimal strategy for a legacy system can be to let it run indefinitely under a form of palliative care. A small team of seasoned developers is kept comfortable with large salaries and generous pension plans. Bugs get fixed, but any new feature development happens elsewhere. This is what a lot of organisations are doing with their legacy systems, even if they won’t admit it.