What we talk about when we talk about “legacy” systems
"Legacy" is often used as a pejorative term for any long-lived code base that a development team finds distasteful to work with. What do we really mean by "legacy", and how should we deal with it?
In many environments, "legacy" systems are dismissed as the problem child that nobody likes to talk about. They are regarded as inconveniences that block the road towards a bright future based on cloud-hosted microservices. The catch is that these systems are often responsible for data and processes that are crucial to the business.
Long-lived systems can have positive aspects. They can be mature, battle-tested code bases that encapsulate many years of accumulated experience. The underlying technologies may be obsolete, but the systems may only require predictable enhancements in response to legislative or process changes. Believe it or not, they can provide a stable and cost-effective means of delivering functionality.
These systems only start to become a problem when they inhibit growth. This is what we really mean by "legacy" systems. They are difficult to change in response to an evolving business or technology environment. They struggle to interact or integrate with other systems. They also make it difficult to leverage the benefits of emerging technologies and architectures.
Drifting towards obsolescence
"Legacy" problems are typically associated with technology obsolescence, though this is not the only underlying cause.
If you define "legacy" as an outdated system that is still in use, then pretty much every component-based system is outdated to some degree. Code bases do not stand still. There is a natural drift towards entropy as new versions of the underlying components become available.
For most systems it takes a lot of ongoing effort to keep the various frameworks and libraries involved completely current. A degree of version drift is perfectly sustainable so long as it does not inhibit change.
However, there is a point at which the upgrade path closes. This can happen when a dependent technology reaches end of life and is no longer supported. Technical obsolescence can also creep up on you so that the amount of work required to update a system eventually becomes overwhelming.
This obsolescence affects more than chunky old mainframes running ancient payroll systems. Fast-moving web development ecosystems are vulnerable to obsolescence, as anybody who invested heavily in AngularJS will tell you. The Microsoft ecosystem has a track record of abandoning strategic technologies such as WCF or Web Forms as attention moves elsewhere. Modernizing a website built on Python 2.x is no picnic. Open source projects can fall into disrepair, and it may not be realistic to maintain your own fork of the code.
More than just aging technology
In this sense "legacy" implies systems that are being maintained with no hope of bringing them up to date. That said, technical obsolescence does not always matter. Outdated systems can still be the most commercially viable way of delivering functionality, especially if it is not economically viable to rewrite them on a more modern stack.
There is more to "legacy" than the technology stack. Given our definition of "legacy" as a system that inhibits growth, there are more subtle forms of obsolescence.
Some systems can be left behind by evolving requirements, even if they run on well-supported technical stacks. Desktop applications cannot always address the challenges presented by mobile devices. As a business grows it may start to place more value on flexibility and integration, leaving behind those systems designed for more self-contained scenarios.
It’s not just functional requirements that tend to evolve over time. Scalability challenges can start to overwhelm systems designed for much smaller workloads. The architecture may require expensive hardware or software infrastructure where more modern technologies might allow more efficient, elastic provision.
Sometimes a system can become just plain old-fashioned. Expectations of user interfaces continue to evolve, so older systems tend to feel clunky and primitive. It can get to the point where usability is so poor that it inhibits the system’s ability to deliver.
There is also the influence of "technical debt" in its true sense, i.e. how code is undermined over time by evolving requirements. This causes a system’s design to become increasingly out of step with the business processes it implements. This affects every system to some degree, and it makes change increasingly expensive to implement.
What to do about "legacy"?
Given that "legacy" systems are often associated with high maintenance costs, the instinct can be to start again. But a rewrite is fraught with difficulty. It will always be an expensive and drawn-out process with no guarantee of success. You can get stuck in a quagmire of unhelpful debates around feature parity. Users may also prefer an evolutionary approach rather than a sudden move to a new platform.
A more realistic approach can be found in the numerous patterns that support gradual change. These include decomposing to microservices, building "strangler" applications and establishing an "autonomous bubble" for new development. All of them involve some element of learning to live with legacy systems, while freeing up development options sufficiently to be able to overcome their limitations.
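To make the "strangler" idea concrete, here is a minimal sketch of the routing facade that sits in front of such a migration. Everything in it is illustrative and assumed for the example (the path prefixes, the `route` function, the "legacy"/"modern" labels); the real decision point might be a reverse proxy, an API gateway, or a load balancer rule rather than application code.

```python
# A minimal "strangler" routing facade (illustrative sketch).
# Requests for paths that have been migrated go to the new system;
# everything else continues to be served by the legacy system.

LEGACY = "legacy"
MODERN = "modern"

# Hypothetical list of path prefixes migrated so far. Growing this list,
# one feature at a time, is what gradually "strangles" the old system.
MIGRATED_PREFIXES = ["/invoices", "/customers"]

def route(path: str) -> str:
    """Return which system should handle the given request path."""
    for prefix in MIGRATED_PREFIXES:
        # Match the prefix exactly or as a parent path segment.
        if path == prefix or path.startswith(prefix + "/"):
            return MODERN
    return LEGACY

# Example: migrated features are served by the new system...
print(route("/invoices/42"))   # -> modern
# ...while everything else stays on the legacy system, untouched.
print(route("/payroll/run"))   # -> legacy
```

The point of the pattern is that the facade makes migration reversible and incremental: each feature moves across when it is ready, and the legacy system keeps doing its job in the meantime.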
In some cases, the most realistic approach may be to maintain a system through "palliative care". You can build a team of developers who are happy to see out the remainder of their careers on obsolete technologies in return for a good, stable salary. Some organisations reach this position as a conscious decision, while others default to it after a failed rewrite. If a system does not inhibit change, then it may be better to actively preserve the accumulated value rather than branding it as "legacy".