Sharing APIs in an organisation: challenges and pitfalls

Sharing APIs within an organisation appeals to the desire to reduce duplication, consolidate effort and improve development efficiency. It’s a worthy ambition, though the journey to shared services can be littered with costly traps for the unwary. Above all, successful sharing requires a genuine acceptance that services should be designed for external consumption from the very start.

Gaining agreement

The first barrier to effective service sharing is gaining consensus over how you're going to share. This can open a can of worms where you get lost in a debate over which standards, protocols and design patterns to adopt. Some people will want to assert a restrictive system of governance over service design in order to guarantee some level of interoperability. Others will prefer a looser regime that gives them more freedom over how they design their own solutions. The one thing you can guarantee is that every architect will take a different approach to this - and be convinced that they are right.

Ownership and accountability

A service has to be nurtured and looked after if it’s going to remain viable over the long term. You wouldn’t use an external commercial service unless there was a clear sense of ownership for the service and accountability for its development roadmap. The same should go for internally shared services. After all, unexpected changes to functionality and interfaces can have a ripple effect that will break any integrated applications.

Whose bug is this?

Ownership is one issue; finding the right team to fix a problem is another. If your escalation process is trying to triage a high-priority incident, you might find yourself bounced between multiple teams before you find somebody who can fix it. If you want to diagnose issues quickly and get an incident in front of the right team, you will need comprehensive monitoring and logging in place.
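
One common way to make that triage tractable is to propagate a correlation ID with every request and stamp it on every log line, so a single incident can be traced across team boundaries. The sketch below shows the idea using only the Python standard library; the logger name and the way the ID is passed in are illustrative assumptions rather than a prescribed design.

```python
# Minimal sketch: tag every log line with a correlation ID so an incident can be
# traced across service boundaries. Names and ID handling are hypothetical.
import logging
import uuid
from contextvars import ContextVar
from typing import Optional

correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    """Copies the current correlation ID onto every log record."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = correlation_id.get()
        return True

logging.basicConfig(format="%(asctime)s [%(correlation_id)s] %(levelname)s %(message)s",
                    level=logging.INFO)
logger = logging.getLogger("orders")
logger.addFilter(CorrelationFilter())

def handle_request(incoming_id: Optional[str] = None) -> None:
    # Reuse the caller's ID if one was passed through, otherwise mint a new one.
    correlation_id.set(incoming_id or uuid.uuid4().hex)
    logger.info("order accepted")            # every log line now carries the same ID
    logger.info("calling payments service")  # so one incident can be traced end to end

handle_request()
```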

Service agreements are a two-way contract

Every service should have a clearly understood SLA, even if it’s only for use by internal teams. This cuts both ways, as there will be expectations on consumers as well as service providers. When you’re sharing services, everybody has the potential to become a DoS attacker. Every service will have a different idea about capacity, so quotas and throttling will need to be in place to stop your services from choking each other off.
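
Quotas and throttling are usually enforced at an API gateway, but the underlying idea is simple enough to sketch. Below is a minimal token-bucket example; the rates, burst sizes and consumer names are purely illustrative assumptions.

```python
# A minimal sketch of per-consumer throttling with a token bucket.
# In practice this would sit in a gateway, not in application code.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller should respond with HTTP 429 and a Retry-After header

# One bucket per consuming team keeps a noisy client from starving everyone else.
quotas = {"billing-app": TokenBucket(50, burst=100),
          "reporting-batch": TokenBucket(5, burst=10)}

if not quotas["reporting-batch"].allow():
    print("429 Too Many Requests")   # tell the consumer to back off rather than failing silently
```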

Does sharing require harmonisation too?

There can be a huge difference between services within an organisation in terms of their approach to API design, implementation technology and supporting infrastructure. This can make integration more cumbersome, so to what extent do you try to insist on a harmonised approach?

For example, if each service takes a different approach to security you may find yourself chaining together numerous credentials and authentication schemes. Even if you achieve consensus on some form of federated single sign-on, it can be difficult to converge on a common implementation. Sharing services is one thing; sharing security infrastructure for entitlement or authentication is something else entirely.
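
To make the credential-chaining problem concrete, here is a hypothetical client-side sketch in which every internal service has settled on a different scheme. The service names, schemes and placeholder tokens are all invented for illustration.

```python
# Illustrative only: what "chaining credentials" looks like from a consumer's
# point of view when each internal service authenticates differently.
def auth_headers(service: str) -> dict:
    if service == "customer-api":          # OAuth2 bearer token from the corporate IdP
        return {"Authorization": "Bearer <access-token>"}
    if service == "legacy-billing":        # static API key agreed by email years ago
        return {"X-Api-Key": "<api-key>"}
    if service == "warehouse-soap":        # HTTP basic auth against a local directory
        return {"Authorization": "Basic <base64-credentials>"}
    raise ValueError(f"no credentials configured for {service}")
```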

One of the attractions of sharing services is that you can share functionality without having to share the code assets or physical infrastructure that comes with it. This may mean that you have to live with some inconsistencies over implementation.

Enabling service discovery

Sharing a few services is relatively easy, as everybody knows what they do and who is looking after them. It can be surprising how quickly you lose track of services, though, particularly when they are under active development with new versions being published regularly. Ultimately you’ll need a mechanism for registering, documenting and discovering services, otherwise developers will never know what’s available to use. Perhaps a service could be used to help discover and describe services?
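
A registry does not need to be elaborate to be useful. The sketch below shows the sort of record a registry might hold and how discovery could work; the fields, example entry and URLs are assumptions rather than a prescribed schema.

```python
# A minimal sketch of a service registry that keeps shared services discoverable.
from dataclasses import dataclass, field

@dataclass
class ServiceRecord:
    name: str
    version: str
    base_url: str
    owner_team: str
    docs_url: str
    tags: list = field(default_factory=list)

registry: dict[str, ServiceRecord] = {}

def register(record: ServiceRecord) -> None:
    registry[f"{record.name}:{record.version}"] = record

def discover(tag: str) -> list:
    return [r for r in registry.values() if tag in r.tags]

register(ServiceRecord("customer-api", "2.1.0", "https://internal.example/customers",
                       owner_team="crm-platform",
                       docs_url="https://wiki.example/customer-api",
                       tags=["customer", "rest"]))
print([r.name for r in discover("customer")])
```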

Sand boxes not black boxes

A remote service presents a number of challenges for debugging. You can’t expect developers to integrate against a live service, so you will be expected to provide them with a sandbox environment of some kind. Diagnostic and debugging tools also help, rather than oblique error messages that leave consumers guessing about what’s gone wrong.
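
One inexpensive improvement is returning structured, self-describing errors instead of oblique messages. The shape below is only an illustration; the field names and documentation URL are assumptions, not a standard.

```python
# Illustrative shape for an error payload that lets consumers diagnose problems
# themselves instead of guessing what went wrong.
def error_response(status: int, code: str, detail: str, correlation_id: str) -> dict:
    return {
        "status": status,
        "code": code,                       # stable, machine-readable identifier
        "detail": detail,                   # human-readable explanation
        "correlation_id": correlation_id,   # an ID the consumer can quote in a support ticket
        "docs": f"https://wiki.example/errors/{code}",
    }

# Far more useful than a bare 500:
print(error_response(422, "ORDER_LINE_INVALID",
                     "quantity must be a positive integer", "b6f2c9d1"))
```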

Eat your own dog food

It’s not good enough just to expect other people to use your services. If you’re serious about building a platform from shared services then you should be consuming those services yourself, not draining value by developing your own implementations of something that already exists.

Designing services for external access

If you wouldn’t publish your service externally then you can’t expect it to be adopted internally. Every service should be designed for external publishing from the ground up. This is an important discipline, as developing for external consumption forces you into composing more accessible and usable APIs. You are more likely to have a clear roadmap for the service and consider backwards compatibility when releasing new versions. You may even start to think more in terms of an extensible platform rather than a bunch of unrelated services.
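
Backwards compatibility is easier to preserve if new versions only ever add to the published contract. The sketch below illustrates additive versioning with a hypothetical customer resource; the fields, values and the lookup being elided are all assumptions made for the example.

```python
# A sketch of additive versioning: v2 only adds fields, so v1 consumers keep
# working unchanged. The resource and fields are invented for illustration.
CUSTOMER = {"id": "c-42", "name": "Acme Ltd", "segment": "enterprise"}

def get_customer_v1(customer_id: str) -> dict:
    # The originally published contract: never remove or rename these fields.
    # (Real lookup by customer_id is elided in this sketch.)
    return {"id": CUSTOMER["id"], "name": CUSTOMER["name"]}

def get_customer_v2(customer_id: str) -> dict:
    # New capability is exposed by adding fields, not by changing existing ones.
    return {**get_customer_v1(customer_id), "segment": CUSTOMER["segment"]}
```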

Accept that you are creating a distributed system

By sharing services you are embracing a loosely coupled, distributed processing model, and you should recognise that it comes with its own problems, many of which were summed up by Peter Deutsch in his eight fallacies of distributed computing. You will be at the mercy of unreliable networks, processing latency, bandwidth limitations, security vulnerabilities, hidden costs and conflicting policies. These can lead to expensive problems and painful learning experiences unless you are prepared for them.
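
Preparing for unreliable networks usually starts with the basics: a hard timeout on every remote call, bounded retries and backoff between attempts. Here is a minimal sketch using only the Python standard library; the endpoint and the retry numbers are assumptions.

```python
# A minimal sketch of defending against an unreliable network: a hard timeout on
# every call, bounded retries and exponential backoff. The URL is hypothetical.
import time
import urllib.request
from urllib.error import URLError

def call_with_retries(url: str, attempts: int = 3, timeout: float = 2.0) -> bytes:
    delay = 0.5
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (URLError, TimeoutError):
            if attempt == attempts:
                raise                # surface the failure rather than retrying forever
            time.sleep(delay)        # back off so a struggling service can recover
            delay *= 2

# call_with_retries("https://internal.example/customers/c-42")
```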