Designing an event store for scalable event sourcing

An event is a collection of data that describes a change in state, e.g. an order has been placed.  An event store is just a database that stores events. This is a pretty simple idea with some very powerful implications.

Events can be used to facilitate loosely-coupled communication between services. Instead of communicating with each other directly, services can publish events when something significant happens and subscribe to events created elsewhere. This kind of event-driven architecture can create a rich fabric of events that can be used to build a picture of what’s happening around the system.

Event sourcing is the process of determining the state of something by playing a set of events through some iterative logic. For example, you might receive one event when an order is placed, another when payment has cleared and a third when the goods have been shipped. Taken together, these events can be used to hydrate the status of the order.
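
A minimal sketch of that iterative logic might look something like this; the event names and fields are purely illustrative rather than taken from any particular implementation:

```python
# Illustrative events for the order example above; the names and fields are assumptions.
ORDER_EVENTS = [
    {"type": "OrderPlaced", "order_id": "1234", "items": ["book"]},
    {"type": "PaymentCleared", "order_id": "1234", "amount": 25.00},
    {"type": "OrderShipped", "order_id": "1234", "carrier": "DHL"},
]

def apply(state: dict, event: dict) -> dict:
    """Fold a single event into the current understanding of the order."""
    if event["type"] == "OrderPlaced":
        return {**state, "order_id": event["order_id"], "status": "placed"}
    if event["type"] == "PaymentCleared":
        return {**state, "status": "paid"}
    if event["type"] == "OrderShipped":
        return {**state, "status": "shipped"}
    return state  # Unknown event types are simply ignored

def hydrate(events) -> dict:
    """Replay a stream of events through the iterative logic to build the current state."""
    state = {}
    for event in events:
        state = apply(state, event)
    return state

print(hydrate(ORDER_EVENTS))  # {'order_id': '1234', 'status': 'shipped'}
```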

This approach has a number of advantages. First and foremost, it can provide a reliable audit trail of state. Events are immutable: once they have been added to the event store they are never changed. If any data needs to be corrected then a further event should be issued. This means that you can retrieve the state of data as it was understood at any point in the past.
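
Recovering a historical view is then just a matter of replaying fewer events. A rough sketch, assuming each event carries a recorded_at timestamp:

```python
from datetime import datetime

def replay(events, apply, state=None):
    """Fold a stream of events into state using whatever iterative logic the aggregate needs."""
    for event in events:
        state = apply(state, event)
    return state

def state_as_of(events, apply, as_of: datetime):
    """Rebuild state as it was understood at a point in the past by replaying only
    the events recorded up to that moment (the 'recorded_at' field is an assumption)."""
    historical = (e for e in events if e["recorded_at"] <= as_of)
    return replay(historical, apply)
```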

Event sourcing can also scale very well, but this depends on a number of design decisions around the event store.

Data design for commits

There are a number of specific event store implementations available, but you are not duty-bound to use any of them. There is no reason why you can’t implement your own data store using your data persistence technology of choice.

Note that an event store should model generic commits rather than specific event data. Given that events are immutable, you only ever need to insert data, which helps to eliminate any locking caused by data contention at scale.

At its most basic an event store only needs to store three pieces of information:

  • The type of event or aggregate
  • The sequence number of the event
  • The data as a serialized blob

More data can be added to help with diagnosis and audit, but the core functionality only requires a narrow set of fields. This should give rise to a very simple data design that can be heavily optimised for appending and retrieving sequences of records.
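
As a rough sketch, assuming a relational store (SQLite here purely for illustration), the commits table can be as narrow as this:

```python
import sqlite3

# A commits table holding only the narrow set of fields described above;
# SQLite is used purely for illustration.
conn = sqlite3.connect("eventstore.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS commits (
        stream_id  TEXT    NOT NULL,   -- the type of event or aggregate (the stream it belongs to)
        sequence   INTEGER NOT NULL,   -- the sequence number of the event within that stream
        payload    BLOB    NOT NULL    -- the event data as a serialized blob
    )
""")
conn.commit()
```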

Retrieving streams of events

One thing developers find difficult to grasp about event sourcing is that you don’t directly query the data in each event. Queries are run by retrieving a stream of events from the store and pushing them through some iterative logic. In this sense there is a complete separation between event persistence and the logic used to process data.
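
A hedged sketch of that separation, building on the illustrative SQLite schema above: the store hands back an ordered stream of blobs and the caller supplies the iterative logic.

```python
import json
import sqlite3

def read_stream(conn: sqlite3.Connection, stream_id: str, after: int = 0):
    """Persistence side: return the events for one stream, in sequence order."""
    rows = conn.execute(
        "SELECT payload FROM commits WHERE stream_id = ? AND sequence > ? ORDER BY sequence",
        (stream_id, after),
    )
    for (payload,) in rows:
        yield json.loads(payload)  # JSON-serialized payloads are assumed for the sketch

def project(events, apply, state=None):
    """Processing side: the store knows nothing about this logic."""
    for event in events:
        state = apply(state, event)
    return state
```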

Many event store designs seek to include some form of metadata about the event to help filter output. Given that an event store should try to provide the fastest possible retrieval, there is a performance trade-off involved in deciding how much metadata to store separately from the event itself. It can be easy for domain-specific data to start creeping into your event store design, which dilutes the notion of providing fast, data-agnostic event storage.

One approach to metadata is to group events together into streams, so that a set of similar or related events can be returned in a single query. In practical terms streams can be identified by a single text field that encapsulates any relevant metadata. This is the approach taken by Greg Young’s Event Store, where a large implementation can involve thousands or even millions of different streams.
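
In practice that single text field can be nothing more than a naming convention; the scheme below is an assumption rather than anything mandated by Event Store or any other product.

```python
def stream_id(aggregate_type: str, aggregate_id: str) -> str:
    """Compose a stream identifier from whatever metadata matters, e.g. 'order-1234'.
    The store itself only ever sees an opaque text key."""
    return f"{aggregate_type}-{aggregate_id}"

# e.g. read_stream(conn, stream_id("order", "1234")) using the earlier sketch
```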

Optimise with snapshots

Retrieving large event streams can be pretty inefficient. As data accumulates in an event store you will find that you are pulling back growing streams of events every time you need to query the state of something.

“Snapshots” are an optimisation that limits the amount of data you need to retrieve from the database. A snapshot stores the aggregated state of an entity at a particular point in time. Any query only has to include the most recent snapshot along with any new events that have been added since the snapshot was created.

Snapshots can be stored and treated in the same way as any other event, though some designs use a separate database table for them as a further optimisation.

The most efficient way of creating snapshots is to use a background process that monitors event input and creates new snapshots automatically as required. Optimising this kind of process can add a fair amount of complexity to an implementation. A simpler, though much less optimal, approach is to create snapshots as a by-product of each query so they are available the next time a request accesses the stream.
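
Whichever way snapshots are produced, the read path looks much the same. A sketch, assuming snapshots live in a separate table keyed by stream and sequence:

```python
import json
import sqlite3

def load_with_snapshot(conn: sqlite3.Connection, stream_id: str, apply):
    """Start from the most recent snapshot, then replay only the events added since."""
    row = conn.execute(
        "SELECT sequence, state FROM snapshots WHERE stream_id = ? ORDER BY sequence DESC LIMIT 1",
        (stream_id,),
    ).fetchone()
    state, last_seen = (json.loads(row[1]), row[0]) if row else (None, 0)

    rows = conn.execute(
        "SELECT payload FROM commits WHERE stream_id = ? AND sequence > ? ORDER BY sequence",
        (stream_id, last_seen),
    )
    for (payload,) in rows:
        state = apply(state, json.loads(payload))
    return state
```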

Caching is your friend

Event sourcing should be pretty fast as most of the work happens in memory. The round-trip involved in fetching events from the underlying database is always the single most expensive part of any operation.

You can reduce the number of round-trips through careful planning of streams and snapshots. Further gains can be made by caching smaller, more commonly requested streams or snapshots. This caching can happen in memory or in a fast memory-resident database such as Redis and can have a real impact on more predictable workloads.
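
A minimal sketch of that caching layer, assuming Redis via redis-py, JSON-serialized state and an arbitrary expiry:

```python
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def cached_state(stream_id: str, rebuild, ttl_seconds: int = 300):
    """Serve frequently requested streams from Redis, falling back to a full rebuild.
    'rebuild' is whatever function replays the stream from the event store."""
    key = f"eventstore:state:{stream_id}"
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    state = rebuild(stream_id)
    cache.set(key, json.dumps(state), ex=ttl_seconds)
    return state
```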

Don’t be afraid to couple to a database technology

If you want to optimise performance then it is difficult to meaningfully abstract an event store from the underlying data persistence technology. For example, nEventStore supports more than twenty different databases; it does this by focusing on abstraction at the cost of performance optimisation.

Maximising the performance of an event store inevitably involves getting your hands dirty with the underlying persistence technology. For example, in a relational database you will need to tune your indexes to strike the right balance between retrieval and update speed. Most platforms provide built-in caching features that could be used to target certain types of query. You should also be able to partition very large data sets so performance can be maintained at scale.
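
For a relational store the bare minimum is usually a composite index that matches the dominant read pattern; the SQLite syntax below is purely illustrative, and partitioning will look different on every platform.

```python
import sqlite3

conn = sqlite3.connect("eventstore.db")
# A composite index matching the dominant read pattern: one stream, in sequence order.
# Every additional index slows inserts slightly, which is the balance mentioned above;
# declaring it UNIQUE also gives a cheap guard against duplicate sequence numbers.
conn.execute("""
    CREATE UNIQUE INDEX IF NOT EXISTS ix_commits_stream_sequence
    ON commits (stream_id, sequence)
""")
conn.commit()
```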

The more you leverage the features of a particular data persistence technology the more your implementation will become dependent on it. This isn't something to be afraid of as it is the price of good performance at scale.

Choosing serialization carefully

Given that events are stored as serialized blobs, your choice of serializer can have a real impact on performance. Google’s Protocol Buffers lend themselves well to event storage as they are very fast, version tolerant, yield very small payloads and can support immutable data objects.

The one drawback of Protocol Buffers is that the payloads cannot be read directly. If legibility of stored data is important then a JSON-based compromise might be more acceptable.
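
One way to keep the decision reversible is to hide the choice behind a pair of functions. The JSON variant below is the legible fallback; Protocol Buffers generated classes could sit behind the same interface.

```python
import json

# The store only ever deals in bytes; which serializer produces them is a separate decision.
# JSON is shown here as the human-readable option. A Protocol Buffers implementation would
# swap in behind the same two functions via its generated classes (an assumption about your codebase).

def serialize(event: dict) -> bytes:
    return json.dumps(event, separators=(",", ":")).encode("utf-8")

def deserialize(payload: bytes) -> dict:
    return json.loads(payload.decode("utf-8"))
```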

Optimised inserts and batching

Fast inserts are just as important as fast reads. Your event store will be processing incoming events, and this workload will be susceptible to demand spikes. You need to ensure that an event store can keep up with the rate at which events are being published so that queries remain current.

Any persistence should be taken care of by a single round-trip to the server. You should also provide an optimised means of batching up inserts so that clients can push sets of events up in a single round-trip.
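
A sketch of a batched append, assuming the commits table from the earlier illustration; one transaction means one round-trip rather than one per event:

```python
import sqlite3

def append_batch(conn: sqlite3.Connection, stream_id: str, start_sequence: int, payloads):
    """Append a batch of serialized events in a single transaction, i.e. one round-trip
    to the database rather than one per event."""
    rows = [
        (stream_id, start_sequence + offset, payload)
        for offset, payload in enumerate(payloads)
    ]
    with conn:  # One transaction: commits on success, rolls back on error
        conn.executemany(
            "INSERT INTO commits (stream_id, sequence, payload) VALUES (?, ?, ?)",
            rows,
        )
```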

Understand when to use event sourcing... and when not to

Event stores cannot necessarily be used to persist data across an entire domain. Event sourcing only adds value to a relatively narrow set of use cases where being able to provide an audit trail really matters. If this kind of historical record of state is not important then it’s unlikely to be worth the overhead in terms of increased complexity.