This post, authored by Mariusz Gil, is part of the “Microservices Architecture” series related to our freshly published book, “Microservices Architecture for e-Commerce”.

Check the code

The design patterns covered in this post (Event Sourcing and CQRS) are the scaffold upon which we’ve built the Open Loyalty rewarding and CRM platform. Please check it out on GitHub: https://github.com/DivanteLtd/open-loyalty.

Event Sourcing

Data stores are often designed to keep only the current state of the system, without storing the history of all submitted changes. In some situations this causes problems. For example, if you need to rebuild a read model for a specific point in time, such as reprinting an invoice with the customer’s address from 3 months ago: the address may have changed since then, and without time-stamped snapshots of the data, reproducing the old invoice becomes a real problem.

Event Sourcing stores every change in the system as a time-ordered sequence of events, where each event is an object representing a domain action from the past. All events published by application objects are persisted in a dedicated, append-only data store called the Event Store. It is more than an audit log for the whole system: the main role of the Event Store is to allow application objects to be reconstructed from the history of their related events.
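To make the storage model concrete, here is a minimal, in-memory sketch of such an append-only store. The `Event` and `EventStore` shapes are illustrative assumptions for this post, not code from Open Loyalty:

```python
# Minimal sketch of an append-only Event Store (illustrative, not Open Loyalty code).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class Event:
    stream_id: str           # e.g. the Order identifier
    name: str                # e.g. "OrderCreated"
    payload: dict[str, Any]  # event-specific data
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class EventStore:
    """Appends events; never updates or deletes them."""
    def __init__(self) -> None:
        self._events: list[Event] = []

    def append(self, event: Event) -> None:
        self._events.append(event)

    def stream(self, stream_id: str) -> list[Event]:
        # Events come back in the order they were appended.
        return [e for e in self._events if e.stream_id == stream_id]
```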

Event Sourcing overview (https://docs.microsoft.com/en-us/azure/architecture/patterns/_images/event-sourcing-overview.png)

Consider the following sequence of domain events, describing the lifecycle of each Order:

  • OrderCreated,
  • OrderApproved,
  • OrderPaid,
  • OrderPrepared,
  • OrderShipped,
  • OrderDelivered.

During the recreation phase, all events are fetched from the Event Store and applied to a newly constructed entity. Each applied event changes the internal state of the entity.
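Continuing the sketch above, a hypothetical `Order` entity could be recreated like this; the transition table mirrors the event list, but it is an illustration, not Open Loyalty code:

```python
# Hypothetical Order entity rebuilt by replaying its events in order.
class Order:
    def __init__(self) -> None:
        self.status = None

    def apply(self, event: Event) -> None:
        # Each applied event moves the entity to a new internal state.
        transitions = {
            "OrderCreated": "created",
            "OrderApproved": "approved",
            "OrderPaid": "paid",
            "OrderPrepared": "prepared",
            "OrderShipped": "shipped",
            "OrderDelivered": "delivered",
        }
        self.status = transitions[event.name]

    @classmethod
    def recreate(cls, events: list[Event]) -> "Order":
        order = cls()
        for event in events:  # events must be applied in their original order
            order.apply(event)
        return order

# Usage: order = Order.recreate(store.stream("order-1"))
```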

The benefits of this approach are obvious: each event represents an important domain action (even more so when DDD is used in the project), and there is a trace of every single change to the domain entities. But there are also potential drawbacks. How can we get the current state of tens of objects? How fast will object recreation be if the event list contains thousands of items? Fortunately, Event Sourcing has answers to these problems. Based on the events, the application can update one or more materialized views, so there is no need to replay the full event history just to read the current state of the objects. If the event history of an entity is long, the application can also create snapshots. By “snapshot” I mean the state of the entity persisted after every n-th event. The recreation phase then becomes much faster because there is no need to fetch all the changes from the Event Store, only the latest snapshot and the events that follow it (see the sketch below).
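Continuing the `Order` sketch, recreation from a snapshot could look like the function below; how and where snapshots are persisted is left out, and the function shape is an assumption:

```python
def recreate_from_snapshot(snapshot_status, newer_events):
    """Rebuild an Order from its latest snapshot plus only the events after it."""
    order = Order()
    if snapshot_status is not None:
        order.status = snapshot_status  # restore the state captured by the snapshot
    for event in newer_events:          # then apply only the events recorded after it
        order.apply(event)
    return order

# recreate_from_snapshot("paid", newer_events) replays far fewer events
# than Order.recreate(all_events) when the history is long.
```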

Event Sourcing with CQRS (https://pablocastilla.files.wordpress.com/2014/09/cqrs.png?w=640)

Event Sourcing works very well with CQRS and Event Storming, a domain-event identification technique by Alberto Brandolini. The events discovered together with domain experts are published by entities inside the write model, transferred to a synchronous or asynchronous event bus, and processed by event handlers. In this scenario, the event handlers are responsible for updating one or more read models, as in the sketch below.
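As a minimal illustration, such an event handler could maintain a simple denormalized view. The class below is an assumption for this sketch, reusing the `Event` type from earlier; in a real system the handler would be subscribed to the event bus:

```python
# Illustrative event handler keeping a denormalized read model
# (order id -> current status) in sync with the write model.
class OrderListReadModel:
    def __init__(self) -> None:
        self.rows: dict[str, str] = {}

    def handle(self, event: Event) -> None:
        # Derive "shipped" from "OrderShipped"; this naming convention
        # is an assumption made for the sketch.
        self.rows[event.stream_id] = event.name.removeprefix("Order").lower()

# Queries now read the view directly, without touching the write model:
# read_model.rows -> {"order-1": "shipped", "order-2": "paid"}
```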

Pros:

  • Perfect for modeling complex domains,
  • Possibility to replay all stored events and build new read models,
  • Reliable audit-log for free.

Cons:

  • Queries must be implemented with CQRS,
  • Eventually consistent model.

Event-driven data management

Microservices should be coupled as loosely as possible: it should be possible to develop, test, deploy and scale them independently. Sometimes an application should even keep working when particular services are down. To achieve these requirements, each microservice in the system should have its own, separate data store. That sounds easy, but what about the data itself? How do we spread information between services? How do we keep the data consistent?

One of the best solutions is, again, events. Whenever anything important happens inside a microservice, a specific event is published to the message broker. Other microservices may connect to the message broker and receive their own dedicated copy of that message. Each consumer may also decide which part of the data should be duplicated into its local store.

Publishing events safely from a microservice is much more complicated. Events must be published to the message broker if, and only if, the data was stored in the data store; any other scenario may lead to serious consistency problems. In practice, this usually means that the data and the events should be persisted in the same transaction to a single data store and then propagated to the rest of the system, as the sketch below illustrates.
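This is essentially the “transactional outbox” idea. Below is a minimal sketch using SQLite; the table layout and names are assumptions. The domain row and the outgoing event are committed in one local transaction, and a separate relay process would later push outbox rows to the broker:

```python
# Sketch: store the domain data and the outgoing event atomically,
# in a single local transaction (transactional outbox).
import json
import sqlite3

conn = sqlite3.connect("service.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, event TEXT)")

with conn:  # both inserts commit together, or roll back together
    conn.execute("INSERT INTO orders VALUES (?, ?)", ("order-1", "created"))
    conn.execute(
        "INSERT INTO outbox (event) VALUES (?)",
        (json.dumps({"name": "OrderCreated", "order_id": "order-1"}),),
    )
```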

From a technical point of view, it is very common to use RabbitMQ as the message broker. RabbitMQ is a fast and efficient server written in Erlang, with a wide set of client libraries for the most popular programming languages. A popular alternative to RabbitMQ is Apache Kafka, especially for bigger setups or when mining and analytics of the event stream are critical.
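For illustration, publishing such an event with the pika client for RabbitMQ could look like the sketch below; the exchange name `order_events` and the message shape are assumptions:

```python
# Minimal publisher sketch using the pika client for RabbitMQ.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="order_events", exchange_type="fanout")

channel.basic_publish(
    exchange="order_events",
    routing_key="",  # ignored by fanout exchanges; every queue bound to it gets a copy
    body=json.dumps({"name": "OrderCreated", "order_id": "order-1"}),
)
connection.close()
```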

Spreading data across multiple separate data stores and achieving consistency through events introduces some problems of its own. For example, there is no easy way to execute a distributed transaction across different databases. Moreover, while events are still inside the message broker, somewhere between microservices, the state of the whole system is inconsistent: the data store behind the originating microservice has been updated, but the changes are not yet applied to the data stores behind the other microservices. This model, called Eventual Consistency, is a disadvantage and an advantage at the same time. The data will be synchronized in the future, but you can also stop some services and never lose data: the events will be processed once the services are restored.

In some situations, when a new microservice is introduced into the system, its database needs to be seeded. If the data can be taken directly from the different “sources of truth”, that is probably the best way to set up the new service. But other microservices may also expose feeds of their events, for example in the form of ATOM feeds. A new microservice can process them in chronological order to build the final state of its data store (a seeding sketch follows below). Of course, in this scenario each microservice should keep the history of all its events, which can sometimes be a challenge in itself.
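Here is a sketch of that seeding process, reusing the read model from the earlier sketch. The feed URL and the JSON layout are assumptions, and JSON stands in for ATOM parsing for brevity:

```python
# Sketch: seed a new service's read model from another service's event feed,
# processing entries oldest-first.
import json
from urllib.request import urlopen

read_model = OrderListReadModel()  # the read model from the earlier sketch
with urlopen("https://orders.example.com/events") as response:
    for entry in json.load(response):  # entries assumed to be ordered oldest-first
        read_model.handle(Event(stream_id=entry["order_id"],
                                name=entry["name"],
                                payload=entry))
```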

Microservices Architecture for eCommerce eBook. Download for free >

Piotr Karwatka

CTO at Divante eCommerce Technology Company. Open-source enthusiast and life-long builder. Co-founder of Vue Storefront and Open Loyalty. Now gathering engaged communities around new technologies. | LinkedIn | Twitter
