NOTE: This is Part 3 of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

In Part 1, we talked about why we might want to use Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) in our apps, and in Part 2 we defined how the Write Model (Commands, Command Handlers, Events, Aggregate Roots) of our simple system behaves. In this part, we will define the system's Read Model; that is, how other apps will query for the data we use.

In this part of our Real-World CQRS with ASP.NET and Redis series, we will:

  • Discover what comprises the Read Model for CQRS applications,
  • Gather our requirements for the queries we need to support,
  • Choose a data store (and explain why we chose the one that we did),
  • Build the Repositories which will allow our app to query the Read Model data AND
  • Build the Event Handlers which will maintain the Read Model data store.

Let's get started!

What Is The Read Model?

Quite simply, the read model is the model of the data that consuming applications can query against. There are a few guidelines to keep in mind when designing a good read model:

  1. The Read Model should reflect the kinds of queries run against it.
  2. The Read Model should contain the current state of the data (this is important as we are using Event Sourcing).

In our system, the Read Model consists of the Read Model Objects, the Read Data Store, the Event Handlers, and the Repositories. This post will walk through designing all of these objects.

Query Requirements

First, a reminder: the entire point of CQRS is that the read model and the write model are totally separate things. You can model each in a completely different way, and in fact this is what we are doing in this tutorial: for the write model, we are storing events (using the Event Sourcing pattern), but our read model must conform to the guidelines laid out above.

When designing a Read Model for a CQRS system, you generally want said model to reflect the kinds of queries that will be run against that system. So, if you need a way to get all locations, locations by ID, and employees by ID, your Read Model should be able to do each of these easily, without a lot of round-tripping between the data store and the application.

But in order to design our Read Model, we must first know what queries exist. Here are the possible queries for our sample system:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

Let's see how we can design our Read Model to reflect these queries.

Design of Read Model Objects

One of the benefits of using CQRS is that we can use fully separate classes to define what the Read Model contains. Let's use two new classes (EmployeeRM and LocationRM, RM being short for Read Model) to represent how our Locations and Employees will be stored in our Read Model database.

public class EmployeeRM
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
    public Guid AggregateID { get; set; }
}

public class LocationRM
{
    public int LocationID { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }
    public List<int> Employees { get; set; }
    public Guid AggregateID { get; set; }

    public LocationRM()
    {
        Employees = new List<int>();
    }
}

For comparison, here are the properties from the Write Model versions of these objects (Employee and Location):

public class Employee : AggregateRoot
{
    private int _employeeID;
    private string _firstName;
    private string _lastName;
    private DateTime _dateOfBirth;
    private string _jobTitle;

    ...
}

public class Location : AggregateRoot
{
    private int _locationID;
    private string _streetAddress;
    private string _city;
    private string _state;
    private string _postalCode;
    private List<int> _employees;

    ...
}

As you can see, both LocationRM and EmployeeRM store the AggregateID that was assigned to them when they were created, and EmployeeRM additionally has a LocationID property, which does not exist on the Employee Write Model class.
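
To make that concrete, here's a quick sketch (with made-up values) of an EmployeeRM being built and serialized with Json.NET, which is how the repositories later in this post will store these objects:

var employee = new EmployeeRM
{
    EmployeeID = 14,
    FirstName = "Jane",
    LastName = "Doe",
    DateOfBirth = new DateTime(1985, 4, 12),
    JobTitle = "Cashier",
    LocationID = 3, //Denormalized onto the employee for easy lookups
    AggregateID = Guid.NewGuid()
};

//Roughly what ends up in the Read Data Store:
//{"EmployeeID":14,"FirstName":"Jane","LastName":"Doe","DateOfBirth":"1985-04-12T00:00:00","JobTitle":"Cashier","LocationID":3,"AggregateID":"..."}
string serializedEmployee = JsonConvert.SerializeObject(employee);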

Now we must tackle a different problem: what data store will we use?

Choosing a Data Store

In any CQRS system, the selection of a data store comes down to a couple of questions:

  1. How fast do you need reads to be?
  2. How much functionality does the Read Model data store need to provide on its own?

In my system, I am assuming there will be an order of magnitude more reads than writes (this is a very common scenario for CQRS applications). Further, I am assuming that my Read Model data store can be treated as little more than a cache that gets updated occasionally. These two assumptions lead me to answer those questions like this:

  1. How fast do you need reads to be? Extremely
  2. How much functionality does the Read Model data store need to provide on its own? Not a lot

I'm a SQL Server guy by trade, but SQL Server is not exactly known for being "fast". You absolutely can optimize it to be such, but at this time I'm more interested in trying a datastore that I've heard a lot about but haven't actually had a chance to use yet: Redis.

Redis calls itself a "data structure store". In practice, that means it stores values (strings, lists, hashes, sets, and so on) against keys, rather than relations (as you would in a Relational Database such as SQL Server). Everything is written and read by key, and Redis leaves it up to you how those keys are named and structured.

For this demo, you don't really need to know more about how Redis works, but I encourage you to check it out on your own. Further, if you intend to run the sample app (and, like most .NET devs, you're running Windows), you'll want to download MSOpenTech's port of Redis for Windows.
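
The repositories later in this post talk to Redis through the StackExchange.Redis client and serialize objects with Json.NET. As a minimal sketch (the connection string and key are just examples, and in the real app the connection is injected rather than created inline), storing and reading a value looks like this:

//Connect to a local Redis instance (assumes the default port, 6379)
IConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost:6379");
IDatabase database = redis.GetDatabase();

//Values are stored as JSON strings against namespaced keys, e.g. "employee:14"
database.StringSet("employee:14", JsonConvert.SerializeObject(new EmployeeRM { EmployeeID = 14 }));

//...and read back by key
string json = database.StringGet("employee:14");
var employee = JsonConvert.DeserializeObject<EmployeeRM>(json);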

We now have two pieces of our Read Model in place: the Read Model Objects, and the Read Data Store. We can now begin implementation of a layer which will allow us to interface with the Read Data Store and update it as necessary: the Repository layer.

Creating the Repositories

The Repositories (for this project) are interfaces which allow us to query the Read Model. Remember that we have five possible queries that we need to support:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

However, we also need to support certain validation scenarios; for example, we cannot assign an Employee to a Location that doesn't exist. Therefore we need functions that check whether a given employee or location exists.

For the sake of good design, we need at least two Repositories: one for Locations and one for Employees. But a surprising amount of functionality is needed by both of these repositories:

  • They both need to get an object by its ID.
  • They both need to check if an object with a given ID exists.
  • They both need to save a changed object back into the Read Data Store.
  • They both need to be able to get multiple objects of the same type.

Consequently, we can build a common IBaseRepository<T> interface and a BaseRepository class to cover these shared features. The IBaseRepository<T> interface will be inherited by the other repository interfaces; it looks like this:

public interface IBaseRepository<T>
{
    T GetByID(int id);
    List<T> GetMultiple(List<int> ids);
    bool Exists(int id);
    void Save(T item);
}

Now, we also need two more interfaces which extend IBaseRepository<T>: IEmployeeRepository and ILocationRepository:

public interface IEmployeeRepository : IBaseRepository<EmployeeRM>
{
    IEnumerable<EmployeeRM> GetAll();
}

public interface ILocationRepository : IBaseRepository<LocationRM>
{
    IEnumerable<LocationRM> GetAll();
    IEnumerable<EmployeeRM> GetEmployees(int locationID);
    bool HasEmployee(int locationID, int employeeID);
}

The next piece of the puzzle is the BaseRepository class (which, unfortunately, does NOT implement IBaseRepository<T>). This class provides methods by which items can be retrieved from or saved to the Redis Read Data Store:

public class BaseRepository
{
    private readonly IConnectionMultiplexer _redisConnection;

    /// <summary>
    /// The Namespace is the first part of any key created by this Repository, e.g. "location" or "employee"
    /// </summary>
    private readonly string _namespace;

    public BaseRepository(IConnectionMultiplexer redis, string nameSpace)
    {
        _redisConnection = redis;
        _namespace = nameSpace;
    }

    public T Get<T>(int id)
    {
        return Get<T>(id.ToString());
    }

    public T Get<T>(string keySuffix)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        var serializedObject = database.StringGet(key);
        if (serializedObject.IsNullOrEmpty) throw new ArgumentNullException(); //Throw a better exception than this, please
        return JsonConvert.DeserializeObject<T>(serializedObject.ToString());
    }

    public List<T> GetMultiple<T>(List<int> ids)
    {
        var database = _redisConnection.GetDatabase();
        List<RedisKey> keys = new List<RedisKey>();
        foreach (int id in ids)
        {
            keys.Add(MakeKey(id));
        }
        var serializedItems = database.StringGet(keys.ToArray(), CommandFlags.None);
        List<T> items = new List<T>();
        foreach (var item in serializedItems)
        {
            items.Add(JsonConvert.DeserializeObject<T>(item.ToString()));
        }
        return items;
    }

    public bool Exists(int id)
    {
        return Exists(id.ToString());
    }

    public bool Exists(string keySuffix)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        var serializedObject = database.StringGet(key);
        return !serializedObject.IsNullOrEmpty;
    }

    public void Save(int id, object entity)
    {
        Save(id.ToString(), entity);
    }

    public void Save(string keySuffix, object entity)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        database.StringSet(key, JsonConvert.SerializeObject(entity));
    }

    private string MakeKey(int id)
    {
        return MakeKey(id.ToString());
    }

    private string MakeKey(string keySuffix)
    {
        if (!keySuffix.StartsWith(_namespace + ":"))
        {
            return _namespace + ":" + keySuffix;
        }
        else return keySuffix; //Key is already prefixed with namespace
    }
}

With all of that infrastructure in place, we can start implementing the EmployeeRepository and LocationRepository.

Employee Repository

In the EmployeeRepository, let's get a single Employee record with the given Employee ID.

public class EmployeeRepository : BaseRepository, IEmployeeRepository
{
    public EmployeeRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "employee") { }

    public EmployeeRM GetByID(int employeeID)
    {
        return Get<EmployeeRM>(employeeID);
    }
}

Hey, that was easy! Because of the work we did in the BaseRepository, our Read Model Object repositories will be quite simple. Here's the complete EmployeeRepository:

public class EmployeeRepository : BaseRepository, IEmployeeRepository
{
    public EmployeeRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "employee") { }

    public EmployeeRM GetByID(int employeeID)
    {
        return Get<EmployeeRM>(employeeID);
    }

    public List<EmployeeRM> GetMultiple(List<int> employeeIDs)
    {
        return GetMultiple<EmployeeRM>(employeeIDs);
    }

    public IEnumerable<EmployeeRM> GetAll()
    {
        return Get<List<EmployeeRM>>("all");
    }

    public void Save(EmployeeRM employee)
    {
        Save(employee.EmployeeID, employee);
        MergeIntoAllCollection(employee);
    }

    private void MergeIntoAllCollection(EmployeeRM employee)
    {
        List<EmployeeRM> allEmployees = new List<EmployeeRM>();
        if (Exists("all"))
        {
            allEmployees = Get<List<EmployeeRM>>("all");
        }

        //If the employee already exists in the ALL collection, remove that entry
        if (allEmployees.Any(x => x.EmployeeID == employee.EmployeeID))
        {
            allEmployees.Remove(allEmployees.First(x => x.EmployeeID == employee.EmployeeID));
        }

        //Add the modified employee to the ALL collection
        allEmployees.Add(employee);

        Save("all", allEmployees);
    }
}

Take special note of the MergeIntoAllCollection() method, and let me take a minute to explain what I'm doing here.

Querying for Collections

As I mentioned earlier, Redis is a key-value store at heart: everything is written and read by key, and Redis doesn't really apply a "type" per se to anything stored against a key. Consequently, unlike in SQL Server, you don't really query for a filtered set of objects (e.g. SELECT * FROM table WHERE condition), because that's simply not what Redis is for.

Remember that we're designing this to reflect the queries we need to run. We can think of this as changing when the work of making a collection is done.

In SQL Server or other relational databases, most of the time you do the work of creating a collection when you run a query. So, you might have a huge table of, say, vegetables, and then create a query to only give you carrots, or radishes, or whatever.

But in Redis, no such querying is possible. Therefore, instead of doing the work when we need the query, we prep the data in advance at the point where it changes. Consequently, the queries are ready for consumption immediately after the corresponding event handlers are done processing.

All we're doing is moving the time when we create the query results from "when the query runs" to "when the source data changes."

With the current setup of the repositories, any time a LocationRM or EmployeeRM object is saved, that object is merged back into the respective "all" collection for that object. Hence, I needed MergeIntoAllCollection().
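
To put that in concrete terms, here's a sketch of the difference, where redisConnection is an IConnectionMultiplexer and the key name matches what MakeKey() in BaseRepository produces:

//Relational approach: the collection is assembled at query time, e.g.
//  SELECT * FROM Employees

//Redis approach used here: the collection was already assembled when the data
//changed, so a read is a single key lookup plus deserialization
var database = redisConnection.GetDatabase();
var serialized = database.StringGet("employee:all");
var allEmployees = JsonConvert.DeserializeObject<List<EmployeeRM>>(serialized.ToString());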

Location Repository

Now, let's see what the LocationRepository looks like:

public class LocationRepository : BaseRepository, ILocationRepository
{
    public LocationRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "location") { }

    public LocationRM GetByID(int locationID)
    {
        return Get<LocationRM>(locationID);
    }

    public List<LocationRM> GetMultiple(List<int> locationIDs)
    {
        return GetMultiple<LocationRM>(locationIDs);
    }

    public bool HasEmployee(int locationID, int employeeID)
    {
        //Deserialize the LocationRM with the key location:{locationID}
        var location = Get<LocationRM>(locationID);

        //If that location has the specified Employee, return true
        return location.Employees.Contains(employeeID);
    }

    public IEnumerable<LocationRM> GetAll()
    {
        return Get<List<LocationRM>>("all");
    }

    public IEnumerable<EmployeeRM> GetEmployees(int locationID)
    {
        return Get<List<EmployeeRM>>(locationID.ToString() + ":employees");
    }

    public void Save(LocationRM location)
    {
        Save(location.LocationID, location);
        MergeIntoAllCollection(location);
    }

    private void MergeIntoAllCollection(LocationRM location)
    {
        List<LocationRM> allLocations = new List<LocationRM>();
        if (Exists("all"))
        {
            allLocations = Get<List<LocationRM>>("all");
        }

        //If the location already exists in the ALL collection, remove that entry
        if (allLocations.Any(x => x.LocationID == location.LocationID))
        {
            allLocations.Remove(allLocations.First(x => x.LocationID == location.LocationID));
        }

        //Add the modified location to the ALL collection
        allLocations.Add(location);

        Save("all", allLocations);
    }
}

Now our Repositories are complete, and we can finally write the last, best piece of our system's Read Model: the event handlers.

Building the Event Handlers

Whenever an event is issued by our system, we can use an Event Handler to do something with that event. In our case, we need our Event Handlers to update our Redis data store.
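
The IEventHandler<TEvent> interface itself comes from the messaging infrastructure rather than from this post; for reference, a minimal definition consistent with how the handlers below use it would look something like this (a sketch, not necessarily the exact definition in the sample app):

//A minimal sketch of the handler interface, inferred from how it is used below
public interface IEventHandler<in TEvent>
{
    void Handle(TEvent message);
}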

First, let's create an Event Handler for the Create Employee event.

public class EmployeeEventHandler : IEventHandler<EmployeeCreatedEvent>
{
    private readonly IMapper _mapper;
    private readonly IEmployeeRepository _employeeRepo;
    public EmployeeEventHandler(IMapper mapper, IEmployeeRepository employeeRepo)
    {
        _mapper = mapper;
        _employeeRepo = employeeRepo;
    }

    public void Handle(EmployeeCreatedEvent message)
    {
        EmployeeRM employee = _mapper.Map<EmployeeRM>(message);
        _employeeRepo.Save(employee);
    }
}

Note that all interfacing with the Redis data store is done through the repository, so the event handler takes an instance of IEmployeeRepository in its constructor. Because we're using Dependency Injection (which we will set up in Part 4), that instance is injected for us, which greatly simplifies the event handler.

In any case, notice that all this event handler is doing is creating the corresponding Read Model object from an event (specifically the EmployeeCreatedEvent).
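
The heavy lifting in that one-liner is done by AutoMapper (the IMapper that gets injected). The actual mapping configuration is part of the wiring we'll do in Part 4, but a sketch of what such a profile could look like (assuming the events expose properties whose names match the Read Model classes) is below:

//A hypothetical AutoMapper profile for the Read Model mappings.
//Assumes the event properties line up with the Read Model property names;
//anything that doesn't match by convention would need an explicit ForMember().
public class ReadModelProfile : Profile
{
    public ReadModelProfile()
    {
        CreateMap<EmployeeCreatedEvent, EmployeeRM>();
        CreateMap<LocationCreatedEvent, LocationRM>();
    }
}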

Now let's build the event handler for a Location. In this case, we have three events to handle: creating a new Location, assigning an employee to a Location, and removing an employee from a Location (and in order to do all of those, it will need to take both ILocationRepository and IEmployeeRepository as constructor parameters):

public class LocationEventHandler : IEventHandler<LocationCreatedEvent>,
                                    IEventHandler<EmployeeAssignedToLocationEvent>,
                                    IEventHandler<EmployeeRemovedFromLocationEvent>
{
    private readonly IMapper _mapper;
    private readonly ILocationRepository _locationRepo;
    private readonly IEmployeeRepository _employeeRepo;
    public LocationEventHandler(IMapper mapper, ILocationRepository locationRepo, IEmployeeRepository employeeRepo)
    {
        _mapper = mapper;
        _locationRepo = locationRepo;
        _employeeRepo = employeeRepo;
    }

    public void Handle(LocationCreatedEvent message)
    {
        //Create a new LocationRM object from the LocationCreatedEvent
        LocationRM location = _mapper.Map<LocationRM>(message);

        _locationRepo.Save(location);
    }

    public void Handle(EmployeeAssignedToLocationEvent message)
    {
        var location = _locationRepo.GetByID(message.NewLocationID);
        location.Employees.Add(message.EmployeeID);
        _locationRepo.Save(location);

        //Find the employee which was assigned to this Location
        var employee = _employeeRepo.GetByID(message.EmployeeID);
        employee.LocationID = message.NewLocationID;
        _employeeRepo.Save(employee);
    }

    public void Handle(EmployeeRemovedFromLocationEvent message)
    {
        var location = _locationRepo.GetByID(message.OldLocationID);
        location.Employees.Remove(message.EmployeeID);
        _locationRepo.Save(location);
    }
}

With the Event Handlers in place, every time an Event is kicked off, it will be consumed by the Event Handlers and the Redis data model will be updated. Success!

Summary

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Built the Read Model Data Store using Redis,
  • Designed our Read Model to support our business's queries,
  • Built the Event Handlers which place data into said data store AND
  • Built a set of repositories to access the Redis data.

There's still a lot to do, though. We need to set up our Dependency Injection system, our validation layer, and our Requests. We'll do all of that in Part 4 of Real-World CQRS/ES with ASP.NET and Redis!

Happy Coding!