Getting All Valid Enum Values in ASP.NET Web API

One of the commenters on my blog posed an interesting question on my article about serializing enumerations in Web API:

He wanted to know how someone who is querying a Web API might know what possible values a given enumeration has. I didn't answer that in the post he commented on, so I'll do that now. How do we tell consumers what possible values an enumeration in a Web API app has?

Let's say we have the following enums:

public enum AddressType  
{
    Physical,
    Mailing,
    Shipping
}

public enum AccessLevel  
{
    Administrator,
    ReadWrite,
    ReadOnly
}

We want to expose a query which returns these values for each enumeration. To do that, let's create a class for EnumValue:

public class EnumValue  
{
    public string Name { get; set; }
    public int Value { get; set; }
}

It's a pretty generic class, to be sure, but since an enumeration is just a collection of names and values, it serves our purposes well enough.

The trick now is to create a helper class which uses Reflection to get all the names and values for a given enum. In fact, we can even make that method generic, so it can be used with any enumeration at all.

public static class EnumExtensions  
{
    public static List<EnumValue> GetValues<T>()
    {
        List<EnumValue> values = new List<EnumValue>();
        foreach (var itemType in Enum.GetValues(typeof(T)))
        {
            //For each value of this enumeration, add a new EnumValue instance
            values.Add(new EnumValue()
            {
                Name = Enum.GetName(typeof(T), itemType), 
                Value = (int)itemType
            });
        }
        return values;
    }
}
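To sanity-check the helper before wiring it into Web API, here's a small console sketch (using the AddressType enum from above) that dumps the names and values it produces:

```csharp
using System;
using System.Collections.Generic;

public enum AddressType
{
    Physical,
    Mailing,
    Shipping
}

public class EnumValue
{
    public string Name { get; set; }
    public int Value { get; set; }
}

public static class EnumExtensions
{
    public static List<EnumValue> GetValues<T>()
    {
        List<EnumValue> values = new List<EnumValue>();
        foreach (var item in Enum.GetValues(typeof(T)))
        {
            //For each value of this enumeration, add a new EnumValue instance
            values.Add(new EnumValue()
            {
                Name = Enum.GetName(typeof(T), item),
                Value = (int)item
            });
        }
        return values;
    }
}

public class Program
{
    public static void Main()
    {
        //Dump every name/value pair the helper finds
        foreach (var v in EnumExtensions.GetValues<AddressType>())
        {
            Console.WriteLine($"{v.Name} = {v.Value}");
        }
        // Physical = 0
        // Mailing = 1
        // Shipping = 2
    }
}
```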

Finally, we can use Web API methods that call this extension:

public class HomeController : ApiController  
{
    [HttpGet]
    [Route("accesslevels/all")]
    public IHttpActionResult GetAccessLevels()
    {
        return Ok(EnumExtensions.GetValues<AccessLevel>());
    }

    [HttpGet]
    [Route("addresstypes/all")]
    public IHttpActionResult GetAddressTypes()
    {
        return Ok(EnumExtensions.GetValues<AddressType>());
    }
}

When we call these methods using Postman, we get an appropriate response code (200 OK) and the data we were looking for. Here are the address types:

A Postman response, showing the three possible values for the enumeration AddressType
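In case the screenshot doesn't come through, the body of that response looks like this (assuming Web API's default Json.NET settings, which preserve the PascalCase property names):

```json
[
  { "Name": "Physical", "Value": 0 },
  { "Name": "Mailing", "Value": 1 },
  { "Name": "Shipping", "Value": 2 }
]
```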

And here's the access levels:

A Postman response, showing the three possible values for the enumeration AccessLevel

Of course, this is just one of many possible solutions to this particular problem. I like this method because it's generic and reusable for all enumerations, but if you find a better method, please feel free to share in the comments!

That's all there is to it! Now any consumer can hit these methods and know exactly what values can be used for the AccessLevel and AddressType enumerations!

I also have a repository over on GitHub which stores the sample code for this post. Check it out!

Happy Coding!

The BugCatcher Chronicles #1 - Jamestown Avenue

Danny

Shadows flank me as I march down Jamestown Avenue toward the short, squat building in the distance. The sun's last few rays linger on the horizon, casting a lavender twilight into the sky that would be beautiful if I had the time to admire it. Night will soon blanket the campus, punctured only by the streetlamps and the lights of other students' rooms as they cram for finals. That's where I should be, studying, but I can't focus anymore.

My phone begins to beep, slowly at first, then more and more rapidly. There's another one in the area. As I keep walking, the beeping increases, gradually becoming a constant drone before a little blue beetle appears on my HUD. Gotcha!

I tap on the beetle, and its face fills my viewscreen. Two little options appear in the lower corners of the screen, "invoke" and "leave". I tap on "invoke" and a set of tiny spinners pops up, showing the potential inputs. I select a couple at random and hit "fire!". The blue insect stumbles but doesn't fall; I got at least one of the inputs right, just not all of them. I switch the left input to the next option, attempt the invocation again, and this time the little blue beetle falls to the ground and fades away. As I unconsciously relax my grip on my phone, the streetlamp next to me flickers and dies.

A little counter on the application heads-up display goes up by 15 points. 15 points?! That's barely worth the effort! The bug report appears, showing that this particular bug caused the light to turn off when it should have stayed lit. Ugh. That's all it did? I tap the little "report" button, and the app beeps once to let me know my bug report has been sent to the correct authority.

Looking up from my dim screen, I locate my destination in the distance: a brick two-story building at the end of the road. That building is a data center, and data centers are gold mines for us hunters. I can make out a few flickering screens in the distance; there's some hunters there already, so perhaps they found something worth catching. Unlike that blue beetle.

I glance down at the app again, pondering that name they gave it: BugCatcher. Well, that's original, isn't it? But don't let the stupid name fool you: this thing is the biggest multiplayer game on campus. Every day, every night and into the early morning, there are people walking around staring at their phones to catch these little auto-generated bugs. I swear, people who didn't know about the app would think we were zombies.

The app finds real-world software bugs, and represents them as little insect and arachnid avatars on our phones. Each software bug is different, and so each avatar is different; the more critical the bug, the more dangerous its avatar becomes.

We hunters try to "invoke" these bugs by flinging inputs at them; only the correct inputs will trigger the bug and kill the avatar. Once triggered, we get to keep the little insect avatar in our collection and can show off what we collected to our friends. Plus, the app tells us what the bug did, and lets us report the bug to the proper organization so that they can fix it. Of course, the only way the app can know what the bug did is to actually invoke it, so once the bug is invoked, we can report it.

My roommate Jeshi and I are dedicated hunters, and normally he'd be out here with me, except that he's got some big physics final tomorrow that he's freaking out about. I mean, I've got the same final, but you don't see me all frantic. I hate physics, might as well accept that tomorrow is going to suck.

I keep walking down the street, sliding my phone back into my jeans pocket. That data center I'm heading toward tends to be a gold mine for bugs. Banks, office buildings, government buildings; all these places have loads of bugs that hunters like me can invoke and report. But data centers top them all due to the sheer concentration of software in the area. My school's data center is the perfect example: I regularly find several bugs a minute when I'm out there.

The bug I invoked last week is still my favorite: a vicious pink mantis-like thing I found at the campus credit union which, when invoked, caused something like $10,000 to disappear from a bank account. Poof. Vanished into thin air. Of course I reported it, and the bank restored the poor guy's money. But I still get to keep the avatar, and since it's fixed now, no one will ever see that exact avatar again. It's all mine.

That's the funny thing about this game: you don't have to report the bugs. There's tons of hunters that walk around invoking bugs and never reporting them. We call those guys "burners"; they just like to watch the world burn. Last week a burner made all our student records disappear, and the uni's tech support team didn't notice until Jeshi told them the next morning; they spent all night restoring the records from backups. Me, I always report the bugs I find. After all, we're causing things to break in the real world and the real world should know about it.

I'm almost to the data center when my phone starts to beep again. As I keep walking, the beeping gets louder until the constant whine bores into my ears from my pocket. I pull my phone out of my jeans and flick on the screen. The bug that greets me is something straight out of my worst nightmare.

It's a horrid cross between a tarantula and a scorpion and according to my app it's the size of a small house. Its fangs are dripping something (saliva maybe) and the six red eyes have deep dark pupils that are boring their way into my skull. For a brief second I consider closing the app and moving along, as this thing clearly hasn't been here long and I don't know if I can find the right inputs to invoke it. But I need it for my collection! No one in my building has any bug even remotely close to this one. Tentatively, I slide the input selection screen up and begin turning the dials.

The first several invocations, predictably, do not go well. The bug doesn't so much as blink as my panicked offensive goes unheeded. The tarantula-scorpion's mandibles clack and my terrified brain fills in the appropriate, awful sound. It is glaring at me, daring me to make a move, knowing that all my invocations so far have failed. I...I know it's not real, and yet I'm having to fight my own instincts, to keep my feet in place and not flee back to my dorm. It continues to gnash and swagger and glare, and my invocations are each no more effective than the last.

I figured playing this game would help me get over my fear of bugs. I'm no longer sure that this is a good plan.

On the fifteenth attempt, the monster's left side stumbles. I've found something! One of the inputs was correct, and now I've got a much bigger chance of completing a successful invocation. I spin the inputs again, hoping for a bigger effect, and by some miracle the colossus trips and falls to its knees (or whatever it has for knees). I'm so close to capturing this thing!

I spin the last two inputs to new values; the monster buckles but gets up again. No new effect. I spin several more times, until finally the bug stumbles backward and falls on its segmented tail. Now I'm close. I give the last input another spin and another and another, the spinner whirling so fast that I'm not sure how my fingers are keeping up. I'm running on instinct now, on hundreds of hours played and hundreds of bugs invoked. But nothing's happening. It's laughing at me, I can hear it, I need to make it stop. I will make it stop.

The bug stumbles, falls, goes cross-eyed, and finally melts into the virtual ground it had been standing on. That last input spin must have been right! I wasn't even conscious of my invocations, but I must have figured it out.

I caught the bug!

I pump my fist into the air, shout "Yes!" and scare the pants off a poor alley cat nearby who immediately careens into a trash can. BANG! I've been holding my breath this entire time, so I exhale, slowly, the trapped air whistling as it leaves my lungs. In the next instant, my phone is ringing, and a quick glance at it tells me that Jeshi is calling. I answer, and he informs me that our chemistry final has been moved up to tomorrow afternoon.

Dammit. I say thanks, hang up, and start the long walk back to my apartment. I enjoy chemistry, and I want to do well on that final, so it looks like I'm going to go study some more. The data center will have to wait.

As the last of the sunlight fades, I reach my apartment, open my books, and start reading. Jeshi has made us coffee. It'll be a long night, and we need to get started. At least I caught that bug!

Ethan

Just a few hundred feet from where the broken streetlamp towered in the darkness, another student was diligently reading his textbooks. Ethan had a philosophy final in the morning, and while all the other students in his class said it would be a simple thing to ace, he didn't want to take any chances. He was here to study, not party.

As the night engulfed the campus, he started to feel sweaty, tired, just not quite himself. He filled a small plastic glass with some orange juice and fingered his insulin pump to make sure it was still working. He felt the familiar hum, knew that it was doing its job and that his type-1 diabetes was under control, and returned to his books.

Just after midnight Ethan began to feel lightheaded. He could no longer concentrate, and ascribed his creeping tiredness to the immense amount of studying he'd been doing. The philosophy final tomorrow worried him now more than ever, and he couldn't quite place why.

He pushed his chair back from the desk and stood, tried to flick the overhead light's switch off but missed, then slowly tried again and succeeded. As his eyes adjusted, he groped his way toward the tiny bed lurking in the opposite corner of the room. In the darkness, the insulin pump continued its task, sensing that Ethan had high blood sugar and pumping more insulin into him. It had no way of knowing that its sensor was malfunctioning, and that Ethan's blood sugar levels were well within normal range.

Ethan flopped face-down onto his mattress and immediately fell into a deep, dreamless sleep. Two hours later he awoke, drenched in sweat and cold from the sudden realization that he knew what was happening, and it wasn't simple lethargy.

He sat up and reflexively checked the insulin pump's history on its tiny yellow screen, finding that he'd been given 20 units earlier that evening, 20 units that his body didn't need. He was overdosing. He carefully removed the pump and stumbled to his refrigerator, where he'd stashed an emergency glucagon shot for just this kind of situation.

Opening the fridge door and fumbling around on the shelf, his fingers finally brushed the small red case containing the one-use shot. He flipped open the case, picked up the syringe placed inside, injected it into his left thigh, placed the now-empty syringe back into the case and latched it closed before dialing 911 on his cell phone. As he tried to make coherent sentences, tried to tell the operator what was wrong, he haphazardly slid into the desk chair.

A few minutes later, as the sirens sounded in the distance, his rational brain cut through the insulin-induced fog, wondering what could have possibly happened that made his pump deliver way more insulin than he'd needed. He glared at the little silver box now resting on his desk; a glint of moonlight reflected off of the shiny casing. He'd need a new one, that much was clear, and he could get one as soon as tomorrow, but still...

What if it happened again?

As dawn approached, with the first rays of the sun climbing over the eastern horizon, on the other end of Jamestown Avenue a hunter proudly revealed the new, terrifying member of his impressive collection.

Special thanks to Scott Hanselman (@shanselman) for help on what an insulin overdose does to a type-1 diabetic.

Real-World CQRS/ES with ASP.NET and Redis Part 5 - Running the APIs

NOTE: This is the final part of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

All our work in the previous parts of this series (learning what Command-Query Responsibility Segregation and Event Sourcing are, building the Write Model to modify our aggregate roots, building the Read Model to query data, and building both our Write and Read APIs) has led to this. We can now test these two APIs using Postman and see how they operate.

In this post, the final part of our Real-World CQRS/ES with ASP.NET and Redis series, we will:

  • Run the Commands API with both valid and invalid commands.
  • Run the Queries API with existent and non-existent data.
  • Discuss some shortcomings of this design.

You're on the last lap, so don't stop now!

Command - Creating Locations

The first thing we should do is run a few commands to load our Write and Read models with data. To do that, we're going to use my favorite tool, Postman, to create some requests.

First, let's run a command to create a new location. Here's a screenshot of the Postman request:

Running this request returns 200 OK, which is what we expect. But what happens if we try to run the exact same request again?
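(The screenshots don't reproduce the request itself, so here's roughly what that first command looks like on the wire. The route and the Location's fields below are placeholders of my own; the real shapes come from the request classes built earlier in the series.)

```http
POST /locations/create HTTP/1.1
Content-Type: application/json

{
    "LocationID": 1,
    "StreetAddress": "1234 Main Street",
    "City": "Anytown",
    "State": "TX",
    "PostalCode": "78701"
}
```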

Hey, lookie there! Our validation layer is working!

Let's create another location:

Well, seems our create location process is working fine. Or, at least, it looks like it is.

Query - Locations (All and By ID)

To be sure that our system is properly updating the read model, let's submit a query to our Queries API that returns all locations:

Which looks good. Let's also query for a single location by its ID. First, let's get Location #2:

Now we can query for Location #3:

Oh, wait, that's right, there is no Location #3. So we get back HTTP 400 Bad Request, which is also what we expect. (You could also make this return HTTP 404 Not Found, which is more semantically correct).

OK, great, adding and querying Locations works. But what about Employees?

Command - Creating Employees

Let's first create a new employee and assign him to Location #1:

Let's also create a couple more employees:

So now we should have two employees at Location #1 and a third employee at Location #2. Let's query for employees by location to confirm this.

Query - Employees by Location

Here's our Postman screenshot for the Employees by Location query for each location.

Just as we thought, there are two employees at Location #1 and a third at Location #2.

We're doing pretty darn good so far! But what happens if Reggie Martinez (Employee #3) needs to transfer to Location #2? We can do that with the proper commands.

Command - Assign Employee to Location

Here's a command to move Mr. Martinez to Location #2:

And now, if we query for all employees at Location #2:

I'd say we've done pretty darn good! All our commands do what we expect them to do, and all our queries return the appropriate data. We've now got ourselves a working CQRS/ES project with ASP.NET and Redis!

Shortcomings of This Design

Even though we've done a lot of work on this project and I think we've mostly gotten it right, there's still a few places that I think could be improved:

  • All Redis access through repositories. I don't like having the Event Handlers access the Redis database directly; I'd rather have them do that through the repositories. This would be easy to do; I just didn't have time before my publish date.
  • Better splitting of requests/commands and commands/events. I don't like how commands always seem to result in exactly one event.

That said, I'm really proud of the way this project turned out. If you see any additional areas for improvement, please let me know in the comments!

Summary

In this final part of our Real-World CQRS/ES with ASP.NET and Redis series:

  • Ran several queries and commands.
  • Confirmed that those queries and commands worked as expected.
  • Discussed a couple of shortcomings of this design.

As always, I welcome (civil) comments and discussion on my blog, and feel free to fork or patch or do whatever to the GitHub repository that goes along with this series. Thanks for reading!

Happy Coding!

Post image is Toddler running and falling from Wikimedia, used under license

Real-World CQRS/ES with ASP.NET and Redis Part 4 - Creating the APIs

NOTE: This is Part 4 of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

We've done quite a lot of work to get to this point! We've discussed why we might want to use Command-Query Responsibility Segregation (CQRS) and Event Sourcing (ES) in our app, we've built a Write Model to handle the processing of our commands, and we've built a Read Model to query our data.

Now we can show why this is a "real-world" app. Here's what we're going to do in Part 4 of Real World CQRS/ES with ASP.NET:

  • Build a Queries API so we can query the system for data.
  • Build a Commands API so that we can issue commands to the system.
  • Implement a validation layer using FluentValidation to ensure that commands being issued are valid to execute.
  • Implement dependency injection using StructureMap in both the commands and queries APIs.

Don't stop now! Let's get started building our APIs!

The Queries API

We're going to switch it up a bit and build the Queries API first, as that turns out to be easier than building the Commands API right off the bat. After all, the Queries API doesn't have to worry about things like validation. So, let's create a new ASP.NET Web API app.

Dependency Injection with StructureMap

After creating the new ASP.NET Web API project, the first thing we need to do is download the StructureMap.WebApi2 NuGet package and install it. Doing so gives us a folder structure that looks something like this (notice the new DependencyResolution folder):

I've blogged about how to use StructureMap with Web API in a previous post, so if you're not familiar with the StructureMap.WebApi2 package, you might want to read that post first, then come back here. It's OK, I'll wait.

Once we've downloaded and installed the StructureMap.WebApi2 package, we'll need to change just a couple of things. In our Global.asax file, we need to start the StructureMap container:

public class WebApiApplication : System.Web.HttpApplication  
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        GlobalConfiguration.Configure(WebApiConfig.Register);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);

        StructuremapWebApi.Start(); //Start! Your! Containers! VROOOOOOOOM
    }
}

We also need to register the appropriate items with the container so that they can be injected. Among those items are the Repositories we created in the previous part of this series; we must register them so our API controllers can have them injected.

In Part 3, we also established that we are using Redis as our Read Data Store, and that we are utilizing StackExchange.Redis to interface with that Redis instance. StackExchange.Redis conveniently comes prepared for dependency injection, so we only need to register the IConnectionMultiplexer interface with our container.

In all, our DefaultRegistry class for the Queries API looks like this:

public class DefaultRegistry : Registry {  
    public DefaultRegistry() 
    {
        //Repositories
        For<IEmployeeRepository>().Use<EmployeeRepository>();
        For<ILocationRepository>().Use<LocationRepository>();

        //StackExchange.Redis
        ConnectionMultiplexer multiplexer = ConnectionMultiplexer.Connect("localhost");
        For<IConnectionMultiplexer>().Use(multiplexer);
    }
}

See, that wasn't too bad! Just wait until you see the Commands API's registry.

Building the Queries

Anyway, with StructureMap now in place, we can start building the queries we need to support. Here's the queries list we talked about in Part 3:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

Let's start with the easy one: getting an Employee by their ID.

Get Employee by ID

We need an EmployeeController, with a private IEmployeeRepository, to execute this query. The complete controller is as follows:

[RoutePrefix("employees")]
public class EmployeeController : ApiController  
{
    private readonly IEmployeeRepository _employeeRepo;

    public EmployeeController(IEmployeeRepository employeeRepo)
    {
        _employeeRepo = employeeRepo;
    }

    [HttpGet]
    [Route("{id}")]
    public IHttpActionResult GetByID(int id)
    {
        var employee = _employeeRepo.GetByID(id);

        //It is possible for GetByID() to return null.
        //If it does, we return HTTP 400 Bad Request
        if(employee == null)
        {
            return BadRequest("No Employee with ID " + id.ToString() + " was found.");
        }

        //Otherwise, we return the employee
        return Ok(employee);
    }
}
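For reference, a successful call to this endpoint returns the employee serialized as JSON. Assuming the read-model Employee carries the fields we've been working with in this series (the values here are made up), the response body looks something like:

```json
{
  "EmployeeID": 1,
  "FirstName": "John",
  "LastName": "Smith",
  "JobTitle": "Teller",
  "LocationID": 1
}
```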

Well, that looks pretty simple. How about the GetAll() query?

Get All Employees

[RoutePrefix("employees")]
public class EmployeeController : ApiController  
{
    ...

    [HttpGet]
    [Route("all")]
    public IHttpActionResult GetAll()
    {
        var employees = _employeeRepo.GetAll();
        return Ok(employees);
    }
}

I think I'm sensing a theme here.

The Location Queries Controller

Let's see what the Location queries are:

[RoutePrefix("location")]
public class LocationController : ApiController  
{
    private ILocationRepository _locationRepo;

    public LocationController(ILocationRepository locationRepo)
    {
        _locationRepo = locationRepo;
    }

    [HttpGet]
    [Route("{id}")]
    public IHttpActionResult GetByID(int id)
    {
        var location = _locationRepo.GetByID(id);
        if(location == null)
        {
            return BadRequest("No location with ID " + id.ToString() + " was found.");
        }
        return Ok(location);
    }

    [HttpGet]
    [Route("all")]
    public IHttpActionResult GetAll()
    {
        var locations = _locationRepo.GetAll();
        return Ok(locations);
    }

    [HttpGet]
    [Route("{id}/employees")]
    public IHttpActionResult GetEmployees(int id)
    {
        var employees = _locationRepo.GetEmployees(id);
        return Ok(employees);
    }
}

Yep, definitely a theme going on. All this setup has made implementing our controllers very simple, and simplicity is definitely better when dealing with complex patterns like CQRS and ES.

We'll run queries against this API in Part 5, but for now let's turn our attention to the Commands API, which may prove to be a bit more difficult to write.

The Commands API

As I mentioned early on in this post, the Commands API is considerably more complex than the Queries API; this is largely due to the number of things we need to inject into our container, as well as the Commands API being responsible for validating the requests that come in to the system. We're going to tackle each of these problems.

Dependency Injection

First, let's deal with Dependency Injection. We'll use the same package as before, with the same Global.asax change. However, our DefaultRegistry looks much different.

In the Commands API, we need the following services available for injection:

  • CQRSLite's Commands and Events bus
  • Our Commands and Events (from Part 2)
  • Our Event Store (from Part 2)
  • AutoMapper
  • Our own Repositories (from Part 3)
  • StackExchange.Redis

That results in this monstrosity of a registry:

public class DefaultRegistry : Registry {  
    #region Constructors and Destructors

    public DefaultRegistry() {
        //Commands, Events, Handlers
        Scan(
            scan => {
                scan.TheCallingAssembly();
                scan.AssemblyContainingType<BaseEvent>();
                scan.Convention<FirstInterfaceConvention>();
            });

        //CQRSLite
        For<InProcessBus>().Singleton().Use<InProcessBus>();
        For<ICommandSender>().Use(y => y.GetInstance<InProcessBus>());
        For<IEventPublisher>().Use(y => y.GetInstance<InProcessBus>());
        For<IHandlerRegistrar>().Use(y => y.GetInstance<InProcessBus>());
        For<ISession>().HybridHttpOrThreadLocalScoped().Use<Session>();
        For<IEventStore>().Singleton().Use<InMemoryEventStore>();
        For<IRepository>().HybridHttpOrThreadLocalScoped().Use(y =>
                new CacheRepository(new Repository(y.GetInstance<IEventStore>()), y.GetInstance<IEventStore>()));

        //AutoMapper
        var profiles = from t in typeof(DefaultRegistry).Assembly.GetTypes()
                        where typeof(Profile).IsAssignableFrom(t)
                        select (Profile)Activator.CreateInstance(t);

        var config = new MapperConfiguration(cfg =>
        {
            foreach (var profile in profiles)
            {
                cfg.AddProfile(profile);
            }
        });

        var mapper = config.CreateMapper();

        For<IMapper>().Use(mapper);

        //StackExchange.Redis
        ConnectionMultiplexer multiplexer = ConnectionMultiplexer.Connect("localhost");
        For<IConnectionMultiplexer>().Use(multiplexer);
    }

    #endregion
}

Holy crap that's a lot of things that need to be injected. But, as we will see, each of these things is actually necessary and provides a lot of value to our application.

(Hold on a second while I smack myself. I sounded way too much like a marketer just now.)

SMACK

Okay, I'm better now.

Requests

I've been using the term "request" liberally throughout this series, and now it's time to truly define what a request is.

In this system, a request is a potential command. That's all. Consuming applications which would like commands issued must submit a request first; that request will be validated and, if found to be valid, mapped to the corresponding command.

A request is, therefore, a C# class which contains the data needed to issue a particular command.

Request 1 - Create Employee

Let's begin to define our requests by first creating a request for creating a new employee.

public class CreateEmployeeRequest  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
}
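For illustration, a consuming application would submit this request as a JSON body matching those properties (the values below are invented for the example):

```json
{
  "EmployeeID": 1,
  "FirstName": "Reggie",
  "LastName": "Martinez",
  "DateOfBirth": "1985-04-12",
  "JobTitle": "Teller",
  "LocationID": 1
}
```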

WTF Matthew, you say, that looks almost EXACTLY like the CreateEmployeeCommand! Why can't we just use that?! And after I'm done looking around for my parents (nobody calls me Matthew), I can tell you that there are two reasons why we don't reuse the command objects as requests.

First, requests must be validated against the Read Model, whereas commands are assumed to be valid. Second, a single request may kick off more than one command, as is the case with this request.

But how do we accomplish that validation, you say? By using one of my favorite NuGet packages of all time: FluentValidation.

The Validation Layer

FluentValidation is a NuGet package which allows us to validate objects, placing any validation errors it finds into the controller's ModelState. But (unlike StackExchange.Redis) it doesn't come ready for use in a Dependency Injection environment, so we must do some setup.

First, we need a factory which will create the validator objects:

public class StructureMapValidatorFactory : ValidatorFactoryBase  
{
    private readonly HttpConfiguration _configuration;

    public StructureMapValidatorFactory(HttpConfiguration configuration)
    {
        _configuration = configuration;
    }

    public override IValidator CreateInstance(Type validatorType)
    {
        return _configuration.DependencyResolver.GetService(validatorType) as IValidator;
    }
}

Next, in our WebApiConfig.cs file, we need to enable FluentValidation's validator provider using our factory:

public static void Register(HttpConfiguration config)  
{
    ...

    FluentValidationModelValidatorProvider.Configure(config, x => x.ValidatorFactory = new StructureMapValidatorFactory(config));

    ...
}

Finally, we need to register the validator provider in our StructureMap container, which is done in the DefaultRegistry class.

public DefaultRegistry()  
{
    ...
    //FluentValidation 
    FluentValidation.AssemblyScanner.FindValidatorsInAssemblyContaining<CreateEmployeeRequestValidator>()
            .ForEach(result =>
            {
                For(result.InterfaceType)
                    .Use(result.ValidatorType);
            });
}

With all of that in place, we're ready to begin building our validators!

Create Employee - Validation

Here are the validation rules we need to implement when creating an employee:

  • The Employee ID must not already exist.
  • The assigned Location must exist.
  • The First Name cannot be blank.
  • The Last Name cannot be blank.
  • The Job Title cannot be blank.
  • Employees must be 16 years of age or older.

Here's how we would implement such a validator using FluentValidation:

public class CreateEmployeeRequest  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
}

public class CreateEmployeeRequestValidator : AbstractValidator<CreateEmployeeRequest>  
{
    public CreateEmployeeRequestValidator(IEmployeeRepository employeeRepo, ILocationRepository locationRepo)
    {
        RuleFor(x => x.EmployeeID).Must(x => !employeeRepo.Exists(x)).WithMessage("An Employee with this ID already exists.");
        RuleFor(x => x.LocationID).Must(x => locationRepo.Exists(x)).WithMessage("No Location with this ID exists.");
        RuleFor(x => x.FirstName).NotNull().NotEmpty().WithMessage("The First Name cannot be blank.");
        RuleFor(x => x.LastName).NotNull().NotEmpty().WithMessage("The Last Name cannot be blank.");
        RuleFor(x => x.JobTitle).NotNull().NotEmpty().WithMessage("The Job Title cannot be blank.");
        RuleFor(x => x.DateOfBirth).LessThan(DateTime.Today.AddYears(-16)).WithMessage("Employees must be 16 years old or older.");
    }
}

Notice that IEmployeeRepository and ILocationRepository are constructor parameters to the validator class. We don't need to do anything else to get those objects injected, as that was taken care of by registering the Repositories and the FluentValidation factory.
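One rule worth a closer look is the age check: LessThan(DateTime.Today.AddYears(-16)) means the DateOfBirth must fall before the date sixteen years ago today. A quick sketch of the comparison (the dates are arbitrary examples):

```csharp
using System;

public static class AgeRuleDemo
{
    // Mirrors the FluentValidation rule: the birth date must be earlier
    // than today's date minus sixteen years.
    public static bool IsOldEnough(DateTime dateOfBirth, DateTime today)
    {
        return dateOfBirth < today.AddYears(-16);
    }

    public static void Main()
    {
        var today = new DateTime(2017, 6, 1);

        Console.WriteLine(IsOldEnough(new DateTime(2001, 1, 1), today)); // True  (16 years old)
        Console.WriteLine(IsOldEnough(new DateTime(2003, 1, 1), today)); // False (14 years old)
    }
}
```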

There's just one last thing we need to do to have our validation layer fully integrated: whenever validation fails, we need to automatically return HTTP 400 Bad Request. We accomplish this by using an ActionFilter...

public class BadRequestActionFilter : ActionFilterAttribute  
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        if (!actionContext.ModelState.IsValid)
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.BadRequest, new ValidationErrorWrapper(actionContext.ModelState));
        }
        base.OnActionExecuting(actionContext);
    }
}

...and registering that action filter in WebApiConfig.

public static class WebApiConfig  
{
    public static void Register(HttpConfiguration config)
    {
        // Web API configuration and services
        config.Filters.Add(new BadRequestActionFilter());
        ...
    }
}

Controller Action

The controller action for creating an employee does two things: it issues the CreateEmployeeCommand, and then issues an AssignEmployeeToLocationCommand. Since this is the only action in the EmployeeController class, the entire class looks like this:

[RoutePrefix("employee")]
public class EmployeeController : ApiController  
{
    private IMapper _mapper;
    private ICommandSender _commandSender;

    public EmployeeController(ICommandSender commandSender, IMapper mapper)
    {
        _commandSender = commandSender;
        _mapper = mapper;
    }

    [HttpPost]
    [Route("create")]
    public IHttpActionResult Create(CreateEmployeeRequest request)
    {
        var command = _mapper.Map<CreateEmployeeCommand>(request);
        _commandSender.Send(command);

        var assignCommand = new AssignEmployeeToLocationCommand(request.LocationID, request.EmployeeID);
        _commandSender.Send(assignCommand);
        return Ok();
    }
}

Since we've now got the EmployeeController written, we can move on to the next request: creating a new location.

Request 2 - Create Location

Now let's build a request and a validator to create a location. Our validation rules for creating a new location look like this:

  1. The location ID must not already exist.
  2. The street address cannot be blank.
  3. The city cannot be blank.
  4. The state cannot be blank.
  5. The postal code cannot be blank.

Implementing those rules results in the following classes:

public class CreateLocationRequest  
{
    public int LocationID { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }
}

public class CreateLocationRequestValidator : AbstractValidator<CreateLocationRequest>  
{
    public CreateLocationRequestValidator(ILocationRepository locationRepo)
    {
        RuleFor(x => x.LocationID).Must(x => !locationRepo.Exists(x)).WithMessage("A Location with this ID already exists.");
        RuleFor(x => x.StreetAddress).NotNull().NotEmpty().WithMessage("The Street Address cannot be blank.");
        RuleFor(x => x.City).NotNull().NotEmpty().WithMessage("The City cannot be blank.");
        RuleFor(x => x.State).NotNull().NotEmpty().WithMessage("The State cannot be blank.");
        RuleFor(x => x.PostalCode).NotNull().NotEmpty().WithMessage("The Postal Code cannot be blank.");
    }
}

The corresponding controller (LocationController) looks pretty similar to EmployeeController.

[RoutePrefix("locations")]
public class LocationController : ApiController  
{
    private IMapper _mapper;
    private ICommandSender _commandSender;
    private ILocationRepository _locationRepo;
    private IEmployeeRepository _employeeRepo;

    public LocationController(ICommandSender commandSender, IMapper mapper, ILocationRepository locationRepo, IEmployeeRepository employeeRepo)
    {
        _commandSender = commandSender;
        _mapper = mapper;
        _locationRepo = locationRepo;
        _employeeRepo = employeeRepo;
    }

    [HttpPost]
    [Route("create")]
    public IHttpActionResult Create(CreateLocationRequest request)
    {
        var command = _mapper.Map<CreateLocationCommand>(request);
        _commandSender.Send(command);
        return Ok();
    }
}

Looking at this controller, you might be wondering why IEmployeeRepository and ILocationRepository are passed into the constructor when they aren't used by the Create() action. That's because we still have one request left to build: assigning an employee to a location.

Request 3 - Assign Employee to Location

Remember that one of our business rules (from Part 2) says the following:

  1. Employees may switch locations, but they may not be assigned to more than one location at a time.

The request we are going to build now will assign an employee to a new location, as well as remove that employee from the location s/he is currently assigned to.

But, you declare, we defined a command to remove an employee from a location! Is that not also a request? Nope, it's not, and for the same reason that creating an employee kicks off two commands: a single request can result in multiple commands. In this case, assigning an employee to a location results in one or two commands, depending on whether the employee is currently assigned to a location.

First, let's build the request and its validator. In this case, we have three validation rules:

  1. The Location must exist.
  2. The Employee must exist.
  3. The Employee must not already be assigned to the given Location.

Implementing those rules results in the following classes:

public class AssignEmployeeToLocationRequest  
{
    public int LocationID { get; set; }
    public int EmployeeID { get; set; }
}

public class AssignEmployeeToLocationRequestValidator : AbstractValidator<AssignEmployeeToLocationRequest>  
{
    public AssignEmployeeToLocationRequestValidator(IEmployeeRepository employeeRepo, ILocationRepository locationRepo)
    {
        RuleFor(x => x.LocationID).Must(x => locationRepo.Exists(x)).WithMessage("No Location with this ID exists.");
        RuleFor(x => x.EmployeeID).Must(x => employeeRepo.Exists(x)).WithMessage("No Employee with this ID exists.");
        RuleFor(x => new { x.LocationID, x.EmployeeID }).Must(x => !locationRepo.HasEmployee(x.LocationID, x.EmployeeID)).WithMessage("This Employee is already assigned to that Location.");
    }
}

Now, all we have to do is write the controller action:

[RoutePrefix("locations")]
public class LocationController : ApiController  
{
    ...
    [HttpPost]
    [Route("assignemployee")]
    public IHttpActionResult AssignEmployee(AssignEmployeeToLocationRequest request)
    {
        var employee = _employeeRepo.GetByID(request.EmployeeID);
        if (employee.LocationID != 0)
        {
            var oldLocationAggregateID = _locationRepo.GetByID(employee.LocationID).AggregateID;

            RemoveEmployeeFromLocationCommand removeCommand = new RemoveEmployeeFromLocationCommand(oldLocationAggregateID, employee.LocationID, employee.EmployeeID);
            _commandSender.Send(removeCommand);
        }

        var locationAggregateID = _locationRepo.GetByID(request.LocationID).AggregateID;
        var assignCommand = new AssignEmployeeToLocationCommand(locationAggregateID, request.LocationID, request.EmployeeID);
        _commandSender.Send(assignCommand);

        return Ok();
    }
}

Whew! With that final controller action in place, we have completed building our APIs! Give yourselves a pat on the back for coming this far!

Summary

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Built a Queries API with a DI container and implemented our business queries.
  • Built a Commands API with a DI container and implemented our requests.
  • Used FluentValidation to implement the Commands API's validation layer.

Congratulations! We've completed the build of our real-world CQRS/ES system! All that's left to do is run a few commands and queries to show how the system works, and we will do that in the final part of this series. Keep your eyes (and feed readers) open for Part 5 of Real-World CQRS/ES with ASP.NET and Redis!

Happy Coding!

Real-World CQRS/ES with ASP.NET and Redis Part 3 - The Read Model

NOTE: This is Part 3 of a five-part series in which I detail how a real-world ASP.NET Web API app using the Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) patterns and the Redis database might look. Here's Part 1 of this series. The corresponding repository is over on GitHub.

In Part 1, we talked about why we might want to use Command-Query Responsibility Segregation and Event Sourcing (CQRS/ES) in our apps, and in Part 2 we defined how the Write Model (Commands, Command Handlers, Events, Aggregate Roots) of our simple system behaves. In this part, we will define the system's Read Model; that is, how other apps will query for the data we use.

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we will:

  • Discover what comprises the Read Model for CQRS applications,
  • Gather our requirements for the queries we need to support,
  • Choose a data store (and explain why we chose the one that we did),
  • Build the Repositories which will allow our app to query the Read Model data AND
  • Build the Event Handlers which will maintain the Read Model data store.

Let's get started!

What Is The Read Model?

Quite simply, the read model is the model of the data that consuming applications can query against. There are a few guidelines to keep in mind when designing a good read model:

  1. The Read Model should reflect the kinds of queries run against it.
  2. The Read Model should contain the current state of the data (this is important as we are using Event Sourcing).

In our system, the Read Model consists of the Read Model Objects, the Read Data Store, the Event Handlers, and the Repositories. This post will walk through designing all of these objects.

Query Requirements

First, a reminder: the entire point of CQRS is that the read model and the write model are totally separate things. You can model each in a completely different way, and in fact this is what we are doing in this tutorial: for the write model, we are storing events (using the Event Sourcing pattern), but our read model must conform to the guidelines laid out above.

When designing a Read Model for a CQRS system, you generally want said model to reflect the kinds of queries that will be run against that system. So, if you need a way to get all locations, locations by ID, and employees by ID, your Read Model should be able to do each of these easily, without a lot of round-tripping between the data store and the application.

But in order to design our Read Model, we must first know what queries exist. Here are the possible queries for our sample system:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

Let's see how we can design our Read Model to reflect these queries.

Design of Read Model Objects

One of the benefits of using CQRS is that we can use fully-separate classes to define what the Read Model contains. Let's use two new classes (EmployeeRM and LocationRM, RM being short for Read Model) to represent how our Locations and Employees will be stored in our Read Model database.

public class EmployeeRM  
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
    public string JobTitle { get; set; }
    public int LocationID { get; set; }
    public Guid AggregateID { get; set; }
}

public class LocationRM  
{
    public int LocationID { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }
    public List<int> Employees { get; set; }
    public Guid AggregateID { get; set; }

    public LocationRM()
    {
        Employees = new List<int>();
    }
}

For comparison, here's the properties from the Write Model versions of these objects (Employee and Location):

public class Employee : AggregateRoot  
{
    private int _employeeID;
    private string _firstName;
    private string _lastName;
    private DateTime _dateOfBirth;
    private string _jobTitle;

    ...
}

public class Location : AggregateRoot  
{
    private int _locationID;
    private string _streetAddress;
    private string _city;
    private string _state;
    private string _postalCode;
    private List<int> _employees;

    ...
}

As you can see, the LocationRM and EmployeeRM both store their respective AggregateID that was assigned to them when they were created, and EmployeeRM further has the property LocationID which does not exist in the Employee Write Model class.

Now we must tackle a different problem: what data store will we use?

Choosing a Data Store

In any CQRS system, the selection of a datastore comes down to a couple of questions:

  1. How fast do you need reads to be?
  2. How much functionality does the Read Model datastore need to be able to do on its own?

In my system, I am assuming there will be an order of magnitude more reads than writes (a very common scenario for CQRS applications). Further, I am assuming that my Read Model datastore can be treated as little more than a cache that gets updated occasionally. These two assumptions lead me to answer those questions like this:

  1. How fast do you need reads to be? Extremely
  2. How much functionality does the Read Model datastore need to be able to do on its own? Not a lot

I'm a SQL Server guy by trade, but SQL Server is not exactly known for being "fast". You absolutely can optimize it to be such, but at this time I'm more interested in trying a datastore that I've heard a lot about but haven't actually had a chance to use yet: Redis.

Redis calls itself a "data structure store". What that really means is that it stores objects, not relations (as you would in a Relational Database such as SQL Server). Further, Redis distinguishes between keys and everything else, and gives you several options for creating such keys.

For this demo, you don't really need to know more about how Redis works, but I encourage you to check it out on your own. Further, if you intend to run the sample app (and, like most .NET devs, you're running Windows), you'll want to download MSOpenTech's Redis port for Windows.

We now have two pieces of our Read Model in place: the Read Model Objects, and the Read Data Store. We can now begin implementation of a layer which will allow us to interface with the Read Data Store and update it as necessary: the Repository layer.

Creating the Repositories

The Repositories (for this project) are interfaces which allow us to query the Read Model. Remember that we have five possible queries that we need to support:

  • Get Employee by ID
  • Get Location by ID
  • Get All Locations
  • Get All Employees (with their assigned Location ID)
  • Get All Employees at a Location

However, we also need to support certain validation scenarios; for example, we cannot assign an Employee to a location that doesn't exist. Therefore we also need certain functions to check if employees or locations exist.

For the sake of good design, we need at least two Repositories: one for Locations and one for Employees. But a surprising amount of functionality is needed by both of these repositories:

  • They both need to get an object by its ID.
  • They both need to check if an object with a given ID exists.
  • They both need to save a changed object back into the Read Data Store.
  • They both need to be able to get multiple objects of the same type.

Consequently, we can build a common IBaseRepository interface and BaseRepository class which implement these common features. The IBaseRepository interface will be inherited by the other repository interfaces; it looks like this:

public interface IBaseRepository<T>  
{
    T GetByID(int id);
    List<T> GetMultiple(List<int> ids);
    bool Exists(int id);
    void Save(T item);
}

Now, we also need two more interfaces which inherit from IBaseRepository<T>: IEmployeeRepository and ILocationRepository:

public interface IEmployeeRepository : IBaseRepository<EmployeeRM>  
{
    IEnumerable<EmployeeRM> GetAll();
}

public interface ILocationRepository : IBaseRepository<LocationRM>  
{
    IEnumerable<LocationRM> GetAll();
    IEnumerable<EmployeeRM> GetEmployees(int locationID);
    bool HasEmployee(int locationID, int employeeID);
}

The next piece of the puzzle is the BaseRepository class (which, unfortunately, does NOT implement IBaseRepository<T>). This class provides methods by which items can be retrieved from or saved to the Redis Read Data Store:

public class BaseRepository  
{
    private readonly IConnectionMultiplexer _redisConnection;

    /// <summary>
    /// The Namespace is the first part of any key created by this Repository, e.g. "location" or "employee"
    /// </summary>
    private readonly string _namespace;

    public BaseRepository(IConnectionMultiplexer redis, string nameSpace)
    {
        _redisConnection = redis;
        _namespace = nameSpace;
    }

    public T Get<T>(int id)
    {
        return Get<T>(id.ToString());
    }

    public T Get<T>(string keySuffix)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        var serializedObject = database.StringGet(key);
        if (serializedObject.IsNullOrEmpty) throw new ArgumentNullException(); //Throw a better exception than this, please
        return JsonConvert.DeserializeObject<T>(serializedObject.ToString());
    }

    public List<T> GetMultiple<T>(List<int> ids)
    {
        var database = _redisConnection.GetDatabase();
        List<RedisKey> keys = new List<RedisKey>();
        foreach (int id in ids)
        {
            keys.Add(MakeKey(id));
        }
        var serializedItems = database.StringGet(keys.ToArray(), CommandFlags.None);
        List<T> items = new List<T>();
        foreach (var item in serializedItems)
        {
            items.Add(JsonConvert.DeserializeObject<T>(item.ToString()));
        }
        return items;
    }

    public bool Exists(int id)
    {
        return Exists(id.ToString());
    }

    public bool Exists(string keySuffix)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        var serializedObject = database.StringGet(key);
        return !serializedObject.IsNullOrEmpty;
    }

    public void Save(int id, object entity)
    {
        Save(id.ToString(), entity);
    }

    public void Save(string keySuffix, object entity)
    {
        var key = MakeKey(keySuffix);
        var database = _redisConnection.GetDatabase();
        database.StringSet(key, JsonConvert.SerializeObject(entity));
    }

    private string MakeKey(int id)
    {
        return MakeKey(id.ToString());
    }

    private string MakeKey(string keySuffix)
    {
        if (!keySuffix.StartsWith(_namespace + ":"))
        {
            return _namespace + ":" + keySuffix;
        }
        else return keySuffix; //Key is already prefixed with namespace
    }
}
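The key-building logic deserves a quick illustration: every key is prefixed with the repository's namespace, and because MakeKey checks for an existing prefix, it's safe to call on an already-built key. A standalone sketch of that behavior:

```csharp
using System;

public static class KeyDemo
{
    // Mirrors BaseRepository.MakeKey: prefix the suffix with "namespace:"
    // unless it is already prefixed.
    public static string MakeKey(string ns, string keySuffix)
    {
        if (!keySuffix.StartsWith(ns + ":"))
        {
            return ns + ":" + keySuffix;
        }
        return keySuffix;
    }

    public static void Main()
    {
        Console.WriteLine(MakeKey("employee", "42"));          // employee:42
        Console.WriteLine(MakeKey("employee", "all"));         // employee:all
        Console.WriteLine(MakeKey("location", "location:7"));  // location:7 (already prefixed)
    }
}
```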

With all of that infrastructure in place, we can start implementing the EmployeeRepository and LocationRepository.

Employee Repository

In the EmployeeRepository, let's get a single Employee record with the given Employee ID.

public class EmployeeRepository : BaseRepository, IEmployeeRepository  
{
    public EmployeeRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "employee") { }

    public EmployeeRM GetByID(int employeeID)
    {
        return Get<EmployeeRM>(employeeID);
    }
}

Hey, that was easy! Because of the work we did in the BaseRepository, our Read Model Object repositories will be quite simple. Here's the rest of EmployeeRepository:

public class EmployeeRepository : BaseRepository, IEmployeeRepository  
{
    public EmployeeRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "employee") { }

    public EmployeeRM GetByID(int employeeID)
    {
        return Get<EmployeeRM>(employeeID);
    }

    public List<EmployeeRM> GetMultiple(List<int> employeeIDs)
    {
        return GetMultiple<EmployeeRM>(employeeIDs);
    }

    public IEnumerable<EmployeeRM> GetAll()
    {
        return Get<List<EmployeeRM>>("all");
    }

    public void Save(EmployeeRM employee)
    {
        Save(employee.EmployeeID, employee);
        MergeIntoAllCollection(employee);
    }

    private void MergeIntoAllCollection(EmployeeRM employee)
    {
        List<EmployeeRM> allEmployees = new List<EmployeeRM>();
        if (Exists("all"))
        {
            allEmployees = Get<List<EmployeeRM>>("all");
        }

        //If the employee already exists in the ALL collection, remove that entry
        if (allEmployees.Any(x => x.EmployeeID == employee.EmployeeID))
        {
            allEmployees.Remove(allEmployees.First(x => x.EmployeeID == employee.EmployeeID));
        }

        //Add the modified employee to the ALL collection
        allEmployees.Add(employee);

        Save("all", allEmployees);
    }
}

Take special note of the MergeIntoAllCollection() method, and let me take a minute to explain what I'm doing here.

Querying for Collections

As I mentioned earlier, Redis makes a distinction between keys and everything else, and because of this it doesn't really apply a "type" per se to anything stored against a key. Consequently, unlike in SQL Server, you don't really query for several objects (e.g. SELECT * FROM table WHERE condition) because that's not what Redis is for.

Remember that we're designing this to reflect the queries we need to run. We can think of this as changing when the work of making a collection is done.

In SQL Server or other relational databases, most of the time you do the work of creating a collection when you run a query. So, you might have a huge table of, say, vegetables, and then create a query to only give you carrots, or radishes, or whatever.

But in Redis, no such querying is possible. Therefore, instead of doing the work when we need the query, we prep the data in advance at the point where it changes. Consequently, the queries are ready for consumption immediately after the corresponding event handlers are done processing.

All we're doing is moving the time when we create the query results from "when the query runs" to "when the source data changes."

With the current set up of the repositories, any time a LocationRM or EmployeeRM object is saved, that object is merged back into the respective "all collection" for that object. Hence, I needed MergeIntoAllCollection().
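The merge itself is plain list manipulation: drop any stale copy of the item, then append the fresh one. Here's that semantics in isolation, using a stripped-down stand-in for the Read Model objects:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class MergeDemo
{
    // A minimal stand-in for EmployeeRM/LocationRM: an ID plus some data.
    public class Item
    {
        public int ID { get; set; }
        public string Data { get; set; }
    }

    // Mirrors MergeIntoAllCollection(): remove any stale entry with the
    // same ID, then add the updated item.
    public static void Merge(List<Item> all, Item item)
    {
        if (all.Any(x => x.ID == item.ID))
        {
            all.Remove(all.First(x => x.ID == item.ID));
        }
        all.Add(item);
    }

    public static void Main()
    {
        var all = new List<Item> { new Item { ID = 1, Data = "old" } };

        Merge(all, new Item { ID = 1, Data = "new" });   // replaces the stale entry
        Merge(all, new Item { ID = 2, Data = "other" }); // appends a new entry

        Console.WriteLine(all.Count);                      // 2
        Console.WriteLine(all.First(x => x.ID == 1).Data); // new
    }
}
```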

Location Repository

Now, let's see what the LocationRepository looks like:

public class LocationRepository : BaseRepository, ILocationRepository  
{
    public LocationRepository(IConnectionMultiplexer redisConnection) : base(redisConnection, "location") { }

    public LocationRM GetByID(int locationID)
    {
        return Get<LocationRM>(locationID);
    }

    public List<LocationRM> GetMultiple(List<int> locationIDs)
    {
        return GetMultiple<LocationRM>(locationIDs);
    }

    public bool HasEmployee(int locationID, int employeeID)
    {
        //Deserialize the LocationDTO with the key location:{locationID}
        var location = Get<LocationRM>(locationID);

        //If that location has the specified Employee, return true
        return location.Employees.Contains(employeeID);
    }

    public IEnumerable<LocationRM> GetAll()
    {
        return Get<List<LocationRM>>("all");
    }

    public IEnumerable<EmployeeRM> GetEmployees(int locationID)
    {
        return Get<List<EmployeeRM>>(locationID.ToString() + ":employees");
    }

    public void Save(LocationRM location)
    {
        Save(location.LocationID, location);
        MergeIntoAllCollection(location);
    }

    private void MergeIntoAllCollection(LocationRM location)
    {
        List<LocationRM> allLocations = new List<LocationRM>();
        if (Exists("all"))
        {
            allLocations = Get<List<LocationRM>>("all");
        }

        //If the location already exists in the ALL collection, remove that entry
        if (allLocations.Any(x => x.LocationID == location.LocationID))
        {
            allLocations.Remove(allLocations.First(x => x.LocationID == location.LocationID));
        }

        //Add the modified location to the ALL collection
        allLocations.Add(location);

        Save("all", allLocations);
    }
}

Now our Repositories are complete, and we can finally write the last, best piece of our system's Read Model: the event handlers.

Building the Event Handlers

Whenever an event is issued by our system, we can use an Event Handler to do something with that event. In our case, we need our Event Handlers to update our Redis data store.

First, let's create an Event Handler for the Create Employee event.

public class EmployeeEventHandler : IEventHandler<EmployeeCreatedEvent>  
{
    private readonly IMapper _mapper;
    private readonly IEmployeeRepository _employeeRepo;
    public EmployeeEventHandler(IMapper mapper, IEmployeeRepository employeeRepo)
    {
        _mapper = mapper;
        _employeeRepo = employeeRepo;
    }

    public void Handle(EmployeeCreatedEvent message)
    {
        EmployeeRM employee = _mapper.Map<EmployeeRM>(message);
        _employeeRepo.Save(employee);
    }
}

Note that all interfacing with the Redis data store is done through the repository, and so the event handler consumes an instance of IEmployeeRepository in its constructor. Because we're using Dependency Injection (which we will set up in Part 4), this usage becomes possible and greatly simplifies our event handler.

In any case, notice that all this event handler is doing is creating the corresponding Read Model object from an event (specifically the EmployeeCreatedEvent).

Now let's build the event handler for a Location. In this case, we have three events to handle: creating a new Location, assigning an employee to a Location, and removing an employee from a Location (and in order to do all of those, it will need to take both ILocationRepository and IEmployeeRepository as constructor parameters):

public class LocationEventHandler : IEventHandler<LocationCreatedEvent>,  
                                    IEventHandler<EmployeeAssignedToLocationEvent>,
                                    IEventHandler<EmployeeRemovedFromLocationEvent>
{
    private readonly IMapper _mapper;
    private readonly ILocationRepository _locationRepo;
    private readonly IEmployeeRepository _employeeRepo;
    public LocationEventHandler(IMapper mapper, ILocationRepository locationRepo, IEmployeeRepository employeeRepo)
    {
        _mapper = mapper;
        _locationRepo = locationRepo;
        _employeeRepo = employeeRepo;
    }

    public void Handle(LocationCreatedEvent message)
    {
        //Create a new LocationDTO object from the LocationCreatedEvent
        LocationRM location = _mapper.Map<LocationRM>(message);

        _locationRepo.Save(location);
    }

    public void Handle(EmployeeAssignedToLocationEvent message)
    {
        var location = _locationRepo.GetByID(message.NewLocationID);
        location.Employees.Add(message.EmployeeID);
        _locationRepo.Save(location);

        //Find the employee which was assigned to this Location
        var employee = _employeeRepo.GetByID(message.EmployeeID);
        employee.LocationID = message.NewLocationID;
        _employeeRepo.Save(employee);
    }

    public void Handle(EmployeeRemovedFromLocationEvent message)
    {
        var location = _locationRepo.GetByID(message.OldLocationID);
        location.Employees.Remove(message.EmployeeID);
        _locationRepo.Save(location);
    }
}

With the Event Handlers in place, every time an Event is kicked off, it will be consumed by the Event Handlers and the Redis data model will be updated. Success!

Summary

In this part of our Real-World CQRS/ES with ASP.NET and Redis series, we:

  • Built the Read Model Data Store using Redis,
  • Designed our Read Model to support our business's queries,
  • Built the Event Handlers which place data into said data store AND
  • Built a set of repositories to access the Redis data.

There's still a lot to do, though. We need to set up our Dependency Injection system, our validation layer, and our Requests. We'll do all of that in Part 4 of Real-World CQRS/ES with ASP.NET and Redis!

Happy Coding!