What if, no matter how you try to simplify, your aggregate root is pretty darn big? Writing application services to handle these large entities is a challenge. We run into this all the time with scientific computing. The object representing a simulation to run is typically quite complex. Imagine describing the geology of the Gulf of Mexico. No way around it: it’s going to be split across 30 database tables and reference some pretty heavy inputs. Even if you decide to be clever and keep it all in one loadable JSON object, for instance, it’s just heavy.

Such large objects present us with a dilemma we haven’t always handled well. To reason about them clearly, separate data access from domain logic, and maintain important invariants, we’d rather write domain logic that assumes the entire entity is in memory for us to reference.

In this only slightly contrived example, the end user is adding some information about the water depth in the Gulf of Mexico 15 million years ago (15 Ma, for megaannum). Because the simulation runs in discrete time steps, we need to make sure that a time step is introduced at 15 Ma that will use this new value.

void Handle(AddPaleoWaterDepthCommand c) {
  var s = LoadSimulation(c.simulationId);
  s.WaterDepthSeries.Add(c.Age, c.WaterDepth);
  s.RecalculateTimeSteps();
  SaveSimulation(s);
}

But lots of inputs can introduce time steps: major depositional events, salt movements, temperature histories, etc. So even though it’s just one WaterDepthSeries, we can only RecalculateTimeSteps if we have all that other data in memory. (In a well-factored domain object, by the way, just modifying the water depth series would automatically adjust the time steps, because you’d be using clever Value Objects, but it’s easier to see the point by writing the code this way.)
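
To make that dependency concrete, here is a minimal sketch of what a RecalculateTimeSteps could look like. The collection and type names (DepositionalEvents, SaltMovements, TemperatureHistory, TimeStep) are hypothetical stand-ins for the real model, not code from our project.

// Illustrative sketch only: the set of time steps is the union of every age that appears
// in any input series, so you cannot recalculate it without all of those inputs in memory.
void RecalculateTimeSteps() {
  var ages = new SortedSet<double>();                      // ages in Ma
  ages.UnionWith(WaterDepthSeries.Ages);
  ages.UnionWith(DepositionalEvents.Select(e => e.Age));   // hypothetical collections
  ages.UnionWith(SaltMovements.Select(m => m.Age));
  ages.UnionWith(TemperatureHistory.Ages);
  TimeSteps = ages.Select(a => new TimeStep(a)).ToList();
}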

But hold on. What if the user just wants to change the name of her simulation to “Low Heat Flow Case #1 <01> Thursday Best (FINAL 2)”?

void Handle(ChangeProjectNameCommand c) {
  var s = LoadSimulation(c.simulationId);
  s.Name = c.NewName;
  SaveSimulation(s);
}

Now loading hundreds of rows from dozens of tables, along with potentially megabytes of ancillary data (like gridded inputs), just seems pig-headed. Just to change a name?

So what happens in projects like this? I’ll list two approaches; there are surely more.

  1. Make heavier use of lazy loading. Using EF, or NHibernate, or some home-grown solution, find ways to load data as it’s needed. The problems with this are of course well known: it can be complicated to set up, and performance can be very difficult to tune. The ORM often does not know how to efficiently load the data. In the best case, this secret sauce solves the problem effectively. In the more common case, programmers end up courting the ORM like a reluctant date, trying to get it to do what they want.
  2. Manually determine what to load. Each application service takes care to load only the data it needs. The ChangeProjectNameCommand handler would probably get away with just loading one row, while the AddPaleoWaterDepthCommand handler would know to load all data associated with all time series. This is fast and easy to read at first. But over time, if the domain object is at all complex, there is inevitably a lot of confusion the further you get from the application services. How do you write RecalculateTimeSteps when you don’t know what’s in memory? In long-lived code bases, you end up with too many “Load” methods for all these cases: LoadSimulation, LoadSimulationForTimesteps, LoadSimulationAndMaps, LoadSimulationAndWells, LoadEarthModelOnly. More and more things take a custom path through the code. Less reuse, more confusion, more reimplementation.

Nothing is inherently wrong with either of these approaches, but they can have trouble scaling. On a recent project we decided to tackle this problem in a way appropriate for our real load. We wanted fast updates and to have the whole model in memory at once, but not to pay the cost of loading that whole model every time we wanted to do anything.

Our approach was simple: first we mediated all updates to the entity in question through a pattern we (unimaginatively) called EntityUpdater. Instead of loading and saving the object ourselves in the application service, we just told the entity updater what we wanted to do with the object and let it worry about how to do it.

void Handle(AddPaleoWaterDepthCommand c) {
  _entityUpdater.Update(c.simulationId, s => {
    s.WaterDepthSeries.Add(c.Age, c.WaterDepth);
    s.RecalculateTimeSteps();
  });
}

The first naive implementation of course just loads the whole simulation, calls the update function, and saves the simulation.

interface IEntityUpdater<TEntityId, TEntity> {
  void Update(TEntityId id, Action<TEntity> updateAction);
}

class NaiveEntityUpdater<TEntityId, TEntity> : IEntityUpdater<TEntityId, TEntity> {
  public void Update(TEntityId id, Action<TEntity> updateAction) {
    // LoadEntity and SaveEntity stand in for whatever repository or ORM calls you use.
    TEntity e = LoadEntity(id);
    updateAction(e);
    SaveEntity(e);
  }
}

For small entities, you register this strategy as the updater and you’re off and away. But for big ones, how can we be cleverer? Our realization was that most of the time only one person was editing any given model. There might be several users on the system, but nearly always they were each working on their own project. A fair amount of collaboration on reading the data, but not much in editing it. We also had a limited number of servers serving those users. The problem was speed and complexity, not scaling “out” to many users.
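
For the small-entity case, that registration really is as boring as it sounds. A minimal sketch, assuming something like the standard .NET DI container and a hypothetical lightweight Well entity (neither is from our actual project):

// Sketch: small entities just take the naive strategy; nothing clever required.
services.AddSingleton<IEntityUpdater<WellId, Well>, NaiveEntityUpdater<WellId, Well>>();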

As an aside: lots of programmers get very excited reading the most recent missive from Facebook about how they handle one billion status updates every day and forget that their scientific or line of business applications have much more limited contention. Let’s keep it simple, shall we?

We decided to just cache the object. Cache? Sure, why not? Holding a “hot” copy of the model saves a lot of messing around. Every time an application service needs to call a method on it, it just does so. The save just emits the events or row updates necessary. What’s faster than a really clever, really fast database read that you spend a month tuning? Why, not talking to your database at all!

But let’s face it: caching is pretty dangerous. What happens if my copy of the data is out of date? What if someone is editing it on another server? How do two application services edit it at once? In a case where you have high contention on these business objects, or you have lots of servers that would have to agree on proper updates, this is a dumb strategy. But it can be simple and fast if the conditions are right.

First: how does the cache work? We should only cache a copy of the data if it’s clean: any failure in an application service should throw away your copy so the next operation gets a clean one.

class CachingEntityUpdater<TEntityId, TEntity> : IEntityUpdater<TEntityId, TEntity>
    where TEntity : class {
  public void Update(TEntityId id, Action<TEntity> updateAction) {
    // Only clean copies live in the cache: load on a miss, and evict on any failure
    // so the next operation starts from a fresh read.
    TEntity e = _cache.Get(id);
    if (e == null) {
      e = LoadEntity(id);
      _cache.Add(id, e);
    }
    try {
      updateAction(e);
      SaveEntity(e);
    }
    catch {
      _cache.Remove(id);
      throw;
    }
  }
}

Well, that was easy enough. While the first load is expensive, later operations absolutely sizzle: sub-millisecond domain logic is achievable in memory, with your SaveEntity call the only true cost. First load still getting you down? Watch the user browsing the application and preemptively load stuff in a background service.
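
One hedged way to do that warm-up, assuming the cache underneath the updater is something like a ConcurrentDictionary and that ProjectOpenedNotification is a hypothetical UI event:

// Sketch: when a user opens a project, load its simulation in the background so the
// first edit finds it already cached. GetOrAdd keeps concurrent warm-ups from colliding.
void Handle(ProjectOpenedNotification n) {
  Task.Run(() => _cache.GetOrAdd(n.SimulationId, id => LoadEntity(id)));
}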

Only masochists write thread-safe domain logic, so you really need to serialize access to the object in case two people happen to want to update the same object. (Recall: in our domain that’s rare, so a few milliseconds of contention between a small handful of users per model is the worst case and absolutely acceptable.) You could do it with a lock, or you could keep a little table of locks by ID so that you can tolerate lots of users. Implementing that is an exercise for the reader, but quite straightforward. (Note: this is just to keep updates in the same process, in a shared-memory threading model, from stomping on each other. It’s not to handle concurrent updates by multiple systems or users on multiple servers. We’ll get to that.)
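
For what it’s worth, here is one minimal in-process sketch of that lock table, a ConcurrentDictionary handing out one gate per entity ID; treat it as an illustration, not our production code:

// Sketch: serialize updates per entity ID within a single process. Unrelated entities
// never block each other; two updates to the same simulation simply take turns.
class PerEntityLock<TEntityId> {
  readonly ConcurrentDictionary<TEntityId, object> _gates = new ConcurrentDictionary<TEntityId, object>();

  public void RunSerialized(TEntityId id, Action action) {
    var gate = _gates.GetOrAdd(id, _ => new object());
    lock (gate) {
      action();
    }
  }
}

The caching updater would then just wrap the body of its Update call in RunSerialized(id, ...).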

So, what’s not to like? You’ve got disgustingly fast access to domain objects, and you have the peace of mind and ease of modeling that comes from assuming the whole domain object is in memory at once. Shoot, you could even expose the cache of domain objects for services that are reading data!
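
A hedged sketch of what that read path could look like on the updater, reusing the same miss handling as Update (the method name and shape are assumptions, not the article’s actual API):

// Sketch: read-side services borrow the cached entity instead of re-reading the database.
// Run it under the same per-ID gate as updates so a reader never sees a half-applied edit.
TResult Read<TResult>(TEntityId id, Func<TEntity, TResult> query) {
  TEntity e = _cache.Get(id);
  if (e == null) {
    e = LoadEntity(id);
    _cache.Add(id, e);
  }
  return query(e);
}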

There are two more gotchas that were straightforward to get around in our environment, but that bear thinking about if you’re trying this at home.

  1. Unit of Work. Notice how we only update one entity at a time? That tremendously simplifying assumption makes the implementation of an update strategy like this very straightforward. If your application service has to update five entities in a single transaction, now you have an absolute mess to deal with if you have any contention on those objects. Before you know it, you’re writing an in-memory database. Stop it, and use the real database. Change your strategy.
  2. Cache Invalidation. You really need to know when that cached object has gone stale. It’s no good editing an object in memory that’s different from what’s in the database. To address this concern, you need to really understand what the source of truth in your system is, and where updates can come from. If you can partition your keyspace so that only one server will edit any given model at a time, wow, you’re done! You are the source of truth, and you can assume you’re right. More likely, you have multiple application servers, or there are other processes (e.g. legacy systems) that are updating the data underneath you.

Cache invalidation is really the hardest problem to solve. There are a few solutions simple enough that managing cache coherence becomes worth the programmability and speed payoff we’re after.

  • Invalidation with Events. In our system entity edits emitted domain events into a common event store. We could listen for these events coming in and flush the cache if a neighbor edited the same object. Emitting events during operations makes a lot of things easier, and if you can do it, you have a leg up on this problem.
  • Lean on the Database. Most databases now have a feature where you can be notified if data is edited; Oracle, for instance, calls it Continuous Query Notification. These notifications are in fact usually meant for cache invalidation. So use them! Register a permanent watch on the relevant tables, and whenever something changes, flush the relevant parts of your cache. Shoot, reload it right then if you’re feeling adventurous.
  • Optimistic Concurrency. One interesting idea is to just ignore the problem. Many ORMs or ActiveRecord implementations are capable of detecting concurrent edits. If you loaded an entity at timestamp 5, and by the time you go to save, the copy of the entity in the database is at timestamp 7, you have to do something clever. In this strategy, you’d detect that failure, reload the object, and try the update again (see the sketch after this list). It defers solving the problem until it actually bites you. Again, in an environment where edits on any individual model are rarely contended, the occasional inefficient update is rare enough to justify this simplifying assumption.

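Here is a hedged sketch of that last strategy, with ConcurrencyException standing in for whatever your ORM throws when the timestamps don’t match (EF calls its version DbUpdateConcurrencyException); the retry count and helper names are assumptions:

// Sketch: optimistic-concurrency retry around the naive load/act/save cycle.
// In a lightly contended system the retry almost never trips.
class RetryingEntityUpdater<TEntityId, TEntity> : IEntityUpdater<TEntityId, TEntity> {
  const int MaxAttempts = 3;

  public void Update(TEntityId id, Action<TEntity> updateAction) {
    for (int attempt = 1; ; attempt++) {
      TEntity e = LoadEntity(id);        // always reload so we start from the latest version
      updateAction(e);
      try {
        SaveEntity(e);
        return;
      }
      catch (ConcurrencyException) when (attempt < MaxAttempts) {
        // Someone else saved first: discard our copy and replay the update on fresh data.
      }
    }
  }
}
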
This pattern is useful for getting high speed out of apparently stateless application services over lightly contended but heavy domain objects that have properly enforced transactional boundaries. That’s a lot of qualification, so be sure it’s all true! But the smiles on our users’ faces when they can update these objects at interactive speeds are worth it.

And perhaps most importantly — and easy to forget after all this talk of caches — the real winners are the programmers who get to make a dramatically simplifying assumption when writing their domain logic.
