Service Caching

We have, as it turns out, naively implemented caching in our services, assuming it would be straightforwardly beneficial.

As many recommend (not just the SS community) for this kind of strategy, we cache GET responses (for, say, 5 minutes), AND we immediately wipe a cached GET response when a related PUT or DELETE request is received for the same resource.
We also wipe any cached GET search results when we receive a POST that could potentially add to those cached search results.
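The GET-cache-with-explicit-invalidation strategy above can be sketched as a minimal in-memory cache (the class name, TTL default, and URL keys are illustrative assumptions, not our actual implementation):

```python
import time

class ResponseCache:
    """Minimal sketch: TTL-based GET-response cache with explicit invalidation."""

    def __init__(self, ttl_seconds=300):  # e.g. cache GETs for 5 minutes
        self.ttl = ttl_seconds
        self.store = {}  # url -> (response, expires_at)

    def get(self, url):
        entry = self.store.get(url)
        if entry is None:
            return None
        response, expires_at = entry
        if time.time() >= expires_at:
            del self.store[url]  # entry expired, drop it
            return None
        return response

    def put(self, url, response):
        self.store[url] = (response, time.time() + self.ttl)

    def invalidate(self, url):
        # Called when a PUT/DELETE hits the same resource, or a POST
        # could change a cached search result for this URL.
        self.store.pop(url, None)

cache = ResponseCache(ttl_seconds=300)
cache.put("/customers/1", {"name": "Alice"})
assert cache.get("/customers/1") == {"name": "Alice"}
cache.invalidate("/customers/1")  # simulating a PUT to /customers/1
assert cache.get("/customers/1") is None
```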

This strategy is quite effective at reducing load on the service and its response times. It is tolerant of resources that change, and it's a pretty maintainable solution too. It works just fine for resources that change often (volatile) or that change infrequently (non-volatile).

However, many of our original resources (Resources A, B, C) are aggregated by consuming services (T, K) that compose these cached resources into larger aggregated resources (Resources D and E), and those aggregated resources are then aggregated further by a WebAPI (K) to drive a SPA web site. So the problem very quickly becomes: "what happens to the cached aggregated resources (D and E) in the aggregating services when a cached original resource (A, B or C) is updated, effectively invalidating the cached aggregated resources (D and E)?"

How will the aggregation services (T, K) know that an original resource has changed, and how do we prevent them from serving stale aggregated results? Or, a better question: how will services T and K know when to request fresh copies of Resources A, B and C?

We understand that:

One technique (Never-Cache) is to never cache the aggregated resources at T and K, which forces the aggregation services to always fetch a fresh copy of the original resources (A, B, C) from the services that do cache them. That costs a trip across the wire to the server each time, of course.

Another technique (Client-Cache) is to implement client-side caching behind each of the aggregation services (T, K), just as web browsers do (caching responses from the services downstream). Since the aggregation services (T, K) have built-in knowledge of which resources they aggregate, they can choose to purge their own client-side caches of soon-to-be-stale resources and re-fetch fresh (updated) resources from services (T, X, Y, Z) when they know the original resources will be invalidated.

Are there other strategies for ensuring that stale resources that are aggregated are invalidated reliably?

(We understand the mechanisms employed by HTTP caches (i.e. validation and expiration); what we are asking for are strategies or guidance that work specifically for services that aggregate resources.)

You can always take the simple sledgehammer option and just nuke all aggregate caches on any create/update that could invalidate any aggregate cache. I prefer going with the simple solution first, then revisiting it if load becomes an issue later. Some caching is a whole lot better than no caching, so in many cases a simple/naive caching solution will suffice.

One approach that may work with dependent/aggregate caches, if the dataset permits, is to get the last-modified date from all the dependent caches and add it to the cache key. That way, if no dependent resources have changed you'll get a cache hit; if one of the dependent resources has been updated since, you'll get a cache miss. With this approach you'll also want to set an expiry on the caches so they clean up after themselves, as you won't be able to predict the cache keys. Unix time in milliseconds is generally a good candidate for serializing dates since it's sequential/sortable, free of time-zone issues, and can be captured in an integer if needed. Instead of dates you could also concatenate the versions of each of the dependent resources (if you're maintaining them); basically you just need something sequential that results in a new cache key when one of the dependent resources has changed.
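The composite-key idea above can be sketched in a few lines (the function name and key format are hypothetical; any scheme that folds the dependents' Unix-ms timestamps or version strings into the key works the same way):

```python
def aggregate_cache_key(resource_id, dependent_last_modified_ms):
    """Compose a cache key from the aggregate's id plus the UnixTime(ms)
    last-modified stamp of every dependent resource. If any dependent
    changes, the key changes, so stale entries simply become cache misses
    and are swept out later by the expiry."""
    parts = [resource_id] + [str(ms) for ms in dependent_last_modified_ms]
    return ":".join(parts)

# Resource D depends on A, B, C; one timestamp per dependent.
key_before = aggregate_cache_key("ResourceD", [1700000000000, 1700000001000, 1700000002000])
# Resource C is updated, so its last-modified stamp moves forward...
key_after = aggregate_cache_key("ResourceD", [1700000000000, 1700000001000, 1700000099000])
# ...and the old cached aggregate is never matched again.
assert key_before != key_after
```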

The sledgehammer is a good minimum viable option, which would be nice, but in our example T wouldn't know to nuke its cached responses if K updates Resource C directly with Z.
T would still respond with stale data for Resource C.

PROXY

The only way the sledgehammer could work reliably is if all updates from K go through T. So in effect T becomes a proxy to X, Y and Z, and K does not talk directly to X, Y or Z. Which is viable.

PURGE

These guys talk about PURGE (http://restcookbook.com/Basics/caching/) as a strategy for telling dependent services to clear their caches. It's an interesting approach, not a standard HTTP method, but it has some merit. (SS doesn't have a standard PURGE verb or Purge() service operation though :frowning: bummer!)

If we did PURGE, then when K updates a resource at Z, it can send a PURGE /resourceD to T and then a GET /resourceD to T, which should force T to re-fetch Resources A, B and C from X, Y and Z.
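A toy dispatcher shows the shape of that PURGE-then-GET flow (the routing, paths, and `rebuild` stand-in are all illustrative assumptions, not a real HTTP server):

```python
# Hypothetical sketch: service T routing a non-standard PURGE verb,
# which drops its cached aggregate so the follow-up GET rebuilds it.
cached = {"/resourceD": {"built_from": ["A", "B", "C"], "stale": True}}

def rebuild(path):
    # Stand-in for T re-fetching Resources A, B, C from X, Y, Z
    # and composing a fresh aggregate.
    return {"built_from": ["A", "B", "C"], "stale": False}

def handle(method, path):
    if method == "PURGE":
        cached.pop(path, None)       # nuke the cached aggregate
        return 204, None
    if method == "GET":
        if path not in cached:       # cache miss: rebuild from originals
            cached[path] = rebuild(path)
        return 200, cached[path]
    return 405, None

# K's two-step sequence after updating Resource C at Z:
assert handle("PURGE", "/resourceD") == (204, None)
status, body = handle("GET", "/resourceD")
assert status == 200 and body["stale"] is False
```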

thoughts?

I’d avoid esoteric verbs and just send a normal POST; there’s no value in using them, and they will in all likelihood just end up as a source of issues.

It sounds like you’re just calling a normal service to say which caches should be invalidated, although I personally wouldn’t be calling the service twice for this; I’d just have an extra ?reload=true param to indicate that it should return and populate the cache with a fresh version.
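The single-call ?reload=true variant collapses the PURGE+GET pair into one request; a minimal sketch (the function and parameter names are assumptions for illustration):

```python
def get_resource(path, query, cache, fetch_fresh):
    """Sketch: one GET where ?reload=true both repopulates the cache
    and returns the fresh copy, avoiding a separate purge round-trip."""
    if query.get("reload") == "true" or path not in cache:
        cache[path] = fetch_fresh(path)  # re-fetch and re-cache
    return cache[path]

cache = {"/resourceD": "stale aggregate"}
result = get_resource("/resourceD", {"reload": "true"}, cache,
                      lambda p: "fresh aggregate")
assert result == "fresh aggregate"
assert cache["/resourceD"] == "fresh aggregate"  # cache repopulated too
```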

Yeah, fair call, agreed. We want to avoid the additional chat too (where possible).

We had experimented with a custom header like this to go along with the GET, to mean “force refetch, not from cache”.

Expect: Cache-Control=no-cache

or some such value.

But now, coming to think of it: if the goal is to get the service at T to not use any of its cached values, then don’t we just support ETags or Last-Modified at T, X, Y and Z?
Then from K, we simply send a special (NOP) ETag in an If-None-Match, or a future date in an If-Modified-Since, to T.
Surely service T should/could/would take that as a definite cache miss and do a full re-fetch from X, Y and Z?

This way we are simply supporting standard HTTP caching behaviour, arguably, with just special support for a NOP ETag in If-None-Match, or for a future date in If-Modified-Since. We could do either or both.
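The forced-miss check at T might look something like this (the `"nop"` sentinel value and function name are assumptions; any ETag value that K and T agree can never match a real one would do):

```python
import time
from email.utils import parsedate_to_datetime

NOP_ETAG = '"nop"'  # hypothetical sentinel agreed between K and T

def is_forced_miss(headers):
    """Sketch: treat the sentinel ETag, or a future If-Modified-Since
    date, as a definite cache miss so T does a full re-fetch of
    Resources A, B, C from X, Y, Z."""
    if headers.get("If-None-Match") == NOP_ETAG:
        return True
    ims = headers.get("If-Modified-Since")
    if ims:
        try:
            if parsedate_to_datetime(ims).timestamp() > time.time():
                return True  # "modified since the future" can never be cached
        except (TypeError, ValueError):
            pass  # malformed date: fall back to normal validation
    return False

assert is_forced_miss({"If-None-Match": '"nop"'})
assert is_forced_miss({"If-Modified-Since": "Fri, 01 Jan 2100 00:00:00 GMT"})
assert not is_forced_miss({"If-None-Match": '"abc123"'})
```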

Right, it would be optimal to implement the HTTP caching headers; for dependent/aggregate caches you would still need a strategy similar to what’s described above to calculate the last-modified date or version string of all dependents.

Thanks Mythz, exploring this with someone helped me get to a possible solution.

I’ll report back once we have the kinks worked out of this addition. Perhaps there is a new HttpCachingFeature() plugin here that others can reuse? Let’s see.