As it turns out, we naively implemented caching in our services, assuming it would be straightforwardly beneficial.
As widely recommended (not just by the SS community) for this kind of strategy, we cache GET responses (for, say, 5 minutes), AND we immediately wipe a cached GET response when a PUT or DELETE request arrives for the same resource.
We also wipe any cached GET search results when we receive a POST that could add to those cached results.
This strategy is quite effective at reducing both the load on the service and its response times, it tolerates resources that change, and it is a pretty maintainable solution. It works just fine for resources that change often (volatile) as well as for those that change infrequently (non-volatile).
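To make the strategy concrete, here is a minimal in-memory sketch of the TTL-plus-write-invalidation scheme described above. All names (`ResponseCache`, `invalidate_prefix`, the key format) are hypothetical illustrations, not our actual implementation; a real service would sit this behind a distributed cache.

```python
import time


class ResponseCache:
    """Minimal sketch: cache GET responses with a TTL, wipe on writes."""

    def __init__(self, ttl_seconds=300):  # e.g. 5 minutes
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, response)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, response = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # TTL expired: drop the stale entry
            return None
        return response

    def put(self, key, response):
        self._store[key] = (time.monotonic() + self.ttl, response)

    def invalidate(self, key):
        # Called when a PUT/DELETE arrives for the same resource.
        self._store.pop(key, None)

    def invalidate_prefix(self, prefix):
        # Called when a POST could add to cached search results,
        # e.g. invalidate_prefix("GET /customers?") after POST /customers.
        for key in [k for k in self._store if k.startswith(prefix)]:
            del self._store[key]
```

The key design point is that writes invalidate eagerly rather than waiting for the TTL, which is what makes the scheme tolerant of volatile resources.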
However, many of our original resources (Resource A, B, C) are aggregated by other consuming services (T, K) that compose these cached resources into larger aggregated resources (Resource D and E), and those aggregated resources are then aggregated further by a WebAPI (K) to drive a SPA web site. The problem very quickly becomes: "What happens to the cached aggregated resources (Resource D and E) in these aggregating services when a cached original resource (i.e. Resource A, B or C) is updated, effectively invalidating the cached aggregated resources (Resource D and E)?"
How will the aggregation services (T, K) know that an original resource has changed, and how can we prevent them from serving stale aggregated results? Or, a better question: how will services T and K know when to request fresh copies of Resources A, B and C?
We understand that:
One technique (Never-Cache) is never to cache the aggregated resources at (T, K), which forces the aggregation services to always fetch a fresh copy of the original resources (A, B, C) from the services that do cache them. That costs a trip across the wire to the server each time, of course.
Another technique (Client-Cache) is to implement client-side caching behind each of the aggregation services (T, K), just as web browsers do (caching responses from the services to the right). Since the aggregation services (T, K) have built-in knowledge of which resources they aggregate, they can purge their own client-side caches of soon-to-be-stale resources and re-fetch fresh (updated) resources from services (T, X, Y, Z) when they know the original resources have been invalidated.
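The Client-Cache technique could be sketched roughly as follows. This is a hypothetical illustration, not our code: `fetch` stands in for the wire call to the downstream service, and `on_original_changed` is the hook whose trigger is exactly the open question (the aggregator must somehow learn that an original changed). The sketch shows how, once notified, an aggregator that records its own dependencies can purge both the stale original and every aggregate built from it.

```python
class AggregatorClientCache:
    """Sketch: an aggregation service (e.g. T) caches the originals it
    fetches, records which aggregate depends on which originals, and
    purges both when told an original has changed."""

    def __init__(self, fetch):
        self.fetch = fetch      # callable: resource key -> fresh copy (wire call)
        self.originals = {}     # original resource key -> cached copy
        self.aggregates = {}    # aggregate key -> cached aggregate
        self.deps = {}          # original key -> set of dependent aggregate keys

    def get_original(self, key):
        if key not in self.originals:
            self.originals[key] = self.fetch(key)  # trip across the wire
        return self.originals[key]

    def build_aggregate(self, agg_key, original_keys, compose):
        if agg_key not in self.aggregates:
            parts = [self.get_original(k) for k in original_keys]
            self.aggregates[agg_key] = compose(parts)
            for k in original_keys:  # remember the dependency edges
                self.deps.setdefault(k, set()).add(agg_key)
        return self.aggregates[agg_key]

    def on_original_changed(self, key):
        # Purge the stale original AND every aggregate built from it;
        # both get re-fetched/re-composed lazily on the next request.
        self.originals.pop(key, None)
        for agg_key in self.deps.pop(key, set()):
            self.aggregates.pop(agg_key, None)
```

For example, after building aggregate D from A and B, a change notification for A would evict only A and D; B stays cached, so rebuilding D costs one fetch rather than two.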
Are there other strategies for reliably invalidating stale aggregated resources?
(We understand the mechanisms employed by HTTP caches, i.e. validation and expiration; what we are asking for is strategies or guidance that work specifically for services that aggregate resources.)