This is another question where I feel like I’m missing something obvious, but here goes:
I have an AutoQuery Data (i.e. using QueryDataContext.ServiceSource) endpoint that stores its results in a Redis cache. The first call to fetch the data is slow and expensive (it can take several minutes), and I can sometimes get multiple requests for new data before the cache gets populated (either initially or when the cache expires), enough to negatively impact the server as a whole. I thought about gating the request call via a Redis lock:
// GetMyData is used as the ServiceSource for AutoQueryDataFeature
public List<MyData> Any(GetMyData request)
{
    using (IRedisClient redis = clientsManager.GetClient())
    {
        using (redis.AcquireLock("ExpensiveCall", TimeSpan.FromMinutes(5)))
        {
            // Expensive call here; return its results so the ServiceSource can cache them
            return LoadMyData(); // placeholder for the slow, expensive fetch
        }
    }
}
… which helps manage the overall load (at the cost of some client code timing out), but doesn’t solve the problem of only fetching the data once and relying on the cache. Is there a way to update the cache in the background through another process instead, or to serve the blocked clients the cached data once I get it?
I’d only use Redis distributed locking if you have multiple load-balanced app servers and you absolutely need to coordinate the same request across all of them.
The AutoQuery cacheable Memory data source is an alternative strategy of pre-loading a disconnected data source and having AutoQuery Services operate on that, i.e.:
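As a rough sketch of that registration (the exact call may differ; this assumes the cached MemorySource overload that takes a source delegate, a cache client and an expiry, plus a hypothetical LoadMyData() method for the expensive fetch):

// Rough sketch: pre-load a disconnected data source and have AutoQuery Data query it.
// LoadMyData() is a hypothetical method that performs the expensive fetch;
// the snapshot is cached and re-loaded after it expires.
var dataSourceCache = new MemoryCacheClient();

Plugins.Add(new AutoQueryDataFeature()
    .AddDataSource(ctx => ctx.MemorySource(
        () => LoadMyData(),          // expensive fetch that builds the snapshot
        dataSourceCache,             // cache client holding the snapshot
        TimeSpan.FromHours(20))));   // snapshot lifetime before it's re-loaded

With this approach the expensive fetch only runs when the snapshot needs re-loading, and every AutoQuery Data request queries the in-memory snapshot instead of hitting the service.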
Using a non-distributed lock makes sense since I don’t necessarily need the same request across my load-balanced servers… is there any way to serve a cached response to a client blocked by the lock (once the lock is released)? I’m guessing not, since caching short-circuits the request, and if you’re already in the request it’s too late?
OK, but would that work with the caching done by AutoQuery, or do I need to turn off caching there? e.g.:
// Don't cache results here, rely on the service to cache data
.AddDataSource(ctx => ctx.ServiceSource<MyData>(new GetMyData(),
    null, TimeSpan.FromHours(20)))
Or, should I use the in-memory LocalCache in conjunction with ToOptimizedResult (with a shorter expiration to cover clients stuck in this state), and use my regular cache (Redis) with AutoQuery? Something like:
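(A rough sketch of that idea, not the original snippet: this assumes Request.ToOptimizedResultUsingCache with the Service's in-memory LocalCache, a cache key derived from Request.RawUrl, and the same hypothetical LoadMyData() fetch as above.)

// Rough sketch only: short-lived, in-memory caching of the serialized response,
// kept separate from the Redis cache used by the AutoQuery ServiceSource.
public object Any(GetMyData request)
{
    return Request.ToOptimizedResultUsingCache(
        base.LocalCache,                  // in-memory cache local to this app server
        "urn:mydata:" + Request.RawUrl,   // key varies per query string
        TimeSpan.FromMinutes(10),         // shorter expiry than the Redis-backed cache
        () => LoadMyData());              // hypothetical expensive fetch
}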
This just shows generic caching for a single query (it’s only going to be able to cache the exact same query). If you wanted to cache all auto queries, I would just use MemorySource and load the snapshot data source you want all AutoQuery Data requests to query; I’d really avoid trying to do any double-caching.