I love the Background Jobs feature. It provides a simple, informative dashboard for background jobs, but I have run into an issue. Before I invest a bunch of time in writing something like a RedisBackgroundJobsFeature, I wanted to see if you had a possible solution with the existing feature that I haven't thought of yet.
I would like to be able to scale my deployment from 1..N instances while having the interface show jobs from all running instances. Do you have any suggestions on how we can reliably do this using the existing feature? I have tried a shared Azure Files based solution, but the SQLite file fails pretty quickly with "The database is locked" errors.
The Background Jobs feature is definitely designed and optimized for an SQLite store, so you wouldn't be able to point the existing libraries at an alternative backend, but you're welcome to use any of its implementation as a starting point for a custom alternative backed by a different data store.
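For anyone exploring that route, the core of a distributed store is atomic claiming, so two instances never run the same job. A minimal sketch of what a Redis-backed store's surface might look like, using StackExchange.Redis — all type and key names here are hypothetical illustrations, not ServiceStack APIs:

```csharp
using StackExchange.Redis;

// Hypothetical contract for a distributed job store; not a ServiceStack interface.
public interface IJobStore
{
    void Enqueue(string jobId, string payload);
    string? ClaimNext(string workerId);     // atomically claim the next pending job
    void Complete(string workerId, string jobId);
}

// Sketch of a Redis-backed store using atomic list operations, so N app
// instances can share one queue without "database is locked" failures.
public class RedisJobStore : IJobStore
{
    readonly IDatabase db;
    public RedisJobStore(IConnectionMultiplexer redis) => db = redis.GetDatabase();

    public void Enqueue(string jobId, string payload)
    {
        db.HashSet($"job:{jobId}", "payload", payload);
        db.ListLeftPush("jobs:pending", jobId);
    }

    // The pop+push move is a single atomic Redis command, so two instances
    // can never claim the same job id.
    public string? ClaimNext(string workerId)
    {
        var jobId = db.ListRightPopLeftPush("jobs:pending", $"jobs:active:{workerId}");
        return jobId.IsNull ? null : (string?)jobId;
    }

    public void Complete(string workerId, string jobId)
    {
        db.ListRemove($"jobs:active:{workerId}", jobId);
        db.KeyDelete($"job:{jobId}");
    }
}
```

A real implementation would also need visibility timeouts to recover jobs from a crashed worker's active list, plus whatever job metadata the dashboard needs.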
SQLite is highly performant with near-zero latency, so it should be able to handle most workloads, but it does require a local disk, preferably a fast SSD. If that's not available in AKS then it won't be suitable; you should likely use whatever Azure recommends, and it looks like Azure WebJobs offers similar functionality.
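As background on why the shared-file approach fails: SQLite's concurrency relies on file locking that network filesystems like Azure Files don't implement reliably. On a genuinely local disk, WAL mode plus a busy timeout is the standard tuning that lets readers and a writer coexist. A generic sketch using Microsoft.Data.Sqlite (the database path is an illustrative assumption, and this is not BackgroundJobs-specific configuration):

```csharp
using Microsoft.Data.Sqlite;

// Generic SQLite tuning for concurrent access on a *local* disk.
// WAL mode lets readers proceed while one writer commits, and
// busy_timeout makes a blocked writer wait instead of failing
// immediately with "database is locked". Neither helps on a
// network share, where SQLite's file locking is unreliable.
using var conn = new SqliteConnection("Data Source=/var/app/jobs.db");
conn.Open();

var cmd = conn.CreateCommand();
cmd.CommandText = "PRAGMA journal_mode=WAL; PRAGMA busy_timeout=5000;";
cmd.ExecuteNonQuery();
```

Even with this tuning, WAL mode is explicitly documented as unsupported across network filesystems, which matches the locking errors seen with the Azure Files approach.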
I have been running my containerized ServiceStack apps on AWS App Runner and find this to be a good solution for simple deployment and scaling. However, this approach did not seem compatible with Background Jobs using SQLite persistence, due to multiple parallel workers as the instance count scales up. Does the new Database Jobs feature work better for this deployment pattern, or is there a different recommended approach for implementing Background Jobs when running a ServiceStack web app with 1..N instances?