We’re working on our CI tool and had been considering a distributed workflow library such as WorkflowCore in conjunction with Redis to distribute the jobs. Given that we now have a working Redis setup and some experience with ServiceStack, we’re looking at a different approach: a job composer dispatches multiple jobs to the MQ, and multiple workers (ServiceStack hosts subscribing to the MQ) process those jobs.
My question is: how can I ensure that each message is consumed by exactly one worker? Do all MQ providers guarantee that a given message is delivered to only a single consumer?
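For context, this single-consumer behaviour is usually called the competing-consumers pattern: each message is removed from the queue by whichever worker dequeues it first, so no other worker sees it (though most brokers give at-least-once delivery, meaning a message can be redelivered after a failure, so handlers should be idempotent). A minimal in-process sketch of the pattern, using Python's thread-safe `queue.Queue` as a stand-in for the broker (not ServiceStack's actual API):

```python
import queue
import threading
from collections import Counter

mq = queue.Queue()      # stands in for the message queue
processed = Counter()   # job id -> how many times it was handled
count_lock = threading.Lock()

def worker() -> None:
    """Pull jobs until the queue is drained. Each get() atomically
    removes the message, so no other worker can receive the same job."""
    while True:
        try:
            job_id = mq.get_nowait()
        except queue.Empty:
            return
        with count_lock:
            processed[job_id] += 1
        mq.task_done()

# Composer dispatches 10 jobs before the workers start.
for job_id in range(10):
    mq.put(job_id)

# 5 competing workers race to consume them.
threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sum(processed.values()) == 10            # every job was handled
assert all(n == 1 for n in processed.values())  # each exactly once
```

The key property is that the dequeue operation is atomic; a broker backed by a Redis list gets the same guarantee because a list pop hands each element to only one client.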
Could you also advise on the best strategy for job tracking? Say the composer distributes 10 different jobs to the MQ and 5 workers start processing them. Once all 10 jobs are done, the composer needs to know so it can move on to other things. Perhaps the workers publish their state back to the MQ and the composer subscribes to a processing queue, or perhaps all systems update a centralised cache, given that we already have Redis.
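One common way to do the centralised-cache variant: the composer writes a remaining-jobs counter for the batch, each worker atomically decrements it when its job completes, and whichever worker brings the counter to zero signals the composer (e.g. by publishing a "batch done" message). With Redis this maps onto `SET`/`DECR`, since `DECR` atomically returns the new value. The sketch below is a hypothetical in-process stand-in for that counter, not ServiceStack or redis-client code:

```python
import threading

class BatchTracker:
    """In-process stand-in for a Redis batch counter. With real Redis
    the composer would SET batch:<id>:remaining to 10, and each worker
    would DECR it; the worker that sees DECR return 0 reports completion."""

    def __init__(self, total_jobs: int):
        self._remaining = total_jobs
        self._lock = threading.Lock()
        self.done = threading.Event()  # the composer waits on this

    def job_finished(self) -> None:
        # Atomic decrement-and-check, mirroring "DECR returned 0".
        with self._lock:
            self._remaining -= 1
            if self._remaining == 0:
                self.done.set()

tracker = BatchTracker(total_jobs=10)

def worker(jobs_to_run: int) -> None:
    for _ in range(jobs_to_run):
        tracker.job_finished()  # called after the real job work completes

# 5 workers, 2 jobs each = the batch of 10.
threads = [threading.Thread(target=worker, args=(2,)) for _ in range(5)]
for t in threads:
    t.start()

# Composer blocks until the last worker reports in.
assert tracker.done.wait(timeout=5)
for t in threads:
    t.join()
```

The important design point is that the decrement and the zero-check happen atomically, so exactly one worker observes the transition to zero and only one completion signal is sent, regardless of how the 10 jobs interleave across the 5 workers.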