MQ Server: ensure a message is processed only once?

We’re working on our CI tool and were considering using a distributed workflow library such as WorkflowCore in conjunction with Redis to distribute the jobs. Given we now have a working Redis set-up and some experience with ServiceStack, we’re now looking at a different approach where a job composer dispatches multiple jobs to the MQ and multiple workers (ServiceStack hosts subscribing to the MQ) process those jobs.

My question is: How can I ensure that a message is only consumed by a single worker? Do all MQ providers ensure a single message is only consumed once?

Also, could you advise on the best strategy for job tracking? Say the composer distributes 10 different jobs to the MQ and 5 workers start processing them. After all 10 jobs are done, the composer needs to know about it so it can move on to other things. Perhaps the workers themselves publish their state back to the MQ and the composer subscribes to a processing queue, or perhaps all systems update a centralised cache, given we have Redis.

Yes, they all take ownership of the message so only a single worker processes it, but how they requeue the message for retrying upon failure differs: RedisMQ just publishes it back to the MQ, whilst RabbitMQ sends an explicit NAK and the broker requeues it. When messages have reached their max RetryAttempts they’re sent to the DLQ.
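For reference, a minimal sketch of wiring this up with RedisMqServer (the retryCount value here is illustrative):

```csharp
using System;
using ServiceStack.Messaging;
using ServiceStack.Messaging.Redis;
using ServiceStack.Redis;

var redisManager = new RedisManagerPool("localhost:6379");

// Messages that still fail after retryCount attempts are moved to the
// type's dead-letter queue, whose name is available via QueueNames<T>.Dlq
var mqServer = new RedisMqServer(redisManager, retryCount: 2);

// ... RegisterHandler<T>() calls go here, then:
mqServer.Start();
```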

As described by the Message Workflow, the Response of an MQ Request gets sent to the .inq of the Response Message, so if you have Redis I’d just have a handler for the Response Message which increments a Redis key and only acts once it’s received all 10 messages. All messages should be sent with the same JobId so they can be correlated, which will let you use a Redis cache key like urn:{Type}:completed:{JobId}. You’ll also need to monitor the {Type}.dlq for any failed messages as they won’t be processed or counted by the Response handler.
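A minimal sketch of that handler, reusing the mqServer and redisManager from the sketch above and assuming a hypothetical JobResponse Response DTO that carries the correlating JobId and a known batch size of 10:

```csharp
mqServer.RegisterHandler<JobResponse>(m =>
{
    using var redis = redisManager.GetClient();

    // One counter per batch, correlated by the shared JobId
    var key = $"urn:JobResponse:completed:{m.GetBody().JobId}";

    // Only act once all 10 Responses for this batch have arrived
    if (redis.Increment(key, 1) == 10)
        Console.WriteLine($"Batch {m.GetBody().JobId} complete");

    return null; // nothing further to publish
});

// Hypothetical Response DTO returned by each job's handler, which the
// MQ Server publishes to JobResponse's .inq
public class JobResponse
{
    public string JobId { get; set; }
}
```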

Thanks for the quick response. I knocked up a quick sample which demonstrates publishing multiple jobs to a Redis queue, with multiple subscribers processing those jobs. When all the jobs are done, the publisher shuts down.
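For anyone following along, the publishing side looks roughly like this (the ProcessJob DTO is illustrative, and mqServer is configured as in the earlier sketch):

```csharp
using var mqClient = mqServer.CreateMessageQueueClient();

// Correlate the whole batch with a shared JobId
var jobId = Guid.NewGuid().ToString("N");

for (var i = 0; i < 10; i++)
    mqClient.Publish(new ProcessJob { JobId = jobId, Step = i });

// Illustrative job DTO
public class ProcessJob
{
    public string JobId { get; set; }
    public int Step { get; set; }
}
```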

One thing which would be handy is the ability to register a handler against a custom queue name. That way it would be possible to run multiple publishers which share the same pool of workers (subscribers), while the response handler set up in each publisher could leverage the ReplyTo queue name.
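For context, ReplyTo can already be set per message; it’s just that Responses sent there have to be fetched explicitly rather than via a registered handler. Roughly (queue name illustrative, DTOs as in the earlier sketches):

```csharp
using var mqClient = mqServer.CreateMessageQueueClient();

var replyTo = $"mq:{jobId}.responses"; // publisher-specific queue

// Responses to messages with a ReplyTo address are published there
// instead of to the Response type's .inq
mqClient.Publish(new Message<ProcessJob>(new ProcessJob { JobId = jobId })
{
    ReplyTo = replyTo
});

// ...so they have to be pulled off explicitly:
var response = mqClient.Get<JobResponse>(replyTo, TimeSpan.FromSeconds(30));
mqClient.Ack(response);
```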


Not sure the request is needed; what’s being asked for may be supported by having your different jobs return the same Response DTO, e.g. CompletedJob { MessageType, JobId }. That way you could use a single handler to generically handle completed jobs and fire off the necessary batch completion handlers, as in the sketch below.
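A minimal sketch of that shape (the OnBatchJobCompleted callback is hypothetical):

```csharp
mqServer.RegisterHandler<CompletedJob>(m =>
{
    var job = m.GetBody();

    // A single generic handler for all job types; dispatch on MessageType
    // and correlate batches on JobId
    OnBatchJobCompleted(job.MessageType, job.JobId); // hypothetical callback
    return null;
});

// Shared Response DTO returned by every job's handler
public class CompletedJob
{
    public string MessageType { get; set; }
    public string JobId { get; set; }
}
```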

Otherwise I’ve no desire to change the Message Workflow, which IMO is Simple, Intuitive & Typed. Adding complexity and niche rules to it would diminish these goals; needing them is an indication that you should implement your own bespoke MQ workflow to suit your requirements.

FYI, since PubSubJobsDemo is a .NET Core RedisMQ-only project, have you considered using the worker-redismq template? It’s designed for hosting non-HTTP, long-running Background Tasks.