Scaling out a Redis Queue handling program

I have read that:

The message broker and client library have a major limitation, however, in that they work on a broadcast basis. As a result, multiple clients listening to a particular channel will all receive every message.

This raises issues when trying to use the Redis message broker for creating scalable systems where extra clients can be added to provide additional processing resources. For this type of scalability a round-robin system of messaging is needed, where messages are balanced across subscribers.

Source
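
This matches my understanding of plain Redis Pub/Sub, where every subscriber does receive every published message. A rough sketch with redis-py (the channel name is made up):

```python
import redis

r = redis.Redis()

# Two independent subscribers on the same channel: both receive every
# message published to it, which is the broadcast behaviour described above.
sub1 = r.pubsub()
sub2 = r.pubsub()
sub1.subscribe("jobs")
sub2.subscribe("jobs")

r.publish("jobs", "do-work")

print(sub1.get_message(timeout=1))   # subscribe confirmation
print(sub1.get_message(timeout=1))   # {'type': 'message', ..., 'data': b'do-work'}
print(sub2.get_message(timeout=1))   # subscribe confirmation
print(sub2.get_message(timeout=1))   # the same 'do-work' message, delivered again
```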

Is this still correct? If I scale out a program that handles the Redis Queue, would I risk double-handling of messages?

No, that's not the case: only one client will receive each message. The broadcast channel is only used to tell the MQ workers that a new message has been published and that they should check their queues, but only the one client that pops the message off the Redis List actually receives it. Also, each Message Type has its own queue and by default there's only 1 worker reading from each queue, so even if Redis list operations weren't atomic (every Redis operation is atomic), it would be impossible for a message to be received by more than 1 worker.
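
To make the flow concrete, here's a minimal sketch of that notify-then-pop pattern using redis-py; the queue and channel names (`mq:MyMessage.inq`, `mq:topic:in`) are illustrative, not necessarily what your MQ library uses internally:

```python
import redis

r = redis.Redis()

QUEUE = "mq:MyMessage.inq"   # one Redis List per Message Type (name is illustrative)
CHANNEL = "mq:topic:in"      # broadcast channel used only as a "wake up" signal

def publish(payload: str) -> None:
    """Producer: enqueue the message, then notify all workers that work arrived."""
    r.lpush(QUEUE, payload)
    r.publish(CHANNEL, QUEUE)

def worker() -> None:
    """Worker: wakes up on every notification, but only gets what it can pop."""
    pubsub = r.pubsub(ignore_subscribe_messages=True)
    pubsub.subscribe(CHANNEL)
    for note in pubsub.listen():
        # Every worker receives this notification, but RPOP is atomic, so each
        # queued message is handed to exactly one worker; the others get None.
        payload = r.rpop(QUEUE)
        if payload is not None:
            print("processing", payload)
```

Because the pop is atomic, even if every worker wakes up on the same notification, only one of them ends up with the payload; the rest see None and go back to waiting.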

The scalability limitation is that all clients are listening to a single Redis topic, but Redis is so fast and scalable that you're unlikely to hit the limits of a single instance, especially in an MQ scenario where most of the CPU resources go into the workers processing the MQ Requests, not into the Redis server, which only has to deliver the messages to the workers.

To give you some perspective on Redis's scalability, Stack Overflow uses a single Redis master for all its remote caching, which handles about 160 billion ops per month at around 2% CPU. Whilst they do run Redis on powerful servers, very few companies are going to outgrow a single Redis instance, and I can't recall an instance where anyone has hit the scalability limits of Redis MQ.