Redis Sharding: is there a best practice approach?

Last year I posted a thread on Partitioning Redis. At the time I went with the solution of creating different DTO objects for different classes of service, which of course works beautifully (thanks Mythz).

I’m now at the point where I have multiple instances of a single service and need to broker messages to each one via Redis. Note that this is not a load-balancer scenario: the services hold connections and therefore manage state, so I must get each message to the exact instance of the service.

I am considering the following approach:

  1. Group each instance of the service with its own Redis instance. This has lots of benefits.
  2. The broker will send a message to the Redis instance on that server.
  3. The message will be picked up and processed by the correct shard.

In order to do this, I need to dynamically manage Redis client connections. Currently my RedisQueueProvider : IQueueProvider class manages only one connection to a Redis instance, and I am considering expanding it to manage multiple Redis connections (effectively a pool of pools).
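To illustrate what I mean, here’s a rough sketch of that pool of pools using ServiceStack.Redis; ShardedQueueProvider, the shardHosts map and the queue naming are all placeholder names of mine, not our real code:

```csharp
using System.Collections.Generic;
using System.Collections.Concurrent;
using ServiceStack.Redis;

// Hypothetical sketch: one IRedisClientsManager (itself a connection pool)
// per service instance, resolved by instance id.
public class ShardedQueueProvider
{
    private readonly ConcurrentDictionary<string, IRedisClientsManager> pools = new();

    public ShardedQueueProvider(IDictionary<string, string> shardHosts) // instanceId -> "host:port"
    {
        foreach (var entry in shardHosts)
            pools[entry.Key] = new RedisManagerPool(entry.Value);
    }

    public void Enqueue(string instanceId, string queueName, string message)
    {
        // Resolve the pool for the target instance's local Redis and enqueue there
        using var redis = pools[instanceId].GetClient();
        redis.AddItemToList(queueName, message);
    }
}
```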

Is this the correct approach, or is there a built-in sharding mechanism I can use for this?

Kind Regards.

The answer to your question “is there a best practice approach to sharding?” is yes: use Redis Cluster, i.e. the native solution offered by Redis. But it works very differently to how you want to use it, i.e. it shards based on the hashed key slot of each key.
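To make the key-slot behaviour concrete, here’s a small self-contained sketch of the mapping Redis Cluster uses, i.e. slot = CRC16(key) mod 16384 (honouring {hash tags}), with each node owning a range of slots; ClusterSlot is just an illustrative name:

```csharp
using System.Text;

// Sketch of how Redis Cluster assigns keys to shards: slot = CRC16(key) mod 16384,
// where CRC16 is the XMODEM/CCITT variant Redis uses. Keys containing a
// non-empty {hash tag} only hash the tag's contents.
public static class ClusterSlot
{
    public static int ForKey(string key)
    {
        var start = key.IndexOf('{');
        if (start >= 0)
        {
            var end = key.IndexOf('}', start + 1);
            if (end > start + 1) // non-empty tag: hash only its contents
                key = key.Substring(start + 1, end - start - 1);
        }
        return Crc16(Encoding.UTF8.GetBytes(key)) % 16384;
    }

    private static int Crc16(byte[] bytes)
    {
        int crc = 0;
        foreach (var b in bytes)
        {
            crc ^= b << 8;
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x8000) != 0 ? ((crc << 1) ^ 0x1021) & 0xFFFF
                                          : (crc << 1) & 0xFFFF;
        }
        return crc;
    }
}
```

e.g. ClusterSlot.ForKey("{user:1}:inbox") and ClusterSlot.ForKey("{user:1}:outbox") land on the same slot because only the tag is hashed, which is the only control you get over placement, i.e. you don’t get to pick which shard a key lives on.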

So you would need to invent your own sharding solution, but I still don’t understand the rationale for effectively every App instance maintaining its own Redis Server instance. Think of Redis as an infrastructure data server dependency like an RDBMS: would you have each App instance run its own SQL Server instance just so it can store data relating to that instance? No, it would just be a field in a table. With Redis you could append the instance Id to the data structure key name to make it local to that instance.
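For example, a minimal sketch of that key-naming convention against one shared Redis server, where InstanceQueue and the key format are placeholders:

```csharp
using ServiceStack.Redis;

// Sketch: namespace keys per App instance on one shared Redis server,
// instead of running a Redis server per instance. instanceId is assumed
// to be a stable identifier for the App instance.
public static class InstanceQueue
{
    public static void Enqueue(IRedisClientsManager redisManager,
        string instanceId, string messageJson)
    {
        using var redis = redisManager.GetClient();
        // e.g. "mq:myservice:4f2a:inbox" is local to instance "4f2a"
        redis.AddItemToList($"mq:myservice:{instanceId}:inbox", messageJson);
    }
}
```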

If you just wanted to send a message to a specific instance, I would just be making an HTTP API call to that instance. An alternative Redis solution would be to use a Redis Pub/Sub solution like Redis Server Events, whereby each instance listens to a unique channel specific to it, so that messages sent to that channel are only received by that instance. If you needed to persist those messages per instance beyond App restarts, i.e. where you can’t just use an in-memory solution like Background MQ, I’d prefer to use an embedded data persistence solution like SQLite, Google’s LevelDB or Facebook’s RocksDB over maintaining a custom infrastructure data server dependency per App instance and a custom client/server sharding solution.
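To illustrate the channel-per-instance idea with raw Redis Pub/Sub (Redis Server Events builds on the same mechanism), a sketch where the instance ids are assumed values:

```csharp
using System;
using System.Threading.Tasks;
using ServiceStack.Redis;

var manager = new RedisManagerPool("localhost:6379");
var instanceId = "4f2a";        // this instance's assumed unique id
var targetInstanceId = "4f2a";  // the instance the broker wants to reach

// Each instance subscribes to a channel only it listens on
Task.Run(() =>
{
    using var redis = manager.GetClient();
    using var subscription = redis.CreateSubscription();
    subscription.OnMessage = (channel, msg) => Console.WriteLine($"received: {msg}");
    subscription.SubscribeToChannels($"instance:{instanceId}"); // blocks this task
});

// The broker publishes to exactly one instance's channel
using (var redis = manager.GetClient())
    redis.PublishMessage($"instance:{targetInstanceId}", "work item 42");
```

Note that Pub/Sub messages are fire-and-forget: if the instance isn’t subscribed at the moment of publish, the message is lost, which is where the persistence options above come in.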

Yes, what you say makes complete sense. The challenge I have is that our infrastructure is extremely high volume, and even a fully distributed read/write DB cluster would become inefficient at these scales (think billions of messages per second).

What localized Redis instances enable us to do is pipeline the messages from point to point and shard the messages in the message bus along with the delivery service. The server instance (which includes the service + Redis) is still fault-tolerant: if it dies, it’s removed from the array.

When you have a cluster of stateless services it makes perfect sense to have them share an infrastructure datastore, but when you are managing a collection of stateful services (ones that hold connections) which are processing high volumes of traffic, they benefit greatly from having their own localized infrastructure.

It’s worth mentioning that we predominantly use Redis for queueing rather than as a key/value store, although we do that too.

Thanks for all the thoughts; I’m going to ponder them some more. I also felt a direct HTTP call was better, but the receiving app would basically just queue the message itself anyway, so I couldn’t see much gain. An embedded datastore may be the way to go, though; I will consider it.

I mean you could use a local Redis instance in place of an embedded data persistence solution, but I wouldn’t architect it as a sharded solution where different App instances communicate directly with each Redis shard instance. I’d still call the App instance’s HTTP API so it’s the only instance that needs to communicate with its local Redis instance.

FYI, if you register a Redis MQ or Background MQ Service, you can call the One Way endpoints of your ServiceStack Services, which queues the message before processing it in the background.
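e.g. a typical Redis MQ registration in an AppHost looks like this, where Hello stands in for one of your request DTOs with a matching Service:

```csharp
using Funq;
using ServiceStack.Messaging;
using ServiceStack.Messaging.Redis;
using ServiceStack.Redis;

// Inside your AppHost
public override void Configure(Container container)
{
    container.Register<IRedisClientsManager>(c => new RedisManagerPool("localhost:6379"));

    // Messages posted to Hello's One Way endpoint get queued in Redis
    // and executed by the Service handler in the background
    var mqServer = new RedisMqServer(container.Resolve<IRedisClientsManager>());
    mqServer.RegisterHandler<Hello>(ExecuteMessage);
    container.Register<IMessageService>(mqServer);
    mqServer.Start();
}
```

A client can then queue a message over HTTP with new JsonServiceClient(baseUrl).SendOneWay(new Hello { Name = "World" }), i.e. via the /json/oneway/Hello pre-defined route.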

Thanks, yes I agree with what you have said.

The approach we have taken is a message bus architecture with the following properties:

  1. Top down (presentation -> app -> micro/entity): all REST-based calls.
  2. Left to right: pipeline movement is handled through the message bus. No direct API calls, just messages put in queues and picked up by services.

Up until now this message bus has been a centralized Redis instance, but we need more granularity to achieve auto-scaling.

e.g.

https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageBus.html

The problem with pipelines like these is not so much the queueing before the message is sent (although that is important too), but rather the queueing of the message on receipt, because each step in the pipeline must have some persistence to handle high volumes, backlogs, etc. So each application ends up with a receive(queue) -> process -> send(queue) structure: at these volumes you can’t guarantee you can process a message immediately upon receipt, so you must queue it straight away for when there is processing time.
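For concreteness, a minimal sketch of one stage in that receive(queue) -> process -> send(queue) shape, using Redis lists as the queues; the queue names and the Process body are placeholders:

```csharp
using ServiceStack.Redis;

// Sketch of one pipeline stage: blocking-dequeue from the inbound queue,
// do the work, enqueue the result on the outbound queue.
public class PipelineStage
{
    private readonly IRedisClientsManager manager;
    public PipelineStage(IRedisClientsManager manager) => this.manager = manager;

    public void Run(string inQueue, string outQueue)
    {
        using var redis = manager.GetClient();
        while (true)
        {
            // Block until a message arrives, so backlogs sit safely in the queue
            var msg = redis.BlockingDequeueItemFromList(inQueue, null);
            var result = Process(msg);
            redis.EnqueueItemOnList(outQueue, result); // hand off to the next stage
        }
    }

    private string Process(string msg) => msg.ToUpperInvariant(); // placeholder work
}
```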

But you are totally right that the HTTP app should be the manager of its own queue in this new structure, so I’ll likely adjust this flow.

Thanks for the dialogue.
