Heartbeat check using **ServiceStack.Redis**

We are facing a timeout issue with Redis: every time there is a surge in the number of connections, server load goes up. So we are thinking of keeping 8-10K connections alive with the Redis server in our connection pool, so that we don’t have to create connections at message-processing time.
For that, we are looking to set up some sort of heartbeat check at a regular interval using the ServiceStack.Redis API. Please help.

It’s worth remembering that Redis is single-threaded, so if you are sending these messages concurrently, it needs to process them one at a time. Redis is very fast, so if the messages are small and are just simple commands like SET, this should be fine, but it’s worth keeping in mind.

Do you want a heartbeat to check the clients of Redis, or Redis availability? You haven’t provided much detail or examples of what you have tried so far, so if you can include more information that would be useful.

If you are using it in a Publish / Subscribe setup, you can take a look at our RedisServerEvents class where we use a heartbeat check to determine if a client is still connected or not.

The more details you can provide about your specific issue and related context, the more myself and others might be able to assist.

Below is the use case we are trying to solve.

In our application, the number of connections created every day ranges from 1K to 11K, depending on the volume of messages ingested during an hour.
There are times when there is a surge in the volume of ingested trades, which results in more than 5K connections getting created in a span of 2-3 minutes. This leads to high server load (100%) in Azure Cache for Redis, resulting in lost messages and slow processing.

We are planning to keep a few connections open all the time. These connection objects should not get destroyed on the server side. Currently they are destroyed after being idle for some time.

The thought is that if we keep 7K-8K connections alive all the time, then a surge in ingested messages will not end up creating new connections, and hence the server load will not reach 100%.

In order to achieve this, we are looking for out-of-the-box heartbeat/keepalive functionality that will continue to send events every x minutes (even when there are no valid messages) to keep the connections alive.

Are you able to share some of your code?

What Redis Client Manager are you currently using? The PooledRedisClientManager uses a connection pool to reuse clients.
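For reference, a minimal sketch of what pooled reuse with PooledRedisClientManager looks like (the `localhost:6379` endpoint is a placeholder, not your actual configuration):

```csharp
using ServiceStack.Redis;

// Placeholder endpoint; substitute your Azure Cache for Redis connection string.
var manager = new PooledRedisClientManager("localhost:6379");

using (var redis = manager.GetClient()) // leases a client from the pool
{
    redis.Set("greeting", "hello");
}
// Dispose() returned the client to the pool; a subsequent GetClient()
// call reuses the same underlying TCP connection rather than opening a new one.
```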

So your Redis instance (in this case hosted on Azure) is at 100% load (CPU?). Are you seeing timeout exceptions? If not, what exceptions are you seeing? Can you share the stack trace?

Maintaining a large number of connections open may not alleviate the load issue. While individual connections opening and closing can cause additional load, these are low numbers for Redis. You can look at optimizations like pipelining.
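As a sketch of what pipelining looks like with ServiceStack.Redis: many commands are queued client-side and sent in a single round trip, which reduces per-command overhead on the server (the endpoint and keys here are placeholders):

```csharp
using ServiceStack.Redis;

var manager = new RedisManagerPool("localhost:6379"); // placeholder endpoint

using (var redis = manager.GetClient())
using (var pipeline = redis.CreatePipeline())
{
    for (var i = 0; i < 1000; i++)
    {
        var key = "msg:" + i; // local copy avoids the modified-closure pitfall
        pipeline.QueueCommand(r => r.SetValue(key, "payload"));
    }
    pipeline.Flush(); // sends all queued commands to Redis in one batch
}
```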

Are you currently using the RedisPubSubServer for your implementation?

There is no “out of the box heartbeat”, but if you haven’t already, I would collect specific details on the errors you are seeing to make sure your proposed solution will have the impact you are looking for. E.g. the RedisPubSubServer already has configurable keep-alive retries via its KeepAliveRetryAfterMs property, as well as an OnError callback and considerable debug logging. The more specific the details on the errors, stack traces, and implementation you can share, the more myself and others will be able to help.
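The keep-alive and error hooks mentioned above can be wired up roughly like this (channel name and endpoint are placeholders for illustration):

```csharp
using System;
using ServiceStack.Redis;

var manager = new RedisManagerPool("localhost:6379"); // placeholder endpoint

var pubSubServer = new RedisPubSubServer(manager, "trades") // placeholder channel
{
    OnMessage = (channel, msg) => Console.WriteLine($"{channel}: {msg}"),
    OnError = ex => Console.WriteLine($"PubSub error: {ex.Message}"),
    KeepAliveRetryAfterMs = 2000, // retry the subscription 2s after it drops
};

pubSubServer.Start();
```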

At startup, we initialize a pool of 20 RedisManagerPool instances, each with a MaxPoolSize of 45. Later, on an as-needed basis, we get a RedisClient from one of the RedisManagerPools and, once the task is over, we dispose the RedisClient object, making it inactive.
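If I’ve described it correctly, our initialization can be sketched roughly as below (the endpoint is a placeholder, and I’m assuming the MaxPoolSize is set via RedisPoolConfig):

```csharp
using System.Collections.Generic;
using ServiceStack.Redis;

// 20 RedisManagerPool instances, each capped at 45 clients.
var pools = new List<RedisManagerPool>();
for (var i = 0; i < 20; i++)
{
    pools.Add(new RedisManagerPool("localhost:6379", // placeholder endpoint
        new RedisPoolConfig { MaxPoolSize = 45 }));
}

// On demand: lease a client from one of the pools, then Dispose() it
// when the task is done, which makes it inactive again.
using (var redis = pools[0].GetClient())
{
    redis.Set("key", "value");
}
```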

If I’m understanding it correctly, I have the queries below…

Q1 - Will these inactive RedisClients be re-activated and reused when we request a RedisClient, or will a new RedisClient be created? What we are observing is that whenever there is a surge in messages, more than 5K connections get created in a span of 2-3 minutes.

Q2 - If an inactive RedisClient can be reused after re-activation, is re-activating as expensive as creating a new RedisClient? And if these clients are just idle, do we need to ping them in order to keep them alive?

As you’re using a pool, disposing the client only returns the client to the pool; it doesn’t close the TCP connection. Resolving a new client will use an inactive client from the pool before creating a new client connection (which is created outside the pool when there are no more inactive clients available).

Reusing a connected client is not expensive; it essentially just marks the client as active to remove it from the pool and then makes use of an already-connected client. But neither is creating new TCP connections to Redis, which has minimal overhead, i.e. it’s not nearly as expensive as establishing a new RDBMS connection, which requires more server resources per connection. No pings are required to keep TCP connections alive; they’ll stay alive until the connection is forcibly dropped.
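To illustrate those pool semantics with a short sketch (using RedisManagerPool and a placeholder endpoint):

```csharp
using ServiceStack.Redis;

var manager = new RedisManagerPool("localhost:6379"); // placeholder endpoint

using (var redis = manager.GetClient())
{
    redis.IncrementValue("counter");
} // Dispose() here marks the client inactive and returns it to the pool;
  // the TCP connection stays open

using (var redis = manager.GetClient()) // re-activates the same pooled client;
{                                       // no new TCP connection is created
    redis.IncrementValue("counter");
}
```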

To increase the usage of pooled clients you should consider increasing the pool size, which you can do with the RedisConfig class:

Configure Pool Size of Redis Client Managers

```csharp
RedisConfig.DefaultMaxPoolSize = 100;
```