PooledRedisClientManager clients leaking into multiple threads?

Hi

I’m having an issue with Redis that seems to be caused by clients being shared across multiple threads.

I don’t know how to find what’s causing a client to leak into multiple threads. Is this a known issue with the PooledRedisClientManager?

Here’s the relevant part of my setup (using AppHostHttpListenerBase for my app host):

public override void Configure(Container container)
{
    container.RegisterAutoWired<TelemetryService>().ReusedWithin(ReuseScope.Request);
    
    container.Register<IRedisClientsManager>(c => new PooledRedisClientManager(redisConnectionString));

    container.Register<IMessageService>(c => CreateMqServer());
    var rabbitMqServer = container.Resolve<IMessageService>();

    rabbitMqServer.RegisterHandler<TelemetryMessageWrapper>(
        m =>
        {
            return ServiceController.ExecuteMessage(
                Message<HandleTelemetryMessageRequest>.Create(m.GetBody().ConvertTo<HandleTelemetryMessageRequest>()));
        },
        noOfThreads: 8);
}

And here’s the service:

public class TelemetryService : Service
{
    public virtual void Post(HandleTelemetryMessageRequest request)
    {
        //do some processing here...
        
        var key = CustomerLastMessageHandledKeyFormat.Fmt(customerId);
        
        Redis.AddItemToSet("LastMessageHandled:Telemetry:Keys", key);
        
        Redis.ExecCachedLua(RedisLuaScripts.SomeScript, scriptSha1 =>
            Redis.ExecLua(RedisLuaScripts.SomeScript,
                keys: new[] { key },
                args: new[] { DateTime.UtcNow.Ticks.ToString() }));
    }
}

After a while, garbled keys start appearing in the set:

SMEMBERS LastMessageHandled:Telemetry:Keys

  1. “LastMessageHandled:Telemetry:CustomerId/4”
  2. “*4\r\nMessageHandled:Telemetry:CustomerId/5”
  3. “LastMessageHandled:Telemetry:CustomerId/7”
  4. “*3\r\nMessageHandled:Telemetry:CustomerId/5”
  5. “\r\n-9LastMessageHandled:Telemetry:Customer”
  6. “LastMessageHandled:Telemetry:CustomerId/5”
  7. “*6\r\nMessageHandled:Telemetry:CustomerId/5”
  8. “LastMessageHandled\r\n$164\r\nlocal value = tonumber”

It’s not a known issue with PooledRedisClientManager, but the sharing of a Redis client could be happening anywhere you’re using Redis, not just where the error occurred.

But I can see you’re re-registering TelemetryService in the IOC. Please never, ever do this: ServiceStack is responsible for registering and autowiring all Services, so remove the registration for TelemetryService and any other Services you’re doing this for.
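
i.e. this is the line in the Configure above to delete; ServiceStack will discover and autowire TelemetryService (and any other Services) on its own:

// Remove this registration: Services should never be registered manually
container.RegisterAutoWired<TelemetryService>().ReusedWithin(ReuseScope.Request);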

Thanks for looking. I was hoping another set of eyes would reveal something obvious.

Turns out I was missing a ReuseScope when registering the RedisClientManagerCacheClient, so that should explain the problem.

container.Register<ICacheClient>(c => new RedisClientManagerCacheClient(c.Resolve<IRedisClientsManager>()));

Should be:

container.Register<ICacheClient>(c => new RedisClientManagerCacheClient(c.Resolve<IRedisClientsManager>()))
   .ReusedWithin(ReuseScope.Request);

> But I can see you’re re-registering TelemetryService in the IOC. Please never, ever do this: ServiceStack is responsible for registering and autowiring all Services, so remove the registration for TelemetryService and any other Services you’re doing this for.

Thanks for the pointer, not sure why I was doing that. :thumbsup:

> Turns out I was missing a ReuseScope when registering the RedisClientManagerCacheClient, so that should explain the problem.
>
> container.Register<ICacheClient>(c =>
>     new RedisClientManagerCacheClient(c.Resolve<IRedisClientsManager>()))

This isn’t the cause; the ICacheClient is supposed to be registered in the default singleton scope, but as:

container.Register(c => c.Resolve<IRedisClientsManager>().GetCacheClient());

That’s the example from the Caching docs, which is the official API for it and does a similar thing.

Hmmm, OK, I’ve looked at the docs.

That leaves me a bit stuck. Any ideas on how to find the cause?

All Redis usage in my entire app is via the auto-resolved Redis clients (via the Service.Redis property), so they should be disposed and released back to the pool automatically when the request ends.
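
To be clear, what I mean is the usual per-request pattern, which the auto-resolved Service.Redis property handles for me: a client is taken from the pool when first used and released when the request ends. Roughly (a sketch only; container and key are placeholders here):

using (var redis = container.Resolve<IRedisClientsManager>().GetClient())
{
    // all per-request Redis work goes through this one client
    redis.AddItemToSet("LastMessageHandled:Telemetry:Keys", key);
} // Dispose() returns the client to the PooledRedisClientManager pool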

There are only two services that use Redis directly (outside of the ICacheClient usage). I’m not seeing anywhere that the Redis clients could be leaking.

My logs have lots of RedisResponseExceptions with errors like:

Unknown reply on integer response: 43OK, LastCommand:'SADD…
Unknown reply on multi-request: 580, LastCommand:'ZRANGE…

From one of the bad keys in a Redis set, it seems that multiple instances of my TelemetryService are somehow sharing the same client:
“LastMessag$4\r\nEVAL\r\n$164\r\nlocal value = tonumber”

“LastMessageHandled” is what all the keys in the set should begin with.

“local value = tonumber” is the beginning of a Lua script that is called immediately after a key is added to the set:

Redis.AddItemToSet("LastMessageHandled:Telemetry:Keys", key);

Redis.ExecCachedLua(RedisLuaScripts.UpdateScript, scriptSha1 =>
    Redis.ExecLua(RedisLuaScripts.UpdateScript,
        keys: new[] { key },
        args: new[] { "LastReceived", DateTime.UtcNow.Ticks.ToString() }));

It looks like a SADD command and an EVAL command are being run on the same client at the same time.
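
As a contrived illustration of what I suspect is happening (not my actual code, just a hypothetical repro against a throwaway localhost instance): if a single client were shared across the handler threads, the SADD and EVAL frames would interleave on one connection, which would explain both the corrupted set members and the “Unknown reply” exceptions.

// Hypothetical repro: IRedisClient is not thread-safe, so sharing one instance
// across threads interleaves its protocol frames on a single connection.
using (var redisManager = new PooledRedisClientManager("localhost:6379"))
using (var sharedClient = redisManager.GetClient())   // one client, many threads
{
    Parallel.For(0, 1000, i =>
    {
        sharedClient.AddItemToSet("LastMessageHandled:Telemetry:Keys", "CustomerId/" + i);
        sharedClient.ExecLua("return 1", keys: new string[0], args: new string[0]);
    });
}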

Have you removed the registration of the TelemetryService?

If so, can you show the error details that occurred after you did this?

My bad, looks like the problem went away after I removed the registration of the service. :flushed:

I’ll keep monitoring and let you know if there’s still a problem.