Wayne Brantley - 168 - Dec 10, 2013

Can you advise on where these Message Queues ended up in SS v4? I need an implementation of a messaging system and wanted to see what you had done.   https://github.com/ServiceStack/ServiceStack/wiki/Messaging-and-Redis  

It’s moved to ServiceStack.Server - was also mentioned in the release notes: https://github.com/ServiceStack/ServiceStack/wiki/Release-Notes

I’m also going through and updating the links on the wiki, thx.

Wayne Brantley:

I didn’t notice that.  Thanks for fixing the links on that wiki page!  I am sure you have more links to fix than you can get to right now. 

Wayne Brantley:

Reading through your MQ Architecture, I cannot see how it would work as described.  The diagram shows that messages are produced and go to a queue.  The queues it shows are in/out/priority/dead-letter.   So, this should mean the producer would put any message sent into a queue.  The consumer would then read from the queue; if the consumer fails, I guess the message would go to the DLQ. If the client succeeds, it may have a reply.

Looking at your implementation, I do not see any of that.  I see that you are using the publish/subscribe methods of Redis.  If you use a Redis publish and there are no subscribers, the message is lost.   Additionally, if there is a client connected, it will receive the message - but if it crashes, the message is lost.  

https://groups.google.com/forum/#!topic/servicestack/5ExjGhSCZkw  Also, I vaguely remember looking at an old version of this code and it did not work this way?  

Redis MQ uses server-side Redis Lists for persisting all messages in queues; the Pub/Sub is just a notification to the Redis MQ master thread that there is a message pending, so it can wake the worker thread and tell it that it has messages to process. I’m not sure which implementation you’re looking at, but whilst the API might be called ‘Publish’, it does push the message onto a Redis List and then notify the MQ Topic: https://github.com/ServiceStack/ServiceStack/blob/master/src/ServiceStack.Client/Messaging/RedisMessageQueueClient.cs#L77-L78
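As a rough illustration of the publish path described above (not ServiceStack's actual code), the message is durably pushed onto a per-type Redis List first, and the Pub/Sub publish is only a fire-and-forget wake-up signal afterwards. This sketch uses an in-memory stand-in for Redis; the `mq:Hello.inq` / `mq:topic:in` names follow RedisMQ's naming convention but are illustrative here:

```python
from collections import deque

class FakeRedis:
    """Minimal in-memory stand-in for the two Redis features used."""
    def __init__(self):
        self.lists = {}          # key -> deque, standing in for Redis Lists
        self.notifications = []  # records PUBLISH calls

    def lpush(self, key, value):           # durable enqueue
        self.lists.setdefault(key, deque()).appendleft(value)

    def rpop(self, key):
        q = self.lists.get(key)
        return q.pop() if q else None

    def publish(self, channel, message):   # fire-and-forget wake-up signal
        self.notifications.append((channel, message))

def publish_message(redis, type_name, payload):
    in_q = "mq:%s.inq" % type_name
    redis.lpush(in_q, payload)             # 1. persist message in the List
    redis.publish("mq:topic:in", in_q)     # 2. then notify the MQ master thread

redis = FakeRedis()
publish_message(redis, "Hello", '{"Name":"World"}')
# the message survives even if no subscriber was listening to the topic
```

The key point is the ordering: even if the Pub/Sub notification is dropped (no subscribers), the message itself is still sitting in the List.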

Wayne Brantley:

Ok.  Is the message lost if the client crashes?  It looks like it is.  Instead of the pub/sub model for the master thread, why not just use BLPOP, which blocks until a message is available?  Or BRPOPLPUSH to pop it and push it onto another ‘processing queue’, to make sure the message is not lost by the client?   (I am no expert here!)

How can the message get lost if the client crashes? It’s synchronous: if the call succeeds, the message was published in Redis; if Redis isn’t there, the caller gets an exception signalling that their message wasn’t sent. BLPOP won’t work since there is a separate message queue for every operation, and each worker thread is responsible for processing messages in its own queue.
There is a known failure point with the current design: whilst the Message Handler is processing the message, if Redis or the network dies and the message was an error, it can’t be put back into the DLQ.

Wayne Brantley:

I was speaking of the consumer… the consumer gets the message via a pop?

Which is RedisMQ when it processes the message? Yeah, that’s the known failure point: basically the Message Handler pops the message off the queue whilst it’s processing it. This is needed because there might be multiple worker threads looking at the same queue, so it avoids processing messages multiple times. I do plan on eventually adding a RedisMqServer that does server-side message traversals between lists and gives each worker thread its own work queue that it moves messages to whilst it’s processing them.
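The failure window acknowledged above can be sketched in a few lines: the worker pops the message off the in-queue *before* handling it, so a crash inside the handler loses the message. All names here are hypothetical, and a plain list stands in for the Redis List:

```python
def crashing_handler(msg):
    # simulates the worker dying mid-processing
    raise RuntimeError("worker died mid-processing")

def process_one(queue, handler):
    msg = queue.pop() if queue else None   # the message leaves the queue here
    if msg is None:
        return None
    return handler(msg)                    # a crash here loses the message

inq = ["msg-1"]
try:
    process_one(inq, crashing_handler)
except RuntimeError:
    pass
# inq is now empty: the message died with the worker
```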

Wayne Brantley:

I think you can avoid that.  1)  Drop the subscribe/publish - it’s not necessary.  2)  Use BRPOPLPUSH - it blocks the calling thread until something new shows up in the list (which is fine, you have threads per message type anyway) and pushes that message onto another Redis queue.   So, if the process fails, the message would be sitting in that processing queue.   

However, maybe I should just consider RabbitMQ - it has all of this built in.  However, it looks like the ‘de facto client’ is https://github.com/mikehadlow/EasyNetQ - but if I use that, there’s probably no need for ServiceStack?  Maybe it is smarter to try to bolt the AMQP library (http://www.nuget.org/packages/RabbitMQ.Client/) onto ServiceStack’s messaging patterns, much like you have done for Redis?    (Essentially he has done that - created ServiceStack-like message/subscription APIs - and used that library for the primitives.)


The current model is more flexible: you can send control messages to the master thread, and it can handle Redis failing over at run-time, since the worker threads ask for a new connection each time they start processing messages in their queue (and each client doesn’t hold a separate connection when it’s waiting for messages). Also, trying to communicate with blocked worker threads is not easy, e.g. when you need to stop/resume processing of messages at runtime.

Yeah, you should definitely evaluate the alternatives. I like Redis MQ since it’s simple and clean and requires less infrastructure (as I usually have Redis around), and I’m able to use the primitive Redis list operations to introspect, populate and manage the queues; being well integrated in ServiceStack also means you can call the same Service via MQ or HTTP. We’re using Redis MQ for all of StackOverflow Careers’ back-office operations, where it handles the 150k+ msgs/day pretty well. Being able to re-use the same Service via both HTTP + MQ was a productivity boost, since you can debug and develop it over HTTP whilst calling it over MQ in the Live environment. Adding more MQ options is definitely a core goal and has received a fair bit of interest: http://servicestack.uservoice.com/forums/176786-feature-requests/suggestions/4459053-add-more-mq-options - with a RabbitMQ adapter being the first one I’m looking to add.

Wayne Brantley:

Well, that is good news that you would consider another adapter.

 If you do not want to block and want to maintain your current implementation, you could consider using the non-blocking RPOPLPUSH, which gets the message pushed onto another ‘work queue’ while it is being processed (when the client finishes, it deletes that message from the work queue), meaning no more lost messages.  So, simply replace your RPOP with the above!  Also consider a feature of the BRPOPLPUSH method - you can specify ALL the lists you want it to wait for - so again, just one thread could run this.  Of course it would mean you held that connection, but you already have the subscribe/publish connection active all the time anyway… So, the number of connections would be less than what you have now, but you would have to be able to recover from connection loss.
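The RPOPLPUSH pattern proposed above can be sketched as follows, with in-memory deques standing in for Redis Lists (the function names are illustrative, not ServiceStack or redis client APIs). The message survives a consumer crash because it sits in the processing queue until explicitly acknowledged:

```python
from collections import deque

def rpoplpush(src, dst):
    # atomic in real Redis; modeled here with two plain deques
    if not src:
        return None
    msg = src.pop()
    dst.appendleft(msg)
    return msg

def ack(processing, msg):
    # LREM equivalent: processing succeeded, so drop the safety copy
    processing.remove(msg)

inq, processing = deque(['{"id":1}']), deque()
msg = rpoplpush(inq, processing)   # message now lives in the work queue
# ... handler runs; if the consumer dies here, msg is still in `processing`
ack(processing, msg)               # success: remove it from the work queue

# crash scenario: the message remains recoverable
inq2, proc2 = deque(["m"]), deque()
rpoplpush(inq2, proc2)
# consumer dies before ack -> "m" can be requeued from proc2 on restart
```

On restart, a recovery step would simply move anything left in the processing queue back to the in-queue before resuming normal work.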

The big problems I have with Redis are  

1)  No fan-out queue.  One of the wins is that I should be able to write another client to subscribe to the messages and take action without modifying anything.  So, say when message A arrives I now want to send an email - I can just write another consumer that subscribes.  As it exists, without that, they are essentially ‘work queues’. 
2)  No way to fail a job or recover from a consumer failure.
3)  Has to be 100% memory backed (database is held in memory during operation).  This is more of a ‘concern’ as opposed to a big problem - but with Rabbit, do not care.
4)  HA support is weak.  Has Master-Slave, but no auto recovery, etc.   Clustering is a cluster right now, lots of issues, bugs.  

Rabbit issues:
1)  Exactly what you said above.  No visibility into the queues of messages, whereas in Redis they are just a ‘normal list’.
2)  ServiceStack currently shows it no love.  :-)

Yeah, that’s always been the plan and IMO one of the USPs of SS’s message-based design; it would’ve come a lot sooner if one of the companies I’ve contracted for in the last few years had been using RabbitMQ :). We used Rabbit at Mflow, but for a bespoke content-ingest background process. I’ll try to bump it up the priority list to make it a reality sooner; hopefully it shouldn’t take too long, as it’s able to re-use the same underpinnings in SS that Redis MQ is using (e.g. ServiceController.ExecuteMessage). 

The benefit of having different queues for different types is that they’re less coupled and the type of each message is already known without having to peek into the message; it also makes it easier to assign multiple worker threads per operation type, and slow operations don’t affect the processing of other messages.
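The per-operation-type queue layout described above can be illustrated with a tiny routing sketch: the queue name encodes the message type, so workers never have to peek inside a message to know what it is. The naming follows RedisMQ's `mq:<Type>.inq` convention, but the worker-assignment table is hypothetical:

```python
def queue_name(msg_type, priority=False):
    # the queue name itself encodes the message type, RedisMQ-style
    return "mq:%s.%s" % (msg_type, "priorityq" if priority else "inq")

# more worker threads for the slow operation; fast operations sit in their
# own queues and are unaffected by a SendEmail backlog
workers_per_queue = {
    queue_name("SendEmail"): 4,
    queue_name("LogEvent"): 1,
}
```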

But yeah, Redis’s lack of a solid HA story does affect the way we use it: we use it in a way where we can recover from lost messages, e.g. we’d put an email in the db and then post a message to Redis MQ with the id of the email to send. In addition we also have a scheduled task that looks for any unsent emails and creates new MQ msgs to send them. Likewise with other background tasks like account syncing, we use db row states to signal whether something needs processing or not, and a scheduled task to catch any missing ones. The DLQ has been useful as well; there have been quite a few times where we’ve been able to gracefully recover from a bug and replay the DLQ messages without applicants knowing we had issues :).
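A minimal sketch of that defensive pattern, with all names illustrative: the db row is the source of truth, the MQ message carries only the row's id, and a scheduled sweep re-publishes anything a lost message left behind. A dict and a list stand in for the RDBMS table and the MQ in-queue:

```python
db = {}   # id -> {"sent": bool}; stands in for an RDBMS table with a state flag
mq = []   # stands in for the Redis MQ in-queue

def create_email(email_id):
    db[email_id] = {"sent": False}   # 1. durable record first
    mq.append(email_id)              # 2. then the MQ message with just the id

def handle(email_id):
    # ... actually send the email ...
    db[email_id]["sent"] = True      # flip the state flag on success

def sweep_unsent():
    # scheduled task: requeue any unsent rows whose MQ message was lost
    for email_id, row in db.items():
        if not row["sent"] and email_id not in mq:
            mq.append(email_id)

create_email(1)
mq.clear()       # simulate the MQ message being lost
sweep_unsent()   # the scheduled sweep recovers it from the db state
```

Because the state lives in the db, losing an MQ message only delays the work until the next sweep rather than losing it.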

Although the one thing Redis saves you from is load/scalability failures, since it’s pretty much been able to handle everything we’ve thrown at it, provided it’s given enough RAM. So whilst we are using it defensively, IMO your data is more accessible and it’s easier to query Redis/RDBMS (i.e. without client libraries), and because we have explicit state flags, you can inspect the database to figure out what’s been done.


Wayne Brantley:

Agree - thanks for your time.  In one of my scenarios, I want to perform some work ahead of time, so the work is already done when the client requests it.  In that case, if the message is missed I could pay the price and just do it when the client requests it, so not a big deal.  However, I want to deploy something that is going to handle many more situations.   BTW, in case you have not seen this SS fan-out implementation: http://cornishdev.wordpress.com/2013/04/04/fan-messaging-with-servicestack-redis/ :)