ServiceStack with Docker Swarm / Kubernetes

I am not a SysOp and do not know much about Docker orchestration or service discovery. It may well be, though, that servers I wrote will run in an environment that allows multiple instances of a service. I wonder how that influences my code and whether I have to be aware of it at all. Key things I think about:

Assuming they want to set up a Swarm containing three instances of every server I wrote, how are the following things handled?

  1. Client sessions: The user connects to a URL and is authenticated. They get a session, but is that session cookie valid on all three instances? Should I care, or is this handled by the infrastructure (the load balancer in a Swarm / Kubernetes environment)? I only know that Swarm ships a number of tools for DNS, service discovery etc., but I have no idea how these are configured or how they work.
  2. Server Events: I send push messages to clients based on certain events on the server, e.g. when RabbitMQ delivers a confirmation message for a previously processed command message. By nature only ONE server instance picks that message from the specific queue, but how are the push clients registered across multiple server instances so that the notification finds its way to the client? (It can be any of the three instances that picks up the message, and that one needs to push the notification to the client.)

Does anybody here have experience with ServiceStack and Docker Swarm / Kubernetes? What does the application need to cover, or be aware of, to work in such an environment?

Many thanks for sharing some experience!

We have some docs on deploying with Docker to AWS ECS. We don’t have experience with Docker Swarm or Kubernetes, but these orchestration tools work at the container level, so the fact that it’s running ServiceStack should be a transparent implementation detail.

Session Cookies are just references to User Sessions persisted in the registered ICacheClient, so as long as each App Server is configured to use the same distributed caching provider, the cookies will resolve to an Authenticated UserSession on each App Server. JWT is also noteworthy, as they’re ideal for use in stateless Microservices.
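
As a minimal sketch of what that configuration could look like, each instance registers the same Redis-backed ICacheClient in its AppHost. The service name, the `redis:6379` host and the `jwt.auth.key` setting below are placeholder assumptions, not taken from your setup:

```csharp
using Funq;
using ServiceStack;
using ServiceStack.Auth;
using ServiceStack.Caching;
using ServiceStack.Redis;

// Sketch only: "MyApi", the Redis host and "jwt.auth.key" are placeholders.
public class AppHost : AppHostBase
{
    public AppHost() : base("MyApi", typeof(AppHost).Assembly) { }

    public override void Configure(Container container)
    {
        // Every instance points at the same Redis, so a session cookie issued by
        // one App Server resolves to the same UserSession on all of them.
        container.Register<IRedisClientsManager>(c => new RedisManagerPool("redis:6379"));
        container.Register<ICacheClient>(c =>
            c.Resolve<IRedisClientsManager>().GetCacheClient());

        Plugins.Add(new AuthFeature(() => new AuthUserSession(),
            new IAuthProvider[] {
                new CredentialsAuthProvider(),      // cookie + server-side session
                new JwtAuthProvider(AppSettings) {  // stateless alternative for microservices
                    AuthKeyBase64 = AppSettings.GetString("jwt.auth.key"),
                },
            }));
    }
}
```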

Server Event Subscriptions are localized to the Server that’s holding the long-running HTTP Connection, so the only option for load-balanced servers to send messages to clients connected to different App Servers is to configure them to use Redis Server Events. Either that or you’d need to designate one of the App Servers to handle all /event-stream connections, which would also handle sending the notifications.
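
A minimal sketch of the Redis Server Events registration, again assuming the shared Redis registration from the AppHost above:

```csharp
// Inside the same Configure(Container container) as above: replace the default
// in-memory Server Events with the Redis-backed implementation so any instance
// can notify subscribers whose /event-stream connection is held by another instance.
Plugins.Add(new ServerEventsFeature());

container.Register<IServerEvents>(c =>
    new RedisServerEvents(c.Resolve<IRedisClientsManager>())); // reuses the Redis registration above
container.Resolve<IServerEvents>().Start();
```

With that in place, whichever of the three instances picks up the RabbitMQ confirmation can publish through the registered IServerEvents (e.g. NotifyChannel or NotifyUserId), and the message reaches the client regardless of which instance holds its /event-stream connection.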

An alternative to using Server Events is to implement a custom long-polling solution; we do this for our built-in hot reloading, which instructs the browser to reload when it detects file changes on the server. If you’re interested in this approach, this is our long-poll HotReload Service and this is the client .js that does the periodic polling.
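
For reference, a hypothetical long-poll Service could look roughly like the following. All type and method names here are invented for the sketch and this is not the actual HotReload implementation; the client calls the endpoint in a loop and the server holds each request open until something new arrives or a timeout elapses:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using ServiceStack;

// Hypothetical request/response DTOs for the long-poll sketch.
[Route("/notifications/poll")]
public class PollNotifications : IReturn<PollNotificationsResponse>
{
    public long SinceId { get; set; }
}

public class PollNotificationsResponse
{
    public long LastId { get; set; }
    public List<string> Messages { get; set; }
}

public class NotificationPollService : Service
{
    static readonly TimeSpan MaxWait = TimeSpan.FromSeconds(30);
    static readonly TimeSpan CheckEvery = TimeSpan.FromSeconds(1);

    public async Task<object> Any(PollNotifications request)
    {
        var waited = TimeSpan.Zero;
        while (waited < MaxWait)
        {
            var pending = await GetMessagesSinceAsync(request.SinceId);
            if (pending.Count > 0)
                return new PollNotificationsResponse {
                    LastId = request.SinceId + pending.Count,
                    Messages = pending,
                };

            await Task.Delay(CheckEvery);
            waited += CheckEvery;
        }

        // Timed out with nothing new: return an empty batch, the client just polls again.
        return new PollNotificationsResponse {
            LastId = request.SinceId,
            Messages = new List<string>(),
        };
    }

    // Placeholder: in a load-balanced deployment this would need to read from a
    // store shared by all instances (e.g. Redis), not in-process memory.
    Task<List<string>> GetMessagesSinceAsync(long sinceId) =>
        Task.FromResult(new List<string>());
}
```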

Hi Demis,
Thanks for this info. Glad to read that I do not need to change my code dramatically! I have used Redis for everything since the beginning: authentication, cache, configuration store and also SSE. I will look into the links you provided as soon as I have some more time.