From what I understand it should be fine, but TryResolve "stopping working after a while" sounds odd. TryResolve does acquire a lock in some circumstances, which might be causing an issue.
Related code that would help us understand the problem might be:
How you are registering the dependency
Where you are using TryResolve
Code around the use of the SSE instance after resolving
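For reference, we'd be looking for something roughly like the sketch below (a hypothetical example, assuming the default Funq IOC and the ServerEventsFeature plugin; `MyServices` and `NotifyRequest` are placeholder names):

```csharp
// In AppHost.Configure() — the SSE feature has to be registered first:
Plugins.Add(new ServerEventsFeature());

// Hypothetical Service showing where TryResolve is typically used:
public class MyServices : Service
{
    public object Any(NotifyRequest request)
    {
        // TryResolve returns null instead of throwing when unresolvable
        var serverEvents = TryResolve<IServerEvents>();
        if (serverEvents == null)
            throw new Exception("IServerEvents could not be resolved");

        serverEvents.NotifyChannel(request.Channel, request.Message);
        return request;
    }
}
```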
I’ve done some further testing this evening. It appears that since the upgrade to .NET 6 and the latest ServiceStack, SSE has issues that weren’t occurring in previous versions.
I see the following errors on the receiving process:
[SSE-CLIENT] OnExceptionReceived: Last Heartbeat Pulse was 41582.5552ms ago on #user3
And errors on the process sending the events:
Operation causing hung connection eventually completed, releasing connection...
Hung connection detected: Could not acquire semaphore within 30000ms.
It would appear that a timeout is occurring which breaks the connection; it eventually recovers, but it causes downtime in the event stream.
It also seems that when this happens, the ServerEvents resolution fails and just returns null.
I’ve not seen this happen in .NET Core before, but it can happen in .NET Framework when a write to a response stream hangs, which prevents the connection from being disposed. When this happens, the thread attempting to write to the response stream hangs; all we can do at that point is record the thread as hung and park it to let it complete on its own. The timeout is a symptom of the hung connection (i.e. not the cause): heartbeats aren't sent before their timeout elapses, which attempts to dispose the connection and kicks off a reconnection.
To best mitigate this, you should avoid using any IServerEvents sync APIs and always use the *Async API equivalents.
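For example, when notifying from a background job, prefer the async equivalents (a sketch; the channel name and `StatusUpdate` DTO are placeholders):

```csharp
var serverEvents = HostContext.TryResolve<IServerEvents>();

// Sync API — blocks the calling thread if the response stream write hangs:
// serverEvents.NotifyChannel("home", new StatusUpdate { Text = "..." });

// Async equivalent — avoids tying up the thread on a hung connection:
await serverEvents.NotifyChannelAsync("home", new StatusUpdate { Text = "..." });
```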
Upgraded from .NET Core 3.x and ServiceStack 5.9.2.
I’ve spent some time moving my code from System.Timers.Timer to System.Threading.Timer. Along with the changes to use async where possible, this seems to have stopped the error from occurring. I’ve been testing for most of today and haven’t seen any errors in the log yet.
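In case it helps anyone making the same change, the replacement looks roughly like this (a sketch; `PublishAsync` stands in for whatever async work the timer actually drives):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Publisher : IDisposable
{
    Timer timer;

    public void Start() =>
        // System.Threading.Timer fires its callback on a ThreadPool thread,
        // unlike System.Timers.Timer which adds SynchronizationContext plumbing
        timer = new Timer(OnTick, null, TimeSpan.Zero, TimeSpan.FromSeconds(10));

    // async void is unavoidable in a Timer callback, so exceptions
    // must be caught here or they'll crash the process
    async void OnTick(object state)
    {
        try
        {
            await PublishAsync();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }

    // placeholder for the real async notify call
    Task PublishAsync() => Task.CompletedTask;

    public void Dispose() => timer?.Dispose();
}
```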