Random HttpListenerException error 1229

Hello,
I am running an AppHostHttpListenerSmartPoolBase server. It generally works properly, except that sometimes I get the following error:

Exception type: 'HttpListenerException'
Message: An operation was attempted on a nonexistent network connection
Trace: at System.Net.HttpResponseStream.Write(Byte[] buffer, Int32 offset, Int32 size)
at ServiceStack.Text.DefaultMemory.Write(Stream stream, ReadOnlyMemory`1 value) in C:\BuildAgent\work\912418dcce86a188\src\ServiceStack.Text\DefaultMemory.cs:line 432
at ServiceStack.ServerEventsFeature.<>c.<.ctor>b__118_0(IResponse res, String frame) in C:\BuildAgent\work\3481147c480f4a2f\src\ServiceStack\ServerEventsFeature.cs:line 104
at ServiceStack.EventSubscription.PublishRaw(String frame) in C:\BuildAgent\work\3481147c480f4a2f\src\ServiceStack\ServerEventsFeature.cs:line 770

After adding some extra logging, I can say that the exact error code inside the exception is 1229. It seems to happen randomly.

After this exception is thrown, the connection with the client goes down and then comes back up, but I would like to find a way to avoid the exception being thrown at all.

Can someone explain how to prevent the exception from being thrown?

Thank you in advance,
Riccardo

The connection doesn’t go down because of this Exception, this Exception occurs because the network connection was dropped. Unfortunately it’s not possible from here to tell why the network connection is being dropped.

Thank you for your answer.
I have another question: the error also happens on a localhost connection, so is there something I can do to understand or isolate the issue?

Thank you again,
Riccardo

Does it drop after a predictable amount of time or is it random?

Wireshark is a good program to diagnose TCP/IP connection issues, and should hopefully identify whether the client or the server is terminating the connection. Here’s a good article I’ve found on using Wireshark to troubleshoot TCP/IP connectivity issues.

It’s totally random.
Thank you for your suggestion, I am going to try with WireShark.


Hello again,
I’ve tried with Wireshark, and what I saw is that the response to the heartbeat usually takes a few milliseconds, except when the exception is thrown. In that case the response takes more than 20 seconds, or is lost altogether.
What we see on the code side is the HttpListenerException (please see the first post).
Could it be a wrong configuration (maybe some Windows parameter, or something in the ServiceStack server)? Or could it be a bug somewhere?

Thanks in advance,
Riccardo

It’s already too late when the Exception occurs, as the connection has already been lost, which is what causes the Exception. If the connection is being terminated by the server because of elapsed heartbeats, you can try changing the HeartbeatInterval (how often clients should send heartbeats) and the IdleTimeout (how long without a successful heartbeat before the server considers the connection lost) when registering the ServerEventsFeature plugin.
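To make that concrete, here’s a minimal sketch of registering the plugin with both timeouts lengthened, inside an AppHost’s Configure method. The specific TimeSpan values are illustrative assumptions, not recommendations — tune them against how late your heartbeats actually arrive:

```csharp
public override void Configure(Funq.Container container)
{
    Plugins.Add(new ServerEventsFeature {
        // How often clients should send heartbeats (illustrative value)
        HeartbeatInterval = TimeSpan.FromSeconds(30),
        // How long without a successful heartbeat before the server
        // considers the subscription lost (illustrative value)
        IdleTimeout = TimeSpan.FromMinutes(2),
    });
}
```

A longer IdleTimeout gives a slow or delayed heartbeat more headroom before the server drops the subscription, at the cost of keeping genuinely dead connections around for longer.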

Hello again,
We followed your suggestion about changing the HeartbeatInterval and the IdleTimeout, and it seemed to work better, but the issue still happens. We see it across very different environments and custom configurations. We also did some internal verification to see whether it could depend on some wrong behaviour on our side, but we didn’t find anything.
We are quite desperate because we cannot find a way to fix it.
Do you have any experience with this?
Or do you know of any possible cause that could disturb the communication to the point of dropping the connection?

Thank you in advance,
Riccardo

The only way I’ll be able to identify the issue (if it is an issue with the code) is if I had a repro I could run locally to reproduce it.

Have you looked at the wire traffic to see who is dropping the connection?

I’ve had a quick google and found an informative video on using Wireshark to analyze TCP connection issues here (there may be other more relevant ones):

Do you only have this running with AppHostHttpListenerSmartPoolBase? Have you tried porting to .NET 6 to see if the issue still happens?
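As a sketch of what such a port might look like, here’s a minimal .NET 6 host that replaces the HttpListener-based AppHost with ServiceStack’s ASP.NET Core integration, so Kestrel handles the networking instead of HttpListener. The AppHost and MyServices names are placeholders for your own types:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.UseServiceStack(new AppHost()); // ServiceStack's ASP.NET Core integration
app.Run();

public class AppHost : AppHostBase
{
    // "MyServices" is a placeholder for your own service implementation class
    public AppHost() : base("Server Events Host", typeof(MyServices).Assembly) { }

    public override void Configure(Funq.Container container)
    {
        Plugins.Add(new ServerEventsFeature());
    }
}
```

If the drops no longer occur under Kestrel, that would point at HttpListener (or its Windows configuration) rather than the Server Events code itself.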