I’m using SS 4.5.8, self-hosted with AppSelfHostBase.
It looks like ServiceStack.Text’s DirectStreamWriter has a detrimental effect on both performance and response size. This is because frequent flushing of the stream causes many more chunks to be written to the HTTP response using Transfer-Encoding: chunked. Previously, responses were written in chunks of 1024 bytes, the default buffer size for StreamWriter. HTTP responses now look like this:
```
1
[
1
{
1
"
2
Id
1
"
1
:
3
123
1
,
```
One of my heavier responses went from 181,715 bytes in 0.248 seconds to 522,338 bytes in 1.433 seconds.
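The size blow-up follows from the chunked framing itself: each chunk is sent as the hex payload length, CRLF, the payload, then a trailing CRLF, so a one-byte chunk costs six bytes on the wire. A quick sketch of the arithmetic (my own helper, not anything from ServiceStack):

```csharp
// Wire cost of one chunk, framed as "<hex length>\r\n<payload>\r\n".
static int ChunkOnWire(int payloadBytes) =>
    payloadBytes.ToString("x").Length + 2 + payloadBytes + 2;

// ChunkOnWire(1)    == 6    -> a 1-byte token is ~6x larger on the wire
// ChunkOnWire(1024) == 1031 -> a 1024-byte buffer adds under 1% overhead
```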
This can be mitigated by using [CompressResponse] (which is awesome, thank you!!!) on all services. CompressResponseAttribute serializes the response DTO to a string before compressing it, so chunked encoding isn’t used because the Content-Length is known. One downside to [CompressResponse] in 4.5.8 is that all exceptions are returned as HTTP 200s instead of 4xx/5xx after being compressed. In the source on master it looks like exceptions will be excluded from compression, which means a large serialized exception would still suffer from this problem. Also, some people may upgrade without enabling response compression, so they'd hit the chunking overhead on every response.
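For anyone hitting this in the meantime, the workaround is just the attribute on the service class (a minimal sketch; MyServices/GetData/LoadLargeDto are placeholder names, not from ServiceStack):

```csharp
// [CompressResponse] buffers and compresses the serialized DTO,
// so the response goes out with a Content-Length header instead of
// Transfer-Encoding: chunked.
[CompressResponse]
public class MyServices : Service
{
    public object Any(GetData request) => LoadLargeDto();
}
```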
I’m not sure what the best solution is here:
- keep JsonSerializer.SerializeToStream using DirectStreamWriter, but add an overload that accepts a buffer size and uses StreamWriter, then reference that method from ContentTypes.GetStreamSerializer?
- have ContentTypes.GetStreamSerializer wrap the stream parameter ‘s’ in a BufferedStream before passing it to JsonSerializer.SerializeToStream?
- something else?
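For the second option, I'm imagining something like the following inside the serializer delegate (a sketch only; the delegate shape and whether DirectStreamWriter's per-write flushes would still punch through the buffer are assumptions on my part):

```csharp
// Sketch of option 2: coalesce DirectStreamWriter's many small writes
// by buffering the raw response stream before serialization.
(req, dto, s) =>
{
    var buffered = new BufferedStream(s, 1024);
    JsonSerializer.SerializeToStream(dto, buffered);
    // Flush the final partial buffer; deliberately not disposing the
    // BufferedStream, since that would also close the response stream.
    buffered.Flush();
};
```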