SOLVED - The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF

Having an issue with our service clients (.NET & AJAX) after upgrading from 4.0.42 to 4.0.60, where the Expect: 100-Continue request is throwing an HTTP 1.1 RFC error: “The server committed a protocol violation. Section=ResponseHeader Detail=CR must be followed by LF”. Does anyone know what could be causing this? I looked through the history for CorsFeature but don’t see any changes there that would explain why the headers wouldn’t end with a CRLF.

I see many “fixes” for this on StackOverflow and other sites suggesting the client add a web.config entry to ignore it; however, this won’t work for us because we need to fix it on the ServiceStack side (in the AppHost or some other change to avoid it). Any ideas are appreciated!

<system.net> 
    <settings> 
        <httpWebRequest useUnsafeHeaderParsing="true" /> 
    </settings> 
</system.net>

The only things we have in our AppHost that could possibly affect this are below. Basically it is some code that attempts to handle IE8/9 CORS issues; it was written some time ago and may have already been resolved in newer versions of SS.

// Add CORS support (includes CORS headers in all requests)
Plugins.Add(new CorsFeature(allowedHeaders: "Content-Type,Authorization"));

// always allow OPTIONS verb (for CORS).  Doing it here (RawHttpHandlers) eliminates the need to define "OPTIONS" as an allowable verb for every service.
HostContext.RawHttpHandlers.Add(request => request.HttpMethod == "OPTIONS" ? new CorsHandler() : null);

// remove markdown plugin (interferes with text/plain overrides we use for IE8/9)
Plugins.RemoveAll(x => x is MarkdownFormat);
  
// treat all plain text data as JSON (IE8/IE9 can only send plain text content-type)
this.PreRequestFilters.Add((request, response) =>
{
    var aspNetReq = (ServiceStack.Host.AspNet.AspNetRequest)request;
    var contentType = request.Headers["content-type"];

    if (string.IsNullOrWhiteSpace(contentType) ||
        contentType.ToLower().Contains("text/plain"))
    {
        aspNetReq.HttpRequest.ContentType = MimeTypes.Json;
    }
});

Do you need all the code after CorsFeature? It already handles OPTIONS, and if you want to change the Content-Type you should set request.ResponseContentType, not aspNetReq.HttpRequest.ContentType.
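E.g. a minimal sketch of that suggestion, assuming the same IE8/9 text/plain scenario as the filter above (not code from this thread):

// Force JSON via the IRequest APIs instead of casting to AspNetRequest
this.PreRequestFilters.Add((req, res) =>
{
    var contentType = req.Headers["Content-Type"];

    // IE8/9 can only send text/plain, so treat empty or text/plain bodies as JSON
    if (string.IsNullOrWhiteSpace(contentType) ||
        contentType.IndexOf(MimeTypes.PlainText, StringComparison.OrdinalIgnoreCase) >= 0)
    {
        req.ResponseContentType = MimeTypes.Json;
    }
});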

Can you show the raw HTTP Request and Response Headers that’s causing the protocol violation?

As you suggested, I removed the cast to AspNetRequest and now just set the content type on the IRequest, but it isn’t fixing the error.

I finally was able to grab some packet captures with Wireshark, and it appears the Vary header is getting corrupted.

A little more background: we are seeing this on about 10-15% of requests with essentially the same headers/body, executing like a load test. This article was some help in narrowing the issue down, but then I saw this blog post about how a space character in the X-Powered-By header name could also cause it, so it makes sense that this Vary header is most likely the cause. As you can see, the value reads “a.p.i”; not sure if it’s related, but our site is hosted on the “https://api.acme.com” subdomain.

I’m looking into how to override the Vary header as a workaround, but overriding it via the GlobalResponseHeaders collection, GlobalResponseFilters, or OnEndRequest doesn’t seem to work:

if (res.Headers.AllKeys.Contains("Vary"))
{
    res.Headers["Vary"] = "Accept";
}
else
{
    res.Headers.Add("Vary", "Accept");
}

Would you be able to fast track me to a solution?

I don’t know what’s adding the corrupted Vary header you’re seeing. The only places where it’s added in ServiceStack are Config.GlobalResponseHeaders or the HTTP Caching feature, but if you’re not using caching it won’t be added from there.

You can’t change the Global Response Headers at runtime, but you can remove them from Config.GlobalResponseHeaders, e.g:

var config = new HostConfig();
config.GlobalResponseHeaders.Remove("Vary");
SetConfig(config);

Then add it back in a Response Filter, e.g:

GlobalResponseFilters.Add((req,res,response) => res.AddHeader("Vary","Accept"));

But something strange is happening with that Vary HTTP Response header, which isn’t being emitted by ServiceStack. I would look at removing the Vary header entirely to see whether that resolves it, or whether another part of the HTTP headers gets corrupted.

OK so digging a little deeper and trying some things:

  • It appears to be a false positive that this has anything to do with the SS version (I reverted to 4.0.42 and it still happens).

  • I removed all custom header handling in the AppHost and used URL Rewrite to provide hard-coded values for Vary and other headers, but the error just moved to another header being malformed, so it isn’t directly related to that Vary header.

  • As part of this version of our API, I had implemented a RequestLogs plugin that uses TPL Dataflow to queue the writes to SQL (w/ fallback to file on error/timeout). I removed this and it seems to have alleviated the issue.

So. :smile:
My question now is whether there is anything in the lifecycle you can think of that would cause response streams to be altered somehow by reading headers etc. inside the RequestLog parser (i.e. IRequest.HttpRequest.Headers, the response DTO object, IRequest.HttpResponse.Response.Headers, etc.)?

Gist of my RequestLog plugin here - https://gist.github.com/loosleef/9a0b34a134827aa0e142f62b3f502c15

I can’t really be sure without debugging it, but it looks like the header is corrupted, so I’d be looking for a race condition. The most obvious culprit is accessing the IRequest in the async callback: if the callback only completes after the request has finished and been returned back to the HTTP Worker, you could have 2 threads accessing a non thread-safe instance. So I’d look at extracting all the info you need from IRequest and adding it to RequestLogParams so you’re not accessing IRequest in the async callback.
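For illustration, a rough sketch of that suggestion. The type and method names here (LogEntry, QueuedRequestLogger, PersistEntry) are hypothetical stand-ins for the real ones in the gist above; the key change is that all IRequest/IResponse state is copied on the request thread before anything crosses the async boundary:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks.Dataflow;
using ServiceStack.Web;

// Detached snapshot of the request; a plain POCO with no reference back to IRequest
public class LogEntry
{
    public DateTime Timestamp { get; set; }
    public string HttpMethod { get; set; }
    public string AbsoluteUri { get; set; }
    public Dictionary<string, string> RequestHeaders { get; set; }
    public int StatusCode { get; set; }
}

public class QueuedRequestLogger
{
    // Async persistence stays on the TPL Dataflow block, as before
    private readonly ActionBlock<LogEntry> logQueue =
        new ActionBlock<LogEntry>(entry => PersistEntry(entry));

    public void Log(IRequest request, object requestDto, object response)
    {
        // Snapshot everything needed from the non thread-safe IRequest now,
        // on the request thread, instead of inside the async callback
        var entry = new LogEntry
        {
            Timestamp = DateTime.UtcNow,
            HttpMethod = request.Verb,
            AbsoluteUri = request.AbsoluteUri,
            RequestHeaders = request.Headers.AllKeys
                .ToDictionary(key => key, key => request.Headers[key]),
            StatusCode = request.Response.StatusCode,
        };

        // Only the detached snapshot is handed off for async persistence
        logQueue.Post(entry);
    }

    private static void PersistEntry(LogEntry entry)
    {
        // hypothetical: write to SQL with fallback to file, as in the gist
    }
}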

Awesome, you are right: it appears it is fixed by populating my log entry before handing it off for async persistence. The only negative effect I see is that the response headers contain far less detail (just the Server & Auth token headers). I’m sure there is some way to implement this so it captures all response headers at the “end”, inside the RequestLogs plugin infrastructure or not, but it is not critical for us.

Thanks for the pointer in the right direction!
