ServiceStack vs ASP.NET Core performance

Hi @mythz

As always I’m interested in how ServiceStack holds up performance-wise against the alternatives out there. The reason I started using ServiceStack years ago was the performance wins over WCF and Web API at the time, as well as the perf advantage of ServiceStack.Text over JSON.NET.

So I came across this and this article and wanted to know how ServiceStack holds up. What are your thoughts?

ServiceStack is a web framework that runs on top of an HTTP host. It currently runs on ASP.NET and self-hosted HttpListener. These benchmarks measure the raw performance of .NET Core’s Kestrel HTTP host. HTTP performance on Mono has always been poor, and the only way .NET is going to run well on Linux (where the TechEmpower benchmarks are run) is on .NET Core (i.e. not on Mono), which is our priority to support going forward. Starting from the last v4.0.62 release we’ve published .NET Core builds for ServiceStack.Client and ServiceStack.Text; you can vote on this feature request to get notified of updates as we release support for .NET Core.

A new framework called “fastify” recently came out, claiming performance improvements over other Node.js web frameworks, and published a performance benchmarks repo. I added core 2.0, webapi and servicestack.core projects to the repo to run some benchmarks against core.

My results are posted here (ran using a MacBook Pro).

The servicestack raw handler performs pretty close to webapi, but the [Route] attribute and [FallbackRoute] strategies don’t measure up as well. The pure core “.map” and raw handler performance are impressive; Web API is about half as fast as that. ServiceStack varies depending on the strategy but is generally slower. Open to feedback on how to restructure the tests. If you want to run these yourself by cloning the repo, you’ll have to go into the Startup class and comment/uncomment certain sections before running autocannon.
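For context, the three ServiceStack strategies benchmarked look roughly like this; DTO names, routes and the handler wiring below are illustrative, not the actual test code:

```csharp
using ServiceStack;

// 1. [Route] attribute: a normal typed Service (illustrative DTOs)
[Route("/hello/{Name}")]
public class Hello : IReturn<HelloResponse>
{
    public string Name { get; set; }
}

public class HelloResponse
{
    public string Result { get; set; }
}

public class HelloService : Service
{
    public object Any(Hello request) =>
        new HelloResponse { Result = $"Hello, {request.Name}!" };
}

// 2. [FallbackRoute]: a catch-all DTO matched when no other route does
[FallbackRoute("/{Path*}")]
public class Fallback
{
    public string Path { get; set; }
}

// 3. Raw handler: bypasses routing + serialization entirely,
// e.g. registered in AppHost.Configure():
//   RawHttpHandlers.Add(req => req.PathInfo == "/raw"
//       ? new CustomActionHandler((httpReq, httpRes) => httpRes.Write("hello"))
//       : null);
```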

Thanks Matt, we’ll investigate. Can you publish numbers for sync Services, to see if it’s related to async overhead?
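For reference, the sync vs async distinction being asked about is just whether the Service method returns the response directly or via a Task. A generic sketch with hypothetical DTOs, not the benchmark code:

```csharp
using System.Threading.Tasks;
using ServiceStack;

[Route("/sync-hello/{Name}")]
public class SyncHello { public string Name { get; set; } }

[Route("/async-hello/{Name}")]
public class AsyncHello { public string Name { get; set; } }

public class BenchService : Service
{
    // Sync: returns the response directly, no async state machine on the hot path
    public object Any(SyncHello request) =>
        new { Result = $"Hello, {request.Name}!" };

    // Async: returns a Task, adding state-machine/continuation overhead
    public async Task<object> Any(AsyncHello request)
    {
        await Task.Yield(); // stand-in for real async work
        return new { Result = $"Hello, {request.Name}!" };
    }
}
```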

Updated the tests and included sync numbers, see here

Very interesting. Looks like some time needs to be spent getting ServiceStack speed back up.

We’re getting different numbers from the results posted, but perf profiling/optimizations are high on our TODO list; ideally we’d like to focus on it before the next v5 release.

FYI now that the v5 refactoring has been completed, I’ve been able to start looking into this and have just found and resolved a major perf issue by delaying disk access until it’s needed. This had a significant effect on RPS, so the latest v5 should have much improved performance.

That sounds interesting Demis! Please do let us know about any more perf wins :smile:

@dylan Yeah, any I/O is a major perf killer for raw request benchmarks, which were getting 21-22k RPS before this change and are now getting ~40k RPS for the /hello JSON HTTP API on my iMac, slightly more than an equivalent JSON API in Web API at ~39k.

FYI I’m maintaining raw benchmarks for different basic request types at:

Hi Demis. Any more perf-oriented changes planned/upcoming?

We added some perf optimizations in the last release, but there are none planned in the immediate future.

I’d like to move to Span<T> to replace where we’re currently using StringSegment, but need to make sure it has minimal disruption for existing .NET v4.5 projects. This will require an external dependency on System.Memory in all packages; I’d prefer not to take any external dependencies, but it’s something we’ll look at adding after it moves out of preview release.

I see you’ve been working on a bunch of Span-related changes. How are you finding them? Reducing allocations?

I’ve moved ServiceStack.Text to use ReadOnlySpan<char> internally, as it’s now the preferred way to efficiently slice strings in .NET. I’m not expecting a noticeable reduction in allocations, since StringSegment already allowed allocation-free string slicing, but as there’s native runtime support for Span in .NET Core it should yield perf benefits from elided array bounds checking, and as a ref struct it will save on value-type copies.
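As a generic illustration of the kind of allocation-free slicing involved (not ServiceStack’s actual parsing code):

```csharp
using System;

static class SpanDemo
{
    // Substring allocates a new string for every slice;
    // ReadOnlySpan<char>.Slice just narrows a view over the same memory.
    static int CountCsvFields(string line)
    {
        ReadOnlySpan<char> remaining = line.AsSpan();
        var count = 1;
        int comma;
        while ((comma = remaining.IndexOf(',')) >= 0)
        {
            // Slice() allocates nothing; doing this with string.Substring
            // would allocate a new string per iteration.
            remaining = remaining.Slice(comma + 1);
            count++;
        }
        return count;
    }

    static void Main() =>
        Console.WriteLine(CountCsvFields("a,b,c")); // 3
}
```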

One issue is that whilst the Span<T> types themselves have .NET Standard 2.0 builds in System.Memory, the only APIs containing overloads that accept Span types are in the platform-specific .NET Core App 2.1 target, which we don’t have access to in our .NET Standard 2.0 and .NET v4.5 builds. So by default we have to use our own allocation-free managed implementations.

To be able to take advantage of the native APIs in .NET Core 2.1 I’ve created a new ServiceStack.Memory package which you will be able to reference in .NET Core 2.1 Apps to change ServiceStack.Text to use .NET Core’s native APIs with:
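The registration is presumably along these lines; the exact call name, NetCoreMemory.Configure(), is from memory of the v5.1 release notes, so treat this as a hedged sketch:

```csharp
// In your AppHost (assumed ServiceStack.Memory API):
public override void Configure(Container container)
{
    // Switch ServiceStack.Text to .NET Core 2.1's native Span APIs
    NetCoreMemory.Configure();
}
```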


For parsing primitive values the native APIs should be faster, since they have access to unsafe native memory APIs, but the default implementations are also allocation-free so shouldn’t be much different. The Stream APIs, however, will have reduced allocations, since they can accept Span types directly.
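For example, on netcoreapp2.1 the BCL exposes Span-based overloads directly, so parsing a slice needs no intermediate string. A generic BCL example, not ServiceStack code:

```csharp
using System;

class ParseDemo
{
    static void Main()
    {
        ReadOnlySpan<char> payload = "12345,678".AsSpan();
        ReadOnlySpan<char> firstField = payload.Slice(0, payload.IndexOf(','));

        // .NET Core 2.1+ only: int.TryParse has a ReadOnlySpan<char> overload,
        // so no substring allocation is needed to parse the slice.
        if (int.TryParse(firstField, out var value))
            Console.WriteLine(value); // 12345
    }
}
```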

Anyway, I’ll know more once I’ve finished the migration.

FYI I’ve completed the migration from StringSegment to the System.Memory Span and Memory APIs in both ServiceStack.Text for all JSON/JSV deserialization and ServiceStack for all Template/Expression parsing.

I’ve still got some profiling to do, but if anyone wants to check out the latest v5.1.1 on MyGet, it has some optimizations enabled: the JSON/JSV text serializers now deserialize using .NET’s new Span/Memory structs from System.Memory, RecyclableMemoryStream is enabled by default so most usages of MemoryStream are pooled, and deserializing JSON/JSV Request bodies should be faster courtesy of byte[]/char[] pooling and being able to use .NET Core’s native Span APIs for UTF-8 decoding.
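RecyclableMemoryStream is Microsoft’s pooled MemoryStream. As a standalone illustration of what the pooling buys, using the Microsoft.IO package directly rather than ServiceStack’s internal wiring:

```csharp
using System.Text;
using Microsoft.IO;

class PoolDemo
{
    // One shared manager per process; GetStream() rents pooled buffers
    // instead of allocating a fresh byte[] per MemoryStream.
    static readonly RecyclableMemoryStreamManager Manager = new RecyclableMemoryStreamManager();

    static void Main()
    {
        using (var ms = Manager.GetStream())
        {
            var bytes = Encoding.UTF8.GetBytes("{\"hello\":\"world\"}");
            ms.Write(bytes, 0, bytes.Length);
        } // Dispose returns the buffers to the pool for reuse
    }
}
```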

For max performance add a reference to the new ServiceStack.Memory NuGet package in a .NET Core App 2.1 and configure it with:

```csharp
public override void Configure(Container container)
{
    // Assuming the registration call from the v5.1 release notes:
    NetCoreMemory.Configure();
}
```

Which will bind ServiceStack.Text to use .NET Core’s native Span APIs which are only available in .NET Core 2.1.

If anyone wants to check out a live ServiceStack .NET Core 2.1 App with this enabled, I’ve redeployed it running the latest v5.1.1 on MyGet.

Whilst this was a fairly significant refactor, all tests still pass so there should be minimal issues, but please report back any issues you do find.


Hi @mythz

Any performance updates planned for 2019?

Do you still recommend adding a reference to ServiceStack.Memory? Would you recommend it even if a project uses just OrmLite, for example?

Other than switching internal use of Task to ValueTask, I don’t see any low-hanging perf opportunities remaining. While I’m working through the code-base I’ll remove unnecessary allocations where possible, but I haven’t identified any areas which need perf attention. The only work outlined would be research/integration/testing to see if it’s possible (and there’s actually a noticeable benefit) to switch to .NET Core 3.0’s new JSON serializer after they’ve added POCO support. I’d expect it to create issues for missing ServiceStack.Text features (in all clients), so it will come down to how much of it can be polyfilled.
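For reference, the Task-to-ValueTask change mentioned above avoids allocating a Task object when a result is available synchronously. A generic sketch, not ServiceStack code:

```csharp
using System.Threading.Tasks;

public class CachedReader
{
    private int? cached;

    // ValueTask<int> returns the cached value without heap allocation;
    // Task<int> would allocate a Task object on every call.
    public ValueTask<int> ReadAsync()
    {
        if (cached.HasValue)
            return new ValueTask<int>(cached.Value); // synchronous, allocation-free

        return new ValueTask<int>(ReadSlowAsync()); // falls back to a real Task
    }

    private async Task<int> ReadSlowAsync()
    {
        await Task.Delay(10); // stand-in for real I/O
        cached = 42;
        return cached.Value;
    }
}
```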

You no longer need to add a ref to ServiceStack.Memory in .NET Core Apps as ServiceStack.Text includes a netcoreapp2.1 build which already includes the bindings to .NET Core 2.1 APIs.

Very interesting. Thanks Demis. Looking forward to the future :slight_smile:

Heya Demis. Any further thoughts on the new preview 6 update of System.Text.Json?

Haven’t used it yet, but I was expecting a bigger perf improvement over JSON.NET. From the rough numbers they’ve disclosed, it looks like it will be slower than Utf8Json or Jil, which offer a larger improvement over JSON.NET: