Apparently ServiceStack.Text serialisation is slow?

Most notably MsgPack.Cli, ServiceStack.Text, BinaryFormatter and Bois perform significantly worse on .NET Core.

Very interesting test suite here.

Is there anything that can be done?

I’ll see if I can run it with the current version. Deserializing from a stream disadvantages ServiceStack.Text since it first needs to read the stream into a string (now a Span) before deserializing, whereas other serializers deserialize from streams natively.
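A minimal sketch of that extra step (with a stand-in `parse` delegate rather than the real ServiceStack parser): the string path has to buffer the entire payload before parsing can begin, which is a full-payload allocation that stream-native deserializers avoid.

```csharp
using System;
using System.IO;
using System.Text;

static class StreamVsString
{
    // String path: read the whole stream into one string, then parse.
    // The full-payload string is the extra allocation described above;
    // a stream-native deserializer parses as it reads instead.
    public static T DeserializeViaString<T>(Stream stream, Func<string, T> parse)
    {
        using (var reader = new StreamReader(stream, Encoding.UTF8))
            return parse(reader.ReadToEnd());
    }
}
```

For example, `StreamVsString.DeserializeViaString(ms, json => json.Length)` on a 7-byte JSON payload allocates the 7-char string before the stand-in parser ever runs.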

But it will be interesting to see what the results are with the latest version.

I’ve just checked in a bunch of optimizations and re-run the Serializer Tests project against the latest v5.1.1 ServiceStack.Text on MyGet vs JSON.NET, which now yields the results below for .NET Core 2.1:

.NET Core 2.1 Results

Serializer	Objects	Time to serialize in s	Time to deserialize in s	Size in bytes	FileVersion	Framework
ServiceStack<BookShelf>	1	0.000	0.000	69	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	1	0.000	0.000	69	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	10	0.000	0.000	305	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	100	0.000	0.000	2827	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	500	0.000	0.000	14827	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	1000	0.000	0.001	29829	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	10000	0.004	0.006	317831	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	50000	0.018	0.031	1677831	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	100000	0.036	0.068	3377833	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	200000	0.073	0.146	6977833	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	500000	0.181	0.374	17777833	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	800000	0.290	0.606	28577833	5.0.0.0	.NET Core 2.0.7
ServiceStack<BookShelf>	1000000	0.363	0.756	35777835	5.0.0.0	.NET Core 2.0.7

JSON.NET

Serializer	Objects	Time to serialize in s	Time to deserialize in s	Size in bytes	FileVersion	Framework
JsonNet<BookShelf>	1	0.000	0.000	69	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	1	0.000	0.000	69	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	10	0.000	0.000	305	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	100	0.000	0.000	2827	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	500	0.000	0.000	14827	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	1000	0.001	0.001	29829	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	10000	0.005	0.006	317831	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	50000	0.025	0.035	1677831	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	100000	0.051	0.073	3377833	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	200000	0.105	0.153	6977833	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	500000	0.253	0.404	17777833	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	800000	0.400	0.726	28577833	11.0.2.21924	.NET Core 2.0.7
JsonNet<BookShelf>	1000000	0.520	0.920	35777835	11.0.2.21924	.NET Core 2.0.7

So it’s now ~30% faster for serialization and ~17.8% faster for deserialization at 1M objects.

.NET v4.7.1 Results

Serializer	Objects	Time to serialize in s	Time to deserialize in s	Size in bytes	FileVersion	Framework
ServiceStack<BookShelf>	1	0.000	0.000	69	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	1	0.000	0.000	69	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	10	0.000	0.000	305	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	100	0.000	0.000	2827	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	500	0.000	0.000	14827	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	1000	0.000	0.001	29829	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	10000	0.004	0.007	317831	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	50000	0.021	0.035	1677831	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	100000	0.043	0.072	3377833	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	200000	0.087	0.161	6977833	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	500000	0.208	0.413	17777833	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	800000	0.340	0.669	28577833	5.0.0.0	.NET Framework 4.7.3110.0
ServiceStack<BookShelf>	1000000	0.417	0.836	35777835	5.0.0.0	.NET Framework 4.7.3110.0

JSON.NET

Serializer	Objects	Time to serialize in s	Time to deserialize in s	Size in bytes	FileVersion	Framework
JsonNet<BookShelf>	1	0.000	0.000	69	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	1	0.000	0.000	69	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	10	0.000	0.000	305	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	100	0.000	0.000	2827	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	500	0.000	0.000	14827	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	1000	0.001	0.001	29829	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	10000	0.005	0.007	317831	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	50000	0.025	0.038	1677831	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	100000	0.050	0.078	3377833	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	200000	0.100	0.170	6977833	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	500000	0.250	0.439	17777833	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	800000	0.402	0.777	28577833	11.0.2.21924	.NET Framework 4.7.3110.0
JsonNet<BookShelf>	1000000	0.503	0.992	35777835	11.0.2.21924	.NET Framework 4.7.3110.0

That’s ~17% faster for serialization and ~15.7% faster for deserialization at 1M objects.

Also, ServiceStack.Text’s latest version on .NET Core is ~12.9% faster for serialization and ~9.5% faster for deserialization than on .NET v4.7.1.

One interesting finding from the latest round of profiling: whilst we were able to use .NET Core’s native APIs like the ReadOnlySpan&lt;char&gt; overloads for parsing ints, they were still a lot slower than our custom (also alloc-free) implementation, so we ended up reverting to our custom impls on both platforms.
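For reference, the BCL overload in question is `int.TryParse(ReadOnlySpan<char>, out int)`, added in .NET Core 2.1. A custom alloc-free digit loop over the same span looks roughly like the sketch below (positive decimal integers only — the real implementation would also handle signs, overflow, and error cases):

```csharp
using System;

static class IntParsing
{
    // BCL path: span-based overload, allocation-free since .NET Core 2.1.
    public static int ParseBcl(ReadOnlySpan<char> span) =>
        int.TryParse(span, out var value) ? value : throw new FormatException();

    // Custom path sketch: a plain digit loop, also allocation-free.
    // Positive decimal integers only; no sign or overflow handling here.
    public static int ParseCustom(ReadOnlySpan<char> span)
    {
        if (span.IsEmpty) throw new FormatException();
        int result = 0;
        foreach (var c in span)
        {
            if (c < '0' || c > '9') throw new FormatException();
            result = result * 10 + (c - '0');
        }
        return result;
    }
}
```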


Wow Demis that is some nice perf tuning! Love seeing numbers like that!

Did you use Benchmark.Net in your new bench proj?

These results were from running the linked benchmark project, but I am using BenchmarkDotNet for other benchmarks.

I’ve been doing some benchmarking on 5.1.0 today.

I have a few classes that, when retrieved from a cache, need to be deep copied. I do this by serializing them to JSON and then deserializing them back. I need this to be as efficient as possible.

Version 1:

        public static T DeepCopy<T>(T obj)
        {
            if (obj == null)
                return default(T);

            return JsonSerializer.DeserializeFromString<T>(JsonSerializer.SerializeToString(obj));
        }

Version 2:

        public static T DeepCopy2<T>(T obj)
        {
            if (obj == null)
                return default(T);

            using (var ms = MemoryStreamFactory.RecyclableInstance.GetStream())
            {
                JsonSerializer.SerializeToStream<T>(obj, ms);
                ms.Position = 0;
                return JsonSerializer.DeserializeFromStream<T>(ms);
            }
        }

All the classes have a ShouldSerialize method in them.

The productCollection is a list with roughly 1,000 complex objects in it.

        [Benchmark]
        public void DeepCopyWithString()
        {
            foreach (var item in productCollection)
            {
                var r = CopyUtil.DeepCopy<Product>(item);
            }
        }

        [Benchmark]
        public void DeepCopyWithStream()
        {
            foreach (var item in productCollection)
            {
                var r = CopyUtil.DeepCopy2<Product>(item);
            }
        }

TypeSerializer and JsonSerializer perform similarly. My questions are:

  1. Why is so much being allocated?
  2. What can be done to further improve the deep copying perf?
  3. Do the serializers cache their plans for consecutive calls of the same type?
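For question 1, one way to quantify the allocations is BenchmarkDotNet’s `[MemoryDiagnoser]` attribute, which adds allocated-bytes-per-op and GC-collection columns to the results (a sketch; the benchmark class shape is assumed):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // adds Gen0/Gen1/Gen2 and Allocated columns per benchmark
public class DeepCopyBenchmarks
{
    // the DeepCopyWithString / DeepCopyWithStream benchmarks from above
    // would live here unchanged
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<DeepCopyBenchmarks>();
}
```

Comparing the Allocated column between the two methods would show whether the string round-trip or the stream round-trip is the bigger allocator.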

Fields in the Product class, each with a backing public property:

		private int _Id;
		private int _PrincipleId;
		private int _ProductReferenceNumber;
		private bool _IsDeleted;
		private string _Name;
		private string _ProductCode;
		private string _UnitBarcode;
		private string _ShrinkBarcode;
		private string _CartonBarcode;
		private int _CartonQuantity;
		private int _ShrinkQuantity;
		private int _UnitQuantity;
		private string _CartonMeasurement;
		private string _ShrinkMeasurement;
		private string _UnitMeasurement;
		private decimal _CartonWeight;
		private decimal _ShrinkWeight;
		private decimal _UnitWeight;
		private decimal _CartonPrice;
		private decimal _ShrinkPrice;
		private decimal _UnitPrice;
		private DateTime _LastUpdated;
		private int _MaxQuantity;
		private int _MaxPrice;
		private string _Category;
		private string _PackSize;

Why are you looking at v5.1.0 instead of the latest v5.1.1 on MyGet? The rewrite to use Spans and all the work I’ve mentioned is only available from v5.1.1+ on MyGet.

Some allocations are absolutely necessary — so many allocations compared to what? Have you compared the results against JSON.NET to see what the difference is?

I’ve done all the perf work I could identify in v5.1.1.

Yes, the serializers cache their delegates.
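For illustration, the standard way to get that per-type caching in .NET is a static field in a generic class — the initializer runs once per closed type, so every subsequent call for the same T reuses the same delegate. This is a sketch of the general technique, not ServiceStack’s actual internals:

```csharp
using System;

static class SerializerCache<T>
{
    // The static initializer runs once per closed generic type T,
    // so every later call for the same T reuses this delegate.
    public static readonly Func<T, string> Serialize = Build();

    static Func<T, string> Build()
    {
        // Stand-in for the expensive plan building (reflection,
        // expression-tree compilation, etc.) done on first use.
        return obj => obj == null ? "null" : obj.ToString();
    }
}
```

For example, the first touch of `SerializerCache<Product>` pays the build cost; every later `SerializerCache<Product>.Serialize(...)` call is just a cached delegate invocation.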