We have taken a look at the ResponseFilter on the StaticFileHandler, but there are things we would have to duplicate if we implement this ourselves (the built-in cache-control, for example).
Or is there a way to do this via a custom VirtualFileSource without breaking anything?
The conventional wisdom is to adopt a https://jamstack.org approach and host your front-end assets on CDN edge caches, taking the load off your App Servers.
The solution in CompressedStaticFiles pre-compresses content at build time and serves it with an ASP.NET Core middleware.
Caching it in your App would be the least performant / most resource-intensive option, but it is doable within your App: you would need to register a handler at the start of the Request Pipeline, e.g. in RawHttpHandlers.
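As a rough sketch of what that registration could look like, assuming you pre-compress your assets into sibling files (e.g. `app.js.br`) and write your own handler. `PreCompressedFileHandler` and the `/assets/` prefix are hypothetical names, and the exact `HttpAsyncTaskHandler` signatures may differ between ServiceStack versions, so treat this as a starting point rather than a drop-in solution:

```csharp
using System.Threading.Tasks;
using Funq;
using ServiceStack;
using ServiceStack.Host.Handlers;
using ServiceStack.IO;
using ServiceStack.Web;

public class AppHost : AppHostBase
{
    public AppHost() : base("MyApp", typeof(AppHost).Assembly) { }

    public override void Configure(Container container)
    {
        // Only take over the request when a pre-compressed sibling exists;
        // returning null lets the rest of the pipeline (StaticFileHandler etc.) run as usual.
        RawHttpHandlers.Add(req =>
        {
            if (!req.PathInfo.StartsWith("/assets/")) return null; // hypothetical asset prefix

            var acceptEncoding = req.Headers["Accept-Encoding"] ?? "";
            var encoding = acceptEncoding.Contains("br") ? "br"
                         : acceptEncoding.Contains("gzip") ? "gzip"
                         : null;
            if (encoding == null) return null;

            // Look for a pre-compressed sibling, e.g. /assets/app.js.br
            var file = HostContext.VirtualFileSources.GetFile(
                req.PathInfo + (encoding == "br" ? ".br" : ".gzip"));

            return file == null ? null
                : new PreCompressedFileHandler(file, encoding, req.PathInfo);
        });
    }
}

// Hypothetical handler that streams the pre-compressed file with the right headers.
public class PreCompressedFileHandler : HttpAsyncTaskHandler
{
    readonly IVirtualFile file;
    readonly string encoding;
    readonly string originalPath;

    public PreCompressedFileHandler(IVirtualFile file, string encoding, string originalPath)
    {
        this.file = file;
        this.encoding = encoding;
        this.originalPath = originalPath;
    }

    public override async Task ProcessRequestAsync(
        IRequest httpReq, IResponse httpRes, string operationName)
    {
        // Content-Type of the original asset, Content-Encoding of the compressed copy
        httpRes.ContentType = MimeTypes.GetMimeType(originalPath);
        httpRes.AddHeader("Content-Encoding", encoding);
        httpRes.AddHeader("Cache-Control", "public, max-age=86400"); // assumption: tune per asset

        using (var stream = file.OpenRead())
            await stream.CopyToAsync(httpRes.OutputStream);

        httpRes.EndRequest();
    }
}
```

Note that because you're bypassing the built-in StaticFileHandler for these requests, you would also have to replicate any of its behaviour you rely on (like the cache-control you mentioned) yourself.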
If it helps, this is the StaticFileHandler code used to serve an on-demand compressed static file:
In our case, CDNs and proxies aren’t an option unfortunately.
The CompressedStaticFiles approach in .NET Core (compressing at build time) looks promising, but it would mean hosting the files “outside” ServiceStack. We expect conflicts with some of our features, which is why we were looking for something similar but more integrated with your stack.
I’ve not used it, but it looks like it just generates pre-compressed versions alongside the existing static files; its middleware then checks whether a pre-compressed version exists for the current request and returns it if so:
The files must have the exact same filename as the source plus .br or .gzip (index.html would be index.html.br for the Brotli version).
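I haven’t looked at its source, but a minimal ASP.NET Core middleware doing that kind of lookup might look something like this (a sketch only, not the library’s actual code; it assumes it runs in Startup.Configure with `env` being the injected IWebHostEnvironment and the assets living under wwwroot):

```csharp
// needs: using System.IO; using Microsoft.AspNetCore.StaticFiles;
// (SendFileAsync is the extension method from Microsoft.AspNetCore.Http)
app.Use(async (context, next) =>
{
    var acceptEncoding = context.Request.Headers["Accept-Encoding"].ToString();
    var encoding = acceptEncoding.Contains("br") ? "br"
                 : acceptEncoding.Contains("gzip") ? "gzip"
                 : null;

    if (encoding != null)
    {
        // e.g. /js/app.js -> wwwroot/js/app.js.br (or .gzip)
        var candidate = Path.Combine(env.WebRootPath,
            context.Request.Path.Value.TrimStart('/') + (encoding == "br" ? ".br" : ".gzip"));

        if (File.Exists(candidate))
        {
            // Content-Type of the original file, Content-Encoding of the compressed copy
            var provider = new FileExtensionContentTypeProvider();
            context.Response.ContentType =
                provider.TryGetContentType(context.Request.Path.Value, out var contentType)
                    ? contentType : "application/octet-stream";
            context.Response.Headers["Content-Encoding"] = encoding;
            await context.Response.SendFileAsync(candidate);
            return;
        }
    }

    await next(); // no pre-compressed sibling: let UseStaticFiles() serve the original
});
```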
By conflict I meant with some of our features that sometimes request files from the VirtualFileSystem, custom request filters that apply to those files, and a custom fallback service. Some of those might not work out of the box if the files are served by .NET Core directly.
Found a post explaining the CompressedStaticFiles middleware in a bit more detail, using Webpack to pre-compress the files, in case anyone else needs it: