We’re happy to announce the release of ServiceStack v8.5.
Much of this release focused on improving ServiceStack’s Add ServiceStack Reference support, with improvements across all 11 of our supported programming languages and client libraries in preparation for the v1 release of AI Server!
AI Server now ready to serve!
AI Server is a free, OSS, self-hosted private Docker gateway for managing API access to multiple LLM APIs, Ollama endpoints, Media APIs, ComfyUI and FFmpeg Agents.
Centralized Management
Designed as a one-stop solution to manage an organization’s AI integrations for all their System Apps, utilizing developer-friendly HTTP JSON APIs that support any programming language or framework.
Distribute load across multiple Ollama, Open AI Gateway and ComfyUI Agents
It works as a private gateway to process the LLM, AI and image transformation requests any of our Apps need, dynamically load-balancing requests across our local GPU servers, cloud GPU instances and API gateways running multiple instances of Ollama, Open AI Chat, LLM Gateway, ComfyUI, Whisper and FFmpeg providers.
In addition to maintaining a history of AI requests, it also provides file storage for its CDN-hostable AI-generated assets and on-the-fly, cacheable image transformations.
Native Typed Integrations
Uses Add ServiceStack Reference to enable simple, native typed integrations for the most popular Web, Mobile and Desktop languages, including: C#, TypeScript, JavaScript, Python, Java, Kotlin, Dart, PHP, Swift, F# and VB.NET.
Each AI Feature supports multiple call styles to suit different usage scenarios, as sketched in the example after this list:
- Synchronous API · Simplest API ideal for small workloads where the Response is returned in the same Request
- Queued API · Returns a reference to the queued job executing the AI Request which can be used to poll for the API Response
- Reply to Web Callback · Ideal for reliable App integrations where responses are posted back to a custom URL Endpoint
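As a rough illustration of these call styles in C#, here’s a minimal sketch using the generated DTOs, where the QueueOpenAiChatCompletion, GetOpenAiChatStatus and ReplyTo names are assumptions of this sketch (check your generated DTOs for the exact queued API names):

```csharp
using ServiceStack;

var client = new JsonApiClient("https://openai.servicestack.net") {
    BearerToken = Environment.GetEnvironmentVariable("AI_SERVER_API_KEY")
};

// 1. Synchronous API - the Response is returned in the same request
var chat = await client.PostAsync(new OpenAiChatCompletion {
    Model = "llama3.1:8b",
    Messages = [new() { Role = "user", Content = "What's the capital of France?" }],
});

// 2. Queued API - returns a reference to the queued job (DTO names assumed)
var job = await client.PostAsync(new QueueOpenAiChatCompletion {
    Request = new() {
        Model = "llama3.1:8b",
        Messages = [new() { Role = "user", Content = "What's the capital of France?" }],
    },
});
// ...poll for the API Response using the job reference (status API assumed)
var status = await client.GetAsync(new GetOpenAiChatStatus { RefId = job.RefId });

// 3. Reply to Web Callback - the Response is POSTed back to your URL endpoint
await client.PostAsync(new QueueOpenAiChatCompletion {
    ReplyTo = "https://example.org/api/chat-callback", // hypothetical endpoint
    Request = new() { Model = "llama3.1:8b", Messages = [] },
});
```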
For more info check out our overview at: https://openai.servicestack.net
New .NET 8 Templates now use Kamal for Deployments
Kamal is a tool from 37signals that offers the same flexibility as our previous GitHub Actions SSH Docker deployments. It wraps fundamental tooling like SSH and Docker in a great CLI tool that simplifies the management of containerized applications, enabling them to be deployed to any Linux host accessible via SSH. It handles reverse proxying of web traffic automatically, and even takes care of the initial setup of the reverse proxy and related tooling on any target Linux host.
This means you get the same great ergonomics of just pointing your DNS and configuration file at a server and letting Kamal take care of the rest, including TLS certificates via Let’s Encrypt. It even has commands to check on your running applications, view logs, etc., all run from your local repository directory.
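As a rough sketch of what this looks like in practice, a minimal Kamal config/deploy.yml just points at your image, server and domain (all values below are placeholders):

```yaml
# config/deploy.yml - minimal sketch, all values are placeholders
service: my-app
image: my-org/my-app

servers:
  web:
    - 203.0.113.10          # any Linux host accessible via SSH

registry:
  username: my-user
  password:
    - KAMAL_REGISTRY_PASSWORD

proxy:
  ssl: true                 # TLS certificates via Let's Encrypt
  host: app.example.org     # point your DNS at this app's server
```

From there, `kamal setup` provisions the host, `kamal deploy` ships new versions and commands like `kamal app logs` tail a remote app’s logs, all run from your local repository directory.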
While our own GitHub Actions templates used the same approach, they lacked many of Kamal’s niceties, like being able to monitor a remote server’s app logs from your local workstation.
Standardizing on Kamal
We’ve migrated our new .NET 8 templates to Kamal for deployments as it distills the simple approach we’ve baked into our templates for a number of years whilst dramatically improving its ergonomics, and we’re excited to see how the Basecamp team and the community around it continue to push the project forward.
Simple API Keys Credentials Auth Provider
We’ve improved the usability of the Simple Auth with API Keys story with the new ApiKeyCredentialsProvider, which enables .NET Microservices to provide user session-like behavior using simple API Keys. It’s configured together with the AuthSecretAuthProvider and ApiKeysFeature to enable a Credentials Auth implementation that users can sign in with using their API Keys or the Admin Auth Secret.
When registered, a Credentials Auth dialog will appear in ServiceStack’s built-in UIs, allowing users to sign in with their API Keys or the Admin Auth Secret.
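Here’s a minimal sketch of the registration, where the "p@55wOrd" Auth Secret is a placeholder and the exact overloads may differ in your app’s existing AuthFeature configuration:

```csharp
// In your AppHost's ConfigureServices - a minimal sketch
services.AddPlugin(new AuthFeature([
    new ApiKeyCredentialsProvider(),        // Credentials Auth using API Keys
    new AuthSecretAuthProvider("p@55wOrd"), // placeholder Admin Auth Secret
]));
services.AddPlugin(new ApiKeysFeature());   // issue & validate simple API Keys
services.AddPlugin(new SessionFeature());   // server sessions for session-like behavior
```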
ServiceStack.Swift rewritten for Swift 6
As part of the release of AI Server we’ve upgraded all our generic service client libraries to support multiple file uploads with API requests, to take advantage of AI Server APIs that accept file uploads, like Image to Image, Speech to Text or its FFmpeg Image and Video Transforms.
ServiceStack.Swift received the biggest upgrade: it was rewritten to take advantage of Swift 6 features, with native Swift concurrency (async/await) replacing the previous PromiseKit dependency, making it now dependency-free!
Typed Open AI Chat & Ollama APIs in 11 Languages
A happy consequence of the release of AI Server is that its OpenAiChatCompletion API is an Open AI Chat compatible API that can also be used to access other LLM API gateways, like OpenAI’s ChatGPT, OpenRouter, Mistral AI and GroqCloud, as well as self-hosted Ollama instances, directly from 11 of the most popular Web, Mobile & Desktop languages.
This is a great opportunity to showcase the simplicity and flexibility of the Add ServiceStack Reference feature, where invoking APIs works the same way in every language: the same generic service client can call any ServiceStack API by downloading its typed API DTOs and sending a populated Request DTO.
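For example, here’s what that looks like in C#: after downloading the C# DTOs, the same generic JsonApiClient sends a populated Request DTO and gets back a typed Response DTO (the response property names below follow the OpenAI Chat schema and are assumptions of this sketch):

```csharp
using ServiceStack;

// The same generic client is used to call every ServiceStack API
var client = new JsonApiClient("https://openai.servicestack.net");

// Populate the typed Request DTO and send it
var response = await client.PostAsync(new OpenAiChatCompletion {
    Model = "llama3.1:8b",
    Messages = [new() { Role = "user", Content = "Explain HTTP caching in one sentence." }],
});

// Typed Response DTO mirroring the OpenAI Chat schema (names assumed)
Console.WriteLine(response.Choices[0].Message.Content);
```

The same two steps, downloading the typed DTOs and sending a populated Request DTO, work identically in the other 10 supported languages.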
DTOs in all languages downloadable without .NET
To make it easier to consume ServiceStack APIs in any language, the new npx get-dtos npm script lets you download and update Typed DTOs in all languages without needing .NET installed.
It has the same syntax and functionality as the x dotnet tool for adding and updating ServiceStack References, where in most cases you can replace x <lang> with npx get-dtos <lang> to achieve the same result.
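For example, adding and later updating the C# DTOs for an AI Server instance looks like this (using https://openai.servicestack.net as the sample endpoint):

```sh
# Add a C# ServiceStack Reference (no .NET required)
npx get-dtos csharp https://openai.servicestack.net

# Update existing C# ServiceStack References in the current directory
npx get-dtos csharp
```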
Multiple File Uploads with API Requests supported in all languages
To be able to call AI Server APIs requiring file uploads, we’ve added multiple file upload support with API requests to the generic service clients of all our supported languages.
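For example, in C# one or more files can now be attached to a typed API request with the client’s UploadFile support, sketched below with an assumed SpeechToText DTO and "audio" upload field name:

```csharp
using ServiceStack;

var client = new JsonApiClient("https://openai.servicestack.net");

// Attach files to a typed API request; add more UploadFile entries
// for APIs that accept multiple uploads. The SpeechToText DTO and
// the "audio" field name are illustrative for this sketch.
using var audio = File.OpenRead("speech.wav");

var response = await client.PostFilesWithRequestAsync(
    new SpeechToText(),
    [new UploadFile("speech.wav", audio, fieldName: "audio")]);
```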
Please check out the ServiceStack v8.5 Release Notes for more info.