ServiceStack AI


Quick question: using ServiceStack AI, is it possible to send requests directly to ChatGPT or another model?

I’m going through the docs, and the examples are fascinating.

I’m getting started with this, and my goal is modest.

I have a ServiceStack endpoint that would need to call ChatGPT or another model (it makes no difference at this point).

I want to send a piece of text to a model and ask this model to rewrite it with some instructions.

Is this supported, and what’s the simplest way to accomplish this?

If you had something like a "your first web service calling an LLM" example, it would be great. I'm looking for the "Hello World" of AI with ServiceStack.

Maybe this could be an idea to add to the default template, or something that could be mixed in.

The template could have a new AI-enabled Hello World endpoint.

For example, calling a model and, instead of just returning the usual "hello xxx", passing along a prompt like "Reply in Portuguese". Something like that.

It’s just a suggestion, but it would make it less daunting to get started.

P.S. I hope you keep pushing on the AI front. This raised a lot of interest at work. I don't think there is a ServiceStack category yet in the forum.

Have a look at Execute_Raw_Prompt, which calls ChatGPT with a simple HTTP client:

var prompt = ...;
var dto = new Dictionary<string, object> {
    ["model"] = "gpt-3.5-turbo",
    ["messages"] = new List<object> {
        new Dictionary<string, object> {
            ["role"] = "user",
            ["content"] = prompt,
        }
    },
    ["temperature"] = 0,
    ["n"] = 1,
};
var json = JSON.stringify(dto);
// "" below stands in for the chat completions endpoint URL,
// e.g. https://api.openai.com/v1/chat/completions (plus your Authorization header)
var response = await ""
    .PostJsonToUrlAsync(json, requestFilter: req =>
        req.With(x => {
            x.ContentType = MimeTypes.Json;
            x.Accept = MimeTypes.Json;
        }));
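To get the rewritten text back out, here's a minimal sketch (not from the original example) that assumes the standard OpenAI chat completions response shape, where the assistant's text lives under choices[0].message.content, parsed with ServiceStack.Text's JSON.parse:

using System.Collections.Generic;
using ServiceStack.Text;

// response is the raw JSON string returned by PostJsonToUrlAsync above, e.g.
// { "choices": [ { "message": { "role": "assistant", "content": "..." } } ], ... }
var obj = (Dictionary<string, object>)JSON.parse(response);
var choices = (List<object>)obj["choices"];
var message = (Dictionary<string, object>)((Dictionary<string, object>)choices[0])["message"];
var answer = (string)message["content"]; // the model's reply text

For a quick "rewrite this text" endpoint that's really all there is to it: build the prompt, POST it, pull the content string back out.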

But a lot of the functionality in TypeChat is about creating a prompt that returns a structured, machine-readable response that your App can understand and make use of.
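As a rough illustration of that idea (a hypothetical sketch, not TypeChat itself — the DTO and prompt wording are made up for this example): embed the JSON shape you want in the prompt, then deserialize the model's reply into a typed DTO with ServiceStack.Text:

using ServiceStack.Text;

// Hypothetical DTO describing the structured answer we ask the model for
public class RewriteResult
{
    public string Rewritten { get; set; }
    public string Language { get; set; }
}

var prompt = @"Rewrite the following text in Portuguese.
Reply ONLY with JSON in this shape: { ""rewritten"": ""..."", ""language"": ""pt"" }
Text: Hello, World!";

// ...send the prompt as in the example above, then, with modelReply being
// the assistant's text content extracted from the response:
var result = modelReply.FromJson<RewriteResult>();
// result.Rewritten and result.Language are now plain typed properties
// your App can work with, instead of free-form prose.

The win is that your service code never has to scrape prose: it validates and consumes a typed object, which is the core of what TypeChat automates.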
