Need some advice planning my first ServiceStack project

Hello Everyone,

I am brand new to ServiceStack and would like to ask for some advice on planning my project. In truth this question goes outside the scope of ServiceStack, but any advice you can give me is greatly appreciated.

My job is to port several console applications to the cloud (Azure App Services) in a way that the features can be used internally and through multiple public front ends.

The idea is to have several small standalone APIs so that certain parts can easily be re-used across projects (this is why I bought ServiceStack, as it looks like it will save some time in this regard).

I have a few things I am unsure of, so I thought I would post to see if anyone can give me some advice on where to look. I am a solo developer, so where possible I like to use anything that will save me time and reduce technical debt, and I am not averse to premium solutions.

  1. Several APIs have long-running processes that other APIs rely on. One API needs to start another API's process, and when that process completes or aborts, other APIs' processes must be triggered. I need some central way to manage this between all APIs, i.e. some sort of global queue manager and event system. Is there any library/service that can handle this that I should look at?

  2. Most APIs will have a credit system. I need to specify how many credits each user level is allowed for each API independently. Is there any established way of doing this, or something that can help with it?

  3. I need to be able to set permission levels so one user group can use any combination of methods across all of the APIs, but only the ones permitted. How do I do this sort of management with ServiceStack?

Thanks for taking the time to read that. I realise the question is high level and I'm not asking anyone to plan the entire application; I just need some leads on what to research and learn, as I don't really know what to look at right now.

Many thanks

All the questions are a bit vague, but I’ll see if I can add any info that might be helpful.

Not sure what you mean by this, i.e. whether it's an external physical process or you're just calling a long-running API Request? If you're referring to running a long-running task within your APIs, then you can just start a background thread in your AppHost.Configure(), which is called once on Startup.
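To make that concrete, a minimal sketch of kicking off a long-running worker from Configure(), where `ReportGenerator.RunPending()` is a hypothetical placeholder for your own work loop (everything else is standard ServiceStack):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Funq;
using ServiceStack;

public class AppHost : AppSelfHostBase
{
    public AppHost() : base("My API", typeof(MyServices).Assembly) {}

    public override void Configure(Container container)
    {
        // Configure() runs once on Startup, so a Task started here
        // lives for the lifetime of the App
        Task.Factory.StartNew(() =>
        {
            while (true)
            {
                ReportGenerator.RunPending(); // hypothetical long-running work
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }, TaskCreationOptions.LongRunning);
    }
}
```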

All .NET Core Apps are themselves long-running Console Apps so I generally recommend using .NET Core for any new projects unless you have a .NET Framework requirement that prevents you doing this.

I don’t really know what it is you’re managing. If you’re referring to monitoring and keeping your .NET Core Apps alive I recommend using supervisord to monitor the .NET Core process. I’ve published a guide on configuring a .NET Core App on Linux using nginx/supervisord at:

Or if you deploy using Docker, most cloud providers include an orchestration service that monitors your running Docker containers and will automatically restart any that go down. We've also published a guide for deploying .NET Core Apps using Docker to AWS ECS:

There’s no high-level business/application logic like this baked into ServiceStack. But there’s some existing examples of rate limiting which is related:
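If you did want to roll your own credit system, one place it could live is a Global Request Filter that checks and deducts credits before a Service executes. This is only a sketch of the approach, not a ServiceStack feature: `ICreditStore` and its methods are hypothetical and you'd supply your own implementation and credit semantics.

```csharp
// Hypothetical credit store, not part of ServiceStack:
public interface ICreditStore
{
    int GetRemainingCredits(string userAuthId, string apiName);
    void DeductCredit(string userAuthId, string apiName);
}

// Inside AppHost.Configure(), after registering an ICreditStore impl in the IOC:
GlobalRequestFilters.Add((req, res, requestDto) =>
{
    var session = req.GetSession();
    if (session == null || !session.IsAuthenticated)
        return; // let the auth attributes handle unauthenticated requests

    var credits = HostContext.Resolve<ICreditStore>();
    if (credits.GetRemainingCredits(session.UserAuthId, "ReportsApi") <= 0)
    {
        res.StatusCode = 402; // or 429, whichever fits your billing semantics
        res.EndRequest();     // short-circuit, Service never executes
        return;
    }

    credits.DeductCredit(session.UserAuthId, "ReportsApi");
});
```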

ServiceStack includes support for Roles/Permissions, have a look at the built-in Required Role/Permission attributes for examples of how to limit Services to users with required permissions/roles. You can also use one of the existing Auth Repositories for persisting User Info where the Assign Roles APIs lets you assign Roles/Permissions to Users.
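For example, the built-in attributes can be applied directly to Request DTOs to limit who can call each Service; the role/permission names below are just illustrative:

```csharp
using ServiceStack;

[RequiredRole("Manager")] // only Users assigned the "Manager" Role
public class ManagerReports : IReturn<ManagerReportsResponse> {}
public class ManagerReportsResponse {}

[RequiredPermission("CanRunReports")] // only Users granted this Permission
public class RunReport : IReturn<RunReportResponse>
{
    public string ReportName { get; set; }
}
public class RunReportResponse {}

public class ReportServices : Service
{
    public object Any(ManagerReports request) => new ManagerReportsResponse();
    public object Any(RunReport request) => new RunReportResponse();
}
```

Since each standalone API declares its own attributes, a user group's access across APIs is just the union of the Roles/Permissions you assign them.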

If you're instead using a Custom AuthProvider, you can populate the Roles/Permissions yourself when populating the AuthUserSession.
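A sketch of what that looks like with a Custom CredentialsAuthProvider, where `MyUserStore` and its lookup methods are hypothetical stand-ins for your own user storage:

```csharp
using System.Collections.Generic;
using ServiceStack;
using ServiceStack.Auth;
using ServiceStack.Web;

public class CustomCredentialsAuthProvider : CredentialsAuthProvider
{
    public override bool TryAuthenticate(IServiceBase authService,
        string userName, string password)
    {
        return MyUserStore.IsValid(userName, password); // hypothetical check
    }

    public override IHttpResult OnAuthenticated(IServiceBase authService,
        IAuthSession session, IAuthTokens tokens,
        Dictionary<string, string> authInfo)
    {
        // Populate the session's Roles/Permissions from your own store
        session.Roles = MyUserStore.LookupRolesFor(session.UserAuthName);             // hypothetical
        session.Permissions = MyUserStore.LookupPermissionsFor(session.UserAuthName); // hypothetical

        return base.OnAuthenticated(authService, session, tokens, authInfo);
    }
}
```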

Hi Mythz, thanks for the reply.

Sorry about the vagueness, just trying to find the right direction atm.

What I meant by the long running processes is this:

Imagine I have two APIs. “API A” and “API B”. Both APIs generate a report but API B needs data from a fresh report in API A.

When the report is requested in API B and no recent report is available, API B starts a report in API A by making a request. This report may take a long time, up to several days, to complete. So when it does complete in API A, it needs to make a call back to API B so all pending reports can be run.

These APIs are entirely separate, so they only talk to each other through their APIs. There are several interactions like this between individual APIs in the proposed system, so I think some sort of global queue system that all APIs can access would be better than hard-coding each API to make the call when its process completes, as that sounds like it might end up difficult to maintain. I have just found the section of the docs on MQ Services and I think it is what I am looking for, although I am still trying to understand it.

Thanks for the links. I will try to do a few tests and see how I go.

So API A will need to send a callback to B to notify it when the report has completed. You can either use a simple callback to call a route on B, adopt a Webhook solution, or use an MQ solution which publishes a Message (i.e. a Request DTO) that B is listening to (i.e. has a registered handler for) when the report has finished.
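The MQ option can be sketched like this, assuming Rabbit MQ as the broker (any IMessageService implementation works the same way). `ReportACompleted` is a hypothetical Request DTO that API A publishes when its report finishes; API B registers a handler for it so its matching Service is executed:

```csharp
using Funq;
using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

// Shared message contract both APIs reference (hypothetical DTO):
public class ReportACompleted : IReturn<ReportACompletedResponse>
{
    public string ReportId { get; set; }
}
public class ReportACompletedResponse {}

// API B's Service that runs when the message arrives:
public class ReportCallbackService : Service
{
    public object Any(ReportACompleted request)
    {
        // kick off the pending reports that were waiting on API A's data
        return new ReportACompletedResponse();
    }
}

// API B's AppHost wires up the MQ listener:
public class AppHost : AppSelfHostBase
{
    public AppHost() : base("API B", typeof(ReportCallbackService).Assembly) {}

    public override void Configure(Container container)
    {
        var mqServer = new RabbitMqServer("localhost");
        container.Register<IMessageService>(c => mqServer);

        // Execute the matching Service whenever a ReportACompleted msg arrives
        mqServer.RegisterHandler<ReportACompleted>(ExecuteMessage);
        mqServer.Start();
    }
}
```

On the other side, when API A's long-running report completes it just publishes the message, e.g. `mqClient.Publish(new ReportACompleted { ReportId = id });`, without needing to know which APIs are listening, which addresses your maintainability concern about hard-coded callbacks.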