All the questions are a bit vague, but I’ll see if I can add any info that might be helpful.
Not sure what you mean by this, i.e. whether it’s an external physical process or just a long-running API request? If you’re referring to running a long-running task within your APIs, you can start a background thread in your AppHost.Configure(), which is called once on Startup.
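As a minimal sketch of that approach (the `MyApp`/`MyServices` names and the work loop are placeholders, not from your project):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Funq;
using ServiceStack;

public class AppHost : AppSelfHostBase
{
    public AppHost() : base("MyApp", typeof(MyServices).Assembly) {}

    public override void Configure(Container container)
    {
        // Configure() runs once on Startup, so it's a natural place to
        // kick off a long-running background task:
        Task.Factory.StartNew(() =>
        {
            while (true)
            {
                // ... do your background work here ...
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }, TaskCreationOptions.LongRunning); // dedicated thread, not a pool thread
    }
}
```

`TaskCreationOptions.LongRunning` hints the scheduler to use a dedicated thread instead of tying up a ThreadPool thread for the lifetime of the loop.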
All .NET Core Apps are themselves long-running Console Apps, so I generally recommend using .NET Core for any new projects unless you have a .NET Framework dependency that prevents you from doing so.
I don’t really know what it is you’re managing. If you’re referring to monitoring and keeping your .NET Core Apps alive, I recommend using supervisord to monitor the .NET Core process. I’ve published a guide on configuring a .NET Core App on Linux using nginx/supervisord at:
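For reference, a minimal supervisord program entry for keeping a .NET Core App alive looks like this (app name, paths and user are placeholders you’d replace with your own):

```ini
[program:example-app]
command=/usr/bin/dotnet /var/www/example-app/ExampleApp.dll
directory=/var/www/example-app
autostart=true
autorestart=true
stderr_logfile=/var/log/example-app.err.log
stdout_logfile=/var/log/example-app.out.log
environment=ASPNETCORE_ENVIRONMENT=Production
user=www-data
stopsignal=INT
```

With `autorestart=true`, supervisord relaunches the dotnet process automatically if it exits or crashes.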
Or if you deploy using Docker, most cloud providers include an orchestration service that monitors your running Docker containers and will automatically restart any that go down. We’ve also published a guide for deploying .NET Core Apps using Docker to AWS ECS:
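If you go the Docker route, a typical multi-stage Dockerfile for a .NET Core App looks something like the sketch below (the image tags and `ExampleApp.dll` name are placeholders; use the tags matching your runtime version):

```dockerfile
# Build stage: restore, compile and publish the app
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /app
COPY . .
RUN dotnet publish -c Release -o out

# Runtime stage: copy only the published output into a smaller image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "ExampleApp.dll"]
```

The orchestration service (e.g. ECS) then only needs to be told to keep N copies of this container running.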
There’s no high-level business/application logic like this baked into ServiceStack. But there’s some existing examples of rate limiting which is related:
- GitHub - wwwlicious/servicestack-ratelimit-redis: A rate limiting plugin for ServiceStack that uses Redis for calculating and persisting request counts
- How to limit number of requests to ServiceStack per day?
ServiceStack includes support for Roles/Permissions: have a look at the built-in Required Role/Permission attributes for examples of how to limit Services to users with the required permissions/roles. You can also use one of the existing Auth Repositories for persisting User Info, where the Assign Roles APIs let you assign Roles/Permissions to Users.
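E.g. a sketch of protecting Services with the built-in attributes (the DTOs, role and permission names here are hypothetical):

```csharp
using ServiceStack;

// Hypothetical Request/Response DTOs
public class GetAdminReport : IReturn<AdminReportResponse> {}
public class AdminReportResponse {}

public class ExportData : IReturn<ExportDataResponse> {}
public class ExportDataResponse {}

// Only authenticated users assigned the "Admin" Role can call this Service
[RequiredRole("Admin")]
public class AdminService : Service
{
    public object Any(GetAdminReport request) => new AdminReportResponse();
}

// Only users granted the "CanExport" Permission can call this Service
[RequiredPermission("CanExport")]
public class ExportService : Service
{
    public object Any(ExportData request) => new ExportDataResponse();
}
```

The attributes can also be applied to individual Request DTOs instead of the Service class if you want per-API granularity.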
If you’re instead using a Custom AuthProvider, you can populate the Roles/Permissions yourself when populating the AuthUserSession.
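A sketch of what that could look like in a Custom CredentialsAuthProvider, where `MyUserStore` and the hard-coded Roles/Permissions are placeholders for your own lookup logic:

```csharp
using System.Collections.Generic;
using ServiceStack;
using ServiceStack.Auth;

public class CustomAuthProvider : CredentialsAuthProvider
{
    public override bool TryAuthenticate(IServiceBase authService,
        string userName, string password)
    {
        // Validate credentials against your own user store (placeholder)
        return MyUserStore.IsValid(userName, password);
    }

    public override IHttpResult OnAuthenticated(IServiceBase authService,
        IAuthSession session, IAuthTokens tokens,
        Dictionary<string, string> authInfo)
    {
        var userSession = (AuthUserSession)session;
        // Populate Roles/Permissions from your own data source
        userSession.Roles = new List<string> { "Admin" };
        userSession.Permissions = new List<string> { "CanExport" };
        return base.OnAuthenticated(authService, session, tokens, authInfo);
    }
}
```

The Roles/Permissions populated on the session are what the `[RequiredRole]`/`[RequiredPermission]` attributes validate against on subsequent requests.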