I’m looking for some advice or ideas on what I can do to make our rather large API a bit more manageable.
We use ServiceStack 5.10 which backs onto our ERP system. At present we have 598 routes - and we’re only about a third of the way through exposing all the functionality we want.
ServiceStack doesn’t have a problem with this size - it’s third-party tools/utilities like SwaggerUI, Postman and Microsoft Azure API Management that have a problem: they either perform really poorly, or refuse to interface with such a large API (Azure API Management, for instance, won’t import an OpenAPI spec document greater than 1MB, and we’re already at 2MB).
I was thinking of perhaps partitioning the API into different subdomains, one per area of concern.
That way things like SwaggerUI will cope better with the smaller API surface - but that brings a whole new set of problems relating to authentication and so on.
Right, splitting them up into Microservices would be the logical solution for reducing the API surface area into smaller logical groupings.
If you need to make inter-service requests you can use the Service Gateway to transparently call internal and external services through the same API.
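As a sketch of what that looks like from inside a Service (the `GetCustomer`/`GetCustomerOrders` DTO names below are illustrative, not from your API), the call through the Gateway is the same whether the target Service lives in the same AppHost or on a remote Microservice:

```csharp
// Illustrative DTOs - substitute your own Request/Response types
public class GetCustomer : IReturn<CustomerResponse>
{
    public int Id { get; set; }
}

public class CustomerOrdersService : Service
{
    public object Get(GetCustomerOrders request)
    {
        // Gateway.Send() transparently resolves to an in-process call or an
        // HTTP Service Client depending on how the AppHost's ServiceGateway
        // is configured - the calling code doesn't change either way
        var customer = Gateway.Send(new GetCustomer { Id = request.CustomerId });

        return new GetCustomerOrdersResponse { CustomerName = customer.Name };
    }
}
```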
Authentication across multiple Microservices / sub domains
JWT is ideal for authentication across Microservices: only a single “Auth” Service needs to be configured to enable Authentication and issue tokens. Because JWTs are stateless, all other Microservices only need to be configured with the Auth Key used to validate the tokens, which contain the encapsulated Authenticated User Session.
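A minimal sketch of that split, assuming a shared `jwtAuthKey` distributed via your config (the Credentials provider on the Auth Service is just one example of a login provider that can issue the tokens):

```csharp
// On the central "Auth" Microservice: authenticates users and issues JWTs
Plugins.Add(new AuthFeature(() => new AuthUserSession(),
    new IAuthProvider[] {
        new JwtAuthProvider(AppSettings) {
            AuthKeyBase64 = jwtAuthKey,        // shared secret from config
            RequireSecureConnection = true,    // only issue/accept over HTTPS
        },
        new CredentialsAuthProvider(AppSettings), // example login provider
    }));

// On every other Microservice: only the Auth Key is needed to validate
// tokens, no user store or login providers required
Plugins.Add(new AuthFeature(() => new AuthUserSession(),
    new IAuthProvider[] {
        new JwtAuthProvider(AppSettings) {
            AuthKeyBase64 = jwtAuthKey,
        },
    }));
```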
Another solution is to use a Reverse Proxy so all Microservices appear to be served from the same domain.
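For example, a minimal nginx config along these lines routes path prefixes to internal Microservices behind one external domain (hostnames, ports and prefixes are all illustrative):

```nginx
# Illustrative only: map each path prefix to its Microservice so they all
# share the one external origin (and therefore the same Cookies)
server {
    listen 443 ssl;
    server_name api.example.org;

    location /auth      { proxy_pass http://localhost:5001; }
    location /customers { proxy_pass http://localhost:5002; }
    location /orders    { proxy_pass http://localhost:5003; }
}
```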
With this setup, if you’re using any of the Session-based Auth Providers, the Cookies will be sent to all Microservices when they’re requested through the external API, and as long as each Microservice is configured to use the same distributed caching provider, each service will still have access to the same Authenticated User Session.
An alternative to using a reverse proxy is to configure ServiceStack to use domain cookies so that browsers will also send Cookies to all sub domains.
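E.g. in each Microservice’s AppHost (the domain below is a placeholder):

```csharp
// In AppHost.Configure(): scope all Cookies to the parent domain so
// browsers send them to customers.example.org, orders.example.org, etc.
SetConfig(new HostConfig {
    RestrictAllCookiesToDomain = "example.org",
});
```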
@Rob - thanks for the suggestion - but I think we are doing that already - we just have a large API surface due to the nature of what we do.
A fair bit of our API surface size is due to security concerns - we implemented routes for granular aspects which can be limited by permissions per user. To illustrate, following your example, for the /customers resource:
GET /customers/{customerid} - gets a customer record
POST /customers - creates a customer
DELETE /customers/{customerid} - deletes a customer
PATCH /customers/{customerid} - partially updates a customer
GET /customers/{customerid}/contactnames - gets the contact names for a customer
GET /customers/{customerid}/contactnames/{contactnameid} - gets a contact name for a customer
POST /customers/{customerid}/contactnames - creates a contact name for a customer
DELETE /customers/{customerid}/contactnames/{contactnameid} - deletes a contact name for a customer
PATCH /customers/{customerid}/contactnames/{contactnameid} - partially updates a contact name for a customer
For example, we have requirements where some users should be able to PATCH contact names but not create customers or POST/DELETE contact names, and only certain users should be able to DELETE contact names - so we expose routes on which permissions can be set per user, per route and verb.
The cost of such flexibility, of course, is a large number of routes.
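To make it concrete, each route+verb pair maps to its own Request DTO, so a permission can be declared at exactly that granularity with ServiceStack’s [RequiredPermission] attribute - a sketch, with permission names and DTO shapes purely illustrative:

```csharp
// Illustrative: one Request DTO per route+verb lets each operation carry
// its own permission, which can then be granted or withheld per user
[Route("/customers/{CustomerId}/contactnames/{ContactNameId}", "PATCH")]
[RequiredPermission("ContactNames.Patch")]
public class PatchContactName : IReturn<ContactNameResponse>
{
    public int CustomerId { get; set; }
    public int ContactNameId { get; set; }
    public string Name { get; set; }
}

[Route("/customers/{CustomerId}/contactnames/{ContactNameId}", "DELETE")]
[RequiredPermission("ContactNames.Delete")]
public class DeleteContactName : IReturnVoid
{
    public int CustomerId { get; set; }
    public int ContactNameId { get; set; }
}
```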