Encrypted messaging and authentication timeout


We have a .NET Framework API running ServiceStack 5.4.0, using in-memory authentication. There is a clear pattern of the authentication timing out every hour. The API is called by a function app every 10 minutes.

Below is the registration of the authentication provider:

Plugins.Add(new AuthFeature(() => new AuthUserSession(),
	new IAuthProvider[] {
		new BasicAuthProvider(),
		new CredentialsAuthProvider()
	}) {
	IncludeAssignRoleServices = false,
	IncludeRegistrationService = false
});

var userRep = new InMemoryAuthRepository();
var auth = new UserAuth {
	UserName = "AccountingApiUser"
};
userRep.CreateUserAuth(auth, settingsService.GetUserAuthPassword());

container.Register<ICacheClient>(new MemoryCacheClient());

How it’s called:

public string GetPaymentStatusFromBankConnect()
{
	return Get(new BankConnectStatus()).Payload;
}

private T Get<T>(IReturn<T> input)
{
	IEncryptedClient encryptedClient = InstantiateEncryptedClient();

	return encryptedClient.Get(input);
}

private IEncryptedClient InstantiateEncryptedClient()
{
	var accountingApiUrl = _settingsService.GetAccountingApiUrl();
	var accountingApiUser = _settingsService.GetAccountingApiUser();
	var accountingApiPassword = _settingsService.GetAccountingApiPassword();

	var azureStorageService = new JsonServiceClient(accountingApiUrl);

	//Instantiate Client
	string publicKeyXml = azureStorageService.Get(new GetPublicKey());
	IEncryptedClient encryptedClient = azureStorageService.GetEncryptedClient(publicKeyXml);

	//Authorize Client
	var authResponse = encryptedClient.Send(new Authenticate
	{
		provider = "credentials",
		UserName = accountingApiUser,
		Password = accountingApiPassword
	});
	encryptedClient.SessionId = authResponse.SessionId;

	return encryptedClient;
}

The information below was extracted from Application Insights.
The first image, with the spikes, shows the authentication failing every hour.
The 1.7 min duration is the authentication attempt.

What could be the issue?

I’m only guessing from the subset provided, but maybe the authenticated UserSession on the server has expired?

Check on the request that’s failing that there’s a UserSession for the SessionId in the registered ICacheClient using the following cache key format:

	urn:iauthsession:{sessionId}
If you’re using the default or In Memory cache then every time your Server AppDomain gets recycled all Authenticated User Sessions are lost. You can switch to any of the other distributed caching providers to preserve User Sessions beyond App restarts.
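For example, a minimal sketch of swapping the in-memory cache for a Redis-backed one (assuming the ServiceStack.Redis package is referenced; the connection string is a placeholder):

```csharp
// Replace MemoryCacheClient with a Redis-backed cache so that
// authenticated UserSessions survive AppDomain recycles.
container.Register<IRedisClientsManager>(new RedisManagerPool("localhost:6379"));
container.Register<ICacheClient>(c => c.Resolve<IRedisClientsManager>().GetCacheClient());
```

With this in place an App Service restart no longer invalidates every SessionId issued before the restart.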

The API is an Azure App Service which is set to “Always On”, so it shouldn’t be recycling. However, even if it is, over a minute still seems a long time to generate a new UserSession, or even to start up. We are not re-using SessionIds, just calling authenticate every time, for now.

Initialization logs:

It fails when trying to authenticate (receive a SessionId). Since it times out, there isn’t a SessionId I can compare against.

Moreover, as illustrated in the “End-to-end transaction details”, there is a subsequent authentication attempt immediately after the first timeout, which executes in 108 ms. I am not sure what this could indicate, perhaps ServiceStack was locked for some reason?

Is there any way I can help you help me debug this further?

I’ve no idea what the issue is; I’m only guessing that’s the cause since you say it fails every hour, which sounds like the session is no longer in the cache. The first thing I would check is that the SessionId of the failing request still maps to a valid UserSession.

I’d then be looking at the full HTTP Request/Response headers of the failing request to see if it yields any more info on what the cause is. Without it you’d just be guessing in the dark.

Since authenticating on each request is fairly expensive, I’d use either an API Key or a JWT, which you can send with the Encrypted ServiceClient using the BearerToken property. The nice thing about JWT is that it’s stateless and doesn’t require the server to preserve the Authenticated UserSession.
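A rough sketch of that approach, assuming the server has the JwtAuthProvider registered (the variable names are taken from the code earlier in the thread):

```csharp
// Authenticate once with credentials; the response carries a JWT when
// the JwtAuthProvider is registered on the server.
var authClient = new JsonServiceClient(accountingApiUrl);
var authResponse = authClient.Send(new Authenticate {
	provider = "credentials",
	UserName = accountingApiUser,
	Password = accountingApiPassword
});

// Attach the stateless JWT to the encrypted client instead of a SessionId,
// so no server-side UserSession needs to survive between calls.
var encryptedClient = authClient.GetEncryptedClient(publicKeyXml);
encryptedClient.BearerToken = authResponse.BearerToken;
```

The JWT can then be cached and reused across calls until it expires, instead of re-authenticating on every request.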

We did some digging and it seems that the issue is with our code, not ServiceStack. The caller of the API receives an error 500:

System.Net.WebException: The remote server returned an error: (500)  .
   at System.Net.HttpWebRequest.GetResponse()
   at ServiceStack.ServiceClientBase.Send[TResponse](Object request)

Unfortunately, there is nothing in the exception. Moreover, the API itself does not log the exception, even with exception handlers in place:

var logFactory = container.Resolve<ILogFactory>();
ServiceExceptionHandlers.Add((httpReq, request, exception) =>
{
	logFactory.GetLogger(typeof(ServiceStack.Host.HandleServiceExceptionDelegate)).Warn($"Failed to handle request to: '{httpReq.RawUrl}', type '{httpReq.GetHttpMethodOverride()}' with data: '{request}'", exception);
	return null; //continue with default Error Handling
});
UncaughtExceptionHandlers.Add((req, res, operationName, ex) =>
{
	logFactory.GetLogger(typeof(ServiceStack.Host.HandleUncaughtExceptionDelegate)).Error($"Failed to handle request to: '{req.RawUrl}', type '{req.GetHttpMethodOverride()}' with data: '{req.Dto}'", ex);
});

The issue seems to be with the below code, and may have something to do with deadlocks and blocking in sync code:

var kv = new KeyVaultClient(_encryptionHelper.GetToken);
var sec = kv.GetSecretAsync(_settingsService.GetSecretUri()).GetAwaiter().GetResult();

Live Metrics in Azure shows that Azure returns a 401 invalid token exception. It would be nice to have it logged regardless. We’ll update once we have a fix.


Initially we thought the issue was with encrypted messaging and authentication; however, that is not the case, so the title of this post is wrong.

By default the ServiceStack client timeout is 100 seconds. We increased it to 10 minutes, and saw that our service returned an error 500, and that the failing requests always lasted around 3.8 minutes. The default request timeout for an Azure App Service is 230 seconds (≈3.8 minutes), and it returns an error 500 after this timeout. This also means that it tears down the request before our logging code is reached, which is why nothing was logged.

Investigating the issue further, we found that a deadlock, caused by calling the KeyVaultClient from sync code, was the culprit. The exact description and solution is here: https://stackoverflow.com/questions/32594642/azure-keyvault-active-directory-acquiretokenasync-timeout-when-called-asynchrono

ServiceStack supports async services so we’re gonna use that instead.

Cheers @mythz


OK great, glad you’ve managed to identify the issue. Thanks for the follow-up info!