Idempotency check for duplicate create requests

I have implemented an idempotency mechanism to stop users from submitting duplicate create requests (e.g. users with slow international connections). Before I roll it out to production, I would like to ask whether there is a more recommended approach with ServiceStack.

I require a Guid idempotent token on each form instance that creates entities, which is then checked against a record of previous tokens to detect duplicate requests from the same form instance.

// Abstract base for all create requests, enforcing a token used to detect duplicate requests.
public abstract class CreateRequest
{
    public Guid IdempotentToken { get; init; }
}

// Create request for UserSetting entity
public class CreateUserSetting : CreateRequest, ICreateDb<UserSetting>, IReturn<UserSetting>, IPost
{
    public Guid AccountId { get; init; }
    public string KeyName { get; init; } = null!;
    public string KeyValue { get; init; } = null!;
}

I define global request and response filters to handle the duplicate check on create requests:

private void ConfigureGlobalRequestFilters()
{
    GlobalRequestFilters.Add((req, res, dto) =>
    {
        // reject duplicated create requests
        if (req.Dto is IPost and CreateRequest createRequest)
        {
            if (createRequest.IdempotentToken == Guid.Empty)
            {
                throw new ArgumentException("Form's Idempotent token missing");
            }

            using var db = Resolve<IDbConnectionFactory>().Open();
            var found = db.Single<Idempotent>(x => x.Token == createRequest.IdempotentToken);
            if (found is not null)
            {
                throw new OptimisticConcurrencyException("Duplicated create request submitted");
            }
        }
    });
}

private void ConfigureGlobalResponseFilters()
{
    GlobalResponseFilters.Add((req, res, dto) =>
    {
        if (req.Dto is IPost and CreateRequest createRequest)
        {
            // cache request's token if not error response
            if (!res.Dto.IsErrorResponse())
            {
                using var db = Resolve<IDbConnectionFactory>().Open();
                // cache new token for a successful create request
                db.Insert(new Idempotent
                {
                    Token = createRequest.IdempotentToken,
                    RequestTstp = DateTime.UtcNow
                });
            }
        }
    });
}

Idempotent maps to a simple MSSQL table. Each time the AppHost starts, I clear cached tokens that are more than 60 minutes old.

[Schema("Log")]
[DataContract]
public record Idempotent
{
    [DataMember, Required]
    public Guid Token { get; set; }

    [DataMember, Required]
    public DateTime RequestTstp { get; set; }
}
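
The startup cleanup mentioned above is a single OrmLite delete; roughly (a sketch only, with the 60-minute cut-off computed client-side):

// Sketch of the startup cleanup described above (assumed to run during AppHost configuration).
using (var db = Resolve<IDbConnectionFactory>().Open())
{
    var cutoff = DateTime.UtcNow.AddMinutes(-60);
    // Remove cached tokens older than 60 minutes.
    db.Delete<Idempotent>(x => x.RequestTstp < cutoff);
}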

I only use the above approach for creates; for duplicate update requests I rely on the RowVersion property.
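
For context, that relies on OrmLite's optimistic-concurrency convention of a ulong RowVersion property on the entity; a minimal sketch (the other properties and key shape are assumptions):

// Sketch only: OrmLite treats a ulong RowVersion property as a row version and throws
// OptimisticConcurrencyException when an update is submitted against a stale version.
public class UserSetting
{
    [AutoIncrement]
    public int Id { get; set; }                  // assumed key shape

    public Guid AccountId { get; set; }
    public string KeyName { get; set; } = null!;
    public string KeyValue { get; set; } = null!;

    public ulong RowVersion { get; set; }        // enables stale/duplicate update detection
}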

So, before I roll it out to production, is there a better approach with ServiceStack?

You are likely in the best position to judge that, based on your knowledge of your system. The comments below make some assumptions that might not be correct.

One thing I would note about the implementation you have posted is that, depending on how long the request takes to process, there is a window for a race condition between a successful response and the start of a duplicate request.

Also, to clarify: does the IdempotentToken column enforce a unique constraint?

If it does, a subsequent insert would throw an error; however, other operations performed in the service would not be rolled back. You would want to handle that in a read-consistent transaction or another atomic operation.
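
As a rough sketch of what that could look like with OrmLite (the helper name and entity are illustrative, not from your code):

// Rough sketch only: claim the token and perform the create inside one transaction,
// so a unique/primary-key violation on Token rolls back the whole operation.
void CreateWithToken(IDbConnectionFactory dbFactory, Guid idempotentToken, UserSetting entity)
{
    using var db = dbFactory.Open();
    using var trans = db.OpenTransaction();

    // Fails with a key violation if another request has already claimed this token.
    db.Insert(new Idempotent
    {
        Token = idempotentToken,
        RequestTstp = DateTime.UtcNow
    });

    db.Insert(entity); // the actual create work for the request

    trans.Commit();    // nothing is persisted if either insert throws
}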

Another way to do this would be to use Redis: for each request, increment a key derived from the GUID and check the returned count; if it is one, continue the request, and if it is above one, abort. Redis operations are atomic, so you would not need both a request and a response filter, just a request filter. A similar approach is often used for rate limiting.
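
A rough sketch of that idea with ServiceStack.Redis (the key name and expiry are illustrative assumptions):

// Sketch only: an atomic Redis counter per idempotent token, checked in the request filter.
GlobalRequestFilters.Add((req, res, dto) =>
{
    if (req.Dto is IPost and CreateRequest createRequest)
    {
        using var redis = Resolve<IRedisClientsManager>().GetClient();
        var key = $"idempotent:{createRequest.IdempotentToken}";   // illustrative key name

        // INCR is atomic: only the first request for this token sees a count of 1.
        var count = redis.IncrementValue(key);
        redis.ExpireEntryIn(key, TimeSpan.FromMinutes(60));        // assumed dedupe window

        if (count > 1)
            throw new OptimisticConcurrencyException("Duplicated create request submitted");
    }
});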

One catch with the Redis approach is that under high workloads, expiring a large number of keys can cause performance issues, though I've only seen this become a problem at millions of keys per minute. It can be mitigated by storing keys in fixed time-slot blocks within hash keys, so that large quantities of keys expire at the same time.
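
For example (sketch only, assuming 1-minute slots), the counters can live inside per-slot hashes so each slot's keys expire as one unit:

// Sketch only: bucket token counters into fixed time-slot hashes so keys expire in blocks.
// Note: a duplicate arriving after a slot boundary would also need a check of the previous slot.
using var redis = Resolve<IRedisClientsManager>().GetClient();
var slot = DateTime.UtcNow.ToString("yyyyMMddHHmm");            // 1-minute slot (assumed)
var slotKey = $"idempotent:slot:{slot}";                        // illustrative key name

var count = redis.IncrementValueInHash(slotKey, createRequest.IdempotentToken.ToString(), 1);
redis.ExpireEntryIn(slotKey, TimeSpan.FromHours(1));            // the whole slot expires together

if (count > 1)
    throw new OptimisticConcurrencyException("Duplicated create request submitted");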

As stated above, this advice comes with its own set of assumptions that might not be correct for your situation, but I hope it has helped either way.

Thank you for your response.

Following your point about the potential race condition, I have refactored the design above to use explicit transactions and to cache the request token immediately, rather than only after a successful response. If the response is an error, the token is deleted so the same form instance can resubmit (i.e. after user corrections).

The IdempotentToken is the primary key of the database table.
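
In outline (a simplified sketch, not the exact production code), the refactored filters look something like this:

// Simplified sketch: the request filter claims the token up front inside a transaction
// (Token is the primary key, so a duplicate insert fails), and the response filter
// releases the token again if the request ended in an error.
GlobalRequestFilters.Add((req, res, dto) =>
{
    if (req.Dto is IPost and CreateRequest createRequest)
    {
        if (createRequest.IdempotentToken == Guid.Empty)
            throw new ArgumentException("Form's Idempotent token missing");

        using var db = Resolve<IDbConnectionFactory>().Open();
        using var trans = db.OpenTransaction();

        // A primary-key violation here means another request has already claimed the token.
        db.Insert(new Idempotent
        {
            Token = createRequest.IdempotentToken,
            RequestTstp = DateTime.UtcNow
        });

        trans.Commit();
    }
});

GlobalResponseFilters.Add((req, res, dto) =>
{
    if (req.Dto is IPost and CreateRequest createRequest && res.Dto.IsErrorResponse())
    {
        using var db = Resolve<IDbConnectionFactory>().Open();

        // Free the token so the same form instance can resubmit after corrections.
        db.Delete<Idempotent>(x => x.Token == createRequest.IdempotentToken);
    }
});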

Redis always seems like a great option; unfortunately, it is not approved software in my environment.