How is connection pooling handled with Database Scripts in #Script?

Or is it? This would be very important for us, as we don’t want to open a new database connection on each script execution: we’d quickly exhaust (or severely overload) the server’s available connections in a production environment with thousands of concurrent users. If we can’t get connection pooling to work for this, we may have to rethink our strategy, which would be painful since we were planning on making HEAVY use of this.

RDBMS connection pooling isn’t an ORM concern; it’s handled by the underlying ADO.NET provider, which typically uses connection pooling by default and can be customized via connection string parameters.
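For example, with Npgsql the pooling behavior is tuned directly in the connection string. A minimal sketch (host, database, credentials, and pool sizes below are placeholders):

```csharp
using ServiceStack.Data;
using ServiceStack.OrmLite;

// Pooling is on by default in Npgsql; these parameters just tune it.
var connString =
    "Host=localhost;Database=mydb;Username=appuser;Password=secret;" +
    "Pooling=true;Minimum Pool Size=1;Maximum Pool Size=100";

// OrmLite just hands this string to the ADO.NET provider,
// so the pool is configured entirely here.
var dbFactory = new OrmLiteConnectionFactory(
    connString, PostgreSqlDialect.Provider);
```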

But I guess my concern was in the context of #Script. How does #Script handle opening and closing the database connection behind the scenes? Is it any different from a regular .NET Core ServiceStack app that is always running and can therefore maintain a connection pool? Since #Script executes in its own sandbox, I was thinking it’s like an AWS Lambda function, which has no concept of “state” and therefore has the problem of not being able to maintain a connection pool between executions.

So the question here is: in the context of #Script, is it able to maintain a connection pool with the underlying library (in this case, Npgsql) between different executions?

The behavior isn’t any different because it’s #Script; connections are managed the same way, by the underlying ADO.NET provider.
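Concretely, when pooling is enabled each open/close pair in a script execution just rents and returns a physical connection. A sketch of what the provider does under the hood (connection string is a placeholder):

```csharp
using Npgsql;

var connString = "Host=localhost;Database=mydb;Username=appuser;Password=secret";

// With Pooling=true (the Npgsql default), Open() rents an existing
// physical connection from the pool when one is available...
using (var conn = new NpgsqlConnection(connString))
{
    conn.Open();
    // ... run queries ...
} // ...and Dispose() returns it to the pool rather than closing it.
```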

There isn’t anything magical about #Script; the script methods you call are just C# methods using whatever implementation you’ve registered them to use, e.g. DbScriptsAsync.cs.
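For reference, a minimal sketch of the wiring in an AppHost, assuming SharpPagesFeature and the DbScriptsAsync methods from ServiceStack.OrmLite (connection string is a placeholder):

```csharp
using Funq;
using ServiceStack;
using ServiceStack.Data;
using ServiceStack.OrmLite;
using ServiceStack.Script;

public class AppHost : AppHostBase
{
    public AppHost() : base("MyApp", typeof(AppHost).Assembly) {}

    public override void Configure(Container container)
    {
        // The db* script methods resolve this factory; the ADO.NET
        // provider behind it handles the actual pooling.
        container.Register<IDbConnectionFactory>(
            new OrmLiteConnectionFactory(
                "Host=localhost;Database=mydb;Username=appuser;Password=secret",
                PostgreSqlDialect.Provider));

        Plugins.Add(new SharpPagesFeature {
            ScriptMethods = { new DbScriptsAsync() }
        });
    }
}
```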

Got it. I just ran a load test with Postman on this and sure enough, it maintained a single connection to the database. Thanks!