
Expected behaviour for ConnectionPooling #1057

@trynn

Description

Hi,
while upgrading from 7.10.1.0 to 9.0.1 we are having issues with connection pooling, mostly running into the following exception:

Application_Start System.InvalidOperationException
Connection pool is full.
   at FirebirdSql.Data.FirebirdClient.FbConnectionPoolManager.Pool.GetOrCreateConnectionImpl(Boolean& createdNew)
   at FirebirdSql.Data.FirebirdClient.FbConnectionPoolManager.Pool.GetConnection(Boolean& createdNew)
   at FirebirdSql.Data.FirebirdClient.FbConnection.<OpenAsync>d__74.MoveNext()

Now, I'm aware that this is thrown when the connection pool reaches the configured MaxPoolSize value (100 by default).
We have to restrict this even further, to 10 connections per active application, since Firebird itself has massive issues when the number of concurrently active connections rises above 300 (in our experience). But I'm digressing ..

We basically run all database queries similar to this example:

```csharp
// code for 9.0.1
using var connection = new FbConnection(connectionBuilder.ConnectionString);
await connection.OpenAsync();

using var transaction = await connection.BeginTransactionAsync(DbTransactionOptions.Read);
await executeSomeSqlWithTransaction(transaction);
await transaction.CommitAsync();

// code replacement for 7.10.1.0 (no async transaction)
// using var transaction = connection.BeginTransaction(DbTransactionOptions.Read);
// transaction.Commit();
```

So each query retrieves a connection from, and returns it to, the FbConnectionPoolManager.
I don't think disabling connection pooling entirely is viable, so I haven't evaluated that option further.

behaviour change

With 7.10.1.0 we were able to configure MaxPoolSize=1 and any application would still work; database queries would somehow queue/block. They got serialized and shared the existing connection. (I guess it has something to do with the non-async transaction.)

Since 9.0.1 (I guess since v8) this has changed. As soon as the number of queries executing in parallel exceeds MaxPoolSize, the "Connection pool is full." exception is thrown. Logically that's correct, since all connections are in use.
But with a limited resource of parallel connections, that's a real-world problem.

In an async web-API scenario it's easily possible that 100+ requests arrive concurrently, and each requires a database query in the backend. At some point, some kind of wait()/blocking has to occur.
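To make that concrete: this is roughly the gate every application would otherwise have to build itself. A sketch (my own code, not anything FirebirdClient provides) using a SemaphoreSlim sized to MaxPoolSize, so that excess callers wait instead of getting the exception:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using FirebirdSql.Data.FirebirdClient;

public static class GatedDb
{
    // Hypothetical application-level workaround: cap concurrent connection
    // use at the same number as MaxPoolSize (10 in our case), so extra
    // callers queue on the semaphore instead of hitting "Connection pool is full.".
    private static readonly SemaphoreSlim ConnectionGate = new(initialCount: 10, maxCount: 10);

    public static async Task RunQueryAsync(
        string connectionString,
        Func<FbTransaction, Task> executeSomeSqlWithTransaction)
    {
        await ConnectionGate.WaitAsync(); // this is the wait()/blocking step
        try
        {
            using var connection = new FbConnection(connectionString);
            await connection.OpenAsync();

            using var transaction = await connection.BeginTransactionAsync();
            await executeSomeSqlWithTransaction(transaction);
            await transaction.CommitAsync();
        }
        finally
        {
            ConnectionGate.Release();
        }
    }
}
```

With MaxPoolSize=1 and a semaphore of 1 this reproduces the old serialized behaviour; but it duplicates what, in my view, the pool itself should do.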

my personal expectation

I would have assumed that when 2 async queries are requested and only one DB connection is allowed/exists, the queries get serialized and one has to wait until the first completes.

question

What is the expected behaviour of connection pooling inside FirebirdSql.Data.FirebirdClient?
Is this a feature still missing from the library, or where am I wrong here?

While polling connection.Open() until no exception is thrown would be possible, I don't think it's a good idea to handle this with exceptions (overhead), nor do I think this should be implemented by every application itself.
For me this seems like a common problem. (It's covered/solved in Npgsql, for example.)
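For completeness, the polling workaround I mean would look roughly like this (my own sketch, not a FirebirdClient API) — which also shows why I'd rather avoid it: every failed attempt pays for a thrown and caught exception:

```csharp
using System;
using System.Threading.Tasks;
using FirebirdSql.Data.FirebirdClient;

public static class RetryOpen
{
    // Hypothetical retry loop around OpenAsync. maxAttempts and the backoff
    // are arbitrary values of my own choosing.
    public static async Task<FbConnection> OpenWithRetryAsync(
        string connectionString, int maxAttempts = 50)
    {
        for (var attempt = 1; ; attempt++)
        {
            var connection = new FbConnection(connectionString);
            try
            {
                await connection.OpenAsync();
                return connection; // caller is responsible for disposing
            }
            catch (InvalidOperationException) when (attempt < maxAttempts)
            {
                // "Connection pool is full." — back off and try again.
                connection.Dispose();
                await Task.Delay(TimeSpan.FromMilliseconds(25 * attempt));
            }
        }
    }
}
```

It works, but it burns CPU on exceptions and adds latency from the fixed delays, instead of the pool simply queueing waiters.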
