Invariant failure in connector pool #2552

Open
austindrenski opened this issue Jul 31, 2019 · 3 comments

@austindrenski (Member) commented Jul 31, 2019

I don't have time to investigate right now, so dropping this here.

Just saw a surprising failure from AZP while testing the Linux + PG10 build. Could be something transient, but I thought those invariant checks were... invariant.

Failed   GetConnectorFromExhaustedPool
Error Message:
 Npgsql.NpgsqlException : Busy is negative
Stack Trace:
   at Npgsql.ConnectorPool.CheckInvariants(PoolState state) in /home/vsts/work/1/s/src/Npgsql/ConnectorPool.cs:line 627
   at Npgsql.ConnectorPool.Release(NpgsqlConnector connector) in /home/vsts/work/1/s/src/Npgsql/ConnectorPool.cs:line 456
   at Npgsql.NpgsqlConnection.Close(Boolean wasBroken) in /home/vsts/work/1/s/src/Npgsql/NpgsqlConnection.cs:line 616
   at Npgsql.NpgsqlConnection.Close() in /home/vsts/work/1/s/src/Npgsql/NpgsqlConnection.cs:line 601
   at Npgsql.NpgsqlConnection.Dispose(Boolean disposing) in /home/vsts/work/1/s/src/Npgsql/NpgsqlConnection.cs:line 656
   at System.ComponentModel.Component.Dispose()
   at Npgsql.Tests.PoolTests.GetConnectorFromExhaustedPool() in /home/vsts/work/1/s/test/Npgsql.Tests/PoolTests.cs:line 78

See: https://dev.azure.com/npgsql/npgsql/_build/results?buildId=283

edit: Also failed in the next build, so doesn't look transient.

edit (again): This also doesn't appear to be PG10 related. I've seen this fail a few times on 9.6 too. Nothing on 11 yet, but that could be build-order related (e.g. the PG11 build fires off before the others).
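
For anyone triaging, here's a rough sketch of the shape of the failing check, assuming the pool tracks Idle/Busy counters and Release decrements Busy before validating. Only the names PoolState, Busy, Idle, Release, and CheckInvariants come from the stack trace above; everything else is illustrative, not the actual ConnectorPool code. A double release of the same connector would decrement Busy twice and trip exactly this check.

```csharp
// Rough sketch only -- not Npgsql's actual ConnectorPool. The names
// PoolState, Busy, Idle, Release and CheckInvariants come from the
// stack trace above; everything else is illustrative.
using System;

class ConnectorPoolSketch
{
    struct PoolState
    {
        public int Idle;
        public int Busy;
    }

    PoolState _state;
    readonly object _lock = new object();

    // Called when a connection closes and its connector goes back to
    // the pool. If Close runs twice for the same connector, Busy is
    // decremented twice and the invariant check below throws.
    public void Release()
    {
        lock (_lock)
        {
            _state.Busy--;
            _state.Idle++;
            CheckInvariants(_state);
        }
    }

    static void CheckInvariants(PoolState state)
    {
        if (state.Busy < 0)
            throw new InvalidOperationException("Busy is negative");
        if (state.Idle < 0)
            throw new InvalidOperationException("Idle is negative");
    }
}
```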

/cc @roji @YohDeadfall @NinoFloris

@austindrenski added the bug label Jul 31, 2019

@roji (Member) commented Jul 31, 2019

Ouch. @NinoFloris you may be interested, although your new implementation makes this somewhat irrelevant... (should get back to that very soon).

@NinoFloris (Contributor) commented Jul 31, 2019

@roji This may actually be the double close issue I also fixed in the PR. If at all possible, we should do some extra work there to make it 'impossible' to trigger a double release by accident. I'm not at all certain that every double close (which then results in a double release) is patched, so something that heads it off entirely would make me much more comfortable.
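
Something like the following is the kind of belt-and-braces guard I mean: make release itself idempotent with an atomic per-connector flag, so a second Close/Release becomes a no-op instead of decrementing Busy again. Sketch only; the flag and all names here are hypothetical, not existing Npgsql API.

```csharp
// Sketch of an idempotent-release guard -- hypothetical, not existing
// Npgsql API. A CAS on a per-connector flag ensures the pool counters
// are adjusted exactly once, however many times Close/Release runs.
using System.Threading;

class GuardedConnector
{
    int _released; // 0 = checked out, 1 = already returned to the pool

    public void Release()
    {
        // Only the first caller wins the CAS; any later (accidental)
        // double release returns without touching the pool's counters.
        if (Interlocked.CompareExchange(ref _released, 1, 0) != 0)
            return;

        ReturnToPool();
    }

    public void CheckOut()
        => Volatile.Write(ref _released, 0); // re-arm when handed out again

    void ReturnToPool()
    {
        // hypothetical: decrement Busy, re-enqueue as idle, etc.
    }
}
```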

@roji (Member) commented Aug 1, 2019

@NinoFloris agreed. This also makes me more comfortable with the idea of bringing your new implementation into a patch version - although it would still be good to have some stress tests.
