Describe the bug
This is a follow-up to my previous issue #435. I mentioned there that I often ran into a crash on the following line:
postgres-nio/Sources/ConnectionPoolModule/PoolStateMachine+ConnectionGroup.swift, line 441 (at c826992):

```swift
self.stats.availableStreams -= closeAction.maxStreams
```
So I decided to modify the source code a bit, added some logging, and observed the following:
- availableStreams decreases by 1 before each ping-pong and increases back afterwards;
- when the issue happens, availableStreams is already at 0;
- the issue only happens when, according to the logs, two ping-pongs occur within a very short interval.
So I suppose this is some kind of data race around the keep-alive timers. But I have seen no pattern for why it happens, nor can I reliably reproduce it. And I suppose we would never see it as long as each ping-pong finishes within a very short period.
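To make the suspicion concrete, here is a minimal sketch of the interleaving I think I am seeing. This is not postgres-nio code; the counter type and the keep-alive steps are simplified assumptions:

```swift
// Hypothetical, simplified counter standing in for the pool's
// availableStreams bookkeeping; the real state machine is more involved.
var availableStreams: UInt16 = 1

// Keep-alive timer A fires and borrows the stream for its ping.
availableStreams -= 1   // 1 -> 0, fine

// Keep-alive timer B fires before A's pong has returned the stream.
availableStreams -= 1   // 0 -> unsigned underflow: traps, like the crash on line 441

// A's `availableStreams += 1` never gets a chance to run first.
```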
I wonder if there is anything I can do to help you track this down. I am a bit occupied right now, but I will try to add some logging and data race checks on a local fork when I get some free time.
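For the data race check, one option on a local checkout is SwiftPM's built-in Thread Sanitizer support (assuming the test suite, or a small reproducer target, exercises the keep-alive path):

```sh
# Run the package tests with the Thread Sanitizer enabled;
# any racy access to the pool's counters should be reported.
swift test --sanitize=thread
```

The same sanitizer can also be enabled from Xcode via the scheme's Diagnostics tab.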
Expected behavior
No error.
Environment
Vapor Framework version: 4.84.6
Vapor Toolbox version: 18.7.4
OS version: macOS 14.0 (23A344)