In `handle_event({:call, from}, {:client_call, bin, ready?}, data)` (the handler for database responses), if the client's mode is `transaction` and we received `ready?: true` from the database, we check the `db_pid` back in and set it to `nil`.
The next time the `db_pid` is checked out and stored in state is when the client makes another request. However, there are situations where the database calls the ClientHandler again before the client makes another request.
In other words, receiving `ReadyForQuery` is not a guarantee that the database won't send more data before the next client request.
Currently, `handle_db_pid` fails when this happens, because we end up calling `Process.unlink(nil)`.
The proposed fix is to ignore a `nil` `db_pid` in `handle_db_pid`. I think that's OK? But I don't know if getting into this state was intended behavior.
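A minimal sketch of that guard, assuming `handle_db_pid` takes the mode, the pool, and the current `db_pid`, and currently unlinks/checks in unconditionally (the clause shapes and arity here are illustrative, not the actual Supavisor code):

```elixir
# Hypothetical sketch: a late database callback after checkin arrives with
# db_pid already nil, so match that case first and do nothing.
defp handle_db_pid(:transaction, _pool, nil), do: nil

defp handle_db_pid(:transaction, pool, db_pid) do
  # Normal path: detach from the backend process and return it to the pool.
  Process.unlink(db_pid)
  :poolboy.checkin(pool, db_pid)
  nil
end

defp handle_db_pid(:session, _pool, db_pid), do: db_pid
```

Because Elixir tries function clauses top to bottom, the `nil` clause must precede the general `:transaction` clause, otherwise the `Process.unlink(nil)` crash described above still occurs.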
To Reproduce
What seems to be happening:
1. Db sends a `ReadyForQuery` after finishing the last query.
2. Client sends `close` and `sync`, seemingly to "wrap up" the last query. In the same bin, Client also sends `parse`, `describe`, `sync`, i.e. the next query.
3. Db sends back `close_complete`, then `ReadyForQuery`, in its own bin.
4. Then it sends back the response to the subsequent query.
This is happening with the Prisma ORM client.
Expected behavior
ClientHandler does not crash when Db responds more than once per client query.
I believe the db_handler should reset the "caller" value after `ready_for_query`, because there can be a situation where the client is already linked to another database process while the old one continues to send messages.
@abc3 I can take a stab at this with a bit more info. Are you imagining that while the Client may have "moved on" and is now associated with a new db_handler, the old db_handler, which still has messages incoming for the Client, would stay associated with the Client?
The difficulty I see is I'm not sure how we'll know that the db_handler is done responding to a Client. In a pipelined query, as I mentioned above, we have one incoming request from the Client that is going to instigate two separate db_handler responses, the first of which contains a ready_for_query.
Sorry for the late response, it has been a tough week. According to the PostgreSQL documentation, `ready_for_query` indicates that the query cycle is finished and the backend is ready for new queries.
So, after receiving this message, we can be sure that the client's query is finished. We should then clean up `data.caller` to avoid scenarios where DbHandler might receive an asynchronous message and forward it to the last linked client.
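A hedged sketch of that cleanup in the DbHandler `gen_statem` callback. The message shapes, the `caller` field, and the `ready_for_query?/1` helper are assumptions based on this thread, not the actual module:

```elixir
# Illustrative handler: forward backend data to the caller, and once the
# chunk ends in ReadyForQuery, drop the caller so any stray follow-up
# messages are not forwarded to a client that may already be linked to
# another DbHandler.
def handle_event(:info, {:tcp, _sock, bin}, _state, %{caller: caller} = data)
    when is_pid(caller) do
  # ready_for_query?/1 is a hypothetical helper that checks whether the
  # binary ends with a ReadyForQuery ('Z') message.
  ready? = ready_for_query?(bin)
  send(caller, {:db_response, bin, ready?})

  if ready?,
    do: {:keep_state, %{data | caller: nil}},
    else: {:keep_state, data}
end

# With no caller set, late backend messages are dropped instead of crashing.
def handle_event(:info, {:tcp, _sock, _bin}, _state, %{caller: nil} = data) do
  {:keep_state, data}
end
```

The design question raised above still applies: in a pipelined exchange the first response chunk can contain a `ReadyForQuery` while a second chunk is still in flight, so clearing `caller` on the first one would drop the follow-up rather than deliver it.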