[q] custom cancellation func #679

Open
utrack opened this issue Feb 11, 2020 · 26 comments

@utrack
Contributor

utrack commented Feb 11, 2020

Hi! While migrating to pgx/v4 I noticed that the pgx.ConnConfig.CustomCancel hook is gone. How do I set up a custom cancel func with v4?
I'd like to disable cancellation signaling completely, since it may kill the wrong query when using pgBouncer.

@jackc
Owner

jackc commented Feb 14, 2020

CustomCancel was removed in v4 because v4 changed strategies for context cancellation: instead of cancelling the in-progress query and attempting to recover, we just close the connection. So there was no need for it. However, this led to the problem that overly long-running queries were still running server side. Now we close the connection but also send a cancel request to the server.

I suppose we could add something similar back... But I find it surprising that PgBouncer works that way. I would expect that it would only forward the cancellation request to the PostgreSQL server if the client side connection that received the cancel request had a query in progress. Doing otherwise seems to guarantee an error.
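
For reference, the cancel request here is the ordinary PostgreSQL CancelRequest message, sent over a separate connection. Below is a minimal sketch of sending one explicitly with the public pgconn API (not pgx's internal code; the connection string and local test database are assumptions):

package main

import (
	"context"
	"log"
	"time"

	"github.com/jackc/pgconn"
)

func main() {
	ctx := context.Background()
	// Hypothetical local test database; adjust the connection string as needed.
	conn, err := pgconn.Connect(ctx, "host=127.0.0.1 dbname=pgx_test")
	if err != nil {
		log.Fatal(err)
	}

	// Start a long-running query without waiting for its result.
	results := conn.Exec(ctx, "select pg_sleep(60)")

	// Give the query a moment to start executing server side.
	time.Sleep(time.Second)

	// Ask the server to cancel whatever this connection is currently running.
	// CancelRequest dials a separate connection and sends the CancelRequest
	// message with the backend PID and secret key from connection startup.
	cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	if err := conn.CancelRequest(cancelCtx); err != nil {
		log.Printf("cancel request failed: %v", err)
	}

	// Drain the result; the query should fail with a query_canceled error.
	if _, err := results.ReadAll(); err != nil {
		log.Printf("query ended with: %v", err)
	}

	conn.Close(ctx)
}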

@utrack
Contributor Author

utrack commented Feb 18, 2020

Aha! I didn't know the strategy had changed, thanks.
I don't think there are any guards against running cancel on idle connections, but it won't save us from this race anyway: https://hsto.org/webt/us/xf/v_/usxfv_i0ze_axpmjtlir183reiu.png

Here, the PIDs are the true Postgres PIDs (each belongs to a connection between pgBouncer and the DB).

While pgx fires the cancel and kills its connections to pgBouncer, pgBouncer reuses its connections to the DB.
A cancel from a client is mapped to a true PID inside pgBouncer; a cancel packet is then sent to Postgres, but that PID might be in use by another query by then.

@jackc
Owner

jackc commented Feb 22, 2020

That really surprises me that PgBouncer would forward cancellations in that manner. It seems that it would be impossible to use that functionality safely.

I would have expected PgBouncer to send its own PID and secret key to the client and when PgBouncer receives a cancel request it would map it to the underlying PG connection only if that PgBouncer connection was busy.

Anyway, as far as custom cancellation behavior goes, we are always going to close the connection, but I don't mind making the send-cancel-request-on-close behavior customizable.

@nvorobev

@jackc please bring back support for CustomCancel. Since pgx v4 has no CustomCancel support, it always sends a cancel request, and with pgBouncer this fills up the max_client_conn limit and, as a result, blocks new connections.

@eshkinkot

eshkinkot commented Apr 17, 2020

A cancel from a client is mapped to a true PID inside pgBouncer

Yes, but it is mapped by client->cancel_key:
https://github.com/pgbouncer/pgbouncer/blob/4ac6b897da791422362787959ffdf59ccc4835a3/src/objects.c#L1301

and this client->cancel_key is generated for each client; it is not the server cancel_key:
https://github.com/pgbouncer/pgbouncer/blob/4ac6b897da791422362787959ffdf59ccc4835a3/src/proto.c#L228

but that PID might be in use by another query by then

BTW, you have this race condition in plain Postgres too, because, as you said above, a cancel request is tied to a PID, not to some "query id", so you do not know which query will be running in the Postgres backend when it handles the cancellation.

@jackc
Owner

jackc commented Apr 19, 2020

and this client->cancel_key is generated for each client; it is not the server cancel_key:

I would expect pgx to only see the client cancel key then -- and PgBouncer to be smart enough not to send the cancel to the Postgres server unless that client was busy.

BTW, you have this race condition in plain Postgres too, because, as you said above, a cancel request is tied to a PID, not to some "query id", so you do not know which query will be running in the Postgres backend when it handles the cancellation.

This isn't a problem with regular pgx because any cancellation closes the connection. Sending the cancel signal to the server is merely a courtesy to prevent long running queries from continuing to consume server time.


Maybe I don't understand the whole situation, but I still find it difficult to comprehend that this PgBouncer behavior is intentional and not a bug somewhere.

I'm not opposed to adding an on-cancel function -- though since connections are always closed on cancel, the only thing it could really do is decide whether or not to send the cancel signal -- but I'd prefer to solve the root problem rather than add an override to work around it.

@jackc
Owner

jackc commented Apr 24, 2020

I just spent several hours digging into this and am unable to reproduce it...

First, I wrote an example program that explicitly tries to create this problem.

https://github.com/jackc/pgx_issues/blob/8eb2ebfc8b41c96c4a642e4d078c293f69af320c/main.go

When run it creates two connections to a PgBouncer instance and displays the PID reported through the wire protocol and the PID reported by select pg_backend_pid(). This lets me see that PgBouncer is multiplexing them on the same underlying connection.

Then I start a long running query on one PgBouncer connection and cancel it on the other. PgBouncer correctly does not send the cancel to PostgreSQL.

jack@glados ~/dev/pgx_issues ±master⚡ » DATABASE_URL="host=127.0.0.1 port=6432 database=pgx_test user=pgx_md5 password=secret" go run main.go
2020/04/23 18:49:10 [busyConn]: protocolPID=841021419, sqlPID=63297, secretKey=491727008
2020/04/23 18:49:10 [cancelConn]: protocolPID=1124499268, sqlPID=63297, secretKey=4189145394
2020/04/23 18:49:10 [cancelConn]: waiting for other conn to run query...
2020/04/23 18:49:10 [busyConn] executing: select pg_sleep(10)
2020/04/23 18:49:12 [cancelConn]: sending CancelRequest
2020/04/23 18:49:12 [cancelConn]: CancelRequest err: <nil>
2020/04/23 18:49:21 [busyConn] was not interrupted
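
For readers who don't want to open the link, here is a rough, hypothetical sketch of that kind of PID check (not the linked program itself; the connection string is the one from the command above):

package main

import (
	"context"
	"log"

	"github.com/jackc/pgconn"
)

func main() {
	ctx := context.Background()
	// Connection string pointing at a local PgBouncer instance.
	connString := "host=127.0.0.1 port=6432 database=pgx_test user=pgx_md5 password=secret"

	for _, name := range []string{"connA", "connB"} {
		conn, err := pgconn.Connect(ctx, connString)
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close(ctx)

		// PID from the BackendKeyData message sent during startup. Behind
		// PgBouncer this is a value PgBouncer generates per client connection.
		protocolPID := conn.PID()

		// PID of the PostgreSQL backend that actually executes the query.
		result := conn.ExecParams(ctx, "select pg_backend_pid()", nil, nil, nil, nil).Read()
		if result.Err != nil {
			log.Fatal(result.Err)
		}

		log.Printf("[%s] protocolPID=%d, sqlPID=%s", name, protocolPID, result.Rows[0][0])
	}
}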

But maybe somehow this only happens under high concurrency or when using context cancellation rather than explicitly invoking CancelRequest().

So I wrote another test https://github.com/jackc/pgx_issues/blob/976053da4b9972d6a6e5fed1fc7b7a8562b3e913/main.go

This is a stress test that starts many workers, each of which randomly executes queries, some of which are cancelled and some of which are not. It checks that no queries are unexpectedly cancelled -- and none are.

jack@glados ~/dev/pgx_issues ±master » DATABASE_URL="host=127.0.0.1 port=6432 database=pgx_test user=pgx_md5 password=secret" CONN_COUNT=30 QUERIES_PER_CONN=1000 go run main.go
2020/04/23 19:34:10 connCount: 30, queriesPerConn: 1000
2020/04/23 19:35:48 No cancel errors detected.

At this point I do not see a bug in PgBouncer or in pgx. I'm going to need a test case that reproduces this problem before investing any more time in this.

@Timosha

Timosha commented Apr 24, 2020

@jackc can you show your pgbouncer.ini configuration file in your test?

@jackc
Owner

jackc commented Apr 24, 2020

Here it is with the comments removed.

[databases]
pgx_test = host=127.0.0.1 dbname=pgx_test

[pgbouncer]
logfile = /usr/local/var/log/pgbouncer.log
pidfile = /usr/local/var/run/pgbouncer.pid
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
auth_file = /usr/local/etc/pgbouncer_userlist.txt
pool_mode = statement
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 20

@iosadchiy

Hi! I tried to look a bit deeper into the issue. I guess you're right, it's not possible to cancel someone else's query:

  1. pgbouncer generates a unique cancel_key for each client connection, which is different from server connections' cancel_keys
  2. pgbouncer will only cancel a request if the client's connection is still linked to the server one
  3. pgconn closes connection immediately after sending the cancellation request, so no more requests will be sent to this connection

But the problem is still there. Here's what I see in pgbouncer when load testing one of our services which uses pgx/v4/pgxpool:

[pgbouncer metrics chart: blue lines = cl_active, red dots = cl_waiting (right scale)]

The pool is limited to 10 both on the service and on the DB. The maximum allowed client connections to pgbouncer is 100. As you can see, there are too many cl_waiting connections (up to almost 60, expected 0).

I've put together a test case to demonstrate the issue: https://github.com/iosadchiy/pgx_pgbouncer_issue. It runs 100 workers sending queries that are always canceled, with pool size 10. If you run it, pgbouncer will show these metrics:

 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us |  pool_mode
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-------------
 pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0 |          0 | statement
 postgres  | postgres  |         2 |         97 |         0 |       0 |       0 |         0 |        1 |       0 |     748640 | transaction

As you can see, there are too few cl_active and too many cl_waiting connections (expected 10 and 0 respectively).

As far as I understand, the problem lies in the interaction between pgxpool and pgconn:

  • pgxpool knows nothing about query cancellation; it can only control the total number of PgConn objects
  • when a PgConn object needs to close its connection, it creates a new connection in order to cancel the query running on the old one
  • at the same moment, pgxpool creates a new PgConn object (and a new connection) because it thinks there is some space in the pool

Thus, when a lot of requests are being canceled by timeout (that is, under heavy load), pgx creates far more connections than the maximum pool size allows and fills pgbouncer's pool with cancellation requests. The service's performance degrades dramatically due to the lack of cl_active connections.

One possible solution would be simply not to use query cancellation when working with pgbouncer, that is, to bring CustomCancel back. It works pretty well for us with pgx v3.

Another solution could be to allow pgxpool to control the query cancellation process and to know the total number of connections including cancellation requests.

As of now, we're kind of blocked by this problem in the long run - we cannot migrate to pgx/v4 until there is a good workaround.

Hope this was useful. Let me know what you think!

@jackc
Owner

jackc commented Aug 21, 2020

I was able to duplicate the problem using your example.

But I don't think the cancel request sent when closing the connection is the problem. I experimented with removing it, and the catastrophic behavior still appeared very rapidly.

I think the root problem is that closing a connection is handled asynchronously. As far as pgxpool is concerned the connection is immediately closed, but actually the connection lives on for a little while. But there is now room in the pool, so it establishes another connection. I think removing the query cancellation could even make it worse: the "closing" connections could live for much longer if they didn't actually close until their last query finished.

It's a bit of a tricky situation. When a query is cancelled the function must return immediately. It can't wait for the connection to close cleanly. That is why it is done in the background. But that causes an unfortunate side effect for the pool.


Okay. I think I have a solution. There were a few changes necessary.

  1. In pgconn I added CleanupDone(). It returns a channel that is closed when the asynchronous close / cleanup is complete.
  2. pgxpool now waits for CleanupDone in its connection destructor.
  3. puddle now does not remove a resource from the pool until its destructor has completed.

With these changes cl_active + cl_waiting was never greater than 10.
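
Roughly, the new contract looks like the following sketch (not the actual pgxpool code; destroyConn is a hypothetical destructor):

package pool

import (
	"context"

	"github.com/jackc/pgconn"
)

// destroyConn is a hypothetical pool destructor: Close returns quickly while
// the cancel request and socket teardown may still run in a background
// goroutine, so the destructor waits on CleanupDone before the pool treats
// the slot as free and dials a replacement connection.
func destroyConn(pgConn *pgconn.PgConn) {
	_ = pgConn.Close(context.Background())
	<-pgConn.CleanupDone()
}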

@iosadchiy

Works like a charm, at least for performance tests. Thank you!

@iosadchiy

It turns out that while the issue is fixed for pgx/v4 + pgxpool, it can still be reproduced for pgx/v4 + database/sql.

Check out the test case: https://github.com/iosadchiy/pgx_pgbouncer_issue/tree/database-sql

If you let it run for 15-30 seconds, you will get something like this:

$ echo "show pools;" | psql -h localhost -p 6432 -U postgres pgbouncer

 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us |  pool_mode
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-------------
 pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0 |          0 | statement
 postgres  | postgres  |         0 |        100 |         0 |       0 |       0 |         0 |        1 |       0 |     541061 | transaction
(2 rows)

Pool size is limited at 10, but there are 100 connections.

Simply closing the connection synchronously resolves the issue:

diff --git a/vendor/github.com/jackc/pgconn/pgconn.go b/vendor/github.com/jackc/pgconn/pgconn.go
index ff81206..304fbdf 100644
--- a/vendor/github.com/jackc/pgconn/pgconn.go
+++ b/vendor/github.com/jackc/pgconn/pgconn.go
@@ -564,7 +564,7 @@ func (pgConn *PgConn) asyncClose() {
        }
        pgConn.status = connStatusClosed

-       go func() {
+       func() {
                defer close(pgConn.cleanupDone)
                defer pgConn.conn.Close()

It works both with pgxpool and database/sql.

I'm not sure what the reasons are behind closing asynchronously. Could you please clarify or point to an explanation?

Do we really need to close the connection asynchronously? It seems that in the case of pgxpool it is already being closed synchronously due to the waiting for CleanupDone(), and it works fine so far.
Maybe we can just drop the asynchronous close?

@jackc
Owner

jackc commented Sep 28, 2020

Do we really need to close the connection asynchronously?

Well, I guess it depends on how we define "need". IMO, there are two requirements for context cancellation.

  1. Context cancellations should take effect immediately. It's not acceptable for the context passed to a Query to be canceled and the method to not immediately return.
  2. We must signal the server that the query has been canceled. Otherwise an expensive query could consume resources on the server indefinitely.

These objectives are opposed to each other in that the second objective requires network activity which could take an arbitrary amount of time.

That is why the query cancellation and connection closing are done asynchronously.

It seems that in the case of pgxpool it is already being closed synchronously due to the waiting for CleanupDone().

The underlying resource pool implementation, puddle, calls the destructor, which waits for CleanupDone, asynchronously. This lets the canceled code return immediately, but the pool doesn't remove the connection until cleanup is done.

The only obvious way to not close asynchronously would be to violate one of the two requirements above. There might be some workaround possible in stdlib, but I'm not sure what it would be.

@alienth

alienth commented Mar 5, 2021

A notable factor in this type of problem, which we've encountered at reddit with both Golang and Python clients:

As demonstrated in your show pools; output, you have no servers available and one connection in sv_login. What's happening here is that pgbouncer is making a login attempt to Postgres, but in the middle of the login attempt your client cancels the request, which in turn causes pgbouncer to cancel the login. When this happens and you run out of server slots in pgbouncer, you're stuck in a state where you're always in sv_login and the login is always getting killed. Meanwhile, your clients try harder and harder to make queries, leading to more clients timing out and never being able to escape the trap. Pgbouncer ends up hammering Postgres with login attempts, causing Postgres CPU to jump while never handling any queries.

A mitigation for this is to ensure that your clients never cancel queries faster than pgbouncer's query_wait_timeout. However, if you'd like very fast timeouts that's not always feasible.
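
One blunt way to enforce that floor on the application side is to detach query contexts from request cancellation and apply a fixed timeout no shorter than query_wait_timeout. A hypothetical sketch (the helper name and the 5s value are made up; context.WithoutCancel needs Go 1.21+):

package db

import (
	"context"
	"time"
)

// minQueryTimeout is a hypothetical floor, chosen to be no shorter than
// PgBouncer's query_wait_timeout so clients never cancel mid-login.
const minQueryTimeout = 5 * time.Second

// withQueryTimeout detaches the query from the caller's cancellation (e.g. an
// HTTP client going away) and applies the fixed floor instead, trading prompt
// cancellation for not tripping the sv_login trap described above.
func withQueryTimeout(parent context.Context) (context.Context, context.CancelFunc) {
	return context.WithTimeout(context.WithoutCancel(parent), minQueryTimeout)
}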

@artemnesterenko

Hello @jackc! We migrated several services from pgx/v3 to pgx/v4 and started to see the same issue with pgbouncer. When we roll back to pgx/v3, the issue disappears.
This happens even with the latest pgx version (v4.16.1). I forked the example provided by @iosadchiy, updated the pgx version to v4.16.1, and was able to reproduce it:

pgx_pgxbouncer_issue_fork git:(master) ✗ echo "show pools;" | psql -h localhost -p 6432 -U postgres pgbouncer
 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us |  pool_mode  
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-------------
 pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0 |          0 | statement
 postgres  | postgres  |         0 |         99 |         0 |       0 |       0 |         0 |        1 |       2 |     313428 | transaction
(2 rows)

What can be done about that?

@artemnesterenko

artemnesterenko commented May 20, 2022

I tested different versions of pgx and found that this problem returned in 4.14.0. Version 4.13.0 works as expected.
Here is the test output for 4.14.0:

pgx_pgxbouncer_issue_fork git:(master) ✗ echo "show pools;" | psql -h localhost -p 6432 -U postgres pgbouncer
 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us |  pool_mode  
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-------------
 pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0 |          0 | statement
 postgres  | postgres  |         0 |         99 |         0 |       0 |       0 |         0 |        1 |       2 |     397710 | transaction
(2 rows)

Here is the test output for 4.13.0:

pgx_pgxbouncer_issue_fork git:(master) ✗ echo "show pools;" | psql -h localhost -p 6432 -U postgres pgbouncer
 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us |  pool_mode  
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-------------
 pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0 |          0 | statement
 postgres  | postgres  |         0 |          9 |         0 |       0 |       0 |         0 |        1 |       0 |     249274 | transaction
(2 rows)

EDIT:
It seems I found what's causing this behaviour: pgconn v1.10.1. When I downgraded it to v1.10.0, the test output returned to normal:

pgx_pgxbouncer_issue_fork git:(fix) ✗ echo "show pools;" | psql -h localhost -p 6432 -U postgres pgbouncer
 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us |  pool_mode  
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-------------
 pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0 |          0 | statement
 postgres  | postgres  |         0 |          9 |         0 |       0 |       0 |         0 |        1 |       0 |     217738 | transaction
(2 rows)

@levakin
Contributor

levakin commented Jan 30, 2023

A notable factor in this type of problem, which we've encountered at reddit with both Golang and Python clients:

As demonstrated in your show pools; output, you have no servers available and one connection in sv_login. What's happening here is that pgbouncer is making a login attempt to Postgres, but in the middle of the login attempt your client cancels the request, which in turn causes pgbouncer to cancel the login. When this happens and you run out of server slots in pgbouncer, you're stuck in a state where you're always in sv_login and the login is always getting killed. Meanwhile, your clients try harder and harder to make queries, leading to more clients timing out and never being able to escape the trap. Pgbouncer ends up hammering Postgres with login attempts, causing Postgres CPU to jump while never handling any queries.

A mitigation for this is to ensure that your clients never cancel queries faster than pgbouncer's query_wait_timeout. However, if you'd like very fast timeouts that's not always feasible.

Hello, @alienth @jackc !

I'm currently using the pq driver with the pgbouncer transaction pool, and we had an issue passing a ctx into QueryContext when the ctx came from the http Request. The issue described above made us use context.Background() instead.

Now we are migrating to pgx/v5 and I'm curious how to pass a ctx safely, considering the problem @alienth described. Is it still possible to encounter this issue with pgx/v5? Can we pass the ctx from the Request, which may be canceled when the client goes away, etc.?

Thanks for the help

@jackc
Owner

jackc commented Jan 31, 2023

I am not aware of any changes regarding this issue between v4 and v5. But I haven't looked into it either.

@bonnefoa

Hi!

I have another use case where being able to disable query cancellation would be better.
When using TLS, creating a new connection is expensive. Since sending a cancel request requires creating a new connection, a spike of cancel requests can lead to a CPU spike on the target PostgreSQL/pgbouncer.

Also, some managed PostgreSQL offerings from cloud providers have really low connection-rate capacity; I wasn't able to exceed 25 conn/s on a 16-core replica. In those environments, I would rather keep the connection capacity for queries and applications than use it for cancel requests.

We usually rely on statement_timeout and idle_in_transaction_session_timeout to clean up leftover queries and transactions running on the server, so cancel requests are not necessary for us.
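
For illustration, those server-side guards could be set once per role like this (normally you would just do this in psql or postgresql.conf; the role name, DSN, and values are made up):

package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	// Connect directly to PostgreSQL (not through PgBouncer); hypothetical DSN.
	conn, err := pgx.Connect(ctx, "host=127.0.0.1 dbname=app user=admin")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Server-side safety nets so leftover queries and idle transactions are
	// cleaned up without the client having to send cancel requests.
	for _, sql := range []string{
		`ALTER ROLE app_user SET statement_timeout = '30s'`,
		`ALTER ROLE app_user SET idle_in_transaction_session_timeout = '60s'`,
	} {
		if _, err := conn.Exec(ctx, sql); err != nil {
			log.Fatal(err)
		}
	}
}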

@roman-wb

@jackc Hi!

We have the same problem: under high load the pool falls to zero (using a pool size greater than pgx max_pool_size).
Can you help me choose the best approach?

  • Override BuildContextWatcherHandler with an empty handler (do not cancel the old connection)
  • Override BuildContextWatcherHandler with a synchronous cancel request
  • Some other option

By versions:
v4.8.1 - problem
v4.9.0 - ok
v4.10.0 - ok
v4.11.0 - ok
v4.13.0 - ok
v4.14.0 - v5.6.0 - problem

@jackc
Owner

jackc commented Jun 14, 2024

@roman-wb You probably want to use BuildContextWatcherHandler with CancelRequestContextWatcherHandler or DeadlineContextWatcherHandler.

@roman-wb

roman-wb commented Jun 17, 2024

@jackc ok, thanks!

Please tell me: what happens if we assign an empty struct to BuildContextWatcherHandler?
P.S. We are using transaction mode + the simple query protocol.

@jackc
Owner

jackc commented Jun 17, 2024

Please tell me: what happens if we assign an empty struct to BuildContextWatcherHandler?

You would get a compile error... BuildContextWatcherHandler is a function that returns a ctxwatch.Handler, not a struct.

If you want to simply reduce the number of cancellation requests, at the expense of context cancellations not doing anything right away, then have your BuildContextWatcherHandler return a CancelRequestContextWatcherHandler with CancelRequestDelay and DeadlineDelay set to however long you are willing to allow your query methods to block after the context is cancelled.
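
Concretely, a sketch along those lines with a recent pgx v5 might look like this (the DSN and delay values are placeholders, not recommendations):

package main

import (
	"context"
	"log"
	"time"

	"github.com/jackc/pgx/v5/pgconn"
	"github.com/jackc/pgx/v5/pgconn/ctxwatch"
	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	config, err := pgxpool.ParseConfig("host=127.0.0.1 port=6432 dbname=app")
	if err != nil {
		log.Fatal(err)
	}

	// On context cancellation, wait a little before sending a cancel request
	// and before setting a deadline on the socket, so a burst of cancellations
	// doesn't flood PgBouncer with extra connections.
	config.ConnConfig.BuildContextWatcherHandler = func(pgConn *pgconn.PgConn) ctxwatch.Handler {
		return &pgconn.CancelRequestContextWatcherHandler{
			Conn:               pgConn,
			CancelRequestDelay: 500 * time.Millisecond,
			DeadlineDelay:      time.Second,
		}
	}

	pool, err := pgxpool.NewWithConfig(context.Background(), config)
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()
}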

@roman-wb

@jackc I mean an empty struct:

type CustomRequestContextWatcherHandler struct {}
func (h *CustomRequestContextWatcherHandler) HandleCancel(context.Context) {}
func (h *CustomRequestContextWatcherHandler) HandleUnwatchAfterCancel() {}

c.ConnConfig.BuildContextWatcherHandler = func(pgConn *pgconn.PgConn) ctxwatch.Handler {
	return &CustomRequestContextWatcherHandler{}
}

1. What will happen with the code above?
2. It seems a cancel request is not required for transaction mode + the simple query protocol. What do you think?

@jackc
Owner

jackc commented Jun 17, 2024

That would make context cancellation a no-op. It would never send a cancel request and it would never set a deadline. A query could block forever.
