First insert upon pool.acquire() takes long ass time #794

Closed

kuhnke-m opened this issue Aug 3, 2021 · 2 comments

kuhnke-m commented Aug 3, 2021

trio 0.19.0
trio-asyncio 0.12.0
triopg 0.6.0
asyncpg 0.23.0
PostgreSQL 12.7 (local install, through a unix socket)
Python 3.8.10
Running in containers on Alpine
I use triopg.create_pool()

All Python packages installed via pip

Can the issue be reproduced under both asyncio and uvloop?
Not sure; I use the trio_asyncio loop.

triopg appears to be a wrapper that proxies asyncpg commands, which is why I'm writing here.
I can't use manually prepared statements or a 'manual' async style, only the async with style.

I need to do the following: wait for an event; when it happens, insert some rows into the DB as fast as possible, then branch depending on whether an entry already exists in the DB, and then idle again.

I accomplished this by creating a pool like so:

async with triopg.create_pool(
    timeout=5.0,
    command_timeout=20.0,
    # ... connection details redacted ...
) as self.pool:

Then I create an async worker that waits for the event and, once it has fired, does the following:

async with self.pool.acquire() as con:
    async with con.transaction():
        await con.execute('insert into ..... ', arg1,arg2,...)

The insert has the following form (rid and sid are FOREIGN KEYs):

INSERT INTO (rid, sid, a, b, c, d) VALUES ($1, $2, $3, $4, $5, $6)
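Spelled out (with a placeholder table name, since I can't share the real schema), the call looks roughly like this:

async with self.pool.acquire() as con:
    async with con.transaction():
        # $1..$6 are asyncpg-style positional parameters
        await con.execute(
            'INSERT INTO events (rid, sid, a, b, c, d) '
            'VALUES ($1, $2, $3, $4, $5, $6)',
            rid, sid, a, b, c, d,
        )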

Now when I execute this insert, it takes an astonishing 300 ms! This is unacceptable. I tried to narrow it down: inserting into the same table multiple times reduces the latency of subsequent inserts to around 4 ms.
The transaction and pool handling eat up an additional ~13 ms, but executing without a transaction gives similar overall times, so I'm ignoring that part.

Furthermore, I noticed that on other tables such an insert does not take nearly as long (maybe 30 ms on some, 7 ms on others), and the difference between the first and subsequent inserts is smaller.
These other tables have fewer columns and no FOREIGN KEYs, only a UNIQUE SERIAL PRIMARY KEY.

What is the reason for this? I suspect it has to do with the preparation of prepared statements on the connection object.
The documentation seems contradictory about this: at one point it says that on release of a connection back to the pool all state is dropped (incl. prepared statements); at another point it says everything EXCEPT prepared statements is released.
Could a solution be to lease a connection with acquire beforehand? If I set all timeouts to 0 (infinite), will the prepared statements be retained? E.g. (see the sketch after these settings):

max_inactive_connection_lifetime = 0 
max_queries = 0
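In code I mean something like this (assuming triopg.create_pool simply forwards these keyword arguments to asyncpg.create_pool, which does have max_queries and max_inactive_connection_lifetime parameters):

async with triopg.create_pool(
    timeout=5.0,
    command_timeout=20.0,
    max_inactive_connection_lifetime=0,  # 0 disables the idle-connection timeout
    max_queries=0,                       # does 0 mean "never recycle" here?
    # ... connection details redacted ...
) as self.pool:
    ...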

Are there internal conditions that can corrupt the connection state and require a new connection, which would induce this delay again?

Because some table inserts are faster than others, I suspect some setting in the Postgres config might also be wrong and be causing this. I'm not sure which one it could be, because EXPLAIN ANALYZE on the psql command line does not show this in the planning and execution times (together around 1 ms).

Any help is much appreciated.

Regards, M.K.


elprans (Member) commented Aug 3, 2021

Try turning jit off:

... server_settings={'jit': 'off'}
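For example, when creating the pool (a minimal sketch; server_settings is an asyncpg connection parameter that applies session-level server settings, and triopg should pass it straight through):

async with triopg.create_pool(
    timeout=5.0,
    command_timeout=20.0,
    server_settings={'jit': 'off'},  # disable PostgreSQL JIT for every pooled session
    # ... connection details ...
) as self.pool:
    ...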


kuhnke-m commented Aug 3, 2021

> Try turning jit off:
>
> ... server_settings={'jit': 'off'}

This was exactly it! EXPLAIN ANALYZE in psql didn't even tell me that any JIT optimisation was going on for this query, but disabling JIT still helped tremendously!
You saved me. Thanks.
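For anyone who finds this later: JIT can also be checked and disabled at the server level rather than per pool (a sketch, assuming the connected role has the superuser rights that ALTER SYSTEM requires):

async with self.pool.acquire() as con:
    print(await con.fetchval('SHOW jit'))            # 'on' or 'off'
    await con.execute('ALTER SYSTEM SET jit = off')  # persists in postgresql.auto.conf
    await con.execute('SELECT pg_reload_conf()')     # apply without a server restart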

kuhnke-m closed this as completed Aug 3, 2021
elprans added a commit that referenced this issue Sep 23, 2023
The misapplication of JIT to asyncpg introspection queries has been a
constant source of user complaints.

Closes: #530
Closes: #1078
Previously: #875, #794, #782, #741, #727 (and probably more).
elprans added a commit that referenced this issue Oct 7, 2023