
Huge performance degradation after upgrade to Postgres 0.8.4 #477

Closed
cambierr opened this issue Oct 12, 2020 · 28 comments
Labels
for: external-project For an external project and not something we can fix

Comments

@cambierr

cambierr commented Oct 12, 2020

Hi,

After upgrading from Spring Boot 2.2.x to 2.3.x, we observed database access times exploding from an average of 1.3 ms for a find-by-id to about 45 ms for the exact same query.

Here are the most precise chunks of information I can get for now:

2020-10-12 12:48:24.380 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [or-http-epoll-2] o.s.d.r.c.R2dbcTransactionManager        : Creating new transaction with name [xxx.ClientService.getClient]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly
2020-10-12 12:48:24.381 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [or-http-epoll-2] io.r2dbc.pool.ConnectionPool             : Obtaining new connection from the driver
2020-10-12 12:48:24.381 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [or-http-epoll-2] io.r2dbc.postgresql.QUERY                : Executing query: SELECT 1
2020-10-12 12:48:24.381 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [or-http-epoll-2] i.r.p.client.ReactorNettyClient          : Request:  Query{query='SELECT 1'}
2020-10-12 12:48:24.382 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: RowDescription{fields=[Field{column=0, dataType=23, dataTypeModifier=-1, dataTypeSize=4, format=FORMAT_TEXT, name='?column?', table=0}]}
2020-10-12 12:48:24.382 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: DataRow{columns=[PooledSlicedByteBuf(ridx: 0, widx: 1, cap: 1/1, unwrapped: PooledUnsafeDirectByteBuf(ridx: 46, widx: 66, cap: 496))]}
2020-10-12 12:48:24.383 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: CommandComplete{command=SELECT, rowId=null, rows=1}
2020-10-12 12:48:24.383 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: ReadyForQuery{transactionStatus=IDLE}
2020-10-12 12:48:24.383 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r.c.R2dbcTransactionManager        : Acquired Connection [MonoLift] for R2DBC transaction
2020-10-12 12:48:24.383 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r.c.R2dbcTransactionManager        : Switching R2DBC Connection [custom.PoolingConnectionFactory$4@7c2a0094] to manual commit
2020-10-12 12:48:24.383 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] i.r2dbc.postgresql.PostgresqlConnection  : Setting auto-commit mode to [false]
2020-10-12 12:48:24.383 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] i.r2dbc.postgresql.PostgresqlConnection  : Beginning transaction
2020-10-12 12:48:24.383 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] io.r2dbc.postgresql.QUERY                : Executing query: BEGIN
2020-10-12 12:48:24.383 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Request:  Query{query='BEGIN'}
2020-10-12 12:48:24.384 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: CommandComplete{command=BEGIN, rowId=null, rows=null}
2020-10-12 12:48:24.384 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: ReadyForQuery{transactionStatus=TRANSACTION}
2020-10-12 12:48:24.384 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] i.r2dbc.postgresql.PostgresqlConnection  : Skipping begin transaction because status is OPEN
2020-10-12 12:48:24.385 DEBUG [identity,3c05ce6821c40f83,40b9950123eebb0f,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r2dbc.core.DefaultDatabaseClient   : Executing SQL statement [SELECT * FROM oauth.client as c WHERE c.key = $1]
2020-10-12 12:48:24.385 DEBUG [identity,3c05ce6821c40f83,40b9950123eebb0f,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r2dbc.core.NamedParameterExpander  : Expanding SQL statement [SELECT * FROM oauth.client as c WHERE c.key = $1] to [SELECT * FROM oauth.client as c WHERE c.key = $1]
2020-10-12 12:48:24.385 TRACE [identity,3c05ce6821c40f83,40b9950123eebb0f,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r2dbc.core.DefaultDatabaseClient   : Expanded SQL [SELECT * FROM oauth.client as c WHERE c.key = $1]
2020-10-12 12:48:24.386 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] io.r2dbc.postgresql.QUERY                : Executing query: SELECT * FROM oauth.client as c WHERE c.key = $1
2020-10-12 12:48:24.386 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Request:  Bind{name='B_39', parameterFormats=[FORMAT_TEXT], parameters=[CompositeByteBuf(ridx: 0, widx: 18, cap: 18, components=1)], resultFormats=[], source='S_0'}
2020-10-12 12:48:24.387 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Request:  Describe{name='B_39', type=PORTAL}
2020-10-12 12:48:24.387 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Request:  Execute{name='B_39', rows=0}
2020-10-12 12:48:24.387 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Request:  Close{name='B_39', type=PORTAL}
2020-10-12 12:48:24.387 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Request:  Sync{}
2020-10-12 12:48:24.431 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: BindComplete{}
2020-10-12 12:48:24.431 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: RowDescription{fields=[Field{column=1, dataType=20, dataTypeModifier=-1, dataTypeSize=8, format=FORMAT_TEXT, name='id_client', table=28443}, Field{column=2, dataType=1043, dataTypeModifier=-1, dataTypeSize=-1, format=FORMAT_TEXT, name='name', table=28443}, Field{column=3, dataType=1043, dataTypeModifier=-1, dataTypeSize=-1, format=FORMAT_TEXT, name='key', table=28443}, Field{column=4, dataType=1043, dataTypeModifier=-1, dataTypeSize=-1, format=FORMAT_TEXT, name='secret', table=28443}, Field{column=5, dataType=16, dataTypeModifier=-1, dataTypeSize=1, format=FORMAT_TEXT, name='enabled', table=28443}, Field{column=6, dataType=16, dataTypeModifier=-1, dataTypeSize=1, format=FORMAT_TEXT, name='trusted', table=28443}, Field{column=7, dataType=16, dataTypeModifier=-1, dataTypeSize=1, format=FORMAT_TEXT, name='extends_refresh_token_validity', table=28443}, Field{column=8, dataType=20, dataTypeModifier=-1, dataTypeSize=8, format=FORMAT_TEXT, name='refresh_token_validity', table=28443}, Field{column=9, dataType=20, dataTypeModifier=-1, dataTypeSize=8, format=FORMAT_TEXT, name='access_token_validity', table=28443}, Field{column=10, dataType=3802, dataTypeModifier=-1, dataTypeSize=-1, format=FORMAT_TEXT, name='finalizers', table=28443}, Field{column=11, dataType=3802, dataTypeModifier=-1, dataTypeSize=-1, format=FORMAT_TEXT, name='metadata', table=28443}, Field{column=12, dataType=1043, dataTypeModifier=-1, dataTypeSize=-1, format=FORMAT_TEXT, name='otp_mode', table=28443}, Field{column=13, dataType=1043, dataTypeModifier=-1, dataTypeSize=-1, format=FORMAT_TEXT, name='otp_configuration', table=28443}]}
2020-10-12 12:48:24.431 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: CommandComplete{command=SELECT, rowId=null, rows=0}
2020-10-12 12:48:24.431 DEBUG [identity,3c05ce6821c40f83,40b9950123eebb0f,true] 1006432 --- [tor-tcp-epoll-5] i.r.postgresql.util.FluxDiscardOnCancel  : received cancel signal
2020-10-12 12:48:24.432 TRACE [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r.c.R2dbcTransactionManager        : Triggering beforeCompletion synchronization
2020-10-12 12:48:24.432 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r.c.R2dbcTransactionManager        : Initiating transaction rollback
2020-10-12 12:48:24.432 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r.c.R2dbcTransactionManager        : Rolling back R2DBC transaction on Connection [custom.PoolingConnectionFactory$4@7c2a0094]
2020-10-12 12:48:24.432 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] io.r2dbc.postgresql.QUERY                : Executing query: ROLLBACK
2020-10-12 12:48:24.432 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Request:  Query{query='ROLLBACK'}
2020-10-12 12:48:24.433 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: CloseComplete{}
2020-10-12 12:48:24.433 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: ReadyForQuery{transactionStatus=TRANSACTION}
2020-10-12 12:48:24.433 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: CommandComplete{command=ROLLBACK, rowId=null, rows=null}
2020-10-12 12:48:24.433 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: ReadyForQuery{transactionStatus=IDLE}
2020-10-12 12:48:24.433 TRACE [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r.c.R2dbcTransactionManager        : Triggering afterCompletion synchronization
2020-10-12 12:48:24.433 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] i.r2dbc.postgresql.PostgresqlConnection  : Setting auto-commit mode to [true]
2020-10-12 12:48:24.433 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] o.s.d.r.c.R2dbcTransactionManager        : Releasing R2DBC Connection [custom.PoolingConnectionFactory$4@7c2a0094] after transaction
2020-10-12 12:48:24.434 DEBUG [identity,3c05ce6821c40f83,06a6c829442fb8a4,true] 1006432 --- [tor-tcp-epoll-5] io.r2dbc.pool.PooledConnection           : Releasing connection

To me, the interesting part is

2020-10-12 12:48:24.387 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Request:  Sync{}
2020-10-12 12:48:24.431 DEBUG [identity,8fa02bd64d09b06c,09e4fa10148a8571,true] 1006432 --- [tor-tcp-epoll-5] i.r.p.client.ReactorNettyClient          : Response: BindComplete{}

which seems to show that most of the time is spent in the communication with the DB, but

  1. it was fine before the upgrade
  2. I checked with both versions running at the same time, and only the Boot 2.3.x version is impacted

At this time, I have to admit that I don't know how to go deeper into the analysis... feel free to suggest any "next step" you think may be worth a try: this is impacting several of our platforms, since overall API response time has spiked to about 5x the average.

Thank you guys

@mp911de
Member

mp911de commented Oct 12, 2020

We have seen a similar issue at pgjdbc/r2dbc-postgresql#334 with similar dependency arrangements. Right now, it seems tricky to trace down the issue. I'd suggest building a minimal reproducer that just uses R2DBC SPI and then incrementally adding dependencies/components to find out how we can reproduce the issue.
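
For illustration, a minimal R2DBC-SPI-only reproducer along those lines might look roughly like the sketch below. Connection details, credentials, and the timing approach are placeholders; the query and column name are taken from the logs above.

    import io.r2dbc.postgresql.PostgresqlConnectionConfiguration;
    import io.r2dbc.postgresql.PostgresqlConnectionFactory;
    import io.r2dbc.spi.Connection;
    import reactor.core.publisher.Flux;
    import reactor.core.publisher.Mono;

    public class Reproducer {

        public static void main(String[] args) {
            // driver-only setup: no pool, no Spring
            PostgresqlConnectionFactory connectionFactory = new PostgresqlConnectionFactory(
                    PostgresqlConnectionConfiguration.builder()
                            .host("localhost")        // placeholder connection details
                            .database("identity")
                            .username("user")
                            .password("password")
                            .build());

            Mono.usingWhen(connectionFactory.create(),
                    connection -> {
                        long start = System.nanoTime();
                        return Flux.from(connection.createStatement("SELECT * FROM oauth.client AS c WHERE c.key = $1")
                                        .bind("$1", "some-key")
                                        .execute())
                                .flatMap(result -> result.map((row, metadata) -> row.get("id_client")))
                                .then(Mono.fromRunnable(() ->
                                        System.out.println("query took " + (System.nanoTime() - start) / 1_000_000 + " ms")));
                    },
                    Connection::close)
                .block();
        }
    }

From there, dependencies (r2dbc-pool, Spring Data R2DBC, and so on) can be added one at a time to see when the timings change.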

@jroucou

jroucou commented Oct 12, 2020

I have the exact same thing. I noticed that the performance degradation occurred when moving:

  • from spring-boot 2.2.8 to 2.2.9 and higher
  • or from spring-boot 2.3.1 to 2.3.2 and higher

I have overridden the r2dbc-postgres version to 0.8.2.RELEASE in each configuration, so it is always the same one. So it's not because of r2dbc-postgres.

@cambierr
Author

Okay, after hours of adding/removing dependencies and tweaking configuration, I managed to find what actually creates this mess: r2dbc-pool.

With r2dbc-pool enabled, my lowest query time is 3 ms, with an average of 41 ms and a max of 89 ms.

Without r2dbc-pool, it becomes 1.4 ms / 2.7 ms / 7.2 ms (min / average / max).

I'll continue investigating to confirm this behavior on an r2dbc app without Spring (just to make sure this issue has nothing to do with spring-data-r2dbc) and then follow up on @mp911de's link (pgjdbc/r2dbc-postgresql#334) or raise a new issue in the r2dbc-pool project.

@jroucou @deblockt are you guys using r2dbc-pool too?
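
For reference, a minimal sketch of how the two configurations compared above could be wired up, assuming the standard io.r2dbc.pool API (pool sizes and method names here are purely illustrative):

    import io.r2dbc.pool.ConnectionPool;
    import io.r2dbc.pool.ConnectionPoolConfiguration;
    import io.r2dbc.spi.ConnectionFactory;

    // "with r2dbc-pool": wrap the driver's ConnectionFactory in a ConnectionPool
    static ConnectionFactory pooled(ConnectionFactory postgresConnectionFactory) {
        ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(postgresConnectionFactory)
                .initialSize(5)   // illustrative sizes
                .maxSize(10)
                .build();
        return new ConnectionPool(configuration);
    }

    // "without r2dbc-pool": use the driver's factory directly; every create() opens a fresh connection
    static ConnectionFactory unpooled(ConnectionFactory postgresConnectionFactory) {
        return postgresConnectionFactory;
    }

Everything downstream (Spring Data R2DBC, DatabaseClient, repositories) only sees a ConnectionFactory, so swapping the two is a one-line change.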

@deblockt

@cambierr I use r2dbc-pool too. I will try to remove it to see if something changes. On my side, the issue occurs when I switch r2dbc-postgres from 0.8.3 to 0.8.4.

@jroucou

jroucou commented Oct 12, 2020

@cambierr yes, I am using it. But it's the same code and same configuration on my side before updating spring-boot,
so the impact is elsewhere.

@deblockt

deblockt commented Oct 12, 2020

@cambierr I tried on my reproducer project (an API with one insertion), using:

  • r2dbc-postgres 0.8.4 with the pool: times are bad (30 ms instead of 8 ms)
  • r2dbc-postgres 0.8.4 without the pool: times are fine (8 ms)
  • r2dbc-postgres 0.8.4 with the pool 0.8.3: times are bad (30 ms instead of 8 ms)
  • r2dbc-postgres 0.8.3 with or without r2dbc-pool: times are fine (8 ms)

@cambierr
Author

@jroucou I was also using r2dbc-pool before and after the upgrade, and it's only after the upgrade that I have issues. Moreover, after the upgrade, without changing anything but removing r2dbc-pool, I get the original performance back despite a new connection having to be created for each request.

@deblockt regarding your tests, I see the exact same thing: before upgrading to Boot 2.3.x, performance is fine both with and without r2dbc-pool. But as soon as I upgrade, performance gets crappy.

I was also able to see the same thing on a very simple project with only r2dbc-pool & r2dbc-postgres, so this is definitely not related to Spring or Sleuth tracing overhead.

One thing I noticed through TCP tracing is this:

[Screenshot: TCP trace captured 2020-10-12 15:51:06]

I could be wrong, but my understanding of this trace is that each time r2dbc sends a Parse command, Postgres takes tens of milliseconds to reply, which is where 95% of the time is wasted. This only happens when using r2dbc-pool, and I was able to reproduce the behavior ~90% of the time (the other 10%, it works with almost normal performance). I also tested the exact same query (through native SQL using the JDBC driver, through JPA, and in a plain psql session) and wasn't able to reproduce it, so this clearly seems to be something wrong with r2dbc-pool.

@mp911de knowing you're one of the early authors of the r2dbc project, would you have any advice on where to start to help diagnose this?

@cambierr
Author

By the way, we should probably move this to https://github.com/r2dbc/r2dbc-pool at this point. I'm not familiar enough with the GitHub issue tracker to know if there is a way to move/duplicate this issue there?

@mp911de
Member

mp911de commented Oct 13, 2020

Parse frames are used to prepare a parameterized SQL statement. Once a statement is prepared, it is expected to be executed only by submitting the statement name and bind parameters. The driver caches prepared statements using the actual SQL query and type OIDs. That being said, running a statement twice that uses the same parameter types should result in a single parse and two execute frames.

Note that prepared statement caching is connection-local. Using a fresh connection for each statement (i.e. when not using r2dbc-pool, or when the pool invalidates connections aggressively) results in a lot of Parse messages.

Enabling debug logging gives you a lot of insight. When the pool reports Obtaining new connection from the driver, this is an indicator that a new connection is being opened (the driver then also reports the connection handshake sequence).
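
To illustrate the connection-local caching described above, here is a rough fragment (not the driver's internals, just an SPI-level sketch assuming a driver-level connectionFactory and reusing the query from the logs): executing the same parameterized statement twice on one connection should show a single Parse on the wire, while a fresh connection per statement re-Parses every time.

    // Execute the same statement text, with the same parameter type, twice on one connection.
    static Mono<Long> runOnce(Connection connection) {
        return Flux.from(connection.createStatement("SELECT * FROM oauth.client AS c WHERE c.key = $1")
                        .bind("$1", "some-key")
                        .execute())
                .flatMap(result -> result.map((row, metadata) -> row.get("id_client", Long.class)))
                .count(); // drain the result so the portal completes
    }

    // same connection, two executions: the wire trace should contain one Parse and two Bind/Execute sequences
    Mono.usingWhen(connectionFactory.create(),
            connection -> runOnce(connection).then(runOnce(connection)),
            Connection::close)
        .block();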

@mp911de
Member

mp911de commented Oct 13, 2020

Until we've sorted out where the issue comes from, let's stick with this ticket.

@cambierr
Author

cambierr commented Oct 13, 2020

Ok, I may have found something that would mean it's not actually linked to r2dbc-pool: I wrote a very simple ConnectionFactory as follows (disclaimer: quick-and-dirty stuff, just for testing purposes):

    private final ConnectionFactory delegateConnectionFactory; // the real driver factory being wrapped
    private Mono<? extends Connection> co;

    @Override
    public Publisher<? extends Connection> create() {
        if (co == null) {
            co = Mono.from(delegateConnectionFactory.create()).map(delegate -> new Connection() {
                @Override
                public Publisher<Void> close() {
                    return Mono.empty(); // never close the shared connection
                }
                // all other methods are directly relayed to the wrapped connection
            }).cache(); // cache() replays the same connection to every subscriber
        }
        return co; // hand out the single cached connection to every caller
    }

I'd expect this to give the best performance for a simple use case, since we keep a connection open, but instead... I get exactly one quick transaction, followed by one slow one, then one quick, then... you got it. I'll attach a pcap with 4 calls in this order: slow, fast, slow, fast:
pg-capture.zip

I'll continue looking into this...

@cambierr
Author

cambierr commented Oct 13, 2020

Looking at Wireshark, the only difference between the "fast" and the "slow" flows seems to be that in the fast flow, r2dbc is sending:

t0:          Bind
t0 + .2ms:   Describe
t0 + .3ms:   Execute
t0 + .4ms:   Close
t0 + .45ms:  Sync

while in the "slow" flow:

t0:         Bind
t0 + 40ms:  Describe+Execute+Close+Sync

Don't know if this can help in any way but... well, I'll continue trying to understand what causes this

@mp911de
Member

mp911de commented Oct 13, 2020

I'm not really sure about the fast/slow flows, since the difference seems to be 5 ms (or are the values above 0.2 ms and 0.45 ms)? However, the numbers are all pretty close.

@cambierr
Author

Yep, in the "fast" (a.k.a. "normal") flow, times are 0.2 ms to 0.45 ms, while in the "slow" flow, the time is 40 ms.

@mp911de
Member

mp911de commented Oct 13, 2020

That's interesting. Do you have a simple reproducer for the two cases?

@mp911de
Member

mp911de commented Oct 13, 2020

I had a look at the capture provided by @cambierr. What's interesting is that the ACK packet for BIND arrives 40 ms after sending the BIND packet. I'm not able to reproduce the behavior (using docker or localhost). I can measure that the first BIND takes around 20 ms; all subsequent BINDs basically complete in 1-2 ms.

@cambierr
Author

I'll try to build the simplest reproducer I can. Meanwhile, using a single persistent connection, I can see that exactly every other request is slow... this is... wtf.

Will keep digging into this with the smallest app possible (will also try with another Postgres version, just in case).
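
A compact way to observe that alternation is to run the same statement in a loop on the single persistent connection and log the elapsed time per iteration. A rough fragment along those lines (again assuming a driver-level connectionFactory and the query from the earlier logs; values are placeholders):

    // Run the same query 10 times sequentially on one connection and print per-iteration timings,
    // to check whether every other execution is slow.
    Mono.usingWhen(connectionFactory.create(),
            connection -> Flux.range(0, 10)
                    .concatMap(i -> Flux.from(connection.createStatement("SELECT * FROM oauth.client AS c WHERE c.key = $1")
                                    .bind("$1", "some-key")
                                    .execute())
                            .flatMap(result -> result.map((row, metadata) -> row.get("id_client")))
                            .then(Mono.just(i))
                            .elapsed() // time from subscription of this iteration to its completion
                            .doOnNext(t -> System.out.println("iteration " + t.getT2() + ": " + t.getT1() + " ms")))
                    .then(),
            Connection::close)
        .block();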

@mp911de
Member

mp911de commented Oct 13, 2020

FTR, tried Postgres versions 11 and 13

@cambierr
Author

Here is the simplest reproducer I could come up with: https://gist.github.com/cambierr/910c92e48de9c089831e4fcd0e5967ed

@mp911de
Member

mp911de commented Oct 14, 2020

Thanks a lot, will have a look.

@mp911de
Member

mp911de commented Oct 14, 2020

I wasn't able to reproduce the issue. I'm using the exact same dependencies; I attached my log output at https://pastebin.com/EyhnHdnE (I even increased the number of iterations to raise the likelihood of reproducing the issue).

I'm running the test on a Mac using nio. Switching to kqueue (adding netty-transport-native-kqueue as a dependency) didn't change anything.

@cambierr
Author

Can you confirm the Boot version you're using? I'm on 2.3.4.RELEASE.

@mp911de mp911de added for: team-attention An issue we need to discuss as a team to make progress status: waiting-for-triage An issue we've not yet triaged labels Oct 14, 2020
@mp911de
Member

mp911de commented Oct 14, 2020

I'm not using Spring Boot at all (as per the reproducer that consists of Spring Data R2DBC and the Postgres driver only)

@cambierr
Author

my bad, you're right... I'll check with other configurations of netty & postgres

@cambierr
Author

Well, just tested again with

        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-r2dbc</artifactId>
            <version>1.0.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>io.r2dbc</groupId>
            <artifactId>r2dbc-spi</artifactId>
            <version>0.8.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>io.r2dbc</groupId>
            <artifactId>r2dbc-postgresql</artifactId>
            <version>0.8.0.RELEASE</version>
        </dependency>

which gives the "best" performance, so there is something wrong, but I'm having a really hard time finding out why haha

@cambierr
Author

OK, finally found out what has an impact:

        <dependency>
            <groupId>io.r2dbc</groupId>
            <artifactId>r2dbc-postgresql</artifactId>
            <version>0.8.3.RELEASE</version>
        </dependency>

gives a normal (good) response time, while

        <dependency>
            <groupId>io.r2dbc</groupId>
            <artifactId>r2dbc-postgresql</artifactId>
            <version>0.8.4.RELEASE</version>
        </dependency>

(as well as 0.8.5.RELEASE) gives a 40 ms penalty. Time to track the issue on the Postgres driver side?

@mp911de
Member

mp911de commented Oct 14, 2020

Yes. Closing this ticket in favor of pgjdbc/r2dbc-postgresql#334. @cambierr Are you okay with using a snapshot for your test, since you're the one who is able to reproduce the issue?

@mp911de mp911de closed this as completed Oct 14, 2020
@mp911de mp911de added for: external-project For an external project and not something we can fix and removed for: team-attention An issue we need to discuss as a team to make progress status: waiting-for-triage An issue we've not yet triaged labels Oct 14, 2020
@mp911de mp911de changed the title Huge performance degradation after upgrade to Sprint boot 2.3.x (from 2.2.x) Huge performance degradation after upgrade to Postgres 0.8.4 Oct 14, 2020
@cambierr
Author

@mp911de sure, it'll be a pleasure to help fix this thing haha
