
logging details for connection pool reactor-netty #917

Closed
gurudatta11 opened this issue Nov 27, 2019 · 2 comments
Labels
type/enhancement A general enhancement
Comments

@gurudatta11

  1. Logging a warning when the connection pool reaches its maximum limit might be helpful.
  2. Additional log messages indicating how many connections are in use at any given point in time would also help.

Motivation

Currently it is difficult to debug connection pool issues in reactor-netty, so more log messages and monitoring would help when the application is scaled.

Desired solution

  • Log a warning when the connection pool reaches its maximum limit, and advise how to address it (e.g. adjust the configuration).
  • Add log messages indicating how many connections are in use at any given point in time.
  • During application startup, log which kind of connection pool is being used, along with its min and max limits.
  • If possible, provide monitoring endpoints that integrate with Spring Boot Actuator: health checks, auditing, metrics gathering, and monitoring.
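For the Actuator point, a hedged application.properties sketch (assuming a Spring Boot 2.x application with Actuator and Micrometer on the classpath; the endpoint names follow standard Spring Boot conventions) might look like:

```properties
# Expose the Actuator health and metrics endpoints over HTTP
management.endpoints.web.exposure.include=health,metrics
```

Individual metrics would then be readable under /actuator/metrics/<metric name> once the library publishes them to Micrometer.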

Considered alternatives

Additional context

Creating this feature request based on conversation in #907

@gurudatta11 gurudatta11 added status/need-triage A new issue that still need to be evaluated as a whole type/enhancement A general enhancement labels Nov 27, 2019
@violetagg
Member

If you have configured Micrometer in your Spring Boot application, you should be able to see the following metrics:
reactor.netty.connection.provider.<name>.total.connection
reactor.netty.connection.provider.<name>.active.connection
reactor.netty.connection.provider.<name>.idle.connection
reactor.netty.connection.provider.<name>.pending.connection

where <name> is http by default, or otherwise the name that you provided when creating the ConnectionProvider.
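To have the metrics published under a custom name rather than the default http, you name the ConnectionProvider yourself. A minimal sketch against the 0.9.x-era API (the pool name "my-pool" and size 50 are illustrative; later versions replace ConnectionProvider.fixed(...) with a ConnectionProvider.builder(...) API):

```java
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class PooledClientExample {
    public static void main(String[] args) {
        // Hypothetical: a fixed pool named "my-pool" capped at 50 connections.
        ConnectionProvider provider = ConnectionProvider.fixed("my-pool", 50);

        // metrics(true) requires Micrometer on the classpath (0.9.x API;
        // newer versions changed this method's signature).
        HttpClient client = HttpClient.create(provider)
                .metrics(true);
    }
}
```

With this setup, the metrics above would appear as reactor.netty.connection.provider.my-pool.* instead of reactor.netty.connection.provider.http.*.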

About the logs: if you set the log level for Reactor Netty to DEBUG, you should be able to see output such as:

21:18:22.388 [Test worker] DEBUG r.n.r.PooledConnectionProvider - Creating new client pool [test] for /0:0:0:0:0:0:0:1:60172
21:18:22.434 [reactor-http-nio-2] DEBUG r.n.r.PooledConnectionProvider - [id: 0x50025567] Created new pooled channel, now 1 active connections and 0 inactive connections
21:18:22.517 [reactor-http-nio-2] DEBUG r.n.r.PooledConnectionProvider - [id: 0x50025567, L:/0:0:0:0:0:0:0:1:60173 - R:/0:0:0:0:0:0:0:1:60172] Channel connected, now 1 active connections and 0 inactive connections
21:18:22.645 [reactor-http-nio-2] DEBUG r.n.r.PooledConnectionProvider - [id: 0x50025567, L:/0:0:0:0:0:0:0:1:60173 - R:/0:0:0:0:0:0:0:1:60172] Releasing channel
21:18:22.648 [reactor-http-nio-2] DEBUG r.n.r.PooledConnectionProvider - [id: 0x50025567, L:/0:0:0:0:0:0:0:1:60173 ! R:/0:0:0:0:0:0:0:1:60172] Channel closed, now 0 active connections and 0 inactive connections
21:18:22.650 [reactor-http-nio-2] DEBUG r.n.r.PooledConnectionProvider - [id: 0x50025567, L:/0:0:0:0:0:0:0:1:60173 ! R:/0:0:0:0:0:0:0:1:60172] Channel cleaned, now 0 active connections and 0 inactive connections
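For reference, the DEBUG output above can typically be enabled in a Logback-based application with a logger entry such as the following (a sketch; the logger name corresponds to the r.n.r.PooledConnectionProvider class shown in the log, i.e. reactor.netty.resources.PooledConnectionProvider in the 0.9.x line):

```xml
<!-- logback.xml fragment: DEBUG logging for the Reactor Netty connection pool -->
<configuration>
  <logger name="reactor.netty.resources.PooledConnectionProvider" level="DEBUG"/>
  <!-- or, more broadly, everything under reactor.netty: -->
  <!-- <logger name="reactor.netty" level="DEBUG"/> -->
</configuration>
```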

@violetagg violetagg removed the status/need-triage A new issue that still need to be evaluated as a whole label Nov 27, 2019
@violetagg violetagg added this to the 0.9.3.RELEASE milestone Nov 27, 2019
@violetagg
Member

In addition to the comment above, the log was extended:

16:31:31.511 [Test worker] DEBUG r.n.r.PooledConnectionProvider - Creating a new fixed client pool with name [test] and max connections [1] for [/0:0:0:0:0:0:0:1:59978]

logging as warn, if connection pool reached maximum limit will be helpful and also advise on how to address this (adjust configuration or etc.,)

The fact that the pool has reached the max connections limit cannot, by itself, be treated as a warning.

A TimeoutException with the message Pool#acquire(Duration) has been pending for more than the configured timeout of 20ms is the point at which a warning should be triggered.

In addition to the metrics mentioned above, we will also expose an API: #925
