
Deprecate server.connection-timeout and create server-specific configuration keys #18473

Closed · joedj opened this issue Oct 2, 2019 · 6 comments

Labels: type: bug (A general bug)
Milestone: 2.2.x
Assignee: bclozel

joedj commented Oct 2, 2019

https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html documents server.connection-timeout as "Time that connectors wait for another HTTP request before closing the connection. When not set, the connector's container-specific default is used. Use a value of -1 to indicate no (that is, an infinite) timeout".

Support for this property was added for Netty in issue #15368 / PR #15385.

This was done using ChannelOption.CONNECT_TIMEOUT_MILLIS, which does not implement the described functionality. As far as I can tell, CONNECT_TIMEOUT_MILLIS is the timeout for the TCP connection handshake.

The correct way to implement the documented functionality (i.e. an idle/keep-alive timeout) appears to be using Netty's IdleStateHandler.

An example of doing this in a WebServerFactoryCustomizer might look something like this:

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.web.embedded.netty.NettyReactiveWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;

import static java.util.concurrent.TimeUnit.NANOSECONDS;

@Configuration
public class NettyConfig {

    @Bean
    public WebServerFactoryCustomizer<NettyReactiveWebServerFactory> idleTimeoutCustomizer(
            @Value("${server.netty.idle-timeout}") Duration idleTimeout
    ) {
        return factory ->
                factory.addServerCustomizers(server ->
                        server.tcpConfiguration(tcp ->
                                tcp.bootstrap(bootstrap -> bootstrap.childHandler(new ChannelInitializer<Channel>() {
                                    @Override
                                    protected void initChannel(Channel channel) {
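                                                // Reader/writer idle checks are disabled (0); an all-idle
                                                // event fires after idleTimeout with no reads or writes,
                                                // and the channel is then closed exactly once.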
                                        channel.pipeline().addLast(
                                                new IdleStateHandler(0, 0, idleTimeout.toNanos(), NANOSECONDS) {
                                                    private final AtomicBoolean closed = new AtomicBoolean();
                                                    @Override
                                                    protected void channelIdle(ChannelHandlerContext ctx, IdleStateEvent evt) {
                                                        if (closed.compareAndSet(false, true)) {
                                                            ctx.close();
                                                        }
                                                    }
                                                }
                                        );
                                    }
                                }))));
    }

}
spring-projects-issues added the status: waiting-for-triage label Oct 2, 2019
snicoll changed the title from "Bug: Netty server.connection-timeout does not do what is documented" to "Netty server.connection-timeout does not do what is documented" Oct 2, 2019
bclozel self-assigned this Oct 2, 2019
bclozel (Member) commented Oct 2, 2019

It's hard to come up with a property description that fits 100% with the subtle behavior of each server.

For Tomcat, we're setting the connectionTimeout property:

The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented.

Which also sets the keepAliveTimeout property:

The number of milliseconds this Connector will wait for another HTTP request before closing the connection.

For Jetty, we're setting the idleTimeout property:

Sets the maximum Idle time for a connection, which roughly translates to the Socket.setSoTimeout(int) call, although with NIO implementations other mechanisms may be used to implement the timeout.

For Undertow, we're setting NO_REQUEST_TIMEOUT:

The amount of time a connection can sit idle without processing a request, before it is closed by the server.

With Netty, CONNECT_TIMEOUT_MILLIS is indeed about setting up the TCP connection.

As far as I understand, the IdleStateHandler is like configuring a combination of ReadTimeoutHandler and WriteTimeoutHandler. Those are not strictly about connection timeout; they will close the connection if they don't see any traffic on the connection for the configured time. This behavior would fit with an idle-timeout/read-timeout/write-timeout, but not a connection timeout, in my opinion.
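
For reference, here's roughly how those handlers compare when added to a pipeline. An untested sketch (class name and durations are illustrative):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleStateHandler;
import io.netty.handler.timeout.ReadTimeoutHandler;
import io.netty.handler.timeout.WriteTimeoutHandler;

public class TimeoutComparisonInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel channel) {
        // Throws ReadTimeoutException and closes the channel if no inbound
        // data is read for 30 seconds.
        channel.pipeline().addLast(new ReadTimeoutHandler(30));
        // Fails a write operation that does not complete within 30 seconds;
        // it times individual writes rather than watching for inactivity.
        channel.pipeline().addLast(new WriteTimeoutHandler(30));
        // Only fires an IdleStateEvent after 30 seconds with no reads or
        // writes; closing the channel is left to a subclass or a later
        // handler, as in the example above.
        channel.pipeline().addLast(new IdleStateHandler(0, 0, 30));
    }

}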

With the current situation, I don't see how we could improve the behavior or the documentation. Let us know what you think!

bclozel added the status: waiting-for-feedback label Oct 2, 2019
joedj (Author) commented Oct 2, 2019

Yeah, it is tricky, although I think the behaviour that is currently implemented for Jetty is also consistent with IdleStateHandler (it is an idle timeout that applies even during request handling, if no progress is made, i.e. there are no reads/writes):

    /**
     * <p>Sets the maximum Idle time for a connection, which roughly translates to the {@link Socket#setSoTimeout(int)}
     * call, although with NIO implementations other mechanisms may be used to implement the timeout.</p>
     * <p>The max idle time is applied:</p>
     * <ul>
     * <li>When waiting for a new message to be received on a connection</li>
     * <li>When waiting for a new message to be sent on a connection</li>
     * </ul>
     * <p>This value is interpreted as the maximum time between some progress being made on the connection.
     * So if a single byte is read or written, then the timeout is reset.</p>
     *
     * @param idleTimeout the idle timeout
     */

spring-projects-issues added the status: feedback-provided label and removed the status: waiting-for-feedback label Oct 2, 2019
joedj (Author) commented Oct 2, 2019

I don't actually think Netty's CONNECT_TIMEOUT_MILLIS has any effect at all for a listening socket. It seems to be about setting the connection timeout for a connect() operation (i.e. an outbound client call).

The code path in AbstractNioChannel where it is used will never be reachable for an NioServerSocketChannel because the earlier call to doConnect will throw UnsupportedOperationException.

(Edit: confirmed this in a debugger - the method that checks CONNECT_TIMEOUT_MILLIS is never reached for an incoming connection)
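
For comparison, here's a minimal sketch of where CONNECT_TIMEOUT_MILLIS does take effect, i.e. an outbound client Bootstrap (host and port are illustrative):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class ClientConnectTimeout {

    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    // Fails the pending connect() attempt after 5 seconds.
                    .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000)
                    .handler(new ChannelInboundHandlerAdapter());
            // This is the operation the option guards; an NioServerSocketChannel
            // only ever accepts connections, so it never reaches this code path.
            ChannelFuture future = bootstrap.connect("example.org", 80).sync();
            future.channel().close().sync();
        } finally {
            group.shutdownGracefully();
        }
    }

}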

bclozel added the for: team-attention label Oct 2, 2019
bclozel (Member) commented Oct 2, 2019

I agree there's a problem here: we're not really consistent and, worse, not doing the right thing for Netty.

Maybe we could remove server.connection-timeout and have server.idle-timeout instead, since it seems to be more generally supported and actually closer to what we're doing anyway.

Undertow has a dedicated IDLE_TIMEOUT property:

The amount of time a connection can be idle for before it is timed out. An idle connection is a connection that has had no data transfer in the idle timeout period. Note that this is a fairly coarse grained approach, and small values will cause problems for requests with a long processing time

We could keep using Jetty's idleTimeout and use an IdleStateHandler with Netty.

But with Tomcat, we might be stuck with connectionTimeout and keepAliveTimeout, which aren't really 100% in line with an idle-timeout property.
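
In the meantime, the per-server settings can already be applied with customizers. An untested sketch using the servlet factories (bean names and durations are illustrative):

import org.apache.coyote.AbstractProtocol;
import org.eclipse.jetty.server.AbstractConnector;
import org.eclipse.jetty.server.Connector;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ServerTimeoutConfig {

    @Bean
    public WebServerFactoryCustomizer<JettyServletWebServerFactory> jettyIdleTimeout() {
        // Applies Jetty's idleTimeout to every connector.
        return factory -> factory.addServerCustomizers(server -> {
            for (Connector connector : server.getConnectors()) {
                if (connector instanceof AbstractConnector) {
                    ((AbstractConnector) connector).setIdleTimeout(30_000); // ms
                }
            }
        });
    }

    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatConnectionTimeout() {
        // Sets Tomcat's connectionTimeout; keepAliveTimeout defaults to the
        // same value unless it is configured explicitly.
        return factory -> factory.addConnectorCustomizers(connector -> {
            if (connector.getProtocolHandler() instanceof AbstractProtocol) {
                ((AbstractProtocol<?>) connector.getProtocolHandler())
                        .setConnectionTimeout(30_000); // ms
            }
        });
    }

}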

I've marked this issue for discussion with the team.

joedj (Author) commented Oct 2, 2019

Thanks, yeah, I don't know what the best solution is either. The current Tomcat/Undertow NO_REQUEST_TIMEOUT-type functionality does also seem useful. Perhaps server-specific properties are the way to go:

  • we can provide properties that match the behaviours the servers provide
  • we don't have to document the idiosyncrasies of each implementation under a common property
  • if a property for the server doesn't exist, it is clearer that users will have to implement this behaviour themselves

philwebb added the type: bug label and removed the status: feedback-provided and status: waiting-for-triage labels Oct 2, 2019
philwebb added this to the 2.2.x milestone Oct 2, 2019
bclozel changed the title from "Netty server.connection-timeout does not do what is documented" to "Deprecate server.connection-timeout and create server-specific configuration keys" Oct 2, 2019
bclozel (Member) commented Oct 2, 2019

We've discussed this during a team call and we think that, given servers might have subtle behavior differences even for similar configuration keys, we should:

  1. Deprecate server.connection-timeout (and possibly WARN developers if they set a value for it).
  2. Have server-specific configuration keys like server.tomcat.connection-timeout, server.jetty.idle-timeout, etc.
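
For example (exact key names are provisional until the implementation lands):

server.tomcat.connection-timeout=20s
server.jetty.idle-timeout=30s
server.netty.idle-timeout=30s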

Thanks for your input, @joedj!
