
connectTimeout and replyTimeout #105

Closed
wants to merge 10 commits

Conversation


@arnimarj commented Mar 4, 2016

replyTimeout and connectTimeout for txredisapi.

The unit-tests are rather awkward right now. Any critique welcome.

The connectTimeout exists because I'd like to test the case where connectTimeout triggers before replyTimeout.

@@ -1771,6 +1797,8 @@ def sscan(self, key, cursor=0, pattern=None, count=None):
 else:
     RedisProtocol = BaseRedisProtocol
 
+RedisProtocol = BaseRedisProtocol
@IlyaSkriblovsky (Owner)

This line was probably written for debugging purposes and left in by mistake.

@IlyaSkriblovsky (Owner)

Thanks for your contribution!

I've added a couple of minor comments to the code, but then noticed that there is a more fundamental flaw in this approach to timeouts. Please consider the following test case:

# imports needed for this snippet
from twisted.internet import defer, protocol, reactor
from twisted.trial import unittest

import txredisapi as redis


class TestTimeout(unittest.TestCase):
    . . .

    @staticmethod
    def _delay(time):
        d = defer.Deferred()
        reactor.callLater(time, d.callback, None)
        return d

    @defer.inlineCallbacks
    def test_delayed_call(self):
        factory = protocol.ServerFactory()
        factory.buildProtocol = lambda addr: PingMocker(delay=2)
        handler = reactor.listenTCP(8000, factory)

        c = yield redis.Connection(host="localhost", port=8000, reconnect=False, replyTimeout=3)
        try:
            yield self._delay(2)
            pong = yield c.ping()
            self.assertEqual(pong, "PONG")
        finally:
            yield handler.stopListening()
            yield c.disconnect()


class PingMocker(redis.LineReceiver):
    def __init__(self, delay=0):
        self.delay = delay
        self.calls = []

    def lineReceived(self, line):
        # lineReceived delivers bytes, so compare against a bytes literal
        if line == b'PING':
            self.calls.append(
                reactor.callLater(self.delay, self.transport.write, b"+PONG\r\n"))

    def connectionLost(self, reason):
        # cancel any PONG replies still scheduled when the connection closes
        for call in self.calls:
            if call.active():
                call.cancel()
I've added a delay to your PingMocker class to simulate response lag. The key line here is yield self._delay(2): the request is sent 2 seconds after connecting. You call setTimeout in connectionMade, so the timeout fires 3 seconds after the connection is established. When timeoutConnection is called, self.replyQueue.waiting will be True because there is an outstanding request, so it will raise TimeoutError. But at that moment the request has only been waiting for 1 second, so TimeoutError should not be raised yet.

As I wrote in #100, I still think that TimeoutMixin only works well with server protocols and can't be correctly applied to our client-side code. It provides only a single timer, but it seems to me that to implement timeouts consistently in client-side code we need a distinct timer for every request we send. I think the more promising approach might be to implement the timeout in execute_command using callLater to errback the returned Deferred (and cancel that callLater when the response is received from Redis).
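Roughly what I have in mind, as a sketch (send_command and the replyQueue bookkeeping here are simplified stand-ins, not txredisapi's actual internals; replyTimeout is the attribute this PR introduces):

from twisted.internet import defer, error, reactor

class TimedRequestsSketch(object):
    replyTimeout = None   # seconds; set from the factory, as in this PR

    def execute_command(self, *args):
        reply = defer.Deferred()
        self.replyQueue.put(reply)   # simplified pending-reply bookkeeping
        self.send_command(*args)     # stand-in for the actual wire write

        if self.replyTimeout is not None:
            timeout = reactor.callLater(
                self.replyTimeout, reply.errback,
                error.TimeoutError("no reply in %s seconds" % self.replyTimeout))

            def cancel_timeout(result):
                # the reply (or a failure) arrived first: stop the timer
                if timeout.active():
                    timeout.cancel()
                return result

            reply.addBoth(cancel_timeout)

        return reply

This way each request carries its own deadline, measured from the moment it was sent rather than from when the connection was made.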

I might well be wrong, so please feel free to defend your solution!

@arnimarj (Author)

Hi. I intended the code to work exactly as in the unit test you wrote. But I'm having second thoughts about what to call this new feature/behaviour.

Let me first explain the problem we're trying to solve for ourselves.

We're running in two data centers, and sometimes a TCP connection between them is closed on one end without the other end noticing. All requests with pending responses on that connection will then happily block indefinitely. (It may be more appropriate to solve this at a lower level, though.)

I understand that a per-execute-command timeout seems more natural, since you'd want the timeout to be raised no earlier than replyTimeout seconds after the request was issued.

My use case, however, would benefit from letting all pending requests know immediately when we've "given up" on the connection, giving them a chance to re-issue their requests on a fresh connection.

Perhaps this particular timeout could be called "closeConnectionIfTimebetweenResponsesExceeds", or something a bit shorter.

Am I making any sense?

@IlyaSkriblovsky (Owner)

Calling resetTimeout in execute_command would introduce another problem: if we send requests regularly (more often than once per replyTimeout), we would never get a TimeoutError even if the server doesn't respond at all. That is not what you want in the case of a one-sided connection drop between DCs.
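To illustrate the failure mode (a sketch; resetTimeout is TimeoutMixin's real API, the rest is schematic):

from twisted.protocols import policies

class ResetOnEveryRequest(policies.TimeoutMixin):
    def execute_command(self, *args):
        # Flawed: each request pushes the single shared deadline back, so a
        # steady stream of requests means timeoutConnection is never called,
        # even if the server never responds at all.
        self.resetTimeout()
        # ... send the command and return the reply Deferred ...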

Yeah, closeConnectionIfTimebetweenResponsesExceeds is a more exact name for the feature the proposed code implements ;) Even though it would work OK under a high workload, it would break connections for users who send requests rarely. It doesn't look like a general solution...

I've got your point about immediately dropping all pending requests when replyTimeout is hit. This can easily be achieved by calling transport.loseConnection() (or, maybe better, abortConnection()) when any individual request doesn't get a response within replyTimeout seconds. The connectionLost method will then call the errbacks of all pending requests. But I still think the timers should be per-request in order to work correctly in all cases.
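In sketch form (pending_requests and request_timed_out are hypothetical names; abortConnection is Twisted's real ITCPTransport method, available since Twisted 11.1):

class AbortOnTimeoutSketch(object):
    def request_timed_out(self):
        # one timed-out request tears down the whole connection;
        # abortConnection() closes the socket immediately, without
        # waiting for pending writes to flush
        self.transport.abortConnection()

    def connectionLost(self, reason):
        # Twisted passes `reason` as a Failure, so it can be fed straight
        # into the errback of every request still awaiting a reply
        pending, self.pending_requests = self.pending_requests, []
        for d in pending:
            d.errback(reason)

That drops every pending request the moment one of them times out, which matches the "give up on the connection and let callers re-issue on a fresh one" behaviour you described.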

@arnimarj (Author)

I'm gonna continue experimenting on this branch. Feel free to ignore for now.

@arnimarj closed this Mar 11, 2016
@IlyaSkriblovsky (Owner)

Thanks for your efforts, Árni
