Custom timeout per request #165

Closed
delvedor opened this issue May 19, 2020 · 18 comments
Labels
enhancement New feature or request

Comments

@delvedor
Member

delvedor commented May 19, 2020

At the moment we can configure a global timeout, but not a custom timeout per request, which would be quite handy.
Is there a reason for this?

 const { Client } = require('undici')
 const client = new Client(`http://localhost:3000`)

 client.request({
   path: '/',
   method: 'GET',
+  timeout: 10000
 },  function (err, data) {
   //  handle response
 })

Related: #160

@ronag
Member

ronag commented May 19, 2020

I'm not quite sure how that would work. The current timeout is on the socket.

So you could never have a per-request timeout that is larger than the socket timeout. Allowing the socket timeout to be changed per request makes things complicated, especially with pipelining.

I guess we could add it per request as long as it's less than the socket timeout. But it would incur a performance penalty when used.

Also, does it apply to the request body (i.e. read) or just socket activity (i.e. write/read)? What about when the request is delayed by other requests ahead in the pipeline?

Is there a reason for this?

Because it's complicated with pipelining 😄
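
For illustration, a minimal sketch using Node's plain net API (not undici internals) of why a socket-level timeout is per-connection rather than per-request: it is an inactivity timer on the connection itself, so every request pipelined on that socket shares it.

const net = require('net')

const socket = net.connect(3000, 'localhost')
socket.setTimeout(30000, () => {
  // Fires after 30 s without any read/write activity on the socket,
  // regardless of which pipelined request is currently in flight.
  socket.destroy(new Error('socket inactivity timeout'))
})
socket.on('error', (err) => console.log(err.message))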

@mcollina
Member

I think we are talking about different things. The timeout on the socket is an activity timeout, e.g. the timeout between two chunks of data. I think @delvedor is asking about a request timeout.

I guess we could add it per request as long as it's less than the socket timeout.

It can be bigger than the socket timeout, as a single request is composed of multiple chunks. Essentially it's a matter of creating a setTimeout with that value and calling the callback if it triggers earlier.

But it would incur a performance penalty when used.

I agree this will add overhead when used. However, this is a common feature which would have similar overhead both here and externally, so if it does not cost us any overhead when it is disabled, I would be ok with adding it.
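
A minimal sketch of that idea, assuming the callback-style client.request shown at the top of this issue; the wrapper name requestWithTimeout and the error message are illustrative, not undici API.

function requestWithTimeout (client, opts, timeoutMs, callback) {
  let done = false

  const timer = setTimeout(() => {
    if (done) return
    done = true
    // The request has not completed in time: call the callback early.
    callback(new Error('request timed out'))
  }, timeoutMs)

  client.request(opts, (err, data) => {
    if (done) return
    done = true
    clearTimeout(timer)
    callback(err, data)
  })
}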

@ronag
Member

ronag commented May 19, 2020

Ah, so it's a timeout for the entire request, regardless of activity. @delvedor, is this what you mean? Also, please note that this is different from e.g. the timeout on core http.request, which is based on socket activity.

@delvedor
Member Author

I think @delvedor is asking about a request timeout.

Correct :)
I would expect it to work in the same way as core's http.request.

@ronag
Member

ronag commented May 19, 2020

Correct :)

I would expect it to work in the same way as core's http.request.

That's contradictory: the timeout argument for http.request is not the same thing as a request timeout.

There are two things here (both sketched below):

  • Activity timeout. Complex to implement due to pipelining.
  • Completion timeout. Simple to implement.
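
For illustration, a hedged sketch of the distinction, using a generic readable stream to stand in for a response body (not undici's API); the two timers show the two alternative strategies side by side.

const { PassThrough } = require('stream')

// Pretend `body` is the response body stream of some request.
const body = new PassThrough()

function onTimeout () {
  body.destroy(new Error('timeout'))
}

// Completion timeout: one timer covering the whole request, cleared only
// when the response has fully ended.
const completionTimer = setTimeout(onTimeout, 1000)
body.on('end', () => clearTimeout(completionTimer))

// Activity timeout (alternative strategy): the timer restarts on every
// chunk, so a slow but steady response never times out.
const activityTimer = setTimeout(onTimeout, 1000)
body.on('data', () => activityTimer.refresh())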

@delvedor
Member Author

Essentially I would like to achieve something like this:

'use strict'

const http = require('http')
const server = http.createServer(handler)

server.listen(0, () => {
  const request = http.request(
    `http://localhost:${server.address().port}`,
    { timeout: 500 }
  )

  request.on('response', response => {
    console.log(response.statusCode)
  })

  request.on('timeout', () => {
    request.abort()
    console.log('timeout')
  })

  request.on('abort', () => {
    console.log('abort')
  })

  request.on('error', err => {
    console.log(err)
  })

  request.end()
})

function handler (req, res) {
  setTimeout(() => {
    res.end('ok')
  }, 1000)
}

The log is:

timeout
abort
Error: socket hang up
    at connResetException (internal/errors.js:604:14)
    at Socket.socketCloseListener (_http_client.js:400:25)
    at Socket.emit (events.js:323:22)
    at TCP.<anonymous> (net.js:668:12) {
  code: 'ECONNRESET'
}

The error is caused by request.abort.

@ronag
Member

ronag commented May 19, 2020

That's not enough to determine how we implement this. Notice that undici is fundamentally different due to pipelining so that example cannot be directly applied. So how would you expect that timeout to work?

e.g. you might have requests that are being served ahead in the pipeline. Should the waiting time for other responses ahead in the pipeline count towards the timeout or not?

client.request({ ...opts }, (err) => {
  // This is a large request downloading lots of data.
})

client.request({ ...opts, timeout: 100 }, (err) => {
  // Should this time out if there is no activity on this specific request,
  // since it's waiting for the previous large request?
  // Or if there is no activity on the socket?
  // Or do we start the timeout from the moment this request is at the head
  // of the pipeline and then measure the socket activity?
})

@delvedor
Member Author

That's a good question. As an external user, I don't really care about why the timeout has been reached. I would expect that if I set a timeout of 1000 milliseconds, then once the timeout has passed I get an error back, even if the request has never been sent.
In that case, I know that I would need to increase the number of connections.
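
For context, a sketch of that knob, assuming undici's Pool options (connections and pipelining); the values are illustrative.

const { Pool } = require('undici')

// More parallel connections means less time spent queued behind slow
// requests, so a completion timeout is less likely to fire while waiting.
const pool = new Pool('http://localhost:3000', {
  connections: 10,
  pipelining: 1
})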

@ronag
Member

ronag commented May 19, 2020

In that case, I know that I would need to increase the number of connections.

So your expectation would be that if the request has not completed within the timeout, counted from when the request was created, it should time out.

So essentially it's the same as:

const { finished } = require('stream')

const timeout = setTimeout(() => {
  throw new Error('timeout')
}, timeoutMs) // timeoutMs: the desired per-request timeout

const { body } = await client.request({ ...opts })
finished(body, () => {
  clearTimeout(timeout)
})

Note that this is different from what http.request does.

@delvedor
Member Author

So your expectation would be that if the request has not completed within the timeout, counted from when the request was created, it should time out.

I think it is what a user without the knowledge of the internals would expect.
I don't recall how http.request is handling this, I'll take a look at node internals :)

@ronag
Member

ronag commented May 19, 2020

I think it is what a user without the knowledge of the internals would expect.

I think that fits @mcollina's suggestion and should be rather simple to implement.

I don't recall how http.request is handling this, I'll take a look at node internals :)

It just invokes socket.setTimeout once it has received a socket.

My only hesitation here is that it works differently from http.request, which might surprise some users (it would have surprised me). Though I guess we can explicitly document it.
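
A sketch of the core behaviour being referenced, using request.setTimeout (which arms socket.setTimeout once a socket has been assigned); note that nothing is aborted automatically, the caller has to react to the idle callback.

const http = require('http')

const req = http.request('http://localhost:3000')
req.setTimeout(500, () => {
  // The socket was idle for 500 ms; it is up to the caller to abort.
  req.destroy(new Error('socket inactivity timeout'))
})
req.on('error', (err) => console.log(err.message))
req.end()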

@delvedor
Member Author

It just invokes socket.setTimeout once it has received a socket.

Oooh, now I understand what you mean. We can't use the same approach here, because if we are using pipelining > 1 there might be more than one request per socket, making the timeout unreliable.

What do you think about optimizing this case: using socket.setTimeout when pipelining = 1 and a setTimeout when pipelining > 1?

@mcollina
Member

socket.setTimeout just uses setTimeout internally, so there is no point in optimizing it.

@delvedor
Member Author

delvedor commented May 19, 2020

Ok! So if we compare Node.js core http.request and undici, we can summarize it like this:

Node.js core               Undici
Agent keep alive timeout   undici global timeout
http.request timeout       undici internal request setTimeout

Did I get this right?

@ronag
Member

ronag commented May 19, 2020

Did I get this right?

I don't really think they can be compared in a meaningful way. It's tricky. 😕

@ronag
Member

ronag commented May 19, 2020

Suggestion on what we can do:

  • Before we have received response headers, the timeout is relative to when the request was enqueued.
  • After we have received response headers, the timeout is relative to the most recent response chunk.

Which is kind of a hybrid, but it makes sense to me at least (sketched below).
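
A hedged sketch of that hybrid rule; the event names ('headers', 'data', 'complete') are hypothetical stand-ins for whatever hooks the request object exposes, not undici's actual API.

function watchRequest (request, timeoutMs, onTimeout) {
  // Armed when the request is enqueued: covers queueing, pipelining delay
  // and waiting for response headers.
  const timer = setTimeout(onTimeout, timeoutMs)

  // Once headers have arrived, the timeout becomes an activity timeout
  // relative to the most recent response chunk.
  request.on('headers', () => timer.refresh())
  request.on('data', () => timer.refresh())
  request.on('complete', () => clearTimeout(timer))
}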

@mcollina
Member

I would simply do

Before we have received response headers, the timeout is relative to when the request was enqueued.

So we can offer a strong guarantee to our users.

ronag added a commit that referenced this issue May 20, 2020
ronag added the enhancement New feature or request label May 20, 2020
@ronag
Member

ronag commented May 24, 2020

This has been added
