Undici performance vs axios #2622

Closed
Gigioliva opened this issue Jan 16, 2024 · 12 comments
Labels
bug Something isn't working

Comments

@Gigioliva


I would like to migrate my client from axios to undici.
However, on some occasions undici is slower than axios. In the example in the screenshot, an API call with axios has ~5 ms latency (P90). When I use undici instead, the latency increases to ~40 ms.

Analyzing the program with Google Cloud Profiler, I noticed that most of the time is spent creating the TLS connection, even though I use a Pool. I would expect that after an initial warm-up phase, connection setup would no longer be a significant cost.

I also tried the connections and pipelining parameters of the Pool object (see the sketch below), but with no improvement 😞
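
For reference, a minimal sketch of what I tried (connections and pipelining are documented Pool options; the values below are illustrative, not our exact configuration):

import { Pool } from "undici";

// Illustrative values only; neither setting improved the latency.
const client = new Pool("https://my.url.com", {
  connections: 10, // maximum number of sockets the pool may open
  pipelining: 1,   // max outstanding requests per socket (1 = no pipelining)
});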

Can you help me understand how to optimize these calls?

Reproducible By

Providing the full code is complex. At a high level:

import { Pool } from "undici";
import { Writable } from "node:stream";

const client = new Pool("https://my.url.com");

// ....

// Request payload prepared elsewhere in the class (elided).
const body: Buffer = this.input.buffer;

const options = {
  method: "POST",
  body: body,
  path: "/path",
  headers: { ...customHeaders }, // customHeaders built elsewhere (elided)
  throwOnError: true,
};

// Collect the response body chunks in memory via the stream API.
const buffer: Uint8Array[] = [];
client
  .stream(options, () => {
    return new Writable({
      write(chunk, _, callback) {
        buffer.push(chunk as Uint8Array);
        callback();
      },
    });
  })
  .then(() => {
    // handle response
  })
  .catch((err) => {
    // handle errors
  });

Expected Behavior

Latency with undici is roughly the same as it was with axios.

Logs & Screenshots

[Screenshots attached: "latency", "undici"]

Environment

  • OS: base docker image node:20.9-bookworm-slim
  • Node: 20.9.0
  • Undici: 6.3.0
  • Axios: 1.6.2
@Gigioliva added the bug label on Jan 16, 2024
@mcollina
Member

Have you got something to replicate this setup? How many HTTP calls are you making?

@Gigioliva
Author

Have you got something to replicate this setup?

Nope. I was not able to create a deterministic environment locally. I will try again

How many HTTP calls are you making?

We average 25–30 req/s.

@mcollina
Member

Nope. I was not able to create a deterministic environment locally. I will try again

No need, I tracked it down.


What is happening is that undici is establishing a new connection for every POST, so we pay the TLS handshake cost each time. This behavior is correct but overly strict, since in certain cases the connection could be kept open.

This was introduced by @ronag in #76, and it made things too strict.

@ronag wdyt?

@ronag
Member

ronag commented Jan 17, 2024

Yes, the implementation is too strict, though loosening it will require some thought to make sure we don't introduce bugs.

@ronag
Member

ronag commented Jan 17, 2024

I think the simplest way is to make pipelining equal to 1 a special case.

@Gigioliva
Author

I think the simplest way is to make pipelining equal to 1 a special case.

What about adding a new parameter to the pool configuration to let the user choose this behavior?

@ronag
Member

ronag commented Jan 17, 2024

@mcollina Actually, I'm not sure you're right. I don't see why it wouldn't reuse the connection once the whole request/response cycle has completed.

@Gigioliva
Author

Application headers:

  "Content-Type": "application/x-thrift",
  "Accept": "application/x-thrift",
  "Authorization": internalAuthToken,
  "User-Agent": consumerKey,

@mcollina
Member

mcollina commented Jan 17, 2024

@ronag, you are right, the socket is correctly reused:

import { createServer } from 'node:http';
import { request } from './index.js'; // undici, from the local checkout
import { setTimeout as sleep } from 'node:timers/promises';
import { strictEqual } from 'node:assert';

// Remember the socket of the previous request so we can tell whether
// the next request arrives on the same (reused) socket.
let lastSocket = null;
const server = createServer((req, res) => {
  console.log('request ', req.url, req.method);
  if (lastSocket) {
    res.end(JSON.stringify({ same: lastSocket === req.socket }));
  } else {
    res.end(JSON.stringify({ noSocket: true }));
  }
  lastSocket = req.socket;
});

await new Promise((resolve) => server.listen(3000, resolve));

{
  const { body } = await request('http://localhost:3000', {
    method: 'POST',
    body: JSON.stringify({ hello: 'world' })
  });
  console.log(await body.json());
}

// Wait well within the default keep-alive window before the second request.
await sleep(1000);

{
  const { body } = await request('http://localhost:3000', {
    method: 'POST',
    body: JSON.stringify({ hello: 'world' })
  });
  const { same } = await body.json();
  strictEqual(same, true);
  console.log('same socket');
}

server.close();

So I guess that whatever combination of inputs you are using @Gigioliva is causing undici to create a new socket every time.

Can you create a small reproduction of what you are doing?

@ronag
Member

ronag commented Jan 17, 2024

So I guess that whatever combination of inputs

and server...

@Gigioliva
Author

Can you create a small reproduction of what you are doing?

I will try 😅 The system is quite complex to reproduce... I will start from a simple setup and add steps until the problem also occurs locally.

@Gigioliva
Author

Hi @mcollina

Thank you for your time. After some investigation, we found the problem. The default value of keepAliveTimeout is 4 seconds. As our application scaled out to more pods, requests were spread across so many instances that some connections stayed idle for more than 4 seconds (and were therefore closed). Increasing the timeout to 60 seconds significantly improved performance; a sketch of the change is below.
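
For anyone hitting the same thing, this is roughly the change we made (keepAliveTimeout is a documented undici option and takes milliseconds; the URL and the 60-second value reflect our setup):

import { Pool } from "undici";

// Keep idle sockets open for 60 s instead of the 4 s default, so
// connections survive gaps between requests and TLS is not renegotiated.
const client = new Pool("https://my.url.com", {
  keepAliveTimeout: 60_000, // milliseconds
});

Note that undici also respects the server's Keep-Alive timeout hint, so the effective idle timeout may still be lower than the configured value.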

Sorry for the issue. I think we can close it 🙏🏼

@ronag closed this as completed on Jan 22, 2024