Optimizing throughput to external calls #2439
Unanswered
LucCADORET asked this question in Q&A
Replies: 1 comment
It's very hard to provide an answer here without knowing the response time of the server, so I would recommend checking that first. Assuming the server is operating normally, those spikes can have two causes:
One way to solve this is to add connection capping. Usually, it's better to wait for a socket to become free than to have those spikes. You can also check out our …
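The capping idea suggested here can be sketched without any undici specifics: limit the number of in-flight requests and make excess calls wait for a free slot. `Semaphore` and `withLimit` below are illustrative helpers written for this sketch, not undici APIs.

```javascript
// Sketch: cap in-flight requests with a simple semaphore so excess
// calls wait for a free slot instead of all firing at once.
// `Semaphore` and `withLimit` are illustrative helpers, not undici APIs.
class Semaphore {
  constructor(max) {
    this.max = max;    // maximum concurrent holders
    this.active = 0;   // currently held slots
    this.queue = [];   // resolvers for waiting acquirers
  }
  acquire() {
    if (this.active < this.max) {
      this.active++;
      return Promise.resolve();
    }
    // No free slot: wait until release() hands one over.
    return new Promise((resolve) => this.queue.push(resolve));
  }
  release() {
    const next = this.queue.shift();
    if (next) next(); // hand the slot directly to the next waiter
    else this.active--;
  }
}

// Run `task` once a slot is free, always releasing the slot afterwards.
async function withLimit(sem, task) {
  await sem.acquire();
  try {
    return await task();
  } finally {
    sem.release();
  }
}
```

With a cap of, say, 10 per origin, a burst of 250 requests queues briefly instead of opening 250 sockets at once, which is usually cheaper than the connection-setup spikes described above.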
-
Hi,
I am currently working on a project where I need to connect to about 10 different APIs, and perform various requests very frequently.
Here's an excerpt of the number of requests I'm making, over a sampled 4-minute window:

[Figure: req count — request counts over the sampled 4 minutes]
However, I noticed that when a lot of requests fire at the same time, some start to take much longer.
Here's a plot of the calls to one of the many origins I am calling, over the same sampled window (Y axis is duration in seconds):

[Figure: duration plot — per-request durations for one origin]
How can I fix this request lag? Currently, I'm trying to isolate the APIs by creating one Pool per origin, but it does not seem to help.
The number of concurrent requests is also not that big: 250 per second is nothing compared to undici's benchmark of 10,000 req/s with 50 connections. My pools don't have a capped connection count and are basically constructed as such:

```javascript
this.pools[origin] = new Pool(origin);
```
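Given the reply in this thread, one thing worth trying is an explicit connection cap on each pool. A sketch assuming undici's documented `Pool` options `connections` and `pipelining`; the cap of 10 is an illustrative starting point, not a tuned value:

```javascript
const { Pool } = require('undici');

// One pool per origin, lazily created, with a capped socket count so
// bursts queue on a free connection rather than opening new sockets.
const pools = new Map();

function getPool(origin) {
  let pool = pools.get(origin);
  if (!pool) {
    pool = new Pool(origin, {
      connections: 10, // max sockets to this origin (illustrative value)
      pipelining: 1,   // one in-flight request per socket (the default)
    });
    pools.set(origin, pool);
  }
  return pool;
}
```

If latency still spikes with a cap in place, the queueing is likely happening elsewhere (DNS, TLS handshakes, the event loop, or the remote server), which would support the suspicion below that the cause lies outside undici.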
I'm starting to think it could come from somewhere other than Undici.
I'd be glad to hear any ideas or advice. Thank you.