
Scale up automatically based on available file descriptors #501

Open
roman-kruglov opened this issue Apr 10, 2022 · 3 comments

Comments

@roman-kruglov
Contributor

roman-kruglov commented Apr 10, 2022

Why don't we (I may try this myself) implement a check of the maximum available number of file descriptors (i.e. possible ports / sockets / connections) on app start, and automatically scale the number of jobs up to something close to that limit?

On my Mac the default maximum number of descriptors was something like 255; I raised it to the maximum allowed value of roughly 10560. I then tried running with --scale 11, which created 8624 jobs instead of the default ~766 (approximate numbers, I don't recall exactly), and that isn't even the limit; I could run with a higher scale factor. I experienced no problems whatsoever, and the reported numbers look much more promising.

We could detect the maximum allowed number, print a suggestion on how to increase it (and/or include it in the docs), and use all available connections instead of some default. Potentially it would increase "productivity" ten-fold.
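A minimal sketch of what that startup check could look like, assuming a Go implementation on a Unix-like system; the function name `suggestedJobs` and the `headroom` parameter are illustrative, not anything from the codebase:

```go
//go:build linux || darwin

package main

import (
	"fmt"
	"syscall"
)

// suggestedJobs reads the process file-descriptor limit (RLIMIT_NOFILE),
// tries to raise the soft limit up to the hard cap, and derives a job count
// that leaves some headroom for logs, DNS lookups, etc.
func suggestedJobs(headroom uint64) (int, error) {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return 0, err
	}

	// Best effort: raise the soft limit; keep the old value if the kernel
	// refuses (on macOS the hard limit may be reported as unlimited while the
	// real per-process cap is lower, so failures here are expected).
	if rl.Cur < rl.Max {
		raised := rl
		raised.Cur = rl.Max
		if syscall.Setrlimit(syscall.RLIMIT_NOFILE, &raised) == nil {
			rl.Cur = raised.Cur
		}
	}

	if rl.Cur <= headroom {
		return 1, nil
	}
	return int(rl.Cur - headroom), nil
}

func main() {
	jobs, err := suggestedJobs(64) // headroom of 64 descriptors, an arbitrary guess
	if err != nil {
		fmt.Println("could not read fd limit:", err)
		return
	}
	fmt.Println("suggested number of jobs:", jobs)
}
```

If raising the soft limit fails, the app could print the detected limit together with a hint to raise it via the OS (e.g. `ulimit -n` on Unix-like systems), as suggested above.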

@roman-kruglov
Contributor Author

Idea based on this comment: #491 (comment)

@arriven
Owner

arriven commented Apr 11, 2022

That would probably be a better fit for #500.

@roman-kruglov
Contributor Author

I meant the app itself assessing available resources and taking up as much as possible. But after a bit of experimentation I see problems there: e.g. it was a piece of cake on my Mac, but connections started breaking much sooner on Windows instances; they couldn't hold even 10 000 connections, even though on Windows the number of file handles (which I believe are also used for connections) is in the millions by default. Maybe it has to do with the default C runtime limits, which are said to be much lower.

And the number of file descriptors could be much higher than what the network or the VPN can actually sustain, which even seems to differ between servers.

Anyway, a corrected idea: assess the maximum number of available file descriptors, start new jobs in batches until problems arise (like the inability to open a connection), then scale down a bit. It sounds more complicated, but I guess it's doable. I'll think it over more thoroughly.
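A rough sketch of that ramp-up loop, again assuming Go; the `rampUp` function, the probe target, and the `startJob`/`stopJob` hooks are illustrative placeholders, not anything from the project:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// rampUp starts jobs in batches of batchSize until either maxJobs is reached
// or a probe connection to target fails, then backs off by one batch and
// returns how many jobs are left running. startJob/stopJob stand in for
// whatever the app uses to spawn and cancel a worker.
func rampUp(target string, maxJobs, batchSize int, startJob, stopJob func()) int {
	running := 0
	for running+batchSize <= maxJobs {
		for i := 0; i < batchSize; i++ {
			startJob()
		}
		running += batchSize

		// Give the new jobs a moment to open their connections, then check
		// whether we can still open one more.
		time.Sleep(2 * time.Second)
		conn, err := net.DialTimeout("tcp", target, 3*time.Second)
		if err != nil {
			// Connections are starting to fail: scale back one batch.
			for i := 0; i < batchSize; i++ {
				stopJob()
			}
			running -= batchSize
			break
		}
		conn.Close()
	}
	return running
}

func main() {
	// Dummy hooks for illustration; a real run would start actual jobs.
	jobs := rampUp("example.com:443", 10000, 500,
		func() { /* start a job */ },
		func() { /* stop a job */ })
	fmt.Println("settled on", jobs, "jobs")
}
```

The batch size and probe interval would need tuning; the point is only that the limit is discovered empirically rather than trusted from the fd count alone.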
