Possibly use rayon for the worker threads #417
I'm not sure I understand; why does this matter? Rocket isn't continuously spawning threads; it simply starts a fixed set of worker threads up front.
A fixed thread pool only works well when the requests are CPU-bound. If you throw I/O-bound requests into the mix, they will tie up the pool and starve other requests. This is made worse by not having async support.
When you make a blocking I/O call, the OS deschedules the responsible thread and marks it as waiting. None of the other threads in the pool are blocked as a result of this. As long as there are threads available to handle requests, requests will be handled. If you're concerned about every thread being descheduled by the OS as a result of an I/O call, then this is a valid concern, but it has nothing to do with sharing or not sharing a thread-pool.
This. If the application is doing I/O, it's probably doing I/O on most of the requests it gets, while the thread pool is fixed and pretty small. |
What does that have to do with sharing or not sharing a threadpool? This is simply an artifact of synchronous I/O, and it will continue to occur regardless of which threadpool we use. |
"Sufficiently smart" thread pools will ramp up the number of threads (up to a limit, of course) to absorb slower tasks. There is the workaround of starting Rocket with 200 workers or something like that, but it would be better to have something that can scale up when needed.
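For reference, the 200-worker workaround mentioned above can be expressed through Rocket's standard `workers` configuration key in `Rocket.toml` (the value here is illustrative, not a recommendation):

```toml
# Rocket.toml — oversize the fixed pool so slow, I/O-bound
# requests are less likely to starve other requests.
[global]
workers = 200
```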
That's not how Rocket uses thread-pools, nor how thread-pools are usually used outside of the context of a task-based system like Rayon. Rocket (Hyper, really) starts up a fixed number of threads at launch. If we did have a "smart" thread-pool, we could change the way Rocket and Hyper work by "spawning" a new thread every time we want to try to handle a connection. That would be a fine route to take if we were dead-set on sticking to a synchronous server. But we'll definitely be migrating Rocket to an asynchronous solution (see #17), so I don't see any benefit in trying to be overly clever to solve a problem that will inevitably be solved later.
I agree that it's not worth it if there are plans to migrate to async soon. A better thread pool might still be useful afterwards.
This is not about resources. I guess what I'm actually asking for is access to the thread pool in order to share the threads between ws and rocket. Rayon felt like the natural choice (because it has the ability to share a thread pool between all crates), but I don't really care about the underlying technology.
@nicokoch depending on the application and the thread pool implementation, you might not want to share the same one for different libraries. In the case of Rocket, like @SergioBenitez mentioned, the thread pool comes almost for free and might change when switching to async Hyper, so it's probably not worth the hassle at this point. |
Ah! Perhaps you want Rocket to handle websockets directly, @nicokoch? If so, it will! But not yet; #90 is tracking that. In terms of other kinds of thread-pool sharing, it's not something I see Rocket doing. The benefit simply isn't there. For that reason, I'm closing this issue as declined. Thank you for opening this up, however! It's appreciated. :) |
Having Rocket handle websockets directly would indeed be even better. Thanks for pointing me to the issue :)
I am developing a web app based on rocket and ws-rs (websockets).
It would be helpful if rocket provided the worker threads based on rayon, so that the websockets and web framework can share the same threadpool.
Rayon has the big advantage that every (dependency) crate shares the same thread pool, avoiding the situation where each crate spawns its own threads that end up doing close to nothing.
Maybe it's possible to create a feature for this?