Add this_thread_executors #1669
Conversation
hkaiser
commented
Jul 18, 2015
- also add executor_traits<>::has_pending_closures()
- refactoring interface of scheduling_loop
- exit scheduling loop without delay if called from inner scheduler (fixes performance regression)
- add missing #includes
What is the rationale behind has_pending_closures?
We essentially have two types of executors: stateless and stateful (I'm not sure these terms are appropriate, though). Stateless executors run the tasks directly, while stateful executors employ the services of more complex scheduling engines to make sure the tasks are run. The snippet in question:
template <typename Executor>
static auto call(wrap_int, Executor& exec) -> bool
{
    return hpx::get_os_thread_count();
Are the pending closures really the number of OS threads? Wouldn't this rather return false for stateless executors?
Good catch, that's a bug (copy&paste problem).
Is the purpose of this executor to confine all scheduled work to the current OS thread? If yes, wouldn't a simple, single-threaded scheduler be enough?
/// \param min_punits [in] The minimum number of processing units to
///                   associate with the newly created executor
///                   (default: 1).
///
This looks like a copy and paste error.
Right, forgot to update the comments
Well, this is a single-threaded scheduler guaranteeing that all work runs on the same thread on which it initially started executing. I needed this for HPXrx (https://github.com/STEllAR-GROUP/hpxrx); I think I can move it over to that project completely, it does not have to go into the main HPX repo. OTOH, it might be useful in other contexts as well.
On 07/21/2015 03:33 PM, Hartmut Kaiser wrote:
I am not saying it shouldn't go into the main HPX repo, it could indeed
Right, it's not done anywhere currently, but should be possible to do.
Do you mean the static_priority and local_queue variants I added? The static_priority one does not do work stealing but enables priorities, while the local one does not expose priorities. We don't have a static local scheduler at this point, otherwise I would have used it instead.
All comments have been addressed. I also added a new static scheduler (no priorities and no work stealing) to reduce the overheads for this use case.
Force-pushed from c8dce65 to ca06038
Force-pushed from ca06038 to 68f2ede