
Add this_thread_executors #1669

Merged
hkaiser merged 4 commits into master from this_thread_executors on Jul 26, 2015

Conversation

hkaiser (Member) commented Jul 18, 2015

  • also add executor_traits<>::has_pending_closures()
  • refactor the interface of scheduling_loop
  • exit the scheduling loop without delay if called from an inner scheduler (fixes a performance regression)
  • add missing #includes

dcbdan (Contributor) commented Jul 19, 2015

What is the rationale behind has_pending_closures?

hkaiser (Member, Author) commented Jul 20, 2015

What is the rationale behind has_pending_closures?

We essentially have two types of executors: stateless and stateful ones (I'm not sure whether these terms are appropriate, though). Stateless executors run the tasks directly, while stateful executors employ the services of more complex scheduling engines to make sure the tasks are run. has_pending_closures() is most useful for stateful executors, as it allows detecting whether there are still tasks running, i.e. whether it is safe to destroy the executor instance.
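
To illustrate what this enables, a rough sketch only; the exact namespace of executor_traits, the trait's signature, and the helper name here are assumptions, not taken from this PR:

// Sketch: wait until a (stateful) executor has no more closures pending,
// i.e. until it is safe to destroy it. has_pending_closures() is assumed
// to take the executor and return bool, as described above.
template <typename Executor>
void wait_until_idle(Executor& exec)
{
    while (hpx::parallel::executor_traits<Executor>::has_pending_closures(exec))
        hpx::this_thread::yield();   // suspend the current HPX thread and retry
}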

template <typename Executor>
static auto call(wrap_int, Executor& exec) -> bool
{
    return hpx::get_os_thread_count();
}

Member (review comment):

Are the pending closures really the number of OS threads? Wouldn't this rather return false for stateless executors?

Member Author (reply):

Good catch, that's a bug (copy&paste problem).
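
Presumably the fallback should instead read along these lines (a sketch based on the review comment above, not the actual fix in the branch):

// Default fallback: a stateless executor runs its tasks directly,
// so nothing is ever left pending.
template <typename Executor>
static auto call(wrap_int, Executor& exec) -> bool
{
    return false;    // instead of hpx::get_os_thread_count()
}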

sithhell (Member) commented:

Is the purpose of this executor to confine all scheduled work on the current OS thread? If yes, wouldn't a simple, single threaded scheduler be enough?

/// \param min_punits [in] The minimum number of processing units to
/// associate with the newly created executor
/// (default: 1).
///

Member (review comment):

This looks like a copy and paste error.

Member Author (reply):

Right, I forgot to update the comments.

hkaiser (Member, Author) commented Jul 21, 2015

Is the purpose of this executor to confine all scheduled work on the current OS thread? If yes, wouldn't a simple, single threaded scheduler be enough?

Well, this is a single-threaded scheduler guaranteeing that all work is executed on the same thread it was initially started on. I needed this for HPXrx (https://github.com/STEllAR-GROUP/hpxrx); I think I could move it completely over to that project, so it does not have to go into the main HPX repo. OTOH, it might be useful in other contexts as well.
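
A rough usage sketch (the executor type name and the executor_traits namespace below are placeholders, not the actual names added by this PR):

// Sketch: 'this_thread_executor' stands in for whichever executor type
// this PR adds; the executor_traits interface is assumed.
this_thread_executor exec;

// Every closure scheduled through 'exec' runs on the OS thread that is
// currently executing, never on another worker thread.
hpx::parallel::executor_traits<this_thread_executor>::execute(
    exec, []() { /* confined to the current thread */ });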

sithhell (Member) commented:

On 07/21/2015 03:33 PM, Hartmut Kaiser wrote:

Is the purpose of this executor to confine all scheduled work on the current OS thread? If yes, wouldn't a simple, single threaded scheduler be enough?

Well, this is a single-threaded scheduler guaranteeing that all work is executed on the same thread it was initially started on. I needed this for HPXrx (https://github.com/STEllAR-GROUP/hpxrx); I think I could move it completely over to that project, so it does not have to go into the main HPX repo. OTOH, it might be useful in other contexts as well.

I am not saying it shouldn't go into the main HPX repo, it could indeed be useful. What I am trying to get at is that it seems like overkill for a scheduler that is only ever used on a single core to include all the atomics, lock-free algorithms, locks, etc.

If I understand correctly, you never enqueue new work from more than one thread and never steal any work, so this might be a valuable optimization which could come in the future.

With that being said, do you think we absolutely need multiple schedulers for this executor?

hkaiser (Member, Author) commented Jul 21, 2015

If I understand correctly, you never enqueue new work from more than one thread and never steal any work so this might be a valuable optimization which could come in the future.

Right, it's not done anywhere currently, but should be possible to do.

With that being said, do you think we absolutely need multiple schedulers for this executor?

Do you mean the static_priority and local_queue variants I added? The static_priority one does not do work stealing but enables priorities, while the local_queue one does not expose priorities. We don't have a static local scheduler at this point; otherwise I would have used it instead.

hkaiser (Member, Author) commented Jul 22, 2015

All comments have been addressed. I also added a new static scheduler (no priorities and no work stealing) to reduce the overheads for this use case.

hkaiser force-pushed the this_thread_executors branch 3 times, most recently from c8dce65 to ca06038 (July 22, 2015 20:46)
hkaiser added a commit that referenced this pull request Jul 26, 2015
hkaiser merged commit 19ee001 into master on Jul 26, 2015
hkaiser deleted the this_thread_executors branch on July 26, 2015 16:49
hkaiser added a commit that referenced this pull request Jul 26, 2015
…tors"

This reverts commit 19ee001, reversing
changes made to f03b321.
hkaiser restored the this_thread_executors branch on July 26, 2015 18:47