
Inherited task priorities break certain DAG optimizations #2747

Closed
biddisco opened this issue Jul 7, 2017 · 3 comments · Fixed by #2752

Comments

biddisco (Contributor) commented Jul 7, 2017

Debugging performance problems in our matrix code has revealed that the following check

    // the newly created task inherits thread_priority_critical from the
    // currently running (parent) task
    if (thread_priority_critical == threads::get_self_id()->get_priority())
        data.priority = thread_priority_critical;

effectively makes it impossible to create complex DAGs that spawn sub-tasks from parent tasks when only some of the child tasks are meant to run at lower priority.
The effect of this check is to make every child task high priority (HP). In our matrix solver the first row/col block on each diagonal is high priority and the first row/col is handled first, so all subsequent tasks are (transitive) children of it and, in turn, all become HP.
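
To make the effect concrete, here is a small standalone sketch of the behaviour (plain C++, not HPX code; parent_priority, thread_init_data and make_child_task are made-up names for illustration). A child that asks for the default priority still comes out critical whenever its parent is critical:

    #include <cstdio>

    enum thread_priority
    {
        thread_priority_default,
        thread_priority_normal,
        thread_priority_critical
    };

    struct thread_init_data
    {
        thread_priority priority = thread_priority_default;
    };

    // stands in for threads::get_self_id()->get_priority() of the running task
    thread_priority parent_priority = thread_priority_critical;

    thread_init_data make_child_task()
    {
        thread_init_data data;   // the child asked for the default priority ...
        if (thread_priority_critical == parent_priority)
            data.priority = thread_priority_critical;   // ... but is promoted anyway
        return data;
    }

    int main()
    {
        thread_init_data child = make_child_task();
        std::printf("child priority = %d (critical = %d)\n",
            static_cast<int>(child.priority),
            static_cast<int>(thread_priority_critical));
        // every descendant of a critical task becomes critical in the same way,
        // so a DAG whose root task is critical runs entirely on the HP queue
        return 0;
    }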

A few possible solutions spring to mind:

  1. Remove the check so that child tasks are not promoted.
  2. Add a new priority between thread_priority_normal and thread_priority_critical, say thread_priority_high, which would give the current task high priority but not its child tasks.
  3. Rename the existing priority to thread_priority_high_recursive to make it clearer that it is passed on to child tasks.

I suspect that I should have been using thread_priority_boost instead of thread_priority_critical right from the start, but I had not appreciated the distinction between them regarding child tasks. Either way, some renaming of the priorities might make this clearer.

biddisco (Contributor, Author) commented Jul 7, 2017

In summary, I would like the page https://stellar-group.github.io/hpx/docs/html/hpx/threads/thread_priority.html to look a bit more like this:

This enumeration lists all possible thread-priorities for HPX threads.

thread_priority_low
Task goes onto a special low-priority queue and will not be executed until all high- and normal-priority tasks are done, even if those are added after the low-priority task.

thread_priority_default
Assigns the task the default (normal) priority.

thread_priority_normal
Task will be executed when it is taken from the normal-priority queue; this is usually a first-in, first-out ordering of tasks (depending on the scheduler). This is the default priority.

thread_priority_boost
Same as thread_priority_high except that the thread will fall back to thread_priority_normal if suspended and resumed.

thread_priority_high
Task goes onto a special high-priority queue and will be executed before normal- and low-priority tasks are taken (some schedulers modify this behaviour slightly; consult their documentation).

thread_priority_high_recursive
The task is a high-priority task and any child tasks it spawns will be made high priority as well, unless they are explicitly given a non-default priority.
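
For reference, if the list above were adopted, the enumeration might look roughly like the sketch below. This is only a proposal: the names follow the list above, and the ordering/values are illustrative rather than taken from any HPX header.

    enum thread_priority
    {
        thread_priority_default,         // resolve to the scheduler's default (normal)
        thread_priority_low,             // run only after all normal/high-priority work
        thread_priority_normal,          // normal FIFO queue; the default
        thread_priority_boost,           // high until first suspend, then back to normal
        thread_priority_high,            // high priority, not inherited by child tasks
        thread_priority_high_recursive   // high priority, inherited by child tasks
    };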

hkaiser (Member) commented Jul 7, 2017

Where does thread_priority_boost fit into your new list of priorities? This priority gives a thread high priority on its first invocation; however, it falls back to normal priority if suspended and resumed.

Would you agree to add the following:

thread_priority_boost
Same as thread_priority_high except that the thread will fall back to thread_priority_normal if suspended and resumed.

?

biddisco (Contributor, Author) commented Jul 7, 2017

Yes, sorry. I had misunderstood the purpose of thread_priority_boost; it should certainly be included. I've updated the list above.
