
Excessive overhead: loop_idx_n calls cancellation_token::was_cancelled on every iteration #2661

Open
taeguk opened this Issue May 30, 2017 · 4 comments

taeguk commented May 30, 2017

https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/util/loop.hpp#L485
https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/util/cancellation_token.hpp#L41-L44

Currently, loop_idx_n calls cancellation_token::was_cancelled on every iteration. Checking for cancellation that often seems excessive: if cancellation occurs rarely, checking only occasionally would be much better, and checking on every iteration is obviously unnecessary overhead. I know it is very experimental and difficult to determine the right interval at which to check for cancellation. Adding a parameter that specifies the check cycle could be one solution (default value: 1).

I think this issue is important because loop_idx_n is now used by parallel algorithms such as find, is_heap, etc.
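The proposed check-cycle parameter could look roughly like the sketch below. The names loop_idx_n_cycled and check_cycle, and the minimal cancellation_token stand-in, are illustrative assumptions, not HPX's actual API; a check_cycle of 1 reproduces the current per-iteration behaviour.

```cpp
#include <atomic>
#include <cstddef>

// Minimal stand-in for hpx::parallel::util::cancellation_token:
// a shared atomic flag that any partition may set.
struct cancellation_token
{
    std::atomic<bool> cancelled_{false};
    bool was_cancelled() const
    {
        return cancelled_.load(std::memory_order_relaxed);
    }
    void cancel()
    {
        cancelled_.store(true, std::memory_order_relaxed);
    }
};

// Illustrative sketch: poll the token only once every check_cycle
// iterations instead of on every one. Returns how many iterations
// actually ran before stopping.
template <typename F>
std::size_t loop_idx_n_cycled(std::size_t base_idx, std::size_t count,
    cancellation_token& tok, std::size_t check_cycle, F&& f)
{
    std::size_t done = 0;
    while (done != count)
    {
        if (tok.was_cancelled())
            break;
        // run up to check_cycle iterations between polls
        std::size_t chunk =
            count - done < check_cycle ? count - done : check_cycle;
        for (std::size_t i = 0; i != chunk; ++i)
            f(base_idx + done + i);
        done += chunk;
    }
    return done;
}
```

Note that a partition may now overshoot the cancellation point by up to check_cycle - 1 iterations, which is exactly the trade-off being discussed.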

hkaiser commented May 30, 2017

@taeguk: I agree. Go for it! In the end this could be exposed through executor parameters (similar to chunk sizes, etc.)

hkaiser commented Jun 1, 2017

@taeguk, if we add this we might have to be especially careful with algorithms which roll back the work done before the cancellation_token was triggered (e.g. the `uninitialized_...` family of algorithms).

taeguk commented Jun 1, 2017

> algorithms which roll back the work done before the cancellation_token was triggered

@hkaiser I can't find such a case.
https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/util/loop.hpp#L401
In the code above, it seems the roll back is performed only by the thread that cancels the token. (https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/util/loop.hpp#L408-L411)

hkaiser commented Jun 1, 2017

@taeguk This happens for instance here: https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/algorithms/uninitialized_copy.hpp#L99-L132. The partition which is cancelled rolls back directly; all other partitions are rolled back by the third function passed to partitioner_with_cleanup.

However, even for those algorithms it might not be a problem if we don't check for cancellation after every iteration. This is just something we need to be careful about.
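To illustrate why a coarser check cycle can still be safe for the rollback case: as long as the partition tracks exactly how many elements it has constructed, cleanup destroys precisely that many, regardless of how often the token was polled. The sketch below is illustrative only (the names copy_partition and check_cycle, and the simplified token, are assumptions, not HPX's actual partitioner_with_cleanup machinery).

```cpp
#include <cstddef>
#include <new>

// Simplified stand-in for a cancellation token.
struct token
{
    bool cancelled = false;
};

// Illustrative uninitialized_copy-style partition: constructs elements
// in raw storage and, if a constructor throws, destroys exactly the
// elements constructed so far. Polling the token every check_cycle
// iterations only changes *where* we stop, not whether cleanup is exact,
// because `constructed` is incremented per element.
template <typename T>
std::size_t copy_partition(T const* src, std::size_t n, void* raw,
    token& tok, std::size_t check_cycle)
{
    T* dst = static_cast<T*>(raw);
    std::size_t constructed = 0;
    try
    {
        while (constructed != n)
        {
            if (tok.cancelled)    // coarser check: once per chunk
                break;
            std::size_t chunk =
                n - constructed < check_cycle ? n - constructed : check_cycle;
            for (std::size_t i = 0; i != chunk; ++i)
            {
                ::new (dst + constructed) T(src[constructed]);
                ++constructed;
            }
        }
    }
    catch (...)
    {
        // roll back: destroy everything this partition constructed
        for (std::size_t i = 0; i != constructed; ++i)
            dst[i].~T();
        throw;
    }
    // the count would let a partitioner_with_cleanup-style third function
    // roll back this partition's work from outside as well
    return constructed;
}
```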

@msimberg msimberg removed this from the 1.1.0 milestone Dec 5, 2017
