Support forward and bidirectional iterators in parallel::merge. #2826
Comments
And where can I see the requirements on iterator categories in the standard?
@taeguk: a couple of thoughts:
If not otherwise stated, all algorithms taking an execution policy should work for at least forward iterators. The committee explicitly ditched support for input iterators for those overloads. In the end, cppreference is a perfect source of information for this.
Let me rephrase this: all algorithms taking an execution policy should work for at least forward iterators, except if there are stronger iterator requirements defined for the corresponding sequential algorithm (the one not taking an execution policy). In this case the stronger requirements apply.
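For illustration, here is a minimal sketch (not HPX's actual code; the names are hypothetical) of how an implementation could enforce that "at least forward iterator" requirement at compile time:

```cpp
#include <iterator>
#include <type_traits>

// True when Iter's category is forward or stronger.
template <typename Iter>
constexpr bool is_at_least_forward =
    std::is_base_of<std::forward_iterator_tag,
        typename std::iterator_traits<Iter>::iterator_category>::value;

template <typename FwdIter>
void some_parallel_overload(FwdIter first, FwdIter last)
{
    static_assert(is_at_least_forward<FwdIter>,
        "parallel overloads require at least forward iterators");
    // ... actual algorithm ...
}
```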
@hkaiser I already implemented one for non-random-access iterators based on the double-dereferencing technique. But the performance is terrible: the preparation required for the double-dereferencing approach takes a very long time. (https://github.com/taeguk/hpx/blob/tg_merge_all_iter_support_ver2/hpx/parallel/algorithms/merge.hpp#L347-L358)
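For readers unfamiliar with the technique, here is a minimal sketch of the double-dereferencing idea (illustrative only, not the code in the branch linked above): materialize the forward range into a vector of iterators so the random-access parallel machinery can be reused on top of that vector.

```cpp
#include <vector>

template <typename FwdIter>
std::vector<FwdIter> collect_iterators(FwdIter first, FwdIter last)
{
    std::vector<FwdIter> iters;
    // This O(n) sequential walk is the expensive "preparation" step:
    // every element must be visited before any parallel work starts.
    for (; first != last; ++first)
        iters.push_back(first);
    return iters;
}

// The parallel merge then runs on iters.begin()..iters.end(), which
// are random access, and reads each element as **it (hence "double
// dereferencing").
```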
@hkaiser Anyway, I think it is hard to make a parallel merge for forward and bidirectional iterators that is faster than the sequential one. So, for now, I want to submit a PR including what I have done so far.
We therefore need to decide on a policy for forward and bidirectional iterators.
For now, I have restricted the requirements of parallel::merge to random access iterators only. (In other words, I selected the first policy described above.)
Ok. Please create a ticket reminding us that we need to get back to this.
@hkaiser Should I create a new issue? I think this issue is the ticket itself.
I'm implementing parallel::merge. The version for random access iterators is already implemented.
But the problem is supporting forward and bidirectional iterators, which is very hard.
I tried to support them and implemented two approaches.
The first one reuses the random-access implementation through generic iterator functions like std::next and std::distance. (https://github.com/taeguk/hpx/blob/tg_merge_all_iter_support/hpx/parallel/algorithms/merge.hpp#L229-L231)
The second one creates a vector of iterators and then reuses the random-access implementation with double dereferencing. (https://github.com/taeguk/hpx/blob/tg_merge_all_iter_support_ver2/hpx/parallel/algorithms/merge.hpp#L347-L372)
As a result, both approaches are very slow, and the sequential implementation is much faster. A sketch of the dominant cost in the first approach is shown below.
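This hedged sketch (the helper name is hypothetical, not from the linked branch) shows why the first approach is slow: for forward iterators, std::next must walk the range element by element, so computing each chunk boundary costs time linear in its offset.

```cpp
#include <cstddef>
#include <iterator>

template <typename FwdIter>
FwdIter nth_chunk_begin(FwdIter first, std::size_t chunk,
    std::size_t chunk_size)
{
    // O(chunk * chunk_size) for forward/bidirectional iterators;
    // summed over all chunks this adds O(n^2 / chunk_size) traversal
    // cost, which easily swamps the merge work itself.
    return std::next(first, chunk * chunk_size);
}
```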
I also couldn't find any other implementations of parallel merge for forward and bidirectional iterators.
So I would like to hear your ideas for implementing forward and bidirectional iterator versions of parallel::merge, if you have any.
But I think it is too hard, so I want to support only random access iterators for now.
If we decide to leave this issue for now, we have to choose a policy for handling forward and bidirectional iterators. There are two candidates.
The first policy is to restrict the iterator category to random access; the fallback to sequential execution is performed only when HPX_WITH_ALGORITHM_INPUT_ITERATOR_SUPPORT is ON.
The second policy is to fall back to sequential execution whenever a forward or bidirectional iterator is given, even if HPX_WITH_ALGORITHM_INPUT_ITERATOR_SUPPORT is OFF.
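A minimal sketch of what the second policy could look like (hypothetical names, simplified to a single iterator type, not HPX's actual internals): tag-dispatch to a sequential fallback when the iterators are weaker than random access.

```cpp
#include <algorithm>
#include <iterator>

// Random-access case: run the parallel implementation.
template <typename Iter, typename OutIter>
OutIter merge_impl(Iter f1, Iter l1, Iter f2, Iter l2, OutIter dest,
    std::random_access_iterator_tag)
{
    // ... parallel merge here; std::merge stands in as a placeholder.
    return std::merge(f1, l1, f2, l2, dest);
}

// Forward/bidirectional case: fall back to the sequential merge.
template <typename Iter, typename OutIter>
OutIter merge_impl(Iter f1, Iter l1, Iter f2, Iter l2, OutIter dest,
    std::forward_iterator_tag)
{
    return std::merge(f1, l1, f2, l2, dest);
}

template <typename Iter, typename OutIter>
OutIter merge(Iter f1, Iter l1, Iter f2, Iter l2, OutIter dest)
{
    // Bidirectional tags convert to forward_iterator_tag (their base),
    // so they select the sequential fallback overload.
    typename std::iterator_traits<Iter>::iterator_category tag;
    return merge_impl(f1, l1, f2, l2, dest, tag);
}
```

The first policy would instead place a static_assert on the iterator category in the public overload, compiling the fallback path only when HPX_WITH_ALGORITHM_INPUT_ITERATOR_SUPPORT is ON.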
I want to hear your thoughts.
(Added later:)
For now, we selected the first policy.
We should rethink and resolve this issue in the future.