Implement parallel::unique. #2867

Merged
merged 27 commits into STEllAR-GROUP:master on Sep 25, 2017

Conversation

2 participants
@taeguk
Member

taeguk commented Aug 28, 2017

This is related to #1141 and #2804.

Checklist

  • Implementation of parallel::unique using temporary storage. (eecac2c)
  • Implementation of parallel::unique using sequentially performed f3. (the latest commit)
  • Unit tests.
  • Benchmark code (with various data types).
  • Adaptation to the Ranges TS (container version).
  • Some modifications of parallel::util::scan_partitioner.

Issues

  1. Which of the two ways to implement parallel::unique discussed in #2804 should we select?
    -> I implemented both ways and benchmarked them. In conclusion, performing f3 sequentially is better, so I chose that approach.

Note for future

  1. The performance of parallel::unique is poor. It varies with the situation: it is sometimes only slightly faster than the sequential version, and sometimes slower than it. This algorithm needs to be improved if we can find a better way. (A minimal usage sketch of the new algorithm follows below.)
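
For context, here is a minimal usage sketch of the new algorithm. This is a sketch only, not code from the PR; the header name assumes the usual hpx/include/parallel_*.hpp convention used by the other algorithms.

```cpp
// Minimal usage sketch for the new parallel::unique (not taken from the PR).
// The header name is an assumption following the hpx/include/parallel_*.hpp convention.
#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_unique.hpp>

#include <vector>

int main()
{
    std::vector<int> v = {1, 1, 2, 2, 2, 3, 1, 1};

    // Removes consecutive duplicates in parallel and returns an iterator
    // past the new logical end, analogous to std::unique.
    auto new_end = hpx::parallel::unique(
        hpx::parallel::execution::par, v.begin(), v.end());

    v.erase(new_end, v.end());   // v is now {1, 2, 3, 1}
    return 0;
}
```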
@taeguk

Member

taeguk commented Aug 28, 2017

https://github.com/taeguk/hpx/blob/da7fd5d7dd38ca0074c579d248ef7ea916c200bf/hpx/parallel/util/scan_partitioner.hpp#L284-L289
@hkaiser I used explicit waiting, as shown above.
But you said "Prefer dataflow rather than explicit waiting".
https://github.com/taeguk/hpx/blob/da7fd5d7dd38ca0074c579d248ef7ea916c200bf/hpx/parallel/util/scan_partitioner.hpp#L304
Yes, I agree. But if we want to use dataflow instead of explicit waiting, we have to change the type of finalitems from std::vector<hpx::future<Result2>> to std::vector<hpx::shared_future<Result2>>, because each final item must be passed both to f3 of the next partition and to f4. The interface of f3 would also have to change. These changes would apply even when ScanPartTag is scan_partitioner_normal_tag, because it is arguably better for the same template to keep the same interface. So f3 in the existing code that uses scan_partitioner would need small modifications.

What do you think? Should we keep the current explicit waiting, or use dataflow and change the interface of f3 in scan_partitioner?
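
To make the trade-off concrete, here is a rough sketch of the two alternatives. The lambdas stand in for f3; this is not the actual scan_partitioner code, and the include is an assumption about the HPX headers of that time.

```cpp
// Sketch only: contrasting explicit waiting with a dataflow-based chain.
// finalitems is assumed to already hold at least one ready shared_future.
#include <hpx/include/lcos.hpp>

#include <vector>

void sketch(std::vector<hpx::shared_future<int>>& finalitems)
{
    // Explicit waiting (current approach): block until the previous final
    // item is ready, then run the stand-in for f3 synchronously.
    finalitems.back().wait();
    int prev = finalitems.back().get();
    finalitems.push_back(hpx::make_ready_future(prev + 1));

    // Dataflow alternative: the stand-in for f3 runs as a continuation of
    // the previous final item. Because finalitems holds shared_futures, the
    // same value can later be handed to f4 as well.
    finalitems.push_back(
        hpx::dataflow(
            [](hpx::shared_future<int> f) { return f.get() + 1; },
            finalitems.back())
            .share());
}
```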

@hkaiser

Member

hkaiser commented Sep 5, 2017

Please fix the conflicts, otherwise this can't go ahead.

taeguk added some commits Sep 6, 2017

Use static_scan_partitioner_helper instead of static_scan_partitioner_async for fixing a bug when parallel_task_policy is used.

The bug    : static_scan_partitioner doesn't work asynchronously when parallel_task_policy is used.
The reason : the call to `finalitems.back().wait();` when scan_partitioner_sequential_f3_tag is used as ScanPartTag.
How to fix : use static_scan_partitioner_helper, which returns R instead of hpx::future<R>.
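
A minimal illustration of the bug described above, using plain hpx::async rather than the partitioner itself. The functions are hypothetical and the includes are assumptions about the HPX headers of that time.

```cpp
// Sketch only: why a blocking wait() inside the chain-building function
// defeats parallel_task_policy, and why returning a plain R from a helper
// wrapped in hpx::async keeps the call asynchronous.
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>

hpx::future<int> blocking_version()
{
    hpx::future<int> f = hpx::async([] { return 42; });
    f.wait();                                  // blocks the calling thread
    return hpx::make_ready_future(f.get());    // asynchronous in name only
}

// Same idea as static_scan_partitioner_helper: the helper returns R directly,
// and the caller wraps the whole helper in hpx::async to obtain a future
// without blocking.
int helper()
{
    return 42;
}

hpx::future<int> async_version()
{
    return hpx::async(&helper);
}
```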
@taeguk

Member

taeguk commented Sep 10, 2017

@hkaiser This is ready to be reviewed and merged.

@hkaiser

Member

hkaiser commented Sep 22, 2017

@taeguk Please fix the conflicts for this to be merged. And sorry for the long delay ...

@taeguk

Member

taeguk commented Sep 24, 2017

@hkaiser I resolved the conflicts. And I did some things related to #2908, too.
This is ready to be merged :)

@hkaiser

Member

hkaiser commented Sep 24, 2017

@taeguk: thanks a lot for your continued work on this! Just a heads up - there are still problems reported by inspect.

@hkaiser

LGTM, thanks a lot!

@hkaiser hkaiser merged commit 0e82bf9 into STEllAR-GROUP:master Sep 25, 2017

2 checks passed

ci/circleci: Your tests passed on CircleCI!
continuous-integration/appveyor/pr: AppVeyor build succeeded

@hkaiser hkaiser added this to Merged to master in Standard Algorithms Oct 29, 2017
