
Parallel heap algorithm implementations WIP #1914

Closed · wants to merge 24 commits

Conversation

@Syntaf (Member) commented Dec 12, 2015

Implementation moving forward from #1888. This is still WIP, so this PR is for code review and inspection.

To-Do:

  • Write is_heap, is_heap_until
  • Benchmark algorithms to test performance
  • Determine how the chunk size should interact with these algorithms, as the chunk size is not guaranteed in level-based parallelism
  • Decide whether to create partitioners for these algorithms, or leave the implementation within the algorithm itself.

Currently:

  • make_heap algorithm is implemented and all tests pass

Wrote additional tests for a custom predicate in make_heap.
Partial implementation of exception throwing; does not pass tests.
Added an additional overload for parallel_task_execution_policy which wraps exceptions within the future (a generic sketch of this pattern follows).
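To illustrate the exception-wrapping idea in that last overload, here is a minimal, generic sketch using only the standard library; the helper name `run_wrapping_exceptions` is hypothetical and stands in for HPX's actual parallel_task_execution_policy machinery:

```cpp
#include <exception>
#include <future>
#include <stdexcept>

// Hypothetical helper illustrating the pattern: instead of letting the work
// throw into the caller, capture the exception and deliver it through the
// returned future, as a task execution policy requires.
template <typename F>
std::future<void> run_wrapping_exceptions(F&& f)
{
    std::promise<void> p;
    std::future<void> result = p.get_future();
    try {
        f();                                        // possibly-throwing work
        p.set_value();                              // success: future is ready
    }
    catch (...) {
        p.set_exception(std::current_exception());  // error travels in the future
    }
    return result;
}

int main()
{
    auto fut = run_wrapping_exceptions([] { throw std::runtime_error("bad heap"); });
    try {
        fut.get();                                  // exception resurfaces here
    }
    catch (std::exception const&) {
        // the caller handles the error where the future is consumed
    }
}
```

The point of the pattern is that with a task policy the exception only resurfaces when the future is consumed, rather than escaping from the algorithm invocation itself.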
@Syntaf Syntaf mentioned this pull request Dec 12, 2015
@hkaiser hkaiser added this to the 0.9.12 milestone Dec 12, 2015
@hkaiser (Member) commented Jan 8, 2016

@Syntaf: what's the status of this work?

@Syntaf (Member, Author) commented Jan 8, 2016

@hkaiser All of the tests pass and the algorithms work, but I ran out of time to benchmark them and refine their implementation. A couple of things to note:

  • I don't believe setting the chunk size via the executor works; the implementation currently splits the elements evenly among partitions and additionally divides by 2. I chose to divide by some constant because a set partitioned evenly across the number of cores would be run almost entirely sequentially in the heap algorithms.
  • If the number of items in a level is less than the chunk size, that level is run sequentially.
  • I had to overload the parallel function in both is_heap and make_heap to account for throwing exceptions within futures; other than that, their implementations are identical.

I created two functions inside parallel/util/detail/chunk_size.hpp: get_bottomup_heap_bulk_iteration_shape and get_topdown_heap_bulk_iteration_shape. Both return a shape vector as the other chunk_size functions do, and run any sequential work (when a level is smaller than the chunk size). The difference is that they do not partition the level; they only return the levels that can be parallelized. Inside the parallel overload I then loop through the shape and chunk/execute the items, because each element of the shape must synchronize before moving on to the next element. A simplified sketch of this level-by-level scheme follows.
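The sketch below is illustrative only (not the HPX implementation): it walks the heap levels bottom-up, sifts the nodes of one level down concurrently because their subtrees are disjoint, falls back to sequential execution when a level is smaller than the chunk size, and synchronizes at each level before moving up. The function names and the use of std::async are stand-ins for the shape/executor machinery described above.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <future>
#include <vector>

// Standard max-heap sift-down of one node within an array heap of `size` elements.
template <typename T, typename Comp>
void sift_down(std::vector<T>& v, std::size_t node, std::size_t size, Comp comp)
{
    while (true) {
        std::size_t const left = 2 * node + 1;
        std::size_t largest = node;
        if (left < size && comp(v[largest], v[left]))
            largest = left;
        if (left + 1 < size && comp(v[largest], v[left + 1]))
            largest = left + 1;
        if (largest == node)
            return;
        std::swap(v[node], v[largest]);
        node = largest;
    }
}

template <typename T, typename Comp = std::less<T>>
void level_parallel_make_heap(std::vector<T>& v, std::size_t chunk_size, Comp comp = Comp{})
{
    std::size_t const size = v.size();
    if (size < 2)
        return;

    // Locate the deepest level that still contains a parent node; in an array
    // heap the levels are [0,1), [1,3), [3,7), ...
    std::size_t const last_parent = size / 2 - 1;
    std::size_t level_first = 0;
    std::size_t level_size = 1;
    while (level_first + level_size <= last_parent) {
        level_first += level_size;
        level_size *= 2;
    }

    // Walk the levels bottom-up; sift-downs within one level touch disjoint
    // subtrees, so they can run concurrently, but every level must finish
    // before the level above starts.
    while (true) {
        std::size_t const count = std::min(level_size, last_parent - level_first + 1);

        if (count < chunk_size) {
            // small level: not worth spawning tasks, handle it sequentially
            for (std::size_t i = 0; i != count; ++i)
                sift_down(v, level_first + i, size, comp);
        }
        else {
            // large level: chunk the nodes and sift them down concurrently
            std::vector<std::future<void>> tasks;
            for (std::size_t i = 0; i < count; i += chunk_size) {
                std::size_t const end = std::min(i + chunk_size, count);
                tasks.push_back(std::async(std::launch::async, [&, i, end] {
                    for (std::size_t j = i; j != end; ++j)
                        sift_down(v, level_first + j, size, comp);
                }));
            }
            for (auto& t : tasks)
                t.get();    // level barrier: synchronize before moving up
        }

        if (level_first == 0)
            break;
        level_size /= 2;
        level_first = (level_first - 1) / 2;
    }
}

int main()
{
    std::vector<int> data(1000);
    for (std::size_t i = 0; i != data.size(); ++i)
        data[i] = static_cast<int>((i * 2654435761u) % 1000);   // pseudo-random values

    level_parallel_make_heap(data, 64);
    assert(std::is_heap(data.begin(), data.end()));             // same post-condition as std::make_heap
}
```

The per-level barrier is also why the chunk size is only loosely controllable here: the levels near the root never reach the threshold and therefore always take the sequential path, which matches the fallback noted above.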

@sithhell (Member) commented May 2, 2016

What's the status here? Can we close the PR and have you reopen it once everything is done?

@hkaiser (Member) commented May 3, 2016

@sithhell I'm working on this ...

@hkaiser hkaiser modified the milestones: 1.0.0, 0.9.99 Jun 24, 2016
@hkaiser hkaiser modified the milestones: 1.1.0, 1.0.0 Apr 18, 2017
@hkaiser (Member) commented Jun 2, 2017

@taeguk Before I close this PR without merging, does this contain anything that might be of value for your work?

@taeguk (Member) commented Jun 2, 2017

@hkaiser I don't refer to the implementation in this PR.

@hkaiser (Member) commented Jun 2, 2017

Closing this unmerged as similar work is being done independently.

@hkaiser hkaiser closed this Jun 2, 2017
@hkaiser hkaiser mentioned this pull request Feb 18, 2018
hkaiser added a commit to hkaiser/hpx that referenced this pull request Jul 31, 2020