
Apex integration #1726

Closed
wants to merge 81 commits

Conversation

@khuck (Contributor) commented Aug 19, 2015

Hello Hartmut, others -

I think I updated apex_integration with the latest changes in master (actually, you did that), and I included all the fixes from the previous merge request (which I can't find any more). At any rate, here are the changes again; I am sure we will have to go through the code review process a second time, since the other one never completed.

I fixed a few things in the code that went missing when you created the apex_integration branch initially. In particular, there is the "throttle" scheduler that Allan and Nick worked on to restrict concurrency (a rough sketch of the idea follows this comment). In fairness, I haven't fully tested that with this code base.

Thanks -
Kevin
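
For readers skimming this PR, here is a minimal, self-contained sketch of the throttling idea mentioned above, with entirely hypothetical names (throttled_pool, set_max_active); it is not the HPX scheduler API. The point is simply that workers consult a shared cap before picking up new work, so a policy engine such as APEX can restrict concurrency at runtime by lowering that cap.

    #include <atomic>
    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Hypothetical sketch only: workers pull tasks while the number of active
    // tasks stays below max_active_; lowering the cap throttles concurrency.
    class throttled_pool
    {
    public:
        explicit throttled_pool(std::size_t num_threads)
          : max_active_(num_threads), active_(0), done_(false)
        {
            for (std::size_t i = 0; i != num_threads; ++i)
                workers_.emplace_back([this] { run(); });
        }

        ~throttled_pool()
        {
            {
                std::lock_guard<std::mutex> l(mtx_);
                done_ = true;
            }
            cv_.notify_all();
            for (auto& t : workers_)
                t.join();
        }

        // The knob a policy engine (e.g. APEX) would turn at runtime.
        void set_max_active(std::size_t n)
        {
            max_active_.store(n);
            cv_.notify_all();
        }

        void post(std::function<void()> f)
        {
            {
                std::lock_guard<std::mutex> l(mtx_);
                queue_.push_back(std::move(f));
            }
            cv_.notify_one();
        }

    private:
        void run()
        {
            for (;;)
            {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> l(mtx_);
                    cv_.wait(l, [this] {
                        return done_ ||
                            (!queue_.empty() && active_ < max_active_.load());
                    });
                    if (done_)
                        return;
                    task = std::move(queue_.front());
                    queue_.pop_front();
                    ++active_;
                }
                task();
                {
                    std::lock_guard<std::mutex> l(mtx_);
                    --active_;
                }
                cv_.notify_one();
            }
        }

        std::vector<std::thread> workers_;
        std::deque<std::function<void()>> queue_;
        std::mutex mtx_;
        std::condition_variable cv_;
        std::atomic<std::size_t> max_active_;
        std::size_t active_;
        bool done_;
    };

The real throttled scheduler in this branch works against HPX's thread queues rather than a std::thread pool, so treat the above purely as an illustration of the policy hook.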

Bcorde5 and others added 22 commits August 10, 2015 09:06
Created to default to a hard limit of 90 characters; excludes documented files, CMakeLists.txt, and #error.
…ine_limit

Conflicts:
	examples/1d_hydro/1d_hydro_upwind.cpp
	examples/transpose/transpose_block.cpp
	hpx/lcos/detail/full_empty_entry.hpp
	hpx/parallel/executors/service_executors.hpp
	hpx/runtime/threads/policies/static_priority_queue_scheduler.hpp
	hpx/runtime/threads/threadmanager.hpp
	src/hpx_init.cpp
	src/runtime/threads/executors/thread_pool_executors.cpp
	src/runtime_impl.cpp
	tests/performance/local/vector_foreach.cpp
…ine_limit

Conflicts:
	hpx/runtime/serialization/array.hpp
When put_parcel(s) is called, a special serialization pass is triggered to
extract any non-ready futures from the parcel (see the sketch after this commit list).
…ine_limit

Conflicts:
	hpx/plugins/parcel/coalescing_message_handler.hpp
	hpx/plugins/parcel/message_buffer.hpp
	hpx/runtime/parcelset/decode_parcels.hpp
	hpx/runtime/parcelset/parcel.hpp
	hpx/runtime/parcelset/parcelhandler.hpp
	hpx/runtime/parcelset/parcelport_impl.hpp
	plugins/parcelport/mpi/parcelport_mpi.cpp
	src/runtime_impl.cpp
Adds a tool for inspect that checks for character limits
- Rename register_id_with_basename --> register_with_basename
- Rename find_id[s]_from_basename --> find_from_basename
- Rename find_all_ids_from_basename --> find_all_from_basename
- Rename unregister_id_with_basename --> unregister_with_basename
- Added equivalent functions taking/returning client objects instead of ids/futures to ids
fixes from a couple of months ago. Trying this merge again...
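
This is the sketch referred to in the put_parcel commit above: the idea is to run an extra pass over a parcel's contents before the real serialization, record every future that is not yet ready, and hand the parcel to the transport only once all of them are. The types and helpers below (parcel, visit_futures, collect_non_ready, and this put_parcel signature) are made up for illustration; HPX's actual serialization archives work differently.

    #include <chrono>
    #include <functional>
    #include <future>
    #include <utility>
    #include <vector>

    // Illustrative stand-in for a parcel with future-typed members. The real
    // code discovers contained futures generically during a serialization pass.
    struct parcel
    {
        std::shared_future<int> arg1;
        std::shared_future<int> arg2;

        template <typename F>
        void visit_futures(F&& f)
        {
            f(arg1);
            f(arg2);
        }
    };

    // The "special pass": record every future that is not ready yet.
    inline std::vector<std::shared_future<int>> collect_non_ready(parcel& p)
    {
        std::vector<std::shared_future<int>> pending;
        p.visit_futures([&](std::shared_future<int> const& f) {
            if (f.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
                pending.push_back(f);
        });
        return pending;
    }

    // put_parcel defers the actual send until every contained future is ready.
    inline void put_parcel(parcel p, std::function<void(parcel const&)> send)
    {
        auto pending = collect_non_ready(p);
        if (pending.empty())
        {
            send(p);    // fast path: everything is ready, send immediately
            return;
        }
        // This sketch simply waits; a later commit below describes attaching
        // continuations with a counter instead of blocking.
        for (auto const& f : pending)
            f.wait();
        send(p);
    }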
@hkaiser (Member) commented Aug 19, 2015

Basic tests now fail: https://circleci.com/gh/STEllAR-GROUP/hpx/1388

    - call_for_each now takes the vector of parcels and calls each handler
      with its associated parcel
    - The MPI parcelport has been switched back to run with connection caches
Instead of capturing every future and storing it in a vector, we just
attach a continuation to each future and maintain a count to see when
all passed futures are ready
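
A rough illustration of the counting approach described in that commit message, using a made-up continuation-capable future type (callback_future) and helper (when_all_counted) rather than HPX's future internals:

    #include <atomic>
    #include <cstddef>
    #include <functional>
    #include <memory>
    #include <mutex>
    #include <utility>
    #include <vector>

    // Minimal future-like object that can run one continuation once it is set.
    struct callback_future
    {
        void then(std::function<void()> cont)
        {
            bool run_now = false;
            {
                std::lock_guard<std::mutex> l(mtx_);
                if (ready_)
                    run_now = true;
                else
                    cont_ = std::move(cont);
            }
            if (run_now)
                cont();
        }

        void set_ready()
        {
            std::function<void()> cont;
            {
                std::lock_guard<std::mutex> l(mtx_);
                ready_ = true;
                cont = std::move(cont_);
            }
            if (cont)
                cont();
        }

        std::mutex mtx_;
        bool ready_ = false;
        std::function<void()> cont_;
    };

    // Run on_all_ready once every future has become ready: each continuation
    // decrements a shared counter, and whichever one brings it to zero fires
    // the final action. No vector of futures is kept around for waiting.
    inline void when_all_counted(
        std::vector<std::shared_ptr<callback_future>> const& futures,
        std::function<void()> on_all_ready)
    {
        if (futures.empty())
        {
            on_all_ready();
            return;
        }
        auto count =
            std::make_shared<std::atomic<std::size_t>>(futures.size());
        for (auto const& f : futures)
        {
            f->then([count, on_all_ready] {
                if (count->fetch_sub(1) == 1)   // last one to finish
                    on_all_ready();
            });
        }
    }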
hkaiser and others added 29 commits August 23, 2015 18:58
Fixed STEllAR-GROUP#1688: Add timer counters for tfunc_total and exec_total
    - Resorting to shared_ptr for pending parcels for gcc < 4.9
    - Explicitly defining move ctor and assignment for unwrapped_impl to help intel13
…to APEX standalone build to find some of the problems encountered in this integration.
…sion.hpp out of config.hpp and include it explicitly in the few places it is needed
added convenience c-tor and begin()/end() to serialize_buffer
…ig-includes"

This reverts commit 2d5c62b, reversing
changes made to 01a8ff0.
…ed the second static scheduler constructor, and added a second one for the throttled
scheduler.
@khuck closed this Sep 11, 2015