
Add utility member functions for partitioned_vector #1913

Merged (6 commits) on Dec 15, 2015

Conversation

atrantan (Author) commented:

This pull request makes two small changes:

  • It gives public access to the type named data_type in hpx::server::partitioned_vector.
  • It adds a member action (get_copied_data) that allows a copy of the data owned by
    the server to be retrieved asynchronously.

{
    std::vector<hpx::id_type> ids;

    for(auto const part_data : partitions_)
Member:

This should probably be: for(auto const& part_data : partitions_)

atrantan (Author):

Done in commit 239ed21

Member:

Thanks. Just out of curiosity: why do you need this functionality?

atrantan (Author):

It gives my array_view everything it needs to return the data of a partition. Storing the ids is essentially free as well.

Member:

Hmmm, the idea was for the array_view to access the local data only. Why do you need to get a copy of the local data for this? Wouldn't it then no longer be a view?

Member:

Sorry, that was a misunderstanding: what I wanted to ask is why you need the get_copied_data() functionality above.

atrantan (Author):

This is for another feature extending the array_view concept: if the owner of an array_view needs data that is not on its locality, it makes a copy and returns it by value.

hkaiser (Member) commented on Dec 12, 2015:

> This is for another feature extending the array_view concept: if the owner of an array_view needs data that is not on its locality, it makes a copy and returns it by value.

Well, let's talk about this in more detail. I'm not convinced yet.

Regarding get_partitions_ids(): I don't think you need to add it, since all you need for local access is to create the proper iterators. For instance:

typedef typename hpx::partitioned_vector<int>::iterator iterator;
typedef hpx::traits::segmented_iterator_traits<iterator> traits;

hpx::id_type here = hpx::find_here();
hpx::partitioned_vector<int> v(...);

// extract iterators representing local data segments
auto segment_end = v.segment_end(here);
for (auto seg_it = v.segment_begin(here); seg_it != segment_end; ++seg_it)
{
    // extract iterators allowing to (natively) access local data
    auto local_it = traits::begin(seg_it);
    auto local_end = traits::end(seg_it);

    // [local_it, local_end) represents the local data, the iterators
    // are essentially equivalent to the iterators exposed by the 
    // underlying data
}

So, you can use the exposed iterators directly without even accessing the partition table inside the partitioned_vector.

hkaiser added a commit referencing this pull request on Dec 15, 2015: "Add utility member functions for partitioned_vector"

hkaiser merged commit f9c8d7b into STEllAR-GROUP:master on Dec 15, 2015.