Create a new facility lcos::split_all #2239

Closed
3 tasks done
hkaiser opened this issue Jul 8, 2016 · 7 comments · Fixed by #2246
Comments

hkaiser commented Jul 8, 2016

Several people have been asking for a facility that converts a future holding a tuple of values into a tuple of futures, each holding one element of the original tuple:

tuple<future<T>...> split_all(future<tuple<T...>> &&);

Other possible names discussed were unfuse() or split_fused().
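
For illustration, a minimal sketch of the intended semantics written against std:: types only (split_all, split_all_impl and split_one are names invented for this sketch, not the final API; a real implementation would attach a continuation to the incoming future instead of the deferred get() used here):

#include <cstddef>
#include <future>
#include <tuple>
#include <utility>

// Produce one future that lazily extracts element I from the shared tuple.
template <std::size_t I, typename Tuple>
auto split_one(std::shared_future<Tuple> sf)
{
    return std::async(std::launch::deferred,
        [sf] { return std::get<I>(sf.get()); });
}

template <typename... Ts, std::size_t... Is>
std::tuple<std::future<Ts>...> split_all_impl(
    std::shared_future<std::tuple<Ts...>> sf, std::index_sequence<Is...>)
{
    return std::make_tuple(split_one<Is>(sf)...);
}

// tuple<future<T>...> split_all(future<tuple<T...>>&&), as proposed above.
template <typename... Ts>
std::tuple<std::future<Ts>...> split_all(std::future<std::tuple<Ts...>>&& f)
{
    return split_all_impl(f.share(), std::index_sequence_for<Ts...>{});
}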

Other things to add:

hkaiser commented Jul 8, 2016

@ltroska @biddisco is this what you were looking for?

hkaiser commented Jul 8, 2016

A related facility would be:

vector<future<T>> split_all(future<vector<T>> &&);

@hkaiser changed the title from "Create a fnew facility lcos::split_all" to "Create a new facility lcos::split_all" Jul 8, 2016
sithhell commented Jul 8, 2016

Is the use case more like this:

future<T> f = ...;
auto ff = f.then(
    [](...){
        future<U> f1 = ...;
        future<R> f2 = ...;
        return XXX(move(f1), move(f2)); 
});
ff = f.then(...);
dataflow(..., get<0>(ff));
dataflow(..., get<1>(ff));

?

sithhell commented Jul 8, 2016

In any case, the use case as sketched above should already be supported out of the box by the split_all functionality.

biddisco commented Jul 8, 2016

@hkaiser yes, this is what I was asking for. @rasolca has a use case where matrix sub-blocks are processed and, when a task completes, it returns a future that is consumed by two new tasks. One task needs the matrix data, the other only needs to be triggered when the operation completes. In the larger DAG, each task uses a future from the upper-left and one from the upper-right - taking the matrix from one, and only the synchronization from the other.

As this pattern repeats, one finds that every task needs a shared_future, since each future is always consumed by two other tasks. This messes the code up, because shared_future::get() returns a const ref, which makes the code more complicated than it needs to be.

The solution would be for each task to return a pair<matrix, void> and to turn that into future<matrix>, future<void> using the new split_all feature. The future<matrix> can then be passed to the left sub-task and the future<void> to the right, and each task can take such a left/right pair from its other neighbours, making the code simpler, more readable and generally better.

Allowing async functions to return more than one future is essentially what we want, but providing a split_future<> utility would work just as well.
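
For illustration, a small self-contained sketch of this pattern using std:: types only (Matrix, the task bodies and the split_future name are invented for the sketch, not code from this thread):

#include <future>
#include <utility>
#include <vector>

// Matrix is a placeholder made up for this sketch.
using Matrix = std::vector<double>;

// Pair flavour of the proposed facility, enough for this use case. std::pair
// cannot hold void, so a char token stands in for the pure "done" signal; the
// real facility could expose that side as future<void> instead.
template <typename T, typename U>
std::pair<std::future<T>, std::future<U>>
split_future(std::future<std::pair<T, U>>&& f)
{
    std::shared_future<std::pair<T, U>> sf = f.share();
    return { std::async(std::launch::deferred, [sf] { return sf.get().first; }),
             std::async(std::launch::deferred, [sf] { return sf.get().second; }) };
}

int main()
{
    // A task produces a matrix block together with a completion token ...
    std::future<std::pair<Matrix, char>> block =
        std::async([] { return std::make_pair(Matrix(16, 1.0), '\0'); });

    auto parts = split_future(std::move(block));

    // ... the "left" consumer takes the data, the "right" one only waits.
    auto left = std::async([d = std::move(parts.first)]() mutable {
        return d.get().size();
    });
    auto right = std::async([t = std::move(parts.second)]() mutable {
        t.get();   // synchronization only
    });

    left.get();
    right.get();
}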

biddisco commented Jul 8, 2016

PS. I like the name split_future<> as it does what it says on the tin.

hkaiser commented Jul 8, 2016

PS. I like the name split_future<> as it does what it says on the tin.

I like that!

A related facility would be:
vector<future<T>> split_all(future<vector<T>> &&);

Unfortunately, it isn't possible to implement this without calling get on the argument, since the number of futures to return depends on the runtime size of the vector. @sithhell however suggested adding this instead:

array<future<T>, N> split_future(future<array<T, N>> &&);
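
For illustration, a sketch of this array variant against std:: types (illustrative names, not the eventual HPX implementation):

#include <array>
#include <cstddef>
#include <future>
#include <utility>

// Lazily extract element I of the shared array into its own future.
template <std::size_t I, typename T, std::size_t N>
std::future<T> split_element(std::shared_future<std::array<T, N>> sf)
{
    return std::async(std::launch::deferred,
        [sf] { return sf.get()[I]; });
}

// array<future<T>, N> split_future(future<array<T, N>>&&): because N is a
// compile-time constant, the number of result futures is known up front, so
// no get() on the incoming future is needed to build the result.
template <typename T, std::size_t N, std::size_t... Is>
std::array<std::future<T>, N> split_future_impl(
    std::shared_future<std::array<T, N>> sf, std::index_sequence<Is...>)
{
    return { split_element<Is>(sf)... };
}

template <typename T, std::size_t N>
std::array<std::future<T>, N> split_future(std::future<std::array<T, N>>&& f)
{
    return split_future_impl(f.share(), std::make_index_sequence<N>{});
}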

@sithhell sithhell modified the milestones: 0.9.99, 1.0.0 Jul 15, 2016
@hkaiser hkaiser mentioned this issue Jul 22, 2016