Vec::truncate_par #490

Closed
njaard opened this issue Dec 11, 2017 · 2 comments · Fixed by #787
njaard commented Dec 11, 2017

I have some code that truncates a large Vec, and I found that dropping the truncated elements was very costly. It would be nice to have a truncate_par in Rayon.

I use the following unsafe code to drop the tail of the vec in parallel:

```rust
self.par_iter_mut()
    .skip(len)
    .for_each(|r| unsafe { std::ptr::drop_in_place(r) });
let n = std::cmp::min(self.len(), len);
unsafe { self.set_len(n) };
```

cuviper commented Dec 11, 2017

Note that skip is not as efficient as it could be (#352), and your example also has a problem with panic safety. If any of those drops panic, then the vector's length will not be adjusted, and the vector's own drop will try to re-drop some items!

You can see how rayon::vec::IntoIter deals with this in the rayon source.

Parallel truncate seems kind of niche to me -- we should think if there may be a more general approach. A parallel drain would let you write something like vec.par_drain(len..), but we need stable RangeArgument to allow different range types. (Or we could create our own equivalent to RangeArgument.)
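To illustrate the panic-safety point above, here is a minimal std-only sketch of the safer ordering that rayon::vec::IntoIter uses: shrink the length *before* running destructors, so a panicking drop leaks the remaining tail instead of letting the Vec's own drop re-drop it. (`truncate_in_place` is a hypothetical name, and the tail is dropped sequentially here for clarity; the slice could equally be processed by a parallel loop.)

```rust
fn truncate_in_place<T>(v: &mut Vec<T>, len: usize) {
    let old_len = v.len();
    if len >= old_len {
        return;
    }
    unsafe {
        // 1. Forget the tail first. If a destructor panics below, the
        //    already-shortened Vec no longer considers those elements
        //    initialized, so unwinding cannot double-drop them -- they
        //    leak instead, which is safe.
        v.set_len(len);
        // 2. Then run the tail's destructors in place.
        let tail = std::slice::from_raw_parts_mut(
            v.as_mut_ptr().add(len),
            old_len - len,
        );
        std::ptr::drop_in_place(tail);
    }
}
```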


njaard commented Dec 15, 2017

I agree that a parallel drain is a better and more general solution.

@cuviper cuviper linked a pull request Aug 16, 2020 that will close this issue
@bors bors bot closed this as completed in #787 Sep 15, 2020