Tracking issue for shrink_to feature #56431

Closed · 2 of 3 tasks
ordovicia opened this issue Dec 2, 2018 · 9 comments

Comments

@ordovicia (Contributor) commented Dec 2, 2018

Steps

Questions

  • Does this need an RFC?

cc @Diggsey
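
For context, the API tracked here is `shrink_to(&mut self, min_capacity: usize)` on `Vec`, `String`, and the hash collections. A minimal sketch of the intended use, written against the form that eventually stabilized (Rust 1.56); while this issue was open it still required `#![feature(shrink_to)]`:

```rust
fn main() {
    // Over-allocate, then give most of the spare capacity back.
    let mut buf: Vec<u8> = Vec::with_capacity(4096);
    buf.extend_from_slice(b"hello");

    // Shrink the capacity as much as the allocator allows while
    // keeping room for at least 16 elements (and for the length).
    buf.shrink_to(16);
    assert!(buf.capacity() >= 16);
    assert_eq!(buf.len(), 5);
}
```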

@kennytm (Member) commented Dec 2, 2018

Someone needs to update BinaryHeap::shrink_to to point to this issue.

@ordovicia (Contributor, Author) commented Dec 2, 2018

I was just doing it 😄

Centril added a commit to Centril/rust that referenced this issue Dec 2, 2018
Update issue number of `shrink_to` methods to point the tracking issue

Tracking issue: rust-lang#56431
kennytm added a commit to kennytm/rust that referenced this issue Dec 3, 2018
Update issue number of `shrink_to` methods to point the tracking issue

Tracking issue: rust-lang#56431
@Amanieu (Contributor) commented Dec 5, 2018

Currently the shrink_to method will panic if the given capacity is greater than the current one. I think this should be changed to silently ignore such cases instead.

I can see shrink_to being particularly useful when you want to reuse an existing allocation for something different (for example another pass of a multi-pass algorithm). The container will usually be empty at this point and you just want to ensure that a large allocation used for one pass doesn't clog up memory for the rest of the program's lifetime.
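
A rough sketch of the reuse pattern described above, assuming the no-op semantics; the two passes and the 1024-element cap are made up for illustration:

```rust
// Sketch of reusing one scratch buffer across passes. `shrink_to`
// lets pass 2 inherit the allocation from pass 1 without letting an
// unusually large pass-1 allocation live for the whole program.
fn two_pass(input: &[u32]) -> usize {
    let mut scratch: Vec<u32> = Vec::new();

    // Pass 1: may grow the buffer arbitrarily large.
    scratch.extend(input.iter().map(|x| x.wrapping_mul(2)));
    let pass1 = scratch.len();

    // Between passes: the buffer is logically empty, so cap the
    // retained capacity. With the no-op behavior this is harmless
    // even when the capacity is already below the cap.
    scratch.clear();
    scratch.shrink_to(1024);

    // Pass 2: reuses whatever capacity was kept.
    scratch.extend(input.iter().copied().filter(|x| x % 2 == 0));
    pass1 + scratch.len()
}
```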

@ErichDonGubler (Contributor) commented Apr 3, 2019

I agree with @Amanieu here; I've just recently hit a case where shrink_to would be handy, particularly if it silently no-op'd when the current capacity is already smaller than the requested minimum.

bors bot added a commit to rust-lang/hashbrown that referenced this issue Apr 22, 2019
67: Change shrink_to to not panic if min_capacity < capacity r=Amanieu a=Amanieu

cc rust-lang/rust#56431

Co-authored-by: Amanieu d'Antras <amanieu@gmail.com>
@Amanieu (Contributor) commented Apr 22, 2019

I have changed the behavior in hashbrown to no-op if the current capacity is smaller than the minimum (https://github.com/Amanieu/hashbrown/pull/67). However, there is still an assert in the libstd wrapper around hashbrown that panics.
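
A rough illustration of the mismatch described here, with made-up names: the inner container tolerates a too-large `min_capacity`, but a defensive assert at the wrapper level still turns the call into a panic:

```rust
// Hypothetical wrapper, purely to illustrate the comment above: the
// inner container's shrink_to no-ops, but the outer assert fires
// first whenever min_capacity exceeds the current capacity.
struct Wrapper<T> {
    inner: Vec<T>, // stand-in for the hashbrown-backed table
}

impl<T> Wrapper<T> {
    fn shrink_to(&mut self, min_capacity: usize) {
        // The kind of assert the comment refers to.
        assert!(
            self.inner.capacity() >= min_capacity,
            "tried to shrink to a larger capacity"
        );
        self.inner.shrink_to(min_capacity);
    }
}
```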

@In-line (Contributor) commented May 7, 2019

IMHO, I would prefer that shrink_to not panic. I think it would be better to return Result<usize, TooSmallError>, or maybe Option<usize>.

Ok(..) would contain the previous capacity.
Err(...) would be returned on failure, containing the required minimum.

With this change, if I need the code to panic I can just unwrap(); the current behavior seems error-prone.
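
A sketch of what that proposal might look like; neither `try_shrink_to` nor `TooSmallError` exists in std, they are just names for the idea above:

```rust
// Illustration only: spells out the fallible API proposed above.
#[derive(Debug)]
struct TooSmallError {
    /// The minimum capacity the caller asked to keep.
    required: usize,
}

trait TryShrinkTo {
    /// On success, returns the capacity held before shrinking.
    /// Fails (instead of panicking) if the current capacity is
    /// already below `min_capacity`.
    fn try_shrink_to(&mut self, min_capacity: usize) -> Result<usize, TooSmallError>;
}

impl<T> TryShrinkTo for Vec<T> {
    fn try_shrink_to(&mut self, min_capacity: usize) -> Result<usize, TooSmallError> {
        let previous = self.capacity();
        if previous < min_capacity {
            return Err(TooSmallError { required: min_capacity });
        }
        self.shrink_to(min_capacity);
        Ok(previous)
    }
}
```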

bors added a commit to rust-lang-ci/rust that referenced this issue Jan 27, 2021
…k-Simulacrum

Trying to shrink_to greater than capacity should be no-op

Per the discussion in rust-lang#56431, `shrink_to` shouldn't panic if you try to make a vector shrink to a capacity greater than its current capacity.
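
With the change referenced above, the semantics become: requesting a minimum above the current capacity is simply ignored. A small sketch of the resulting behavior:

```rust
fn main() {
    let mut v = Vec::with_capacity(16);
    v.push(1u8);

    // Asking to keep more capacity than currently exists is a
    // no-op rather than a panic: the vector is left untouched.
    let before = v.capacity();
    v.shrink_to(1_000);
    assert_eq!(v.capacity(), before);

    // Asking for less may actually shrink, but the capacity never
    // drops below the requested minimum or the current length.
    v.shrink_to(4);
    assert!(v.capacity() >= 4);
}
```
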
@marmeladema (Contributor) commented Feb 18, 2021

I think #81335 fixed the last remaining concern.
Could it be stabilized, at least for Vec?
I am ready to do the PR if needed 👍

@kwf commented May 11, 2021

@marmeladema I'd love for this API to be stabilized. Is there anything I can do to help out?

@Folyd (Contributor) commented May 17, 2021

11 participants