Unrolled build for rust-lang#120116
Rollup merge of rust-lang#120116 - the8472:only-same-alignments, r=cuviper

Remove alignment-changing in-place collect

This removes the alignment-changing in-place collect optimization introduced in rust-lang#110353.
Currently, stable users can't benefit from that optimization because `GlobalAlloc` doesn't support alignment-changing realloc, and neither do most POSIX allocators. So in practice it has a negative impact on performance.
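A rough illustration (not part of the commit; buffer reuse is an internal optimization, so the pointer comparison below is indicative rather than guaranteed): with this change the in-place path only applies when the source and destination element types have the same alignment.

```rust
fn main() {
    // u32 -> f32: same size and alignment, so collect may reuse the source buffer.
    let v: Vec<u32> = (0..1024).collect();
    let before = v.as_ptr() as usize;
    let w: Vec<f32> = v.into_iter().map(|x| x as f32).collect();
    println!("same-alignment reuse: {}", before == w.as_ptr() as usize);

    // u64 -> u32: alignment differs on typical 64-bit targets (8 vs 4), so the
    // in-place path is not taken and a separate allocation is used.
    let big: Vec<u64> = (0..1024).collect();
    let before = big.as_ptr() as usize;
    let small: Vec<u32> = big.into_iter().map(|x| x as u32).collect();
    println!("different-alignment reuse: {}", before == small.as_ptr() as usize);
}
```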

Explanation from rust-lang#120091 (comment):

> > You mention that in case of alignment mismatch -- when the new alignment is less than the old -- the implementation calls `mremap`.
>
> I was trying to note that this isn't really the case in practice, due to the semantics of Rust's allocator APIs. The only use of the allocator within the `in_place_collect` implementation itself is [a call to `Allocator::shrink()`](https://github.com/rust-lang/rust/blob/db7125f008cfd72e8951c9a863178956e2cbacc3/library/alloc/src/vec/in_place_collect.rs#L299-L303), which per its documentation [allows decreasing the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.Allocator.html). However, in stable Rust, the only available `Allocator` is [`Global`](https://doc.rust-lang.org/1.75.0/alloc/alloc/struct.Global.html), which delegates to the registered `GlobalAlloc`. Since `GlobalAlloc::realloc()` [cannot change the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.GlobalAlloc.html#method.realloc), the implementation of [`<Global as Allocator>::shrink()`](https://github.com/rust-lang/rust/blob/db7125f008cfd72e8951c9a863178956e2cbacc3/library/alloc/src/alloc.rs#L280-L321) must fall back to creating a brand-new allocation, `memcpy`ing the data into it, and freeing the old allocation, whenever the alignment doesn't remain exactly the same.
>
> Therefore, the underlying allocator, provided by libc or some other source, has no opportunity to internally `mremap()` the data when the alignment is changed, since it has no way of knowing that the allocation is the same.
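
As a minimal sketch of the fallback described above (a hypothetical stand-in for `<Global as Allocator>::shrink()`, not the real implementation), the alignment check decides whether the global allocator gets a chance to resize at all:

```rust
use std::alloc::{alloc, dealloc, realloc, Layout};
use std::ptr;

// Hypothetical sketch: `GlobalAlloc::realloc` can only change the size, never
// the alignment, so shrinking to a layout with a different alignment has to
// fall back to allocate + copy + free.
unsafe fn shrink_sketch(old_ptr: *mut u8, old: Layout, new: Layout) -> *mut u8 {
    debug_assert!(new.size() <= old.size());
    if old.align() == new.align() {
        // Same alignment: the registered global allocator handles the resize
        // and may shrink in place (or use mremap) internally.
        unsafe { realloc(old_ptr, old, new.size()) }
    } else {
        // Different alignment: the allocator never sees this as a resize, so
        // there is no opportunity for an in-place or mremap-based shrink.
        let new_ptr = unsafe { alloc(new) };
        if !new_ptr.is_null() {
            unsafe {
                ptr::copy_nonoverlapping(old_ptr, new_ptr, new.size());
                dealloc(old_ptr, old);
            }
        }
        new_ptr
    }
}
```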
rust-timer committed Jan 20, 2024
2 parents 1828461 + 85d1787 commit fe5341d
Showing 2 changed files with 15 additions and 12 deletions.
13 changes: 8 additions & 5 deletions library/alloc/src/vec/in_place_collect.rs
@@ -168,7 +168,9 @@ const fn in_place_collectible<DEST, SRC>(
     step_merge: Option<NonZeroUsize>,
     step_expand: Option<NonZeroUsize>,
 ) -> bool {
-    if const { SRC::IS_ZST || DEST::IS_ZST || mem::align_of::<SRC>() < mem::align_of::<DEST>() } {
+    // Require matching alignments because an alignment-changing realloc is inefficient on many
+    // system allocators and better implementations would require the unstable Allocator trait.
+    if const { SRC::IS_ZST || DEST::IS_ZST || mem::align_of::<SRC>() != mem::align_of::<DEST>() } {
         return false;
     }
 
@@ -188,7 +190,8 @@ const fn in_place_collectible<DEST, SRC>(
 
 const fn needs_realloc<SRC, DEST>(src_cap: usize, dst_cap: usize) -> bool {
     if const { mem::align_of::<SRC>() != mem::align_of::<DEST>() } {
-        return src_cap > 0;
+        // FIXME: use unreachable! once that works in const
+        panic!("in_place_collectible() prevents this");
     }
 
     // If src type size is an integer multiple of the destination type size then
@@ -276,8 +279,8 @@ where
         let dst_guard = InPlaceDstBufDrop { ptr: dst_buf, len, cap: dst_cap };
         src.forget_allocation_drop_remaining();
 
-        // Adjust the allocation if the alignment didn't match or the source had a capacity in bytes
-        // that wasn't a multiple of the destination type size.
+        // Adjust the allocation if the source had a capacity in bytes that wasn't a multiple
+        // of the destination type size.
         // Since the discrepancy should generally be small this should only result in some
         // bookkeeping updates and no memmove.
         if needs_realloc::<I::Src, T>(src_cap, dst_cap) {
@@ -290,7 +293,7 @@ where
            let src_size = mem::size_of::<I::Src>().unchecked_mul(src_cap);
            let old_layout = Layout::from_size_align_unchecked(src_size, src_align);
 
-            // The must be equal or smaller for in-place iteration to be possible
+            // The allocation must be equal or smaller for in-place iteration to be possible
            // therefore the new layout must be ≤ the old one and therefore valid.
            let dst_align = mem::align_of::<T>();
            let dst_size = mem::size_of::<T>().unchecked_mul(dst_cap);
14 changes: 7 additions & 7 deletions library/alloc/src/vec/spec_from_iter.rs
@@ -13,13 +13,13 @@ use super::{IntoIter, SpecExtend, SpecFromIterNested, Vec};
 /// +-+-----------+
 ///   |
 ///   v
-/// +-+-------------------------------+  +---------------------+
-/// |SpecFromIter                  +---->+SpecFromIterNested   |
-/// |where I:                      |  |  |where I:             |
-/// |  Iterator (default)----------+  |  |  Iterator (default) |
-/// |  vec::IntoIter               |  |  |  TrustedLen         |
-/// |  SourceIterMarker---fallback-+  |  +---------------------+
-/// +---------------------------------+
+/// +-+---------------------------------+  +---------------------+
+/// |SpecFromIter                    +---->+SpecFromIterNested   |
+/// |where I:                        |  |  |where I:             |
+/// |  Iterator (default)------------+  |  |  Iterator (default) |
+/// |  vec::IntoIter                 |  |  |  TrustedLen         |
+/// |  InPlaceCollect--(fallback to)-+  |  +---------------------+
+/// +-----------------------------------+
 /// ```
 pub(super) trait SpecFromIter<T, I> {
     fn from_iter(iter: I) -> Self;
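
For readers skimming the diagram above, the following is a minimal standalone sketch (hypothetical trait and method names, without the unstable specialization machinery the real `alloc` code uses) of the layered dispatch it depicts: `SpecFromIter` is the entry point, and the generic case falls back to `SpecFromIterNested`.

```rust
// Hypothetical, simplified stand-in for the dispatch shape in the diagram.
trait SpecFromIterNested<T, I> {
    fn from_iter_nested(iter: I) -> Self;
}

impl<T, I: Iterator<Item = T>> SpecFromIterNested<T, I> for Vec<T> {
    fn from_iter_nested(iter: I) -> Self {
        // Generic path: build a fresh Vec by extending it from the iterator.
        let mut v = Vec::new();
        v.extend(iter);
        v
    }
}

trait SpecFromIter<T, I> {
    fn from_iter_spec(iter: I) -> Self;
}

impl<T, I: Iterator<Item = T>> SpecFromIter<T, I> for Vec<T> {
    fn from_iter_spec(iter: I) -> Self {
        // Entry point: the real implementation first tries the in-place
        // (`vec::IntoIter`-backed) route and only falls back to the nested
        // path when that does not apply.
        <Vec<T> as SpecFromIterNested<T, I>>::from_iter_nested(iter)
    }
}

fn main() {
    let v = <Vec<i32> as SpecFromIter<i32, std::ops::Range<i32>>>::from_iter_spec(0..5);
    assert_eq!(v, [0, 1, 2, 3, 4]);
}
```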
