
Fix two bugs in clone_from one of which is mine 😅 (was introduced in #458) #465

Merged
merged 5 commits into rust-lang:master from fix_bugs_clone_from on Sep 1, 2023

Conversation

JustForFun88
Contributor

My bad 😅, I totally forgot that the table can be modified from another thread, which can leave `self` as an invalid table if a panic occurs while cloning elements (we would end up with an empty table whose control bytes are invalid, yet which can still be reached via `RawIterRange` during parallel iteration or via the `find` function).
This pull request fixes my bug by partially reverting to the old implementation of the `clone_from` function.
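The panic-safety hazard can be sketched with a drop guard that restores a valid (empty) state if cloning an element panics. This is a minimal toy illustration, not hashbrown's actual code: `ToyTable` and `clone_from_guarded` are invented names, and a plain `Vec` stands in for the raw table.

```rust
// A toy "table" holding cloned elements; stands in for hashbrown's RawTable.
struct ToyTable<T> {
    items: Vec<T>,
}

impl<T: Clone> ToyTable<T> {
    fn clone_from_guarded(&mut self, source: &ToyTable<T>) {
        // Drop guard: if cloning an element panics, leave `self` in a valid
        // (empty) state instead of a half-initialized one.
        struct Guard<'a, T> {
            table: &'a mut ToyTable<T>,
            done: bool,
        }
        impl<T> Drop for Guard<'_, T> {
            fn drop(&mut self) {
                if !self.done {
                    // Runs during unwinding: restore a valid empty state.
                    self.table.items.clear();
                }
            }
        }

        self.items.clear();
        let mut guard = Guard { table: self, done: false };
        for item in &source.items {
            guard.table.items.push(item.clone()); // may panic in T::clone
        }
        guard.done = true; // success: disarm the guard
    }
}
```

The real fix in the PR is more involved (control bytes, raw allocations), but the principle is the same: any observer that reaches the table after a panic must see a consistent state.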

Also fixes an allocator leak when `self_.buckets() != source.buckets()`. We used to overwrite the table in place with `(&mut **self_ as *mut Self).write(...)`: although we freed the old table's memory, we never dropped or reused the old allocator (this behavior was found during testing). Now, to allocate the new table, we reuse the old allocator, obtained via `ptr::read(&self_.table.alloc)`.
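A toy demonstration of the leak and the `ptr::read` fix, under the assumption that the table is overwritten in place with `ptr::write`. `CountingAlloc`, `Table`, and the `rebuild_*` functions are invented for illustration; a live-instance counter makes the leaked allocator visible.

```rust
use std::ptr;
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts live allocator instances so a leak shows up as a stuck counter.
static LIVE: AtomicUsize = AtomicUsize::new(0);

struct CountingAlloc;
impl CountingAlloc {
    fn new() -> Self {
        LIVE.fetch_add(1, Ordering::SeqCst);
        CountingAlloc
    }
}
impl Clone for CountingAlloc {
    fn clone(&self) -> Self {
        Self::new()
    }
}
impl Drop for CountingAlloc {
    fn drop(&mut self) {
        LIVE.fetch_sub(1, Ordering::SeqCst);
    }
}

struct Table {
    alloc: CountingAlloc,
    buckets: usize,
}

// Buggy variant: `ptr::write` overwrites `*dst` without running Drop,
// so the old allocator is never dropped (it leaks).
unsafe fn rebuild_leaky(dst: *mut Table, buckets: usize) {
    unsafe {
        ptr::write(dst, Table { alloc: CountingAlloc::new(), buckets });
    }
}

// Fixed variant: move the old allocator out with `ptr::read` and reuse it,
// so no new instance is created and nothing is leaked.
unsafe fn rebuild_reusing(dst: *mut Table, buckets: usize) {
    unsafe {
        let alloc = ptr::read(&(*dst).alloc);
        ptr::write(dst, Table { alloc, buckets });
    }
}
```

After `rebuild_leaky` the live count rises by one and never comes back down; `rebuild_reusing` keeps it flat, which is the behavior the PR restores.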

The allocator question is worth thinking about. Maybe instead of reusing the old allocator, it would be better to use the new one via `alloc.clone()` and simply drop the old one?

P.S. Added tests for the cases described above.

// Avoid `Result::unwrap_or_else` because it bloats LLVM IR.
//
// SAFETY: This is safe as we are taking the size of an already allocated table,
// so capacity overflow cannot occur, `self.table.buckets()` is a power of two,
// and all allocator errors will be caught inside `RawTableInner::new_uninitialized`.
match Self::new_uninitialized(
self.table.alloc.clone(),
alloc,
Contributor Author

Maybe it's better to use alloc.clone() here?

Member

No, your current code looks fine and is arguably more correct than cloning the allocator.

In the long term, I would like to remove the `Clone` bound on the allocator. It is only required due to an implementation quirk where we have two `RawTable`s, each with its own allocator, when resizing a table.

Contributor Author

Interesting idea, and I have an idea for how to do it. We can simply split the allocator-independent part out of `RawTableInner` and move all the code that does not depend on the allocator there. That would let us have two `RawTableInner_Inner`s and a single allocator at the same time (in theory, at least; I haven't thought it through deeply yet).

Can I try to implement this?

Member

There's probably a simpler way to do this: don't store the allocator in `RawTableInner` at all, and instead pass a reference to the allocator wherever it is needed. I want to avoid an explosion of types in the `RawTable` code, which is already quite messy (mostly due to efforts to reduce compilation times).
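A rough sketch of that suggestion: the inner table stores no allocator, and every allocating operation receives one by reference, so two inner tables can coexist during a resize while only one allocator exists and no `Clone` bound is needed. All names here (`MemSource`, `Inner`, `grow`) are invented for illustration; hashbrown's real code would use the `Allocator` trait and raw allocations.

```rust
// Stand-in for an allocator trait; hands out a buffer of `n` bytes.
trait MemSource {
    fn alloc_buckets(&self, n: usize) -> Vec<u8>;
}

struct GlobalSource;
impl MemSource for GlobalSource {
    fn alloc_buckets(&self, n: usize) -> Vec<u8> {
        vec![0; n]
    }
}

// Inner table: no allocator field, hence no `A` type parameter.
struct Inner {
    ctrl: Vec<u8>, // stand-in for the control-byte allocation
}

impl Inner {
    // The allocator is passed in, not stored.
    fn with_capacity(alloc: &impl MemSource, buckets: usize) -> Self {
        Inner { ctrl: alloc.alloc_buckets(buckets) }
    }

    fn grow(&mut self, alloc: &impl MemSource) {
        // Two `Inner` values briefly coexist here, but there is still
        // only one allocator, borrowed by reference.
        let new = Inner::with_capacity(alloc, self.ctrl.len() * 2);
        *self = new;
    }
}
```

This mirrors the shape the follow-up PR eventually adopted: the outer table owns the allocator and threads it into the inner table's methods.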



Amanieu commented Sep 1, 2023

@bors r+


bors commented Sep 1, 2023

📌 Commit 8a4a1c7 has been approved by Amanieu

It is now in the queue for this repository.


bors commented Sep 1, 2023

⌛ Testing commit 8a4a1c7 with merge a33acbe...


bors commented Sep 1, 2023

☀️ Test successful - checks-actions
Approved by: Amanieu
Pushing a33acbe to master...

@bors bors merged commit a33acbe into rust-lang:master Sep 1, 2023
26 checks passed
@JustForFun88 JustForFun88 deleted the fix_bugs_clone_from branch September 2, 2023 03:33
bors added a commit that referenced this pull request Sep 5, 2023
Make allocator not `Clone`

`@Amanieu` Implements the idea proposed in #465 (comment). The `A: Allocator + Clone` requirement remains only for `Clone` implementations.

~~**The pull request is basically complete, just a final review (and maybe some tweaks if needed). Also, I haven't documented the functions yet. This can be done after the initial approval of this pull request.**~~

Main changes made:
1. The allocator moved from `RawTableInner` to `RawTable`;
2. The `IS_ZERO_SIZED_TYPE` and `DATA_NEEDS_DROP` constants moved from separate structures to the `SizedTypeProperties` trait;
3. For ease of implementation and to avoid duplication, some functions were moved down from `RawTable` to `RawTableInner`: `with_capacity`, `iter`, and `drop_elements`. Additionally, a `drop_inner_table` function was added, through which `impl Drop for RawTable<T, A>` is now implemented.
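Point 2 can be sketched with a blanket trait carrying associated consts, in the spirit of the PR's `SizedTypeProperties` (the constant names below are approximations, not necessarily the ones used in hashbrown):

```rust
use std::mem;

// Per-type properties as associated consts with defaults, so a single
// blanket impl covers every sized type and no helper structs are needed.
trait SizedTypeProperties: Sized {
    const IS_ZERO_SIZED: bool = mem::size_of::<Self>() == 0;
    const NEEDS_DROP: bool = mem::needs_drop::<Self>();
}

impl<T> SizedTypeProperties for T {}
```

Because both `size_of` and `needs_drop` are `const fn`, the defaults are evaluated at compile time, which is what lets the table code branch on them without runtime cost.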