
Potential RefCell unsafety due to mem::forget #25841

Closed · Veedrac opened this issue May 27, 2015 · 10 comments

@Veedrac (Contributor) commented May 27, 2015

I don't have a 32-bit computer to test on, but I suspect that this will overflow RefCell's counter on 32-bit builds:

    use std::cell::RefCell;

    fn main() {
        let overflower = RefCell::new(());
        for _ in 0u64..4294967296 {
            std::mem::forget(overflower.borrow());
        }
    }

This can allow shared mutable pointers. I imagine similar problems hold for Rc and co., and the problem presumably also occurs on 64-bit builds if you leave it running for a few hundred years.
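For illustration, the analogous leak shape for Rc (a hypothetical repro along the same lines): every forgotten clone bumps the strong count and nothing ever brings it back down.

    use std::rc::Rc;

    fn main() {
        let rc = Rc::new(());
        // On a 32-bit target this would overflow Rc's strong count the same way.
        for _ in 0u64..4294967296 {
            std::mem::forget(rc.clone());
        }
    }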

This is what made me think of the problem:

    // Values [1, MAX-1] represent the number of `Ref` active
    // (will not outgrow its range since `usize` is the size of the address space)

But this isn't true if memory can be reclaimed without the destructors running (I think).


To fix it, you just need to use saturating addition, since only leaks can make this happen, and once you've leaked in this way you can't ever hope to run all the destructors anyway. (Actually, this isn't strictly true if you allow for compressed storage.)
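Concretely, the difference between a wrapping and a saturating increment on a 32-bit counter (a small sketch, with u32 standing in for a 32-bit usize):

    fn main() {
        let n: u32 = u32::MAX - 1;                 // WRITING - 1 borrows outstanding
        assert_eq!(n.wrapping_add(2), 0);          // wraps back to UNUSED: the feared outcome
        assert_eq!(n.saturating_add(2), u32::MAX); // sticks at the sentinel, so later borrows fail
    }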

@huonw (Member) commented May 27, 2015

Good catch! (I wonder if this applies to std::sync::RwLock's read guard too.)

cc https://internals.rust-lang.org/t/rc-is-unsafe-mostly-on-32-bit-targets-due-to-overflow/2120

@nikomatsakis (Contributor) commented May 27, 2015

This case seems relatively easy to fix, since there are no threading complications -- saturating add (and checking for max before decrementing) is certainly one option.
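A minimal sketch of that approach, assuming the BorrowFlag representation quoted above (hypothetical code, not the fix that eventually landed):

    use std::cell::Cell;

    type BorrowFlag = usize;
    const WRITING: BorrowFlag = !0;

    fn acquire_shared(flag: &Cell<BorrowFlag>) -> bool {
        match flag.get() {
            WRITING => false, // mutably borrowed, or the count saturated earlier
            b => {
                // Saturate instead of wrapping: once the count reaches WRITING
                // it stays there and all further borrows are refused.
                flag.set(b.saturating_add(1));
                true
            }
        }
    }

    fn release_shared(flag: &Cell<BorrowFlag>) {
        let b = flag.get();
        // "Checking for max before decrementing": a saturated count can no
        // longer be trusted, so leave it poisoned rather than counting down.
        if b != WRITING {
            flag.set(b - 1);
        }
    }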

@Veedrac (Contributor, Author) commented May 27, 2015

AFAICT you don't need to check on decrement, assuming you don't allow for compressed storage. After all, you can't run 2^32-1 destructors without having had 2^32-1 objects alive at once, which is infeasible.
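Back-of-envelope for the 32-bit case (an illustration of that claim): each live Ref holds at least one 4-byte pointer, so keeping WRITING - 1 of them alive at once would need roughly 16 GiB, four times the entire 4 GiB address space.

    fn main() {
        let guards: u64 = (1 << 32) - 2; // WRITING - 1 live guards
        let bytes = guards * 4;          // one 4-byte pointer each, at minimum
        assert!(bytes > (1u64 << 32));   // ~16 GiB > 4 GiB address space
    }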

@huonw (Member) commented May 27, 2015

> std::sync::RwLock's read guard

On unix, it's not a direct problem with the underlying API:

     The pthread_rwlock_rdlock(), pthread_rwlock_timedrdlock(), and
     pthread_rwlock_tryrdlock() functions may fail if:

     [EAGAIN]           The lock could not be acquired because the maximum
                        number of read locks against lock has been exceeded.

However, it is a problem with how we use it at the moment:

    #[inline]
    pub unsafe fn read(&self) {
        let r = ffi::pthread_rwlock_rdlock(self.inner.get());
        debug_assert_eq!(r, 0);
    }

In particular, creating too many readers will only be detected and "handled" (panic) in debug builds. (cc @alexcrichton)
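One way to harden that snippet (a sketch only, not necessarily what std adopted): check the return code unconditionally, so an exceeded reader limit can't be mistaken for a successfully acquired lock in release builds.

    #[inline]
    pub unsafe fn read(&self) {
        let r = ffi::pthread_rwlock_rdlock(self.inner.get());
        // Check in all builds: returning normally on EAGAIN would let the
        // caller proceed as if it held the read lock.
        if r != 0 {
            panic!("pthread_rwlock_rdlock failed with error code {}", r);
        }
    }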

I can't find any info about this for the windows APIs.


> AFAICT you don't need to check on decrement, assuming you don't allow for compressed storage. After all, you can't run 2^32-1 destructors without having had 2^32-1 objects alive at once, which is infeasible.

RefCell behaves just like Rc in this respect, and there is more detailed discussion on this point on the internals thread I linked above.

@whataloadofwhat (Contributor) commented May 27, 2015

    const WRITING: BorrowFlag = !0;

Doesn't this mean that if you ever get to the stage where you would overflow, you'll just trigger an "already mutably borrowed" error, no matter how you try to borrow?
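For context, the shared-borrow path in cell.rs at the time looked roughly like this (a paraphrase, so treat the exact shape as an assumption):

    use std::cell::Cell;

    type BorrowFlag = usize;
    const WRITING: BorrowFlag = !0;

    fn try_shared_borrow(flag: &Cell<BorrowFlag>) -> bool {
        match flag.get() {
            WRITING => false,    // flag is already !0, so borrow() reports "already mutably borrowed"
            b => {
                flag.set(b + 1); // the final possible increment lands exactly on WRITING
                true
            }
        }
    }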

I modified the example (because I'm on 64 bit) and ran it (it was optimised out and it ran instantly, whoop):

    use std::cell::RefCell;

    fn main() {
        let overflower = RefCell::new(());
        loop {
            std::mem::forget(overflower.borrow());
        }
    }

Output:

    thread '<main>' panicked at 'RefCell<T> already mutably borrowed', /home/rustbuild/src/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/libcore/cell.rs:392

I guess that the error message is technically wrong but I think it's safe at least?

@Veedrac (Contributor, Author) commented May 27, 2015

@whataloadofwhat I can agree with that. Alternatively you can hit a debug_assert! in drop or get the wrong result from borrow_state (unstable), both of which are safe.

@retep998 (Member) commented May 28, 2015

> I can't find any info about this for the windows APIs.

On 32-bit, upon reaching exactly 0x10000000 readers, attempts to lock it using the try version return a failure, while the non-try version simply deadlocks.
I am still running the 64-bit test, and I will get back to you if I manage to reach the limit.

@rkjnsn (Contributor) commented May 29, 2015

> AFAICT you don't need to check on decrement, assuming you don't allow for compressed storage. After all, you can't run 2^32-1 destructors without having had 2^32-1 objects alive at once, which is infeasible.

@Stebalien noted the possibility of a wider-than-address-space collection on the internals discussion about overflows in Rc. Rust's memcpy-based move semantics enable the possibility of a collection that can store more objects than would fit in the address space by swapping objects to disk. This would allow the user to create and subsequently destroy more than 2^32 objects.
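A sketch of why that is possible (hypothetical swap_out helper, with Vec<u8> standing in for disk): a move in Rust is an untracked memcpy, so a collection can copy a value's bytes to secondary storage and forget the in-memory copy, leaving the value logically alive without occupying address space.

    use std::mem;

    fn swap_out<T>(value: T, backing: &mut Vec<u8>) {
        let size = mem::size_of::<T>();
        let ptr = &value as *const T as *const u8;
        // Copy the raw bytes out, then forget the original so its destructor
        // does not run: the swapped-out bytes now own the value.
        backing.extend_from_slice(unsafe { std::slice::from_raw_parts(ptr, size) });
        mem::forget(value);
    }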

@steveklabnik added the A-libs label May 29, 2015

@apasel422 (Member) commented Oct 14, 2016

Was this fixed by #33960?

@alexcrichton (Member) commented Oct 17, 2016

Indeed fixed!
