
Deprecate Read::initializer in favor of ptr::freeze #58363

Closed
wants to merge 11 commits

Conversation

sfackler
Member

Read implementations should only write into the buffer passed to them, but they have the ability to read from it. Accessing uninitialized memory can easily cause UB, so there is then a question of what a user of a reader should do to initialize buffers.

Previously, we allowed a Read implementation to promise it wouldn't look
at the contents of the buffer, which allows the user to pass
uninitialized memory to it.

Instead, this PR adds a method to "freeze" undefined bytes into
arbitrary-but-defined bytes. This is currently done via an inline
assembly directive noting the address as an output, so LLVM no longer
knows it's uninitialized. There is a proposed "freeze" operation in LLVM
itself that would do this directly, but it hasn't been fully
implemented.

Some targets don't support inline assembly, so there we instead pass the
pointer to an extern "C" function, which is similarly opaque to LLVM.

The current approach is very low level. If we stabilize, we'll probably
want to add something like slice.freeze() to make this easier to use.

r? @alexcrichton

@rust-highfive rust-highfive added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Feb 10, 2019
@sfackler
Member Author

This doesn't currently work on asmjs and wasm, since those don't support inline assembly. What's the right way to link an extern function to libcore? It doesn't seem like we currently do anything like that.

/// ```
#[inline]
#[unstable(feature = "ptr_freeze", issue = "0")]
pub unsafe fn freeze<T>(dst: *mut T, count: usize) {
Member Author

Is this the right interface? It's currently a bit weird in that we don't actually use the count. It could alternatively just take the pointer, and say that it freezes all memory reachable through it?

Member Author

Also, should this be unsafe in the first place? Since it's not actually modifying any of the pointed-to data, does it matter if it's valid or not?

Member

Since it's already "basically stable", I wonder if this should take &mut [T] and move to std::mem?

I'd naively think that it could be safe and probably should be, but I'm not an expert!

We also briefly discussed maybe only taking u8 for now? I'm not sure how useful this would be beyond u8 and other POD types

Member
@RalfJung, Feb 11, 2019

I think it should take raw pointers so people don't have to create references to uninitialized data.

count being unused just comes from the fact that LLVM does not support "real" freeze, but I think this is a much better interface than "reachable from".

Member

Could we change this to T: ?Sized so that you can pass a slice in? Then the count parameter would no longer be necessary.

Contributor

AFAIK there's currently no way to form a *mut [T] without going through a reference first.

@rust-highfive
Collaborator

The job x86_64-gnu-llvm-6.0 of your PR failed on Travis (raw log). Through arcane magic we have determined that the following fragments from the build log may contain information about the problem.


[00:04:00] travis_fold:start:tidy
travis_time:start:tidy
tidy check
[00:04:00] tidy error: /checkout/src/libcore/ptr.rs:972: unexplained "```ignore" doctest; try one:
[00:04:00] 
[00:04:00] * make the test actually pass, by adding necessary imports and declarations, or
[00:04:00] * use "```text", if the code is not Rust code, or
[00:04:00] * use "```compile_fail,Ennnn", if the code is expected to fail at compile time, or
[00:04:00] * use "```should_panic", if the code is expected to fail at run time, or
[00:04:00] * use "```no_run", if the code should type-check but not necessary linkable/runnable, or
[00:04:00] * explain it like "```ignore (cannot-test-this-because-xxxx)", if the annotation cannot be avoided.
[00:04:00] 
[00:04:02] some tidy checks failed
[00:04:02] 
[00:04:02] 
[00:04:02] 
[00:04:02] command did not execute successfully: "/checkout/obj/build/x86_64-unknown-linux-gnu/stage0-tools-bin/tidy" "/checkout/src" "/checkout/obj/build/x86_64-unknown-linux-gnu/stage0/bin/cargo" "--no-vendor" "--quiet"
[00:04:02] 
[00:04:02] 
[00:04:02] failed to run: /checkout/obj/build/bootstrap/debug/bootstrap test src/tools/tidy
[00:04:02] Build completed unsuccessfully in 0:00:47
[00:04:02] Makefile:68: recipe for target 'tidy' failed
[00:04:02] make: *** [tidy] Error 1
The command "stamp sh -x -c "$RUN_SCRIPT"" exited with 2.
I'm a bot! I can only do what humans tell me to, so if this was not helpful or you have suggestions for improvements, please ping or otherwise contact @TimNN. (Feature Requests)

/// // We're passing this buffer to an arbitrary reader and aren't
/// // guaranteed they won't read from it, so freeze to avoid UB.
/// let mut buf: [u8; 4] = mem::uninitialized();
/// ptr::freeze(&mut buf, 1);
Member

Should this use buf.as_mut_ptr() and 4?

Member Author

I don't think it really matters either way - we're either freezing a single [u8; 4] value, or 4 u8 values.

Member

Oh sure, I just figured it was a bit odd compared to how we'd expect it to idiomatically be used

@alexcrichton
Member

I think one thing that might be good to add here as well is a few tests that exercise ptr::freeze in either codegen or run-pass tests. We want to basically make sure that undef doesn't show up in LLVM IR I think.

@petrochenkov
Contributor

cc @RalfJung

src/libcore/ptr.rs (outdated, resolved review thread)
@RalfJung
Member

Cc @nagisa @rkruppe with whom I had a brief chat about this on Zulip the other day.

@RalfJung
Member

Also, do we have some plan for how to let Miri do this? Miri can actually meaningfully take count into account. However, it would have to recognize this function as special and intercept it, or it will bail on the inline assembly. What is a good way to do that? An intrinsic? Cc @oli-obk

Contributor
@Centril left a comment

Nits :)

/// Uninitialized memory has undefined contents, and interation with that data
/// can easily cause undefined behavior. This function "freezes" memory
/// contents, converting uninitialized memory to initialized memory with
/// arbitrary conents so that use of it is well defined.
Contributor

Suggested change
/// arbitrary conents so that use of it is well defined.
/// arbitrary contents so that use of it is well defined.

Member

"arbitrary but fixed contents" might be a better formulation.

Also it might be worth noting that use is only well defined for integer type -- even with arbitrary but fixed contents, using this with bool or &T is UB.

Contributor

Also it might be worth noting that use is only well defined for integer type -- even with arbitrary but fixed contents, using this with bool or &T is UB.

That's a great point! Definitely worth noting.

Member Author

I have this phrased as "Every bit representation of T must be a valid value", but I don't think that's the best way of saying that. Ideas?

Contributor

I have this phrased as "Every bit representation of T must be a valid value",

Is there perhaps an auto-trait waiting to be invented for that? We could ostensibly make the API a bit safer by adding a constraint T: TheTrait...?

Member Author

There's been talk of this kind of thing for quite a while (pub auto trait Pod {}), but I think that'd be relevant as part of a safe interface over this specific function.

Member

I don't think this can be an auto trait -- auto traits are always implemented for fieldless enums, but this function is not sufficient to make a fieldless enum valid.

///
/// * `dst` must be [valid] for reads.
///
/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
Contributor

Suggested change
/// Note that even if `T` has size `0`, the pointer must be non-NULL and properly aligned.
/// Note that even if `size_of::<T>() == 0`, the pointer must be non-NULL and properly aligned.

Member

Note that this formulation is used consistently throughout this file, so I'd prefer that if it gets changed that happens in a separate PR.

Contributor

Ah; that's a good idea; I'll see if I can remember to make such a PR... :)

@oli-obk
Contributor

oli-obk commented Feb 11, 2019

Either an intrinsic or a lang item. I think an intrinsic is the Right Thing here, because it would allow the compiler to choose which magic to apply. So on some platforms the intrinsic lowering would emit inline assembly, and on others a call to the opaque extern function.

Although I can believe that writing this with cfg in pure Rust is easier (like done by this PR). If that is the preferred way, we can just make the function a lang item and thus miri knows when it's calling it and can intercept the call.

Co-Authored-By: sfackler <sfackler@gmail.com>
@nagisa
Member

nagisa commented Feb 11, 2019

This doesn't currently work on asmjs and wasm

A cursory look at the LLVM code reveals that an assembly parser exists, which would suggest that wasm does in fact support asm!. And indeed it does :)

@nagisa
Member

nagisa commented Feb 11, 2019

I think an intrinsic is the Right Thing here, because it would allow the compiler to choose which magic to apply.

It is also the right thing given that freeze may eventually become an LLVM instruction, but the implementation is also something that can be changed in the future. Nevertheless value is in doing things the right way the first time 'round :)

@sfackler
Member Author

A cursory look at the LLVM code reveals that an assembly parser exists, which would suggest that wasm does in fact support asm!. And indeed it does :)

Oh, great! I was just guessing off the fact that test::black_box is cfg'd off on those platforms.

I'll update it to an intrinsic tonight.

@@ -946,6 +946,58 @@ pub unsafe fn write_volatile<T>(dst: *mut T, src: T) {
intrinsics::volatile_store(dst, src);
}

/// Freezes `count * size_of::<T>()` bytes of memory, converting undefined data into
Member

I think this is the first time we talk about this kind of data in the docs. I usually call it "uninitialized data" as I feel that is easier to understand. It also is further away from LLVM's undef, which is good -- Rust's "uninitialized data" is much more like poison than undef.

/// arbitrary contents so that use of it is well defined.
///
/// This function has no runtime effect; it is purely an instruction to the
/// compiler. In particular, it does not actually write anything to the memory.
Member

This function does have an effect in the "Rust abstract machine" though, not just in the compiler. And of course it has a run-time effect by inhibiting optimizations.

Maybe a comparison with a compiler fence helps? Those also clearly have an effect on runtime behavior even though they do not have a runtime effect themselves.

Member

Oh, also I think we should be clear about this counting as a write access as far as mutability and data races are concerned.

Doing this on a shared reference is UB.

/// unsafe {
/// // We're passing this buffer to an arbitrary reader and aren't
/// // guaranteed they won't read from it, so freeze to avoid UB.
/// let mut buf: [u8; 4] = mem::uninitialized();
Member

Could you add a FIXME somewhere about porting this to MaybeUninit?

Member

(This is still open, from what I can see)

@RalfJung
Member

Could you add to the PR description an explanation of why you want to move away from the now-deprecated scheme, or add a link to where this was documented?

@RalfJung
Member

The docs say

This function has no runtime effect

I think this should be clearly marked as a detail of the current implementation. The specified behavior of this function is to freeze all uninitialized memory and keep initialized memory unchanged. How that is achieved and whether this has any run-time cost is up to the implementation.

I think this is like black_box, where the programmer may only rely on it being the identity function but implementations are encouraged to use this to inhibit optimizations.

@ghost

ghost commented Feb 12, 2019

How does everyone feel about adding a wrapper struct akin to MaybeUninit, ManuallyDrop, and UnsafeCell? So something like this:

pub struct Frozen<T>;

impl<T> Frozen<T> {
    pub fn new(t: T) -> Frozen<T>;
    pub fn into_inner(this: Frozen<T>) -> T;
}

impl<T: ?Sized> Frozen<T> {
    pub fn from_mut(t: &mut T) -> &mut Frozen<T>;
}

impl<T: ?Sized> Deref for Frozen<T> {
    type Target = T;
}
impl<T: ?Sized> DerefMut for Frozen<T> {}

@cramertj
Member

@eternaleye If the problem is literally just with the number zero, that seems solvable by making it 42 / 24601 / funny arbitrary-but-not-random-number-of-choice.

@eternaleye
Contributor

eternaleye commented Feb 20, 2019

@cramertj: The problem is that any fixed choice, precisely by failing to capture the nondeterminism, artificially hides risk. ~0 hides risk in BitAnd, and other values hide risk in other circumstances.

The source of danger is telling the compiler it's allowed to be certain, when the problem is that there may be uncertainty.

EDIT: Nasty case:

const fn foo(x: u64) -> u64 {
    ...
    ptr::freeze(...);
    ...
}

#[test]
fn check_foo() {
    assert!(foo(3) == 7)
}

This test silently became meaningless regarding runtime behavior, when the argument to foo is not a constant.

@Centril
Contributor

Centril commented Feb 20, 2019

This seems wrong to me-- I'd expect that it'd initialize to zero or something similar.

@cramertj That might work during CTFE, but does freeze(args...) do that, for the same args..., when executed at run-time?

@cramertj
Member

@Centril From your question I take it you view referential transparency of const fns run at runtime as a goal. Can you say more about why that's important to you? Do you know of usecases that it's needed for? I'd been (perhaps mistakenly) taking it as something of a given that we'd expose const fns that wouldn't be referentially transparent at runtime.

@oli-obk
Contributor

oli-obk commented Feb 20, 2019

Do you know of usecases that it's needed for?

One problem that I can foresee is optimizations changing the behavior of code by const evaluating a call to a const fn. If that const fn has different output at runtime than at compile time, such an optimization will result in behavioral changes. In the worst case, an array length check could be const evaluated, while the access is not, causing a wrong check to be optimized out.

That said, we already have such things as unstable const eval features (e.g. comparing pointers or casting pointers to usize). We marked these operations as unsafe to show that it's UB to use them in ways that cause different output with the same input depending on whether the function is const evaluated or not.

So we can just mark the freeze function as unsafe in const contexts.

@RalfJung
Member

RalfJung commented Feb 20, 2019

Btw, this also implies that freeze will never be a const fn. CTFE cannot be non-deterministic.

This seems wrong to me-- I'd expect that it'd initialize to zero or something similar.

But then CTFE could yield a different result than run-time code, which is confusing at best.

If that const fn has different output at runtime than at compile-time, such an optimization will result in behavioral changes. In the worst case, an array length check could be const evaluated, while the access is not, causing a wrong check to be optimized out.

Well, as long as the behavior at compile-time is one of the possible behaviors at run-time, such an optimization is still correct. If we say "freeze is non-deterministic", and then some freeze calls use 0 for uninitialized data and others do not, that is a perfectly correct optimization.

@RalfJung RalfJung added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Feb 20, 2019
@Centril
Contributor

Centril commented Feb 21, 2019

The following is what I think on const fns and determinism at run-time... if you want to discuss the subject further, let's do it somewhere else to not derail this PR too much. For now, we should not make freeze a const unsafe fn.


@Centril From your question I take it you view referential transparency of const fns run at runtime as a goal.

I think referential transparency is nice, but it is a stronger property than I think we can get away with, at least for polymorphic functions, due to the pervasiveness of &mut T in Rust. For example, we presumably want to make Iterator work in const contexts and so we must allow &mut self. See #57349 for an issue tracking &mut in const fn. In the absence of reachable &mut Ts in function arguments, referential transparency is retained and so you can at minimum ensure this for monomorphic functions. Given the loss of parametricity due to specialization, it becomes harder to ensure this for polymorphic functions, but I think you can still do so with a bound on type variables that rule out &mut T.

Instead of a type system guarantee of referential transparency, my goal is a weaker determinism property roughly as outlined by @RalfJung in their thoughts on compile-time function evaluation and type systems. Particularly, I think we should aspire to "CTFE correctness" in Ralf's terminology.

Can you say more about why that's important to you?

For the same reason as outlined by Ralf in the post and here (#58363 (comment)). It would be hugely surprising if execution is deterministic at compile-time but not at run-time. Moreover, I believe that the determinism of const fn represents an opportunity to give people tools to write more robust and maintainable software by being able to restrict power. Setting boundaries where you can divide code up into "functional cores" (pure stuff) and "imperative shells" (io) enhances local reasoning.

I don't think we can bear the complexity cost of another almost-const-fn-but-not and so const fn will have to be it if we are to have such controls. Hitherto I've also not seen use-cases that are significant enough to change anything.

Do you know of usecases that it's needed for?

One could imagine situations where relying on determinism at run-time is important.

If we move beyond const-generics and allow types to depend on run-time computations (this is not something that is on our roadmaps, but it would be sad to take steps to rule out such a long-term future...), e.g.

fn foo(n: usize) {
    let arr: [u8; dyn n] = ...;
}

we cannot say, given const fn bar(n: usize) -> usize which isn't ctfe-correct, that [u8; dyn bar(n)] is the same type as [u8; dyn bar(n)]. Moreover, it would be unsound to encode:

-- This is fine, we can already fake this today in Rust:
data (=) : a -> b -> Type where
   Refl : x = x

-- With non-determinism, `f` may give varying results for equal inputs
-- and so it would be unsound to claim that `f a = f b`.
cong : {f : t -> u} -> a = b -> f a = f b

We do not need β-reduction to be strongly normalizing for such computations to be fine if const fns are ctfe-correct.

@bors
Contributor

bors commented Feb 26, 2019

☔ The latest upstream changes (presumably #58357) made this pull request unmergeable. Please resolve the merge conflicts.

@gnzlbg
Contributor

gnzlbg commented Mar 1, 2019

@oli-obk Even if the behavior of branching on frozen memory is defined, doing so accidentally is probably a bug. So I think I would prefer if miri would warn or error in branching on frozen memory by default, and if we could offer a way to suppress the lint (e.g. #[miri::allow(frozen_branch)]), so that functions that need to actually do this can document in the code why they are doing what they are doing, and all other code running under miri is protected from non-determinism related to frozen by default.

@gnzlbg
Contributor

gnzlbg commented Mar 1, 2019

Maybe also detecting if a function "leaks" frozen memory, and requiring a #[miri::allow(frozen_leak)] on those functions to silence the error.

Also, I would prefer if the way in which miri initializes frozen memory were slightly configurable: not only always using 0, but, for example, also being able to tell miri to initialize this memory in a non-deterministic, random way every time.

@RalfJung
Member

RalfJung commented Mar 1, 2019

There's an easy way to do this, which is to make freeze a NOP in Miri. But then also 0 + frozen would be considered an error. That's probably sufficient for most cases you are concerned about here.

To actually distinguish uninitialized and frozen data is certainly possible, but very expensive in terms of code complexity and run-time cost. I don't think it will happen any time soon.

Extending Miri with non-determinism in a configurable way is on my personal roadmap.

@Dylan-DPC-zz

ping from triage @sfackler any updates on this?

@sfackler
Member Author

This is blocked on an RFC that I'm probably not going to have time to write in the immediate future, so let's close this PR for now.

@sfackler sfackler closed this Mar 11, 2019
@RalfJung
Member

I was told above that an RFC is not needed to experiment with this on nightly. The reason I didn't r+ is that there are review comments that didn't get addressed yet.

@RalfJung
Member

During the discussion about my latest blog post, it was discovered that when the allocator uses MADV_FREE, uninitialized memory can actually be unstable in practice. In particular, Facebook ran into actual crashes because the contents of uninitialized (allocated but not written-to) memory can change when using jemalloc.

One consequence of this is that it is impossible to implement "by-reference" freeze as a NOP. To me it looks like that makes ptr::freeze entirely unsuited for the purpose of Read.

@gnzlbg
Contributor

gnzlbg commented Jul 17, 2019

(allocated but not written-to) memory can change when using jemalloc.

AFAIK the C standard does not require anywhere that uninitialized memory must preserve its value, even if no program action modifies it.

For example, if the compiler can prove that the padding bytes of a struct are not modified, it can use them to store other objects:

struct S { char x; int32_t y; };
void foo(char input) {
    char z = 42;
    struct S s = { 0, 0 };
    char b = read_first_padding_byte(&s);
    assert(z == b);
    // MAYBE-OK: ^^^ the compiler might have stored z inside S
    if (input > 1) {
        z = 13;
    }
    char b2 = read_first_padding_byte(&s);
    assert(b == b2);  // MAYBE-FAIL: the z = 13 write modified the padding of S
}

In this example, the program reads the padding bytes of S, which are for all purposes always uninitialized memory. The program does not write to that memory, e.g., by performing a struct assignment that could copy the padding bytes from a different struct, or leaking a pointer to the struct in such a way that this can happen. So the compiler can put z in the stack in the location where the first padding byte of S lives, such that modifying z modifies the padding bytes of S, even if nothing modified S itself.

The same applies to heap memory returned by malloc. A sufficiently smart compiler that knows that this memory is uninitialized, and knows that the program does not write to it, can use it to store everything, such that if the programs reads uninitialized memory from it, it then reads different values each time.

In fact, modern compilers are even smarter than that. When you read uninitialized memory returned by malloc, the compiler does not even need to emit a read, whatever is in the registers is as good as any other result, and that can change - since there might not be any reads to the memory, this can allow eliding the allocation completely.

For example, clang optimizes this:

static char* global = (char*)malloc(1);
char uninit() { return *global; }

to:

uninit():                             # @uninit()
        ret

such that you will read different uninitialized memory each time you call uninit. It can do this, because it can prove nothing actually modifies global, so it always points to uninitialized memory, and while something reads it, all reads return undef, so there is no need to emit those reads, and since then nothing reads global, then there is no need to even allocate global at all.

So I don't know either how freeze could always guarantee that the value returned is always the same value. At best it can return some value that isn't undef, but that value can be different each time freeze is called.

@RalfJung
Member

RalfJung commented Jul 17, 2019

@gnzlbg It seems like you are missing some of the context established further up in this thread.

AFAIK the C standard does not require anywhere that uninitialized memory must preserve its value, even if no program action modifies it.

The C standards committee thinks it is reasonable for uninitialized memory to change without an action of the program even if the standard does not say anywhere that this is the case. I think that's silly; this is a case of the standard being awfully ambiguous at best and the committee knowing about this and still not fixing the problem.

But that is besides the point. We always knew that the compiler does not preserve uninitialized memory. The proposal to implement freeze was to basically use the "black box" trick, so the compiler has to assume that memory did, in fact, get initialized. That takes care of everything you wrote.

The problem, however, is that the operating system does not preserve uninitialized memory, and hence using "black box" does not actually work. This is the new bit of information I was relaying above.

@sfackler
Member Author

sfackler commented Jul 17, 2019 via email

Labels
S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author.