
A soundness bug in std::fs #32670

Closed
notriddle opened this Issue Apr 1, 2016 · 34 comments

@notriddle (Contributor) commented Apr 1, 2016

This program writes to arbitrary memory, violating Rust's safety guarantees, despite using no unsafe code:

use std::fs;
use std::io;
use std::io::prelude::*;

fn main() {
    let i = 0;
    let j = &i as *const i32 as u64;
    let mut f = fs::OpenOptions::new().write(true).open("/proc/self/mem").unwrap();
    f.seek(io::SeekFrom::Start(j+16)).unwrap();
    let k = [16; 16];
    f.write(&k).unwrap();
}

Because the filesystem APIs cannot be made safe (blocking /proc paths specifically will not work, because symlinks can be created to them), File::create, File::open, and OpenOptions::open should be marked unsafe. I am working on an RFC for that right now.
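
As an illustration of the symlink problem, here is a minimal sketch of how a literal-path blacklist would be bypassed (the /tmp path and file name are hypothetical):

use std::fs::OpenOptions;
use std::os::unix::fs::symlink;

fn main() -> std::io::Result<()> {
    // Even if std rejected the literal path "/proc/self/mem",
    // a freshly created symlink reaches the same file.
    symlink("/proc/self/mem", "/tmp/definitely_not_mem")?;
    // `_f` is now a writable handle onto this process's own memory.
    let _f = OpenOptions::new().write(true).open("/tmp/definitely_not_mem")?;
    Ok(())
}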

@notriddle changed the title from "An unfixable bug in Rust's safety guarantees" to "A soundness bug in std::fs" on Apr 1, 2016

@jonas-schievink (Member) commented Apr 1, 2016

These methods are stable; they cannot be marked unsafe (almost the entire ecosystem would break). I guess things like /proc/self/mem are out of Rust's safety scope.

@notriddle (Contributor, Author) commented Apr 1, 2016

According to the stability document:

We reserve the right to fix compiler bugs, patch safety holes, and change type inference in ways that may occasionally require new type annotations.

@jonas-schievink (Member) commented Apr 1, 2016

may occasionally require new type annotations

unsafe is neither a type annotation, nor would the change be an "occasional" one, since these are such fundamental APIs (okay, that wording just refers to inference changes)

@notriddle (Contributor, Author) commented Apr 1, 2016

It's the "patch safety holes" part that justifies this. This is a safety hole. It's essentially the same as the scoped thread API, except that the scoped thread API was removed from Rust before it was stabilized.

@jonas-schievink (Member) commented Apr 1, 2016

I wouldn't consider this a safety hole. The fact that harmless file system operations can change arbitrary memory is unfortunate, but there will always be ways to somehow circumvent Rust's safety guarantees with external "help".

The scoped thread API was a process-internal API not provided by the OS but by the Rust standard lib alone, and was only unsafe because of wrong assumptions made while it was designed.

@rkruppe (Member) commented Apr 1, 2016

Marking a stable function as unsafe would probably be covered by the quoted wording even if the function was widely used, but that doesn't mean it's a reasonable interpretation in this case. Making opening a file unsafe is so baldly ridiculous that I am asking myself whether this is an April Fools' joke.

Surely it is possible to get around the symlink thing and avoid opening /proc/self/mem for writing?

@notriddle (Contributor, Author) commented Apr 1, 2016

@jonas-schievink (Member) commented Apr 1, 2016

It proposes to make println! unsafe. I am now reminded of why I don't like April Fools' Day.

@petrochenkov (Contributor) commented Apr 1, 2016

You can also use safe std::process::Command to launch an evil program which invades your process' memory and corrupts it.
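
For instance, a minimal sketch of that avenue, assuming gdb is installed and ptrace is permitted (the chosen flags and commands are only illustrative):

use std::process::Command;

fn main() {
    // Safe Rust spawning an external debugger that attaches to this very
    // process; the debugger, not Rust, can then rewrite our memory via ptrace.
    let pid = std::process::id().to_string();
    let status = Command::new("gdb")
        .arg("-p")
        .arg(&pid)
        .arg("-batch")
        .arg("-ex")
        .arg("info proc mappings")
        .status()
        .expect("failed to launch gdb");
    println!("gdb exited with {}", status);
}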

@llogiq (Contributor) commented Apr 1, 2016

What's worse, you could call unsafe code from safe code! This can only lead to security holes! Let's require unsafe { .. } around all the code and define every function as unsafe to remind programmers that it's a scary world out there...

@jonas-schievink (Member) commented Apr 1, 2016

The obvious solution is of course not to mark the methods as unsafe, but to drop Linux and Windows support and only support Redox (it's written in Rust so it must be safe) ;)

@rkruppe (Member) commented Apr 1, 2016

The obvious solution is of course not to mark the methods as unsafe, but to drop Linux and Windows support and only support Redox (it's written in Rust so it must be safe) ;)

That won't do. Redox uses unsafe. To make sure such mistakes don't happen again, we must stop using unsafe, perhaps even remove it after an appropriate waiting period (say, two releases). This will require a rewrite of the unsafe parts of the standard library, but I'm sure the compiler team will be receptive to migrating all the functions that currently use unsafe to compiler intrinsics (which will be safe because the compiler is never wrong).

FWIW I've long harbored thoughts that "safe" Rust does not go far enough. Removing unsafe is only the first step, supposedly "safe" code can still take incredibly dangerous actions (e.g., std::process::Command can be used to invoke rm -rf /). The Rust designers were wise to identify shared mutable access as a huge source of problems, but side effects are still permitted despite the overwhelming evidence that most unsafe code has side effects. Consequently, I'm currently drafting an RFC to make Rust a pure, 100% side-effect free language. main will become a pure action of type impl Unsafe<()>.

... aw crud. That requires a Monad trait. Yet another RFC blocked on HKT!

@notriddle (Contributor, Author) commented Apr 1, 2016

Except that deleting everything is not unsafe, while poking around arbitrarily in memory is.

@rkruppe (Member) commented Apr 1, 2016

That is unfortunate short-sighted corner-cutting. rm -rf / is the number one threat to cyber security. Ignoring it or declaring it out of scope will just diminish Rust's importance.

@jonas-schievink (Member) commented Apr 1, 2016

Won't rm -rf / also delete /proc/self/mem at some point? 😂

@notriddle (Contributor, Author) commented Apr 1, 2016

Nope. I get permission denied when I try to rm /proc/self/mem, even as root.

@nagisa (Contributor) commented Apr 2, 2016

Since this seems to be an April Fools' thing, and the thing's over, I'm closing. If my conjectures are incorrect, complain 😊

@nagisa closed this Apr 2, 2016

@rkruppe (Member) commented Apr 2, 2016

tbh if the code above actually works, maybe it should have an issue for real. I think it may be considered a soundness hole, although it may not be worth fixing. It shouldn't be this issue though; the silliness will detract from serious discussion.

@pmarcelll (Contributor) commented Apr 2, 2016

I guess this is similar to creating a safe wrapper for a C library: even if an unusual input is given and it triggers a bug/memory error in the library, the interface should remain safe.

@notriddle (Contributor, Author) commented Apr 2, 2016

The example works.

@nagisa (Contributor) commented Apr 3, 2016

I guess I’ll nominate this for libs team to see (since it is technically a way to circumvent memory safety), but I doubt the outcome will be any different.

Sorry for the premature close!

It shouldn't be this issue though

It doesn’t really matter IME, because the core of the issue is still here.

@nagisa reopened this Apr 3, 2016

@vi (Contributor) commented Apr 3, 2016

I think triggering memory unsafety from outside (using the filesystem, another process, etc.) is not in scope of Rust's safety guarantee.

Imagine implementing a library for controlling a repair robot. It has a long, flexible manipulator which can grab things, solder, connect to pins, do JTAG, etc. Should functions like Manipulator::raise or SolderingIron::activate be unsafe? No.

But one can program the RepairBot to loop the manipulator back to the robot's own back, open the lid, solder to the JTAG pins, and then overwrite memory, hence causing memory unsafety in Rust terms.

A more down-to-earth example: imagine implementing a debugger like gdb or scanmem in Rust. Debugging other programs is safe, but debugging oneself is not.

When triggering the unsafety involves some external component like the filesystem, it is out of scope for Rust's safe/unsafe. You can't reliably detect a strange loop.

@notriddle (Contributor, Author) commented Apr 3, 2016

It's not like it's going to be fixed, whatever happens.

@notriddle closed this Apr 3, 2016

@vks (Contributor) commented Apr 5, 2016

It is the responsibility of the one running the program to provide a safe interface to the operating system and hardware. Rust cannot guard against your computer literally blowing itself up because your program executed some booby-trapped instruction.

@notriddle (Contributor, Author) commented Apr 5, 2016

Are you saying that play.rust-lang.org, not to mention basically every Linux distribution in existence, is misconfigured?

@vks (Contributor) commented Apr 5, 2016

If you don't want this to happen, your program should be in a sandbox denying access to any file it does not need. rm -rf ~ is worse than most undefined behaviour. BTW, it seems your example does not segfault in release mode.

@vi (Contributor) commented Apr 5, 2016

@vks, rm -Rf ~ is a predictable, reproducible, reversible (if there are backups) data loss.

Undefined behaviour, on the other hand, can lead to malicious code execution (which, for example, sends private data from ~ somewhere); it is unpredictable, poorly reproducible, and may be irreversible.

@notriddle (Contributor, Author) commented Apr 5, 2016

  1. rm -rf ~ is a valid outcome of undefined behavior (as is anything else...)
  2. It doesn't segfault in release mode because the effect depends on the memory layout. Set the numbers right and it'll go off (see the sketch below).
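
A minimal sketch of what "setting the numbers right" could look like, targeting the variable's own address instead of nearby stack memory (still no unsafe code; the exact behaviour depends on optimization level):

use std::fs;
use std::io::{Seek, SeekFrom, Write};

fn main() {
    let i: i32 = 0;
    let addr = &i as *const i32 as u64;
    let mut mem = fs::OpenOptions::new()
        .write(true)
        .open("/proc/self/mem")
        .unwrap();
    // Seek to the exact address of `i` and overwrite it through the kernel.
    mem.seek(SeekFrom::Start(addr)).unwrap();
    mem.write_all(&42i32.to_le_bytes()).unwrap();
    // In a debug build this typically prints 42 even though `i` was never
    // reassigned; the optimizer may constant-fold the read in release mode.
    println!("i = {}", i);
}
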
@rkruppe (Member) commented Apr 5, 2016

I do not buy the suggestion that memory unsafety is okay if it's caused by interacting badly with the operating system or specific hardware. Certainly it is unacceptable for a safe Rust library to, say, invoke the read system call with an invalid pointer, overwriting arbitrary memory. The write is performed by the OS, but it's clearly both the fault of the Rust code and in its power to prevent that from happening. There's an often-repeated promise: that a Rust program that uses the standard library and no unsafe block of its own is memory safe. This promise is not fulfilled here.

The point of a safe Rust interface is to offer precisely that: a memory-safe interface, an API that Rust code can use without worry of memory unsafety, in all circumstances, whatever silly mistake the programmer might make. The point is not to assign blame but to isolate the unsafety and fight it, so that the end result is safer, more reliable programs. Therefore, functions that are not marked unsafe are held to a very high standard, no matter how contrived the circumstances in which they might misbehave. For example, nobody in their right mind would go out and leak four billion Rcs and overflow the refcount, but the standard library still got patched to prevent memory unsafety in that case.

Now, clearly, Rust cannot take care of everything. It can't prevent side channel attacks. It can't prevent authentication code from containing a logical bug that gives every user full privileges. It can't foresee a hardware error in future generations of CPUs that will cause it to write to address 0xDEADBEEE when a mov is executed with a destination of 0xDEADBEEF and pre-emptively add a check to all memory writes. It can't prevent a human (or a robot) from making the most of physical access to the hardware. It can't do a billion other things that are important for the secure and correct operation of computer systems. But that does not mean its responsibilities already end at the process boundary.

If a safe function in the Rust library is called, and it ends up overwriting memory of the running process, then that is a memory safety issue. It doesn't matter one bit if it goes through the OS, specifically, through the /proc pseudo file system — the system call works as documented and advertised. This is not a case of a single machine being ill-configured or a bug in an obscure pre-release version of a third-party library. It is fully expected and works on millions of machines.
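
For reference, a minimal sketch of the Rc scenario mentioned above (only a handful of iterations here; the real overflow needs about four billion):

use std::rc::Rc;

fn main() {
    let rc = Rc::new(0u8);
    for _ in 0..10 {
        // Each forgotten clone leaks, so the strong count only ever grows.
        std::mem::forget(rc.clone());
    }
    // Since the fix, the standard library aborts on refcount overflow instead
    // of letting the count wrap around and free memory that is still in use.
    println!("strong count = {}", Rc::strong_count(&rc));
}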

@notriddle (Contributor, Author) commented Apr 5, 2016

That's cool and all, but there's a reason I closed this bug: this particular safety hole is infeasible to fix. Other "memory-safe" languages (Python, Haskell, Java) share this hole, so there is probably no easy way to fix it, and the hard ways of fixing it would get in the way far more than they help. (Marking the file open APIs unsafe would just be stupid.)

@rkruppe (Member) commented Apr 5, 2016

@notriddle It probably wasn't clear from my rant, but I am standing by my earlier position of "this may very well be not worth fixing". Like you, I am skeptical that it can be reasonably prevented. But several recent comments in this thread seem to veer too far into another direction, sounding like "apologists" for memory unsafety so to speak. I am strongly of the opinion that leaving such a hole open must come from a pragmatist "We'd like to fix it but it's just not practical" place, not from a "It's hard to trigger and not our fault so whatevs :shrug:" direction.

@sanmai-NL commented Jan 3, 2018

@notriddle: can’t open() simply be patched to return an Err on opening /proc/self/mem for writing on Linux? Seems simple and fine.
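
Something like this hypothetical check (open_write_checked and the error message are made up for illustration; as the follow-up comments point out, it is easy to defeat):

use std::fs::{self, OpenOptions};
use std::io;
use std::path::Path;

fn open_write_checked(path: &Path) -> io::Result<fs::File> {
    // Naive literal-path check: a symlink, the equivalent /proc/<pid>/mem
    // path, or procfs mounted somewhere else sails right past it.
    if path == Path::new("/proc/self/mem") {
        return Err(io::Error::new(
            io::ErrorKind::PermissionDenied,
            "refusing to open /proc/self/mem for writing",
        ));
    }
    OpenOptions::new().write(true).open(path)
}

fn main() {
    println!("{:?}", open_write_checked(Path::new("/proc/self/mem")).err());
}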

@vi (Contributor) commented Jan 3, 2018

@sanmai-NL, what if procfs is mounted at /tmp and tmpfs at /proc?

Also imagine an operating system that has a socket analogue of /proc/self/mem: you connect to a special address and the socket becomes a view into the memory of your own process. Are sockets now unsafe, or must they check where we actually connect (and a netfilter-like mechanism can redirect addresses)?

The main point is that OSes allow memory introspection, and that should not be considered Rust-unsafe.

An even more intricate example: imagine a robotics library for an electronics repair robot that allows user code to control special manipulators that move around using motors and connect to pins (e.g. JTAG) of various boards nearby. But what if we command it to connect to the JTAG pins of the very board that is controlling the robot ("self-repair mode")? Now we can read/write any memory, including memory mapped into the Rust process. Does that make the motor-controlling or digital pin input/output functions Rust-unsafe?

@ranma42 (Contributor) commented Jan 4, 2018

There are also limitations/defects in common hardware, such as those exploited by https://en.wikipedia.org/wiki/Row_hammer to modify arbitrary memory. Does rowhammer imply that all memory accesses should be considered unsafe? XD
