- Start Date: 2014-06-26
- RFC PR #:
- Rust Issue #:

# Summary

Modern C compilers use many techniques to make memory corruption harder to exploit. Rust should support these as a way of mitigating bugs in unsafe code, in foreign libraries, or in Rust itself. Because some hardening measures make code slower or have other undesirable effects, the user needs control over where and when hardening is applied.

A key part of this proposal is that users who don't have highly specific needs can express a general preference for "more secure and somewhat slower". This is much like specifying `-O` rather than enabling/disabling specific optimizations.

# Motivation

I would like Rust to be a credible option for writing secure production software as soon as possible. The benefits of memory safety are enormous. But we have a compiler much younger than GCC or Clang, and we need a good answer to concerns about compiler bugs. These hardening measures are battle-tested (quite literally) and will go a long way toward resolving this concern.

We also need to protect unsafe code written in Rust; if we don't have hardening, that's a big regression from C. And some of this is necessary just for effective hardening of foreign libraries. For example, a non-PIE Rust binary will provide ROP gadgets for an exploit targeting a perfectly hardened C library.

See the Rust ticket thread, my article on hardening with Autoconf, the Debian and Ubuntu wiki pages on hardening, etc.

# Detailed design

First we introduce a `harden` attribute. Some of its options, such as those relating to linking, can only be specified on the crate. Others can be specified for a crate and then overridden within it.

```rust
// Crate-level defaults: ASLR, RELRO, and canaries for stack buffers >= 8 bytes.
#![harden(aslr, relro, stack_canary(8))]

// ...

// Opt a single function out of the crate-wide canary setting.
#[harden(not(stack_canary))]
fn not_protected() {
}
```

Here the parameter to `stack_canary` indicates how large a function's stack buffers must be (in bytes) before the function gets a canary. The precise set of `harden` specifiers and their syntax will change over time; it's too much of an implementation detail to fully specify in an RFC.
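As an illustrative sketch only (the function names below are made up, and the threshold semantics are not final), `stack_canary(8)` would distinguish these two functions:

```rust
// Assumes the crate-level #![harden(stack_canary(8))] from the example above.

fn small_buffer() {
    let _tag: [u8; 4] = [0; 4]; // largest buffer is 4 bytes: no canary
}

fn large_buffer() {
    let _line: [u8; 64] = [0; 64]; // 64 bytes, at/above the threshold: canary emitted
}
```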

The intent is that most users will not use this attribute, and will instead pass the compiler flag `--harden-level <N>`, or `-H` as shorthand for `--harden-level 1`. The meaning of the levels is as follows (example invocations are sketched after the list):

- `--harden-level 0` — No hardening. We may still incidentally do things that make exploits harder, if they have no undesirable impact.
- `--harden-level 1` — A level of hardening suitable for most software. The user is willing to accept a slowdown on the order of a few percent. This will approximately match e.g. the hardening Ubuntu applies by default to all packages. The determination is platform-specific; for example, it would include PIE on AMD64 (where the cost is ~1%) but not on i386 (where the cost is 10%+).
- `--harden-level 2` — A level of hardening suitable for production software that heavily prioritizes security. The user is willing to accept a several-fold slowdown. For example, you might use this when compiling an SSH daemon for a bastion host that doesn't need to support high throughput. This would enable e.g. stack canaries for all functions regardless of their buffer size, PIE even on i386, etc. Level 2 and higher may also result in nondeterministic builds, for example by randomizing the layout of structures (those without `repr(C)`, of course!).
- `--harden-level 3` — Maximum hardening. The user does not care about performance at all, within reason. This will likely be the same as level 2 to start with, but could include things like ASan down the line (if it's not fast enough for level 2).
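For concreteness, here is how the proposed flags might be invoked; the exact spellings would be settled during implementation:

```sh
rustc --harden-level 1 main.rs   # hardening suitable for most software
rustc -H main.rs                 # shorthand for --harden-level 1
rustc --harden-level 2 main.rs   # security first, several-fold slowdown accepted
```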

We will also introduce default-`allow` lints for constructs that make hardening less effective. For example, the `{:?}` format specifier can leak addresses, which could be used to circumvent ASLR. The default level for this lint would change to `warn` at `--harden-level 1`. This would allow `debug!("{:?}", foo)` as long as hardened production builds use `--cfg ndebug`.
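To illustrate the kind of construct such a lint might flag (the lint itself is only proposed here), formatting a pointer with `{:?}` reveals where an object lives:

```rust
fn main() {
    let secret = [0u8; 32];
    // Debug-printing a raw pointer prints its address; if this reaches an
    // attacker-visible log, it undermines ASLR.
    println!("buffer at {:?}", secret.as_ptr());
}
```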

# Drawbacks

Some generic drawbacks apply, like "it makes the compiler more complicated".

Nondeterministic builds are bad for reproducibility, which is in turn bad for security. Maybe we should have a way to take an RNG seed, so that (for example) a trusted cloud build service could provide a custom, randomized executable, along with a way to verify the build on your own at a later date. Some of these issues were mentioned in Prof. Michael Franz's talk at Mozilla.
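A sketch of what that could look like, using a purely hypothetical `--harden-seed` flag that this RFC does not actually propose:

```sh
# Hypothetical: pin the randomization so the build is reproducible
# by anyone holding the seed.
rustc --harden-level 2 --harden-seed 57ab3c91 -o server main.rs

# Rebuild later with the same seed and confirm the binaries match.
rustc --harden-level 2 --harden-seed 57ab3c91 -o server-check main.rs
cmp server server-check
```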

# Alternatives

We could ditch the attributes and just have command-line flags.

We could consider a more generic way to set crate attributes from the command line.
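For example, with a hypothetical flag (the name and syntax here are invented for illustration):

```sh
rustc --crate-attr 'harden(aslr, relro)' lib.rs
```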

# Unresolved questions

What happens if you specify `--harden-level` and also crate-level hardening attributes?

Should the `harden` attribute be feature-gated? It seems so, because the exact syntax is neither stabilized nor fully specified by this RFC. In that case, do we require `#![feature(harden)]` even to use `--harden-level`? Perhaps it's only feature-gated at level 2 and higher, on the grounds that level 1 hardening is a harmless codegen implementation detail (we don't feature-gate everything that can cause a slight performance loss) but level 2 and higher can have observable untoward effects.

We can think about applying certain hardening (at certain levels) only to functions using the unsafe dialect, that is, `unsafe fn` as well as functions containing `unsafe { ... }` blocks. However, I'm not sure how much sense this makes. In the example

```rust
fn vulnerable(buf: &mut [u8]) {
    unsafe {
        // overflow buf here
    }
}

fn safe() {
    let mut buf: [u8; 8] = [0; 8];
    vulnerable(&mut buf);
}
```

it's `fn safe()` that needs to establish and check a stack canary. Most functions will transitively call unsafe code, for example in the implementations of core data structures. So we can't isolate this kind of thing very well without analyzing the dataflow of pointers to stack objects.