Revise std::thread API to join by default

This commit is part of a series that introduces a `std::thread` API to
replace `std::task`.

In the new API, `spawn` returns a `JoinGuard`, which by default will
join the spawned thread when dropped. It can also be used to join
explicitly at any time, returning the thread's result. Alternatively,
the spawned thread can be explicitly detached (so no join takes place).
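
A minimal sketch of the new surface, using only the calls that appear in the diffs below (`Thread::spawn`, `join`, `detach`); exact signatures may differ from the module docs:

```rust
use std::thread::Thread;

fn main() {
    // Dropping the returned `JoinGuard` joins the spawned thread, so this
    // thread is implicitly joined at the end of `main`.
    let _guard = Thread::spawn(move || {
        println!("joined on drop");
    });

    // `join` can also be called explicitly to retrieve the thread's result
    // (an `Err` if the thread panicked).
    let result = Thread::spawn(move || 1i + 1).join();
    assert_eq!(result.ok(), Some(2i));

    // `detach` opts out of joining entirely; the thread runs independently.
    Thread::spawn(move || {
        println!("running detached");
    }).detach();
}
```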

As part of this change, Rust processes now terminate when the main
thread exits, even if other detached threads are still running, moving
Rust closer to standard threading models. This new behavior may break code
that was relying on the previously implicit join-all.
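
For instance, a program like the following may now exit before its detached thread prints anything, where the old runtime would have implicitly joined it first (an illustrative sketch, not taken from the diff):

```rust
use std::io::timer;
use std::thread::Thread;
use std::time::Duration;

fn main() {
    Thread::spawn(move || {
        timer::sleep(Duration::milliseconds(50));
        // With the new behavior this may never be reached: the process
        // exits as soon as the main thread returns.
        println!("background work finished");
    }).detach();
    // No implicit join-all happens here any more.
}
```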

In addition to the above, the new thread API also offers some built-in
support for building blocking abstractions in user space; see the module
doc for details.
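
As a rough idea of what such an abstraction can look like, here is a hypothetical one-shot handoff built on park/unpark-style primitives; `Thread::park` and `Thread::current().unpark()` are assumed names, and the module doc in this series is the authoritative reference:

```rust
use std::sync::{Arc, Mutex};
use std::thread::Thread;

fn main() {
    let slot = Arc::new(Mutex::new(None::<int>));
    let receiver = Thread::current();   // handle used to wake this thread (assumed API)

    let slot2 = slot.clone();
    Thread::spawn(move || {
        *slot2.lock() = Some(42);
        receiver.unpark();              // wake the parked main thread (assumed API)
    }).detach();

    // `park` may wake spuriously, so re-check the condition in a loop.
    while slot.lock().is_none() {
        Thread::park();
    }
    let value = (*slot.lock()).unwrap_or(0);
    println!("got {}", value);
}
```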

Closes #18000

[breaking-change]
aturon committed Dec 19, 2014
1 parent 13f302d commit a27fbac
Showing 86 changed files with 464 additions and 1,786 deletions.
2 changes: 1 addition & 1 deletion src/compiletest/runtest.rs
@@ -445,7 +445,7 @@ fn run_debuginfo_gdb_test(config: &Config, props: &TestProps, testfile: &Path) {
loop {
//waiting 1 second for gdbserver start
timer::sleep(Duration::milliseconds(1000));
let result = Thread::with_join(move || {
let result = Thread::spawn(move || {
tcp::TcpStream::connect("127.0.0.1:5039").unwrap();
}).join();
if result.is_err() {
32 changes: 17 additions & 15 deletions src/doc/guide-tasks.md
@@ -1,5 +1,7 @@
% The Rust Tasks and Communication Guide

**NOTE** This guide is badly out of date and needs to be rewritten.

# Introduction

Rust provides safe concurrent abstractions through a number of core library
@@ -22,7 +24,7 @@ from shared mutable state.
At its simplest, creating a task is a matter of calling the `spawn` function
with a closure argument. `spawn` executes the closure in the new task.

```{rust}
```{rust,ignore}
# use std::task::spawn;
// Print something profound in a different task using a named function
@@ -49,7 +51,7 @@ closure is limited to capturing `Send`-able data from its environment
ensures that `spawn` can safely move the entire closure and all its
associated state into an entirely different task for execution.

```{rust}
```{rust,ignore}
# use std::task::spawn;
# fn generate_task_number() -> int { 0 }
// Generate some state locally
@@ -75,7 +77,7 @@ The simplest way to create a channel is to use the `channel` function to create
of a channel, and a **receiver** is the receiving endpoint. Consider the following
example of calculating two results concurrently:

```{rust}
```{rust,ignore}
# use std::task::spawn;
let (tx, rx): (Sender<int>, Receiver<int>) = channel();
@@ -96,15 +98,15 @@ stream for sending and receiving integers (the left-hand side of the `let`,
`(tx, rx)`, is an example of a destructuring let: the pattern separates a tuple
into its component parts).

```{rust}
```{rust,ignore}
let (tx, rx): (Sender<int>, Receiver<int>) = channel();
```

The child task will use the sender to send data to the parent task, which will
wait to receive the data on the receiver. The next statement spawns the child
task.

```{rust}
```{rust,ignore}
# use std::task::spawn;
# fn some_expensive_computation() -> int { 42 }
# let (tx, rx) = channel();
@@ -123,7 +125,7 @@ computation, then sends the result over the captured channel.
Finally, the parent continues with some other expensive computation, then waits
for the child's result to arrive on the receiver:

```{rust}
```{rust,ignore}
# fn some_other_expensive_computation() {}
# let (tx, rx) = channel::<int>();
# tx.send(0);
@@ -154,7 +156,7 @@ spawn(move || {

Instead we can clone the `tx`, which allows for multiple senders.

```{rust}
```{rust,ignore}
let (tx, rx) = channel();
for init_val in range(0u, 3) {
@@ -179,7 +181,7 @@ Note that the above cloning example is somewhat contrived since you could also
simply use three `Sender` pairs, but it serves to illustrate the point. For
reference, written with multiple streams, it might look like the example below.

```{rust}
```{rust,ignore}
# use std::task::spawn;
// Create a vector of ports, one for each child task
@@ -203,7 +205,7 @@ getting the result later.

The basic example below illustrates this.

```{rust}
```{rust,ignore}
use std::sync::Future;
# fn main() {
@@ -230,7 +232,7 @@ called.
Here is another example showing how futures allow you to background
computations. The workload will be distributed on the available cores.

```{rust}
```{rust,ignore}
# use std::num::Float;
# use std::sync::Future;
fn partial_sum(start: uint) -> f64 {
@@ -268,7 +270,7 @@ Here is a small example showing how to use Arcs. We wish to run concurrently
several computations on a single large vector of floats. Each task needs the
full vector to perform its duty.

```{rust}
```{rust,ignore}
use std::num::Float;
use std::rand;
use std::sync::Arc;
@@ -295,7 +297,7 @@ The function `pnorm` performs a simple computation on the vector (it computes
the sum of its items at the power given as argument and takes the inverse power
of this value). The Arc on the vector is created by the line:

```{rust}
```{rust,ignore}
# use std::rand;
# use std::sync::Arc;
# fn main() {
@@ -309,7 +311,7 @@ the wrapper and not its contents. Within the task's procedure, the captured
Arc reference can be used as a shared reference to the underlying vector as
if it were local.

```{rust}
```{rust,ignore}
# use std::rand;
# use std::sync::Arc;
# fn pnorm(nums: &[f64], p: uint) -> f64 { 4.0 }
@@ -346,11 +348,11 @@ and `()`, callers can pattern-match on a result to check whether it's an `Ok`
result with an `int` field (representing a successful result) or an `Err` result
(representing termination with an error).

```{rust}
```{rust,ignore}
# use std::thread::Thread;
# fn some_condition() -> bool { false }
# fn calculate_result() -> int { 0 }
let result: Result<int, Box<std::any::Any + Send>> = Thread::with_join(move || {
let result: Result<int, Box<std::any::Any + Send>> = Thread::spawn(move || {
if some_condition() {
calculate_result()
} else {
12 changes: 7 additions & 5 deletions src/doc/guide.md
@@ -5217,6 +5217,8 @@ the same function, so our binary is a little bit larger.
# Tasks
**NOTE**: this section is currently out of date and will be rewritten soon.
Concurrency and parallelism are topics that are of increasing interest to a
broad subsection of software developers. Modern computers are often multi-core,
to the point that even embedded devices like cell phones have more than one
@@ -5231,7 +5233,7 @@ library, and not part of the language. This means that in the future, other
concurrency libraries can be written for Rust to help in specific scenarios.
Here's an example of creating a task:
```{rust}
```{rust,ignore}
spawn(move || {
println!("Hello from a task!");
});
@@ -5261,7 +5263,7 @@ If tasks were only able to capture these values, they wouldn't be very useful.
Luckily, tasks can communicate with each other through **channel**s. Channels
work like this:
```{rust}
```{rust,ignore}
let (tx, rx) = channel();
spawn(move || {
@@ -5280,7 +5282,7 @@ which returns a `Result<T, TryRecvError>` and does not block.
If you want to send messages to the task as well, create two channels!
```{rust}
```{rust,ignore}
let (tx1, rx1) = channel();
let (tx2, rx2) = channel();
Expand Down Expand Up @@ -5340,7 +5342,7 @@ we'll just get the value immediately.
Tasks don't always succeed, they can also panic. A task that wishes to panic
can call the `panic!` macro, passing a message:
```{rust}
```{rust,ignore}
spawn(move || {
panic!("Nope.");
});
@@ -5349,7 +5351,7 @@ spawn(move || {
If a task panics, it is not possible for it to recover. However, it can
notify other tasks that it has panicked. We can do this with `task::try`:
```{rust}
```{rust,ignore}
use std::task;
use std::rand;
26 changes: 17 additions & 9 deletions src/doc/intro.md
@@ -389,11 +389,13 @@ safe concurrent programs.
Here's an example of a concurrent Rust program:
```{rust}
use std::thread::Thread;
fn main() {
for _ in range(0u, 10u) {
spawn(move || {
Thread::spawn(move || {
println!("Hello, world!");
});
}).detach();
}
}
```
@@ -403,7 +405,8 @@ This program creates ten threads, which all print `Hello, world!`. The
double bars `||`. (The `move` keyword indicates that the closure takes
ownership of any data it uses; we'll have more on the significance of
this shortly.) This closure is executed in a new thread created by
`spawn`.
`spawn`. The `detach` method means that the child thread is allowed to
outlive its parent.
One common form of problem in concurrent programs is a 'data race.'
This occurs when two different threads attempt to access the same
@@ -418,13 +421,15 @@ problem.
Let's see an example. This Rust code will not compile:
```{rust,ignore}
use std::thread::Thread;

fn main() {
let mut numbers = vec![1i, 2i, 3i];

for i in range(0u, 3u) {
spawn(move || {
Thread::spawn(move || {
for j in range(0, 3) { numbers[j] += 1 }
});
}).detach();
}
}
```
@@ -469,20 +474,21 @@ mutation doesn't cause a data race.
Here's what using an Arc with a Mutex looks like:
```{rust}
use std::thread::Thread;
use std::sync::{Arc,Mutex};
fn main() {
let numbers = Arc::new(Mutex::new(vec![1i, 2i, 3i]));
for i in range(0u, 3u) {
let number = numbers.clone();
spawn(move || {
Thread::spawn(move || {
let mut array = number.lock();
(*array)[i] += 1;
println!("numbers[{}] is {}", i, (*array)[i]);
});
}).detach();
}
}
```
@@ -532,13 +538,15 @@ As an example, Rust's ownership system is _entirely_ at compile time. The
safety check that makes this an error about moved values:
```{rust,ignore}
use std::thread::Thread;
fn main() {
let vec = vec![1i, 2, 3];
for i in range(1u, 3) {
spawn(move || {
Thread::spawn(move || {
println!("{}", vec[i]);
});
}).detach();
}
}
```
5 changes: 3 additions & 2 deletions src/liballoc/arc.rs
@@ -39,6 +39,7 @@ use heap::deallocate;
///
/// ```rust
/// use std::sync::Arc;
/// use std::thread::Thread;
///
/// fn main() {
/// let numbers = Vec::from_fn(100, |i| i as f32);
@@ -47,11 +48,11 @@ use heap::deallocate;
/// for _ in range(0u, 10) {
/// let child_numbers = shared_numbers.clone();
///
/// spawn(move || {
/// Thread::spawn(move || {
/// let local_numbers = child_numbers.as_slice();
///
/// // Work with the local numbers
/// });
/// }).detach();
/// }
/// }
/// ```
6 changes: 5 additions & 1 deletion src/libcollections/slice.rs
@@ -1344,6 +1344,7 @@ pub mod raw {

#[cfg(test)]
mod tests {
use std::boxed::Box;
use std::cell::Cell;
use std::default::Default;
use std::mem;
@@ -1627,7 +1628,10 @@ mod tests {
#[test]
fn test_swap_remove_noncopyable() {
// Tests that we don't accidentally run destructors twice.
let mut v = vec![Box::new(()), Box::new(()), Box::new(())];
let mut v = Vec::new();
v.push(box 0u8);
v.push(box 0u8);
v.push(box 0u8);
let mut _e = v.swap_remove(0);
assert_eq!(v.len(), 2);
_e = v.swap_remove(1);
2 changes: 1 addition & 1 deletion src/libcore/borrow.rs
@@ -92,7 +92,7 @@ impl<'a, T, Sized? B> BorrowFrom<Cow<'a, T, B>> for B where B: ToOwned<T> {

/// Trait for moving into a `Cow`
pub trait IntoCow<'a, T, Sized? B> {
/// Moves `serlf` into `Cow`
/// Moves `self` into `Cow`
fn into_cow(self) -> Cow<'a, T, B>;
}

6 changes: 3 additions & 3 deletions src/libcoretest/finally.rs
@@ -9,7 +9,7 @@
// except according to those terms.

use core::finally::{try_finally, Finally};
use std::task::failing;
use std::thread::Thread;

#[test]
fn test_success() {
@@ -20,7 +20,7 @@ fn test_success() {
*i = 10;
},
|i| {
assert!(!failing());
assert!(!Thread::panicking());
assert_eq!(*i, 10);
*i = 20;
});
@@ -38,7 +38,7 @@ fn test_fail() {
panic!();
},
|i| {
assert!(failing());
assert!(Thread::panicking());
assert_eq!(*i, 10);
})
}
10 changes: 3 additions & 7 deletions src/librustc_driver/lib.rs
@@ -475,22 +475,18 @@ pub fn monitor<F:FnOnce()+Send>(f: F) {
static STACK_SIZE: uint = 32000000; // 32MB

let (tx, rx) = channel();
let mut w = Some(io::ChanWriter::new(tx)); // option dance
let w = io::ChanWriter::new(tx);
let mut r = io::ChanReader::new(rx);

let mut cfg = thread::cfg().name("rustc".to_string());
let mut cfg = thread::Builder::new().name("rustc".to_string());

// FIXME: Hacks on hacks. If the env is trying to override the stack size
// then *don't* set it explicitly.
if os::getenv("RUST_MIN_STACK").is_none() {
cfg = cfg.stack_size(STACK_SIZE);
}

let f = proc() {
std::io::stdio::set_stderr(box w.take().unwrap());
f()
};
match cfg.with_join(f).join() {
match cfg.spawn(move || { std::io::stdio::set_stderr(box w); f() }).join() {
Ok(()) => { /* fallthrough */ }
Err(value) => {
// Task panicked without emitting a fatal diagnostic
