
auto merge of #19654 : aturon/rust/merge-rt, r=alexcrichton

This PR substantially narrows the notion of a "runtime" in Rust, and allows calling into Rust code directly without any setup or teardown. 

After this PR, the basic "runtime support" in Rust will consist of:

* Unwinding and backtrace support
* Stack guards

Other support, such as helper threads for timers or the notion of a "current thread", is initialized automatically upon first use.

When using Rust in an embedded context, it should now be possible to call a Rust function directly as a C function with absolutely no setup, though in that case panics will cause the process to abort. In this regard, the C/Rust interface will look much like the C/C++ interface.
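
As a minimal sketch of this kind of embedding (the function name, signature, and the C declaration shown in the comment are illustrative assumptions, not part of this PR):

```rust
// A Rust function exposed directly as a C symbol. With this change, the C
// side can call it without any prior runtime initialization; a panic in
// the body aborts the process instead of unwinding across the boundary.
//
//   /* C side, for illustration: */
//   /* int32_t rust_add(int32_t a, int32_t b); */
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a + b
}
```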

In more detail, this PR:

* Merges `librustrt` back into `std::rt`, undoing the facade. While doing so, it removes a substantial amount of redundant functionality (such as mutexes defined in the `rt` module). Code using `librustrt` can now call into `std::rt` to e.g. start executing Rust code with unwinding support.

* Allows all runtime data to be initialized lazily, including the "current thread", the "at_exit" infrastructure, and the "args" storage.

* Deprecates and largely removes `std::task`, along with the widespread requirement that there be a "current task" for many APIs in `std`. The entire task infrastructure is replaced with `std::thread`, which provides a more standard API for creating and manipulating native OS threads. In particular, it's possible to join on a created thread and to get a handle to the currently-running thread. In addition, threads are equipped with basic blocking support in the form of `park`/`unpark` operations (following a tradition in some OSes as well as the JVM); a short sketch of this API follows the list below. See the `std::thread` documentation for more details.

* Channels are refactored to use a new internal blocking infrastructure that itself sits on top of `park`/`unpark`.
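
The sketch below exercises the `Thread::spawn`/`join`, `Thread::current`, and `park`/`unpark` shapes described above. It is only an illustration of the API introduced here; the exact signatures are assumptions and differ from later Rust releases.

```rust
use std::thread::Thread;

fn main() {
    // Spawn a native OS thread and join on it; `join` blocks until the
    // child finishes and hands back its result (or the panic payload).
    let _result = Thread::spawn(move || 40 + 2).join();

    // Basic blocking support: grab a handle to the current thread, hand
    // it to a child, and park until the child unparks us. If `unpark`
    // runs first, the token it leaves makes the later `park` return
    // immediately, so this does not deadlock.
    let me = Thread::current();
    Thread::spawn(move || me.unpark()).detach();
    Thread::park();
}
```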

One important change here is that, following most threading models, a Rust program now ends when its main thread does. On the other hand, threads will often be created with an RAII-style join handle, which re-institutes blocking semantics naturally (and with finer-grained control).
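
For example (under the same API assumptions as the sketch above), a detached child may be cut off when `main` returns, whereas holding and joining the handle guarantees the child's work completes first:

```rust
use std::thread::Thread;

fn main() {
    // Detached: the process may exit before this ever prints, because
    // the program now ends when the main thread does.
    Thread::spawn(move || println!("may be cut short")).detach();

    // Joined: `main` blocks here until the child completes, so this
    // output is always produced before the process exits.
    let _ = Thread::spawn(move || println!("always printed")).join();
}
```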

This is very much a:

[breaking-change]

Closes #18000
r? @alexcrichton
bors committed Dec 19, 2014
2 parents 6bdce25 + 903c5a8 commit 0efafac398ff7f28c5f0fe756c15b9008b3e0534
Showing with 3,960 additions and 5,596 deletions.
  1. +3 −4 mk/crates.mk
  2. +3 −3 src/compiletest/runtest.rs
  3. +19 −17 src/doc/guide-tasks.md
  4. +7 −5 src/doc/guide.md
  5. +17 −9 src/doc/intro.md
  6. +3 −2 src/liballoc/arc.rs
  7. +7 −7 src/libcollections/slice.rs
  8. +3 −3 src/libcoretest/finally.rs
  9. +2 −2 src/liblibc/lib.rs
  10. +4 −5 src/librustc_driver/lib.rs
  11. +11 −6 src/librustc_trans/back/write.rs
  12. +8 −3 src/librustdoc/lib.rs
  13. +3 −2 src/librustdoc/test.rs
  14. +0 −65 src/librustrt/at_exit_imp.rs
  15. +0 −61 src/librustrt/bookkeeping.rs
  16. +0 −132 src/librustrt/lib.rs
  17. +0 −131 src/librustrt/local.rs
  18. +0 −404 src/librustrt/local_ptr.rs
  19. +0 −727 src/librustrt/mutex.rs
  20. +0 −559 src/librustrt/thread.rs
  21. +0 −115 src/librustrt/thread_local_storage.rs
  22. +0 −136 src/librustrt/util.rs
  23. +17 −15 src/{librustrt → libstd}/c_str.rs
  24. +83 −0 src/libstd/comm/blocking.rs
  25. +62 −123 src/libstd/comm/mod.rs
  26. +50 −52 src/libstd/comm/oneshot.rs
  27. +47 −37 src/libstd/comm/select.rs
  28. +68 −76 src/libstd/comm/shared.rs
  29. +28 −30 src/libstd/comm/stream.rs
  30. +136 −151 src/libstd/comm/sync.rs
  31. +32 −66 src/libstd/failure.rs
  32. +6 −6 src/libstd/io/comm_adapters.rs
  33. +3 −2 src/libstd/io/mod.rs
  34. +1 −1 src/libstd/io/net/pipe.rs
  35. +11 −8 src/libstd/io/net/tcp.rs
  36. +7 −4 src/libstd/io/process.rs
  37. +13 −25 src/libstd/io/stdio.rs
  38. +6 −8 src/libstd/lib.rs
  39. +4 −2 src/libstd/macros.rs
  40. +30 −391 src/libstd/os.rs
  41. +7 −7 src/libstd/path/posix.rs
  42. +7 −7 src/libstd/path/windows.rs
  43. +8 −7 src/libstd/rand/os.rs
  44. +19 −23 src/{librustrt → libstd/rt}/args.rs
  45. +75 −0 src/libstd/rt/at_exit_imp.rs
  46. +3 −979 src/libstd/rt/backtrace.rs
  47. +5 −5 src/{librustrt → libstd/rt}/exclusive.rs
  48. 0 src/{librustrt → libstd/rt}/libunwind.rs
  49. +3 −4 src/{librustrt → libstd/rt}/macros.rs
  50. +103 −101 src/libstd/rt/mod.rs
  51. +23 −30 src/{librustrt → libstd/rt}/task.rs
  52. +46 −70 src/{librustrt → libstd/rt}/unwind.rs
  53. +139 −3 src/libstd/rt/util.rs
  54. +1 −1 src/libstd/rtdeps.rs
  55. +6 −4 src/libstd/sync/atomic.rs
  56. +3 −2 src/libstd/sync/barrier.rs
  57. +3 −3 src/libstd/sync/condvar.rs
  58. +3 −3 src/libstd/sync/future.rs
  59. +8 −7 src/libstd/sync/mutex.rs
  60. +2 −2 src/libstd/sync/once.rs
  61. +4 −14 src/libstd/sync/poison.rs
  62. +17 −17 src/libstd/sync/rwlock.rs
  63. +3 −4 src/libstd/sync/task_pool.rs
  64. +139 −0 src/libstd/sys/common/backtrace.rs
  65. +5 −7 src/libstd/sys/common/helper_thread.rs
  66. +4 −0 src/libstd/sys/common/mod.rs
  67. +2 −2 src/{librustrt → libstd/sys/common}/stack.rs
  68. +35 −0 src/libstd/sys/common/thread.rs
  69. +68 −0 src/libstd/sys/common/thread_info.rs
  70. +2 −4 src/libstd/sys/common/thread_local.rs
  71. +493 −0 src/libstd/sys/unix/backtrace.rs
  72. +3 −0 src/libstd/sys/unix/mod.rs
  73. +132 −3 src/libstd/sys/unix/os.rs
  74. +21 −151 src/{librustrt → libstd/sys/unix}/stack_overflow.rs
  75. +271 −0 src/libstd/sys/unix/thread.rs
  76. +371 −0 src/libstd/sys/windows/backtrace.rs
  77. +1 −1 src/libstd/sys/windows/fs.rs
  78. +3 −0 src/libstd/sys/windows/mod.rs
  79. +198 −3 src/libstd/sys/windows/os.rs
  80. +115 −0 src/libstd/sys/windows/stack_overflow.rs
  81. +96 −0 src/libstd/sys/windows/thread.rs
  82. +43 −22 src/libstd/sys/windows/thread_local.rs
  83. +19 −518 src/libstd/task.rs
  84. +652 −0 src/libstd/thread.rs
  85. +8 −8 src/libstd/thread_local/mod.rs
  86. +0 −1 src/libstd/thread_local/scoped.rs
  87. +3 −0 src/{librustrt → libstd}/thunk.rs
  88. +8 −9 src/libtest/lib.rs
  89. +5 −5 src/rt/rust_try.ll
  90. +5 −5 src/test/bench/msgsend-pipes-shared.rs
  91. +6 −6 src/test/bench/msgsend-pipes.rs
  92. +6 −6 src/test/bench/shootout-pfib.rs
  93. +1 −1 src/test/run-fail/main-panic.rs
  94. +4 −4 src/test/run-fail/panic-task-name-none.rs
  95. +4 −5 src/test/run-fail/panic-task-name-owned.rs
  96. +0 −21 src/test/run-fail/panic-task-name-send-str.rs
  97. +0 −19 src/test/run-fail/panic-task-name-static.rs
  98. +1 −2 src/test/run-fail/test-panic.rs
  99. +1 −3 src/test/run-fail/test-should-fail-bad-message.rs
  100. +0 −12 src/test/run-make/bootstrap-from-c-with-native/Makefile
  101. +0 −24 src/test/run-make/bootstrap-from-c-with-native/lib.rs
  102. +0 −16 src/test/run-make/bootstrap-from-c-with-native/main.c
  103. +2 −2 src/test/run-pass/cleanup-rvalue-temp-during-incomplete-alloc.rs
  104. +2 −3 src/test/run-pass/foreign-call-no-runtime.rs
  105. +4 −2 src/test/run-pass/issue-16671.rs
  106. +2 −2 src/test/run-pass/issue-2190-1.rs
  107. +2 −2 src/test/run-pass/match-ref-binding-in-guard-3256.rs
  108. +2 −0 src/test/run-pass/out-of-stack.rs
  109. +8 −8 src/test/run-pass/running-with-no-runtime.rs
  110. +3 −3 src/test/run-pass/spawning-with-debug.rs
  111. +4 −4 src/test/run-pass/task-comm-12.rs
  112. +3 −3 src/test/run-pass/task-comm-3.rs
  113. +3 −3 src/test/run-pass/task-comm-9.rs
  114. +3 −3 src/test/run-pass/task-stderr.rs
  115. +3 −3 src/test/run-pass/tcp-stress.rs
  116. +2 −2 src/test/run-pass/writealias.rs
  117. +6 −6 src/test/run-pass/yield.rs
  118. +4 −4 src/test/run-pass/yield1.rs
mk/crates.mk
@@ -51,7 +51,7 @@
TARGET_CRATES := libc std flate arena term \
serialize getopts collections test time rand \
- log regex graphviz core rbml alloc rustrt \
+ log regex graphviz core rbml alloc \
unicode
RUSTC_CRATES := rustc rustc_typeck rustc_borrowck rustc_driver rustc_trans rustc_back rustc_llvm
HOST_CRATES := syntax $(RUSTC_CRATES) rustdoc regex_macros fmt_macros
@@ -62,9 +62,8 @@ DEPS_core :=
DEPS_libc := core
DEPS_unicode := core
DEPS_alloc := core libc native:jemalloc
-DEPS_rustrt := alloc core libc collections native:rustrt_native
-DEPS_std := core libc rand alloc collections rustrt unicode \
- native:rust_builtin native:backtrace
+DEPS_std := core libc rand alloc collections unicode \
+ native:rust_builtin native:backtrace native:rustrt_native
DEPS_graphviz := std
DEPS_syntax := std term serialize log fmt_macros arena libc
DEPS_rustc_driver := arena flate getopts graphviz libc rustc rustc_back rustc_borrowck \

src/compiletest/runtest.rs
@@ -32,7 +32,7 @@ use std::io;
use std::os;
use std::str;
use std::string::String;
-use std::task;
+use std::thread::Thread;
use std::time::Duration;
use test::MetricMap;
@@ -445,9 +445,9 @@ fn run_debuginfo_gdb_test(config: &Config, props: &TestProps, testfile: &Path) {
loop {
//waiting 1 second for gdbserver start
timer::sleep(Duration::milliseconds(1000));
- let result = task::try(move || {
+ let result = Thread::spawn(move || {
tcp::TcpStream::connect("127.0.0.1:5039").unwrap();
- });
+ }).join();
if result.is_err() {
continue;
}
src/doc/guide-tasks.md
@@ -1,5 +1,7 @@
% The Rust Tasks and Communication Guide
+**NOTE** This guide is badly out of date and needs to be rewritten.
+
# Introduction
Rust provides safe concurrent abstractions through a number of core library
@@ -22,7 +24,7 @@ from shared mutable state.
At its simplest, creating a task is a matter of calling the `spawn` function
with a closure argument. `spawn` executes the closure in the new task.
-```{rust}
+```{rust,ignore}
# use std::task::spawn;
// Print something profound in a different task using a named function
@@ -49,7 +51,7 @@ closure is limited to capturing `Send`-able data from its environment
ensures that `spawn` can safely move the entire closure and all its
associated state into an entirely different task for execution.
-```{rust}
+```{rust,ignore}
# use std::task::spawn;
# fn generate_task_number() -> int { 0 }
// Generate some state locally
@@ -75,7 +77,7 @@ The simplest way to create a channel is to use the `channel` function to create
of a channel, and a **receiver** is the receiving endpoint. Consider the following
example of calculating two results concurrently:
-```{rust}
+```{rust,ignore}
# use std::task::spawn;
let (tx, rx): (Sender<int>, Receiver<int>) = channel();
@@ -96,15 +98,15 @@ stream for sending and receiving integers (the left-hand side of the `let`,
`(tx, rx)`, is an example of a destructuring let: the pattern separates a tuple
into its component parts).
-```{rust}
+```{rust,ignore}
let (tx, rx): (Sender<int>, Receiver<int>) = channel();
```
The child task will use the sender to send data to the parent task, which will
wait to receive the data on the receiver. The next statement spawns the child
task.
-```{rust}
+```{rust,ignore}
# use std::task::spawn;
# fn some_expensive_computation() -> int { 42 }
# let (tx, rx) = channel();
@@ -123,7 +125,7 @@ computation, then sends the result over the captured channel.
Finally, the parent continues with some other expensive computation, then waits
for the child's result to arrive on the receiver:
-```{rust}
+```{rust,ignore}
# fn some_other_expensive_computation() {}
# let (tx, rx) = channel::<int>();
# tx.send(0);
@@ -154,7 +156,7 @@ spawn(move || {
Instead we can clone the `tx`, which allows for multiple senders.
-```{rust}
+```{rust,ignore}
let (tx, rx) = channel();
for init_val in range(0u, 3) {
@@ -179,7 +181,7 @@ Note that the above cloning example is somewhat contrived since you could also
simply use three `Sender` pairs, but it serves to illustrate the point. For
reference, written with multiple streams, it might look like the example below.
-```{rust}
+```{rust,ignore}
# use std::task::spawn;
// Create a vector of ports, one for each child task
@@ -203,7 +205,7 @@ getting the result later.
The basic example below illustrates this.
-```{rust}
+```{rust,ignore}
use std::sync::Future;
# fn main() {
@@ -230,7 +232,7 @@ called.
Here is another example showing how futures allow you to background
computations. The workload will be distributed on the available cores.
-```{rust}
+```{rust,ignore}
# use std::num::Float;
# use std::sync::Future;
fn partial_sum(start: uint) -> f64 {
@@ -268,7 +270,7 @@ Here is a small example showing how to use Arcs. We wish to run concurrently
several computations on a single large vector of floats. Each task needs the
full vector to perform its duty.
-```{rust}
+```{rust,ignore}
use std::num::Float;
use std::rand;
use std::sync::Arc;
@@ -295,7 +297,7 @@ The function `pnorm` performs a simple computation on the vector (it computes
the sum of its items at the power given as argument and takes the inverse power
of this value). The Arc on the vector is created by the line:
-```{rust}
+```{rust,ignore}
# use std::rand;
# use std::sync::Arc;
# fn main() {
@@ -309,7 +311,7 @@ the wrapper and not its contents. Within the task's procedure, the captured
Arc reference can be used as a shared reference to the underlying vector as
if it were local.
-```{rust}
+```{rust,ignore}
# use std::rand;
# use std::sync::Arc;
# fn pnorm(nums: &[f64], p: uint) -> f64 { 4.0 }
@@ -346,17 +348,17 @@ and `()`, callers can pattern-match on a result to check whether it's an `Ok`
result with an `int` field (representing a successful result) or an `Err` result
(representing termination with an error).
-```{rust}
-# use std::task;
+```{rust,ignore}
+# use std::thread::Thread;
# fn some_condition() -> bool { false }
# fn calculate_result() -> int { 0 }
-let result: Result<int, Box<std::any::Any + Send>> = task::try(move || {
+let result: Result<int, Box<std::any::Any + Send>> = Thread::spawn(move || {
if some_condition() {
calculate_result()
} else {
panic!("oops!");
}
-});
+}).join();
assert!(result.is_err());
```
src/doc/guide.md
@@ -5217,6 +5217,8 @@ the same function, so our binary is a little bit larger.
# Tasks
+**NOTE**: this section is currently out of date and will be rewritten soon.
+
Concurrency and parallelism are topics that are of increasing interest to a
broad subsection of software developers. Modern computers are often multi-core,
to the point that even embedded devices like cell phones have more than one
@@ -5231,7 +5233,7 @@ library, and not part of the language. This means that in the future, other
concurrency libraries can be written for Rust to help in specific scenarios.
Here's an example of creating a task:
-```{rust}
+```{rust,ignore}
spawn(move || {
println!("Hello from a task!");
});
@@ -5261,7 +5263,7 @@ If tasks were only able to capture these values, they wouldn't be very useful.
Luckily, tasks can communicate with each other through **channel**s. Channels
work like this:
-```{rust}
+```{rust,ignore}
let (tx, rx) = channel();
spawn(move || {
@@ -5280,7 +5282,7 @@ which returns an `Result<T, TryRecvError>` and does not block.
If you want to send messages to the task as well, create two channels!
-```{rust}
+```{rust,ignore}
let (tx1, rx1) = channel();
let (tx2, rx2) = channel();
@@ -5340,7 +5342,7 @@ we'll just get the value immediately.
Tasks don't always succeed, they can also panic. A task that wishes to panic
can call the `panic!` macro, passing a message:
-```{rust}
+```{rust,ignore}
spawn(move || {
panic!("Nope.");
});
@@ -5349,7 +5351,7 @@ spawn(move || {
If a task panics, it is not possible for it to recover. However, it can
notify other tasks that it has panicked. We can do this with `task::try`:
-```{rust}
+```{rust,ignore}
use std::task;
use std::rand;
src/doc/intro.md
@@ -389,11 +389,13 @@ safe concurrent programs.
Here's an example of a concurrent Rust program:
```{rust}
+use std::thread::Thread;
+
fn main() {
for _ in range(0u, 10u) {
- spawn(move || {
+ Thread::spawn(move || {
println!("Hello, world!");
- });
+ }).detach();
}
}
```
@@ -403,7 +405,8 @@ This program creates ten threads, who all print `Hello, world!`. The
double bars `||`. (The `move` keyword indicates that the closure takes
ownership of any data it uses; we'll have more on the significance of
this shortly.) This closure is executed in a new thread created by
-`spawn`.
+`spawn`. The `detach` method means that the child thread is allowed to
+outlive its parent.
One common form of problem in concurrent programs is a 'data race.'
This occurs when two different threads attempt to access the same
@@ -418,13 +421,15 @@ problem.
Let's see an example. This Rust code will not compile:
```{rust,ignore}
+use std::thread::Thread;
+
fn main() {
let mut numbers = vec![1i, 2i, 3i];
for i in range(0u, 3u) {
- spawn(move || {
+ Thread::spawn(move || {
for j in range(0, 3) { numbers[j] += 1 }
- });
+ }).detach();
}
}
```
@@ -469,20 +474,21 @@ mutation doesn't cause a data race.
Here's what using an Arc with a Mutex looks like:
```{rust}
+use std::thread::Thread;
use std::sync::{Arc,Mutex};
fn main() {
let numbers = Arc::new(Mutex::new(vec![1i, 2i, 3i]));
for i in range(0u, 3u) {
let number = numbers.clone();
- spawn(move || {
+ Thread::spawn(move || {
let mut array = number.lock();
(*array)[i] += 1;
println!("numbers[{}] is {}", i, (*array)[i]);
- });
+ }).detach();
}
}
```
@@ -532,13 +538,15 @@ As an example, Rust's ownership system is _entirely_ at compile time. The
safety check that makes this an error about moved values:
```{rust,ignore}
+use std::thread::Thread;
+
fn main() {
let vec = vec![1i, 2, 3];
for i in range(1u, 3) {
- spawn(move || {
+ Thread::spawn(move || {
println!("{}", vec[i]);
- });
+ }).detach();
}
}
```
src/liballoc/arc.rs
@@ -39,6 +39,7 @@ use heap::deallocate;
///
/// ```rust
/// use std::sync::Arc;
+/// use std::thread::Thread;
///
/// fn main() {
/// let numbers = Vec::from_fn(100, |i| i as f32);
@@ -47,11 +48,11 @@ use heap::deallocate;
/// for _ in range(0u, 10) {
/// let child_numbers = shared_numbers.clone();
///
-/// spawn(move || {
+/// Thread::spawn(move || {
/// let local_numbers = child_numbers.as_slice();
///
/// // Work with the local numbers
-/// });
+/// }).detach();
/// }
/// }
/// ```

src/libcollections/slice.rs
@@ -1344,8 +1344,7 @@ pub mod raw {
#[cfg(test)]
mod tests {
- extern crate rustrt;
-
+ use std::boxed::Box;
use std::cell::Cell;
use std::default::Default;
use std::mem;
@@ -1629,9 +1628,10 @@ mod tests {
#[test]
fn test_swap_remove_noncopyable() {
// Tests that we don't accidentally run destructors twice.
- let mut v = vec![rustrt::exclusive::Exclusive::new(()),
- rustrt::exclusive::Exclusive::new(()),
- rustrt::exclusive::Exclusive::new(())];
+ let mut v = Vec::new();
+ v.push(box 0u8);
+ v.push(box 0u8);
+ v.push(box 0u8);
let mut _e = v.swap_remove(0);
assert_eq!(v.len(), 2);
_e = v.swap_remove(1);
@@ -1736,7 +1736,7 @@ mod tests {
v2.dedup();
/*
* If the boxed pointers were leaked or otherwise misused, valgrind
- * and/or rustrt should raise errors.
+ * and/or rt should raise errors.
*/
}
@@ -1750,7 +1750,7 @@ mod tests {
v2.dedup();
/*
* If the pointers were leaked or otherwise misused, valgrind and/or
- * rustrt should raise errors.
+ * rt should raise errors.
*/
}