Squeeze the last bits of tasks in documentation in favor of thread
An automated script was run against the `.rs` and `.md` files,
substituting every occurrence of `task` with `thread`. In the `.rs`
files, only the text in comment blocks was affected.
barosl committed May 8, 2015
1 parent cf76e63 commit ff332b6
Showing 75 changed files with 198 additions and 198 deletions.
4 changes: 2 additions & 2 deletions src/compiletest/compiletest.rs
@@ -226,15 +226,15 @@ pub fn run_tests(config: &Config) {
}

// android debug-info test uses remote debugger
// so, we test 1 task at once.
// so, we test 1 thread at once.
// also trying to isolate problems with adb_run_wrapper.sh ilooping
env::set_var("RUST_TEST_THREADS","1");
}

match config.mode {
DebugInfoLldb => {
// Some older versions of LLDB seem to have problems with multiple
// instances running in parallel, so only run one test task at a
// instances running in parallel, so only run one test thread at a
// time.
env::set_var("RUST_TEST_THREADS", "1");
}
2 changes: 1 addition & 1 deletion src/doc/complement-design-faq.md
@@ -96,7 +96,7 @@ code should need to run is a stack.
possibility is covered by the `match`, adding further variants to the `enum`
in the future will prompt a compilation failure, rather than runtime panic.
Second, it makes cost explicit. In general, the only safe way to have a
non-exhaustive match would be to panic the task if nothing is matched, though
non-exhaustive match would be to panic the thread if nothing is matched, though
it could fall through if the type of the `match` expression is `()`. This sort
of hidden cost and special casing is against the language's philosophy. It's
easy to ignore certain cases by using the `_` wildcard:
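
(The snippet that follows this sentence in the source file is collapsed in this diff view; as a minimal stand-in, not the original example, a `match` that ignores the remaining cases with `_` might look like this.)

```rust
let x = 5;

match x {
    0 => println!("zero"),
    1 => println!("one"),
    // `_` opts out of exhaustiveness checking for the remaining cases,
    // making the "ignore everything else" decision explicit at the match site.
    _ => println!("something else"),
}
```
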
12 changes: 6 additions & 6 deletions src/doc/complement-lang-faq.md
@@ -62,15 +62,15 @@ Data values in the language can only be constructed through a fixed set of initi
* There is no global inter-crate namespace; all name management occurs within a crate.
* Using another crate binds the root of _its_ namespace into the user's namespace.

## Why is panic unwinding non-recoverable within a task? Why not try to "catch exceptions"?
## Why is panic unwinding non-recoverable within a thread? Why not try to "catch exceptions"?

In short, because too few guarantees could be made about the dynamic environment of the catch block, as well as invariants holding in the unwound heap, to be able to safely resume; we believe that other methods of signalling and logging errors are more appropriate, with tasks playing the role of a "hard" isolation boundary between separate heaps.
In short, because too few guarantees could be made about the dynamic environment of the catch block, as well as invariants holding in the unwound heap, to be able to safely resume; we believe that other methods of signalling and logging errors are more appropriate, with threads playing the role of a "hard" isolation boundary between separate heaps.

Rust provides, instead, three predictable and well-defined options for handling any combination of the three main categories of "catch" logic:

* Failure _logging_ is done by the integrated logging subsystem.
* _Recovery_ after a panic is done by trapping a task panic from _outside_
the task, where other tasks are known to be unaffected.
* _Recovery_ after a panic is done by trapping a thread panic from _outside_
the thread, where other threads are known to be unaffected.
* _Cleanup_ of resources is done by RAII-style objects with destructors.

Cleanup through RAII-style destructors is more likely to work than in catch blocks anyways, since it will be better tested (part of the non-error control paths, so executed all the time).
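
A minimal sketch of both points, using `std::thread` (not the API discussed in the surrounding text): the panic is trapped from outside the failing thread via `join`, and the RAII guard's destructor still runs during unwinding.

```rust
use std::thread;

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // Runs during unwinding, so cleanup happens even though the thread panics.
        println!("cleaning up");
    }
}

fn main() {
    let handle = thread::spawn(|| {
        let _guard = Guard;
        panic!("boom");
    });

    // Recovery happens outside the failed thread; this thread is unaffected.
    assert!(handle.join().is_err());
}
```
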
@@ -91,8 +91,8 @@ We don't know if there's an obvious, easy, efficient, stock-textbook way of supp

There's a lot of debate on this topic; it's easy to find a proponent of default-sync or default-async communication, and there are good reasons for either. Our choice rests on the following arguments:

* Part of the point of isolating tasks is to decouple tasks from one another, such that assumptions in one task do not cause undue constraints (or bugs, if violated!) in another. Temporal coupling is as real as any other kind; async-by-default relaxes the default case to only _causal_ coupling.
* Default-async supports buffering and batching communication, reducing the frequency and severity of task-switching and inter-task / inter-domain synchronization.
* Part of the point of isolating threads is to decouple threads from one another, such that assumptions in one thread do not cause undue constraints (or bugs, if violated!) in another. Temporal coupling is as real as any other kind; async-by-default relaxes the default case to only _causal_ coupling.
* Default-async supports buffering and batching communication, reducing the frequency and severity of thread-switching and inter-thread / inter-domain synchronization.
* Default-async with transmittable channels is the lowest-level building block on which more-complex synchronization topologies and strategies can be built; it is not clear to us that the majority of cases fit the 2-party full-synchronization pattern rather than some more complex multi-party or multi-stage scenario. We did not want to force all programs to pay for wiring the former assumption into all communications.
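
A small sketch of the buffering point above, assuming today's `std::sync::mpsc` channels: `send` returns immediately, and messages accumulate until the receiver drains them.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // An asynchronous, unbounded channel: senders never block.
    let (tx, rx) = mpsc::channel();

    let producer = thread::spawn(move || {
        for i in 0..5 {
            tx.send(i).unwrap(); // returns immediately, message is buffered
        }
    });
    producer.join().unwrap();

    // The receiver drains whatever was buffered; the iterator ends once the
    // sender has been dropped.
    let received: Vec<i32> = rx.iter().collect();
    assert_eq!(received, vec![0, 1, 2, 3, 4]);
}
```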

## Why are channels half-duplex (one-way)?
6 changes: 3 additions & 3 deletions src/doc/grammar.md
@@ -789,8 +789,8 @@ bound := path | lifetime

### Boxes

## Tasks
## Threads

### Communication between tasks
### Communication between threads

### Task lifecycle
### Thread lifecycle
2 changes: 1 addition & 1 deletion src/doc/reference.md
@@ -3636,7 +3636,7 @@ that have since been removed):
* ML Kit, Cyclone: region based memory management
* Haskell (GHC): typeclasses, type families
* Newsqueak, Alef, Limbo: channels, concurrency
* Erlang: message passing, task failure, ~~linked task failure~~,
* Erlang: message passing, thread failure, ~~linked thread failure~~,
~~lightweight concurrency~~
* Swift: optional bindings
* Scheme: hygienic macros
4 changes: 2 additions & 2 deletions src/doc/style/errors/handling.md
@@ -1,7 +1,7 @@
% Handling errors

### Use task isolation to cope with failure. [FIXME]
### Use thread isolation to cope with failure. [FIXME]

> **[FIXME]** Explain how to isolate tasks and detect task failure for recovery.
> **[FIXME]** Explain how to isolate threads and detect thread failure for recovery.
### Consuming `Result` [FIXME]
8 changes: 4 additions & 4 deletions src/doc/style/errors/signaling.md
@@ -11,13 +11,13 @@ Errors fall into one of three categories:
The basic principle of the convention is that:

* Catastrophic errors and programming errors (bugs) can and should only be
recovered at a *coarse grain*, i.e. a task boundary.
recovered at a *coarse grain*, i.e. a thread boundary.
* Obstructions preventing an operation should be reported at a maximally *fine
grain* -- to the immediate invoker of the operation.

## Catastrophic errors

An error is _catastrophic_ if there is no meaningful way for the current task to
An error is _catastrophic_ if there is no meaningful way for the current thread to
continue after the error occurs.

Catastrophic errors are _extremely_ rare, especially outside of `libstd`.
@@ -28,7 +28,7 @@ Catastrophic errors are _extremely_ rare, especially outside of `libstd`.

For errors like stack overflow, Rust currently aborts the process, but
could in principle panic, which (in the best case) would allow
reporting and recovery from a supervisory task.
reporting and recovery from a supervisory thread.

## Contract violations

@@ -44,7 +44,7 @@ existing borrows have been relinquished.

A contract violation is always a bug, and for bugs we follow the Erlang
philosophy of "let it crash": we assume that software *will* have bugs, and we
design coarse-grained task boundaries to report, and perhaps recover, from these
design coarse-grained thread boundaries to report, and perhaps recover, from these
bugs.
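
A short sketch contrasting the two categories, with hypothetical `lookup` and `first_byte` helpers: obstructions come back to the immediate invoker as values, while contract violations panic and are only caught, if at all, at a thread boundary.

```rust
use std::collections::HashMap;

// Obstruction: the missing key is reported to the immediate invoker as a value.
fn lookup(config: &HashMap<String, String>, key: &str) -> Option<String> {
    config.get(key).cloned()
}

// Contract violation: calling this with an empty slice is a bug in the caller,
// so it panics rather than returning an error value.
fn first_byte(bytes: &[u8]) -> u8 {
    assert!(!bytes.is_empty(), "first_byte requires a non-empty slice");
    bytes[0]
}

fn main() {
    let config = HashMap::new();
    assert_eq!(lookup(&config, "verbose"), None);
    assert_eq!(first_byte(b"abc"), b'a');
}
```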

### Contract design
28 changes: 14 additions & 14 deletions src/doc/style/ownership/builders.md
@@ -23,7 +23,7 @@ If `T` is such a data structure, consider introducing a `T` _builder_:
4. The builder should provide one or more "_terminal_" methods for actually building a `T`.

The builder pattern is especially appropriate when building a `T` involves side
effects, such as spawning a task or launching a process.
effects, such as spawning a thread or launching a process.

In Rust, there are two variants of the builder pattern, differing in the
treatment of ownership, as described below.
@@ -115,24 +115,24 @@ Sometimes builders must transfer ownership when constructing the final type
`T`, meaning that the terminal methods must take `self` rather than `&self`:

```rust
// A simplified excerpt from std::task::TaskBuilder
// A simplified excerpt from std::thread::ThreadBuilder

impl TaskBuilder {
/// Name the task-to-be. Currently the name is used for identification
impl ThreadBuilder {
/// Name the thread-to-be. Currently the name is used for identification
/// only in failure messages.
pub fn named(mut self, name: String) -> TaskBuilder {
pub fn named(mut self, name: String) -> ThreadBuilder {
self.name = Some(name);
self
}

/// Redirect task-local stdout.
pub fn stdout(mut self, stdout: Box<Writer + Send>) -> TaskBuilder {
/// Redirect thread-local stdout.
pub fn stdout(mut self, stdout: Box<Writer + Send>) -> ThreadBuilder {
self.stdout = Some(stdout);
// ^~~~~~ this is owned and cannot be cloned/re-used
self
}

/// Creates and executes a new child task.
/// Creates and executes a new child thread.
pub fn spawn(self, f: proc():Send) {
// consume self
...
@@ -141,7 +141,7 @@ impl TaskBuilder {
```

Here, the `stdout` configuration involves passing ownership of a `Writer`,
which must be transferred to the task upon construction (in `spawn`).
which must be transferred to the thread upon construction (in `spawn`).
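
For comparison, assuming the current `std::thread::Builder` API rather than the `TaskBuilder` excerpt above, the same consuming pattern looks like this: each configuration method takes `self`, and `spawn` consumes the builder.

```rust
use std::thread;

fn main() {
    let handle = thread::Builder::new()
        .name("worker".to_string()) // takes `self`, returns the builder
        .spawn(|| {                 // consumes the builder
            println!("hello from a named thread");
        })
        .expect("failed to spawn thread");

    handle.join().unwrap();
}
```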

When the terminal methods of the builder require ownership, there is a basic tradeoff:

@@ -158,17 +158,17 @@ builder methods for a consuming builder should take and returned an owned

```rust
// One-liners
TaskBuilder::new().named("my_task").spawn(proc() { ... });
ThreadBuilder::new().named("my_thread").spawn(proc() { ... });

// Complex configuration
let mut task = TaskBuilder::new();
task = task.named("my_task_2"); // must re-assign to retain ownership
let mut thread = ThreadBuilder::new();
thread = thread.named("my_thread_2"); // must re-assign to retain ownership

if reroute {
task = task.stdout(mywriter);
thread = thread.stdout(mywriter);
}

task.spawn(proc() { ... });
thread.spawn(proc() { ... });
```

One-liners work as before, because ownership is threaded through each of the
2 changes: 1 addition & 1 deletion src/doc/style/ownership/destructors.md
@@ -8,7 +8,7 @@ go out of scope.
### Destructors should not fail. [FIXME: needs RFC]

Destructors are executed on task failure, and in that context a failing
Destructors are executed on thread failure, and in that context a failing
destructor causes the program to abort.
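
A minimal sketch of the pattern this guideline points at, using a hypothetical `LogFile` type: keep `Drop` infallible and best-effort, and expose a separate, explicitly fallible teardown method for callers who need to observe errors.

```rust
use std::fs::File;
use std::io::{self, Write};

struct LogFile {
    inner: Option<File>,
}

impl LogFile {
    /// Explicit teardown: callers that care about flush errors use this.
    fn close(mut self) -> io::Result<()> {
        match self.inner.take() {
            Some(mut f) => f.flush(),
            None => Ok(()),
        }
    }
}

impl Drop for LogFile {
    fn drop(&mut self) {
        // Best effort only: never panic here, since a destructor that panics
        // while already unwinding aborts the process.
        if let Some(mut f) = self.inner.take() {
            let _ = f.flush();
        }
    }
}

fn main() -> io::Result<()> {
    let log = LogFile { inner: Some(File::create("app.log")?) };
    log.close() // errors surface here instead of inside a destructor
}
```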

Instead of failing in a destructor, provide a separate method for checking for
8 changes: 4 additions & 4 deletions src/doc/style/style/comments.md
@@ -5,15 +5,15 @@
Use line comments:

``` rust
// Wait for the main task to return, and set the process error code
// Wait for the main thread to return, and set the process error code
// appropriately.
```

Instead of:

``` rust
/*
* Wait for the main task to return, and set the process error code
* Wait for the main thread to return, and set the process error code
* appropriately.
*/
```
@@ -55,7 +55,7 @@ For example:
/// Sets up a default runtime configuration, given compiler-supplied arguments.
///
/// This function will block until the entire pool of M:N schedulers has
/// exited. This function also requires a local task to be available.
/// exited. This function also requires a local thread to be available.
///
/// # Arguments
///
@@ -64,7 +64,7 @@ For example:
/// * `main` - The initial procedure to run inside of the M:N scheduling pool.
/// Once this procedure exits, the scheduling pool will begin to shut
/// down. The entire pool (and this function) will only return once
/// all child tasks have finished executing.
/// all child threads have finished executing.
///
/// # Return value
///
2 changes: 1 addition & 1 deletion src/doc/style/style/naming/containers.md
@@ -5,7 +5,7 @@ they enclose. Accessor methods often have variants to access the data
by value, by reference, and by mutable reference.

In general, the `get` family of methods is used to access contained
data without any risk of task failure; they return `Option` as
data without any risk of thread failure; they return `Option` as
appropriate. This name is chosen rather than names like `find` or
`lookup` because it is appropriate for a wider range of container types.
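
A one-line illustration of the convention with `Vec`: `get` reports a missing element as a value, while indexing treats it as a contract violation and panics.

```rust
fn main() {
    let v = vec![1, 2, 3];

    // `get` never panics; an out-of-range index comes back as `None`.
    assert_eq!(v.get(10), None);

    // Indexing panics on an out-of-range index:
    // let _ = v[10]; // the thread would panic here
    assert_eq!(v[0], 1);
}
```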

2 changes: 1 addition & 1 deletion src/doc/trpl/academic-research.md
@@ -24,7 +24,7 @@ Recommended for inspiration and a better understanding of Rust's background.
* [Thread scheduling for multiprogramming multiprocessors](http://www.eecis.udel.edu/%7Ecavazos/cisc879-spring2008/papers/arora98thread.pdf)
* [The data locality of work stealing](http://www.aladdin.cs.cmu.edu/papers/pdfs/y2000/locality_spaa00.pdf)
* [Dynamic circular work stealing deque](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.170.1097&rep=rep1&type=pdf) - The Chase/Lev deque
* [Work-first and help-first scheduling policies for async-finish task parallelism](http://www.cs.rice.edu/%7Eyguo/pubs/PID824943.pdf) - More general than fully-strict work stealing
* [Work-first and help-first scheduling policies for async-finish thread parallelism](http://www.cs.rice.edu/%7Eyguo/pubs/PID824943.pdf) - More general than fully-strict work stealing
* [A Java fork/join calamity](http://www.coopsoft.com/ar/CalamityArticle.html) - critique of Java's fork/join library, particularly its application of work stealing to non-strict computation
* [Scheduling techniques for concurrent systems](http://www.ece.rutgers.edu/%7Eparashar/Classes/ece572-papers/05/ps-ousterhout.pdf)
* [Contention aware scheduling](http://www.blagodurov.net/files/a8-blagodurov.pdf)
2 changes: 1 addition & 1 deletion src/doc/trpl/concurrency.md
@@ -6,7 +6,7 @@ and more cores, yet many programmers aren't prepared to fully utilize them.

Rust's memory safety features also apply to its concurrency story too. Even
concurrent Rust programs must be memory safe, having no data races. Rust's type
system is up to the task, and gives you powerful ways to reason about
system is up to the thread, and gives you powerful ways to reason about
concurrent code at compile time.

Before we talk about the concurrency features that come with Rust, it's important
2 changes: 1 addition & 1 deletion src/doc/trpl/iterators.md
@@ -42,7 +42,7 @@ loop is just a handy way to write this `loop`/`match`/`break` construct.
`for` loops aren't the only thing that uses iterators, however. Writing your
own iterator involves implementing the `Iterator` trait. While doing that is
outside of the scope of this guide, Rust provides a number of useful iterators
to accomplish various tasks. Before we talk about those, we should talk about a
to accomplish various threads. Before we talk about those, we should talk about a
Rust anti-pattern. And that's using ranges like this.

Yes, we just talked about how ranges are cool. But ranges are also very
8 changes: 4 additions & 4 deletions src/liballoc/arc.rs
@@ -31,7 +31,7 @@
//!
//! # Examples
//!
//! Sharing some immutable data between tasks:
//! Sharing some immutable data between threads:
//!
//! ```no_run
//! use std::sync::Arc;
@@ -48,7 +48,7 @@
//! }
//! ```
//!
//! Sharing mutable data safely between tasks with a `Mutex`:
//! Sharing mutable data safely between threads with a `Mutex`:
//!
//! ```no_run
//! use std::sync::{Arc, Mutex};
@@ -89,9 +89,9 @@ use heap::deallocate;
///
/// # Examples
///
/// In this example, a large vector of floats is shared between several tasks.
/// In this example, a large vector of floats is shared between several threads.
/// With simple pipes, without `Arc`, a copy would have to be made for each
/// task.
/// thread.
///
/// When you clone an `Arc<T>`, it will create another pointer to the data and
/// increase the reference counter.
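
A minimal sketch of what this documentation describes, assuming `std::sync::Arc` and `std::thread`: cloning the `Arc` copies only the pointer and bumps the reference count, so the vector itself is shared rather than duplicated per thread.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let numbers: Arc<Vec<f64>> = Arc::new(vec![1.0, 2.0, 3.0]);

    let handles: Vec<_> = (0..3usize)
        .map(|i| {
            // Each clone is just another pointer to the same vector.
            let shared = numbers.clone();
            thread::spawn(move || shared[i] * 2.0)
        })
        .collect();

    for handle in handles {
        println!("{}", handle.join().unwrap());
    }
}
```
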
4 changes: 2 additions & 2 deletions src/liballoc/lib.rs
@@ -26,14 +26,14 @@
//! There can only be one owner of a `Box`, and the owner can decide to mutate
//! the contents, which live on the heap.
//!
//! This type can be sent among tasks efficiently as the size of a `Box` value
//! This type can be sent among threads efficiently as the size of a `Box` value
//! is the same as that of a pointer. Tree-like data structures are often built
//! with boxes because each node often has only one owner, the parent.
//!
//! ## Reference counted pointers
//!
//! The [`Rc`](rc/index.html) type is a non-threadsafe reference-counted pointer
//! type intended for sharing memory within a task. An `Rc` pointer wraps a
//! type intended for sharing memory within a thread. An `Rc` pointer wraps a
//! type, `T`, and only allows access to `&T`, a shared reference.
//!
//! This type is useful when inherited mutability (such as using `Box`) is too
10 changes: 5 additions & 5 deletions src/libcore/atomic.rs
@@ -52,20 +52,20 @@
//! spinlock_clone.store(0, Ordering::SeqCst);
//! });
//!
//! // Wait for the other task to release the lock
//! // Wait for the other thread to release the lock
//! while spinlock.load(Ordering::SeqCst) != 0 {}
//! }
//! ```
//!
//! Keep a global count of live tasks:
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering, ATOMIC_USIZE_INIT};
//!
//! static GLOBAL_TASK_COUNT: AtomicUsize = ATOMIC_USIZE_INIT;
//! static GLOBAL_THREAD_COUNT: AtomicUsize = ATOMIC_USIZE_INIT;
//!
//! let old_task_count = GLOBAL_TASK_COUNT.fetch_add(1, Ordering::SeqCst);
//! println!("live tasks: {}", old_task_count + 1);
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::SeqCst);
//! println!("live threads: {}", old_thread_count + 1);
//! ```

#![stable(feature = "rust1", since = "1.0.0")]
4 changes: 2 additions & 2 deletions src/libcore/cell.rs
@@ -24,7 +24,7 @@
//! claim temporary, exclusive, mutable access to the inner value. Borrows for `RefCell<T>`s are
//! tracked 'at runtime', unlike Rust's native reference types which are entirely tracked
//! statically, at compile time. Because `RefCell<T>` borrows are dynamic it is possible to attempt
//! to borrow a value that is already mutably borrowed; when this happens it results in task panic.
//! to borrow a value that is already mutably borrowed; when this happens it results in thread panic.
//!
//! # When to choose interior mutability
//!
@@ -100,7 +100,7 @@
//! // Recursive call to return the just-cached value.
//! // Note that if we had not let the previous borrow
//! // of the cache fall out of scope then the subsequent
//! // recursive borrow would cause a dynamic task panic.
//! // recursive borrow would cause a dynamic thread panic.
//! // This is the major hazard of using `RefCell`.
//! self.minimum_spanning_tree()
//! }
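
A minimal sketch of the hazard described here: overlapping `RefCell` borrows are caught only at runtime, and the offending borrow panics the current thread.

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);

    let first = cell.borrow_mut();
    // A second overlapping borrow is a dynamic error, not a compile error:
    // let second = cell.borrow_mut(); // would panic: value already mutably borrowed
    drop(first);

    // Once the first borrow is released, borrowing again is fine.
    *cell.borrow_mut() += 1;
    assert_eq!(*cell.borrow(), 6);
}
```
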
2 changes: 1 addition & 1 deletion src/libcore/macros.rs
@@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.

/// Entry point of task panic, for details, see std::macros
/// Entry point of thread panic, for details, see std::macros
#[macro_export]
macro_rules! panic {
() => (
4 changes: 2 additions & 2 deletions src/liblog/lib.rs
@@ -228,7 +228,7 @@ thread_local! {
}
}

/// A trait used to represent an interface to a task-local logger. Each task
/// A trait used to represent an interface to a thread-local logger. Each thread
/// can have its own custom logger which can respond to logging messages
/// however it likes.
pub trait Logger {
@@ -324,7 +324,7 @@ pub fn log(level: u32, loc: &'static LogLocation, args: fmt::Arguments) {
#[inline(always)]
pub fn log_level() -> u32 { unsafe { LOG_LEVEL } }

/// Replaces the task-local logger with the specified logger, returning the old
/// Replaces the thread-local logger with the specified logger, returning the old
/// logger.
pub fn set_logger(logger: Box<Logger + Send>) -> Option<Box<Logger + Send>> {
let mut l = Some(logger);
