Task Parallelism
The `parallel` package provides functions for expressing task-based parallel algorithms. Each task is spawned in its own goroutine, and task synchronization and panics are managed implicitly to make the semantics similar to a sequential execution of the same program. (Parallelism is not concurrency. Unlike in concurrent programming, mimicking sequential semantics is actually desirable to a certain extent when using parallelism to speed up otherwise sequential algorithms.)
Some of the functions are also available as speculative variants in the `speculative` package. The implementations in the `parallel` package always wait for all spawned tasks to terminate, whereas the implementations in the `speculative` package terminate earlier when they can.
While functions in the `speculative` package may terminate early, they do not proactively stop the execution of the spawned goroutines. To ensure that compute resources are freed up in such cases, user programs need to use some other safe form of communication to gracefully stop their execution, for example the cancelation feature of Go's `context` package. (Any such additional communication is likely to add performance overhead, which may not be desirable, and which is why this is not done by default.)
All functions are also available as purely sequential implementations in the `sequential` package, which are useful for testing and debugging. (It is not recommended to use the implementations of the `sequential` package for any other purpose, because they are almost certainly too inefficient for regular sequential programs.)
| Function | Parameters | Result |
|---|---|---|
| `Do` | `...func()` | `()` |
| `And` | `...func() bool` | `bool` |
| `Or` | `...func() bool` | `bool` |
`Do`, `And`, and `Or` are useful for executing two or more tasks in parallel, by invoking each of them in its own goroutine. `Do` does this for tasks implemented as functions without return values. `And` and `Or` do this for predicates that return boolean values, and combine these return values with the `&&` or `||` operator.
`parallel.Do`, `parallel.And`, and `parallel.Or` wait for all spawned tasks to terminate before returning.
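The wait-for-all behavior can be sketched with the standard library alone. The helpers `doAll` and `andAll` below are illustrative, not part of the `parallel` package (which additionally handles panics, as described later):

```go
package main

import (
	"fmt"
	"sync"
)

// doAll runs each task in its own goroutine and returns only after
// all of them have finished, mirroring the semantics of a parallel Do.
func doAll(tasks ...func()) {
	var wg sync.WaitGroup
	for _, task := range tasks {
		wg.Add(1)
		task := task
		go func() {
			defer wg.Done()
			task()
		}()
	}
	wg.Wait()
}

// andAll runs each predicate in its own goroutine, waits for all of
// them, and combines the results with &&, mirroring a parallel And.
func andAll(predicates ...func() bool) bool {
	results := make([]bool, len(predicates))
	var wg sync.WaitGroup
	for i, p := range predicates {
		wg.Add(1)
		i, p := i, p
		go func() {
			defer wg.Done()
			results[i] = p()
		}()
	}
	wg.Wait()
	all := true
	for _, r := range results {
		all = all && r
	}
	return all
}

func main() {
	doAll(func() { fmt.Println("task 1") }, func() { fmt.Println("task 2") })
	fmt.Println(andAll(func() bool { return true }, func() bool { return 2 > 1 })) // true
}
```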
The `speculative` package additionally provides the following variants:

- `speculative.Do` terminates early if any of the spawned tasks returns true.
- `speculative.And` and `speculative.Or` terminate early if the final return value is known early (if any of the spawned predicates returns `false` for `And`, or `true` for `Or`).
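The early-termination semantics can be sketched as follows. The helper `orSpec` is illustrative, not the package's implementation; note that, as described above, it returns early without stopping the remaining goroutines, which simply finish on their own:

```go
package main

import "fmt"

// orSpec returns as soon as any predicate yields true. The buffered
// channel lets the remaining goroutines complete their sends without
// blocking, even after orSpec has already returned.
func orSpec(predicates ...func() bool) bool {
	results := make(chan bool, len(predicates))
	for _, p := range predicates {
		p := p
		go func() { results <- p() }()
	}
	for range predicates {
		if <-results {
			return true // the result is known; don't wait for the rest
		}
	}
	return false
}

func main() {
	fmt.Println(orSpec(
		func() bool { return false },
		func() bool { return true },
	)) // true
}
```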
| Function | Parameters | Result |
|---|---|---|
| `Reduce` | `join func(x, y interface{}) interface{}, functions ...func() interface{}` | `interface{}` |
| `ReduceFloat64` | `join func(x, y float64) float64, functions ...func() float64` | `float64` |
| `ReduceFloat64Product` | `functions ...func() float64` | `float64` |
| `ReduceFloat64Sum` | `functions ...func() float64` | `float64` |
| `ReduceInt` | `join func(x, y int) int, functions ...func() int` | `int` |
| `ReduceIntProduct` | `functions ...func() int` | `int` |
| `ReduceIntSum` | `functions ...func() int` | `int` |
| `ReduceString` | `join func(x, y string) string, functions ...func() string` | `string` |
| `ReduceStringSum` | `functions ...func() string` | `string` |
`Reduce` is useful for executing two or more tasks in parallel, by invoking each of them in its own goroutine, and combining their results using the provided join function. (The actual API ensures that at least one function is passed.)
`ReduceFloat64`, `ReduceInt`, and `ReduceString` can be used in case it is known that the result types are `float64`, `int`, and `string` respectively. The `Product` and `Sum` variants can be used in case it is known that the partial results should be combined with the `*` or `+` operator respectively.
`parallel.Reduce` and its variants wait for all spawned tasks to terminate before returning.
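The Reduce pattern can be sketched with the standard library. The helper `reduceInt` below is illustrative, not the package's implementation; it folds the collected results left to right, whereas the real implementation may combine partial results in parallel:

```go
package main

import (
	"fmt"
	"sync"
)

// reduceInt runs every function in its own goroutine, waits for all
// of them, and then folds the results left to right with join.
func reduceInt(join func(x, y int) int, functions ...func() int) int {
	results := make([]int, len(functions))
	var wg sync.WaitGroup
	for i, f := range functions {
		wg.Add(1)
		i, f := i, f
		go func() {
			defer wg.Done()
			results[i] = f()
		}()
	}
	wg.Wait()
	acc := results[0] // mirrors the API guarantee of at least one function
	for _, r := range results[1:] {
		acc = join(acc, r)
	}
	return acc
}

func main() {
	sum := reduceInt(
		func(x, y int) int { return x + y },
		func() int { return 1 },
		func() int { return 2 },
		func() int { return 3 },
	)
	fmt.Println(sum) // 6
}
```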
The `speculative` package additionally provides the following variants:

- `speculative.Reduce`, `speculative.ReduceFloat64`, `speculative.ReduceInt`, and `speculative.ReduceString` terminate early if any of the spawned tasks returns true as a second return value.
- The `Product` and `Sum` variants do not have speculative counterparts.
The `Range` functions described below are useful for expressing parallel algorithms over slices, or other domains that can be covered by a range of consecutive integers. They all receive a range and a single range function, divide the range up into subranges, and spawn the range function for each of these subranges. Some of the functions described below also combine the return values of the subrange tasks into an overall result, and do so in parallel.
| Function | Parameters | Result |
|---|---|---|
| `Range` | `low, high, n int, f func(low, high int)` | `()` |
| `RangeAnd` | `low, high, n int, f func(low, high int) bool` | `bool` |
| `RangeOr` | `low, high, n int, f func(low, high int) bool` | `bool` |
| `RangeReduce` | `low, high, n int, reduce func(low, high int) interface{}, join func(x, y interface{}) interface{}` | `interface{}` |
| `RangeReduceFloat64` | `low, high, n int, reduce func(low, high int) float64, join func(x, y float64) float64` | `float64` |
| `RangeReduceFloat64Product` | `low, high, n int, reduce func(low, high int) float64` | `float64` |
| `RangeReduceFloat64Sum` | `low, high, n int, reduce func(low, high int) float64` | `float64` |
| `RangeReduceInt` | `low, high, n int, reduce func(low, high int) int, join func(x, y int) int` | `int` |
| `RangeReduceIntProduct` | `low, high, n int, reduce func(low, high int) int` | `int` |
| `RangeReduceIntSum` | `low, high, n int, reduce func(low, high int) int` | `int` |
| `RangeReduceString` | `low, high, n int, reduce func(low, high int) string, join func(x, y string) string` | `string` |
| `RangeReduceStringSum` | `low, high, n int, reduce func(low, high int) string` | `string` |
The range is always specified by a `low` and a `high` integer, with `low <= high`, and the functions cover the half-open interval from `low` to `high`, including `low` but excluding `high`.
The `n` parameter can typically be set to 0. It determines how the size of the overall range `high - low` is divided up over the available worker threads. If `n` is 0, a reasonable default is chosen that typically works well. If `n` is greater than 0, the size of the overall range `high - low` is divided by `n`. (This means that `n` tasks are spawned. Quite often, `n` should be larger than the number of available worker threads to improve load balancing. Passing 0 for `n` ensures that this is taken into account.)
The subrange functions that user programs pass as parameters receive a `low` and a `high` integer specifying the respective subrange, with `low <= high`. They should in turn cover the respective half-open interval from `low` to `high`, including `low` but excluding `high`, and usually do so sequentially with a simple `for` loop.
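The subdivision described above can be sketched with the standard library. The helper `parallelRange` is illustrative, not the package's implementation; `runtime.GOMAXPROCS(0)` stands in for whatever default the real library chooses when `n` is 0:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelRange divides the half-open interval [low, high) into n
// subranges of nearly equal size and runs f on each in its own
// goroutine, waiting for all of them to finish.
func parallelRange(low, high, n int, f func(low, high int)) {
	if n <= 0 {
		n = runtime.GOMAXPROCS(0) // a stand-in for the library's default
	}
	size := high - low
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		lo := low + i*size/n // integer arithmetic splits as evenly as possible
		hi := low + (i+1)*size/n
		wg.Add(1)
		go func() {
			defer wg.Done()
			f(lo, hi)
		}()
	}
	wg.Wait()
}

func main() {
	data := make([]int, 10)
	parallelRange(0, len(data), 4, func(lo, hi int) {
		for i := lo; i < hi; i++ { // the usual sequential inner loop
			data[i] = i * i
		}
	})
	fmt.Println(data) // [0 1 4 9 16 25 36 49 64 81]
}
```

Because the subranges are disjoint, each goroutine writes to a distinct part of the slice and no further synchronization is needed.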
`Range`, `RangeAnd`, and `RangeOr` are similar to `Do`, `And`, and `Or`, except that they each receive a range and a single range function. They divide the range up into subranges and spawn the range function for each of these subranges. `RangeAnd` and `RangeOr` additionally combine the boolean return values of the range predicates with the `&&` or `||` operator.
`RangeReduce` is similar to `RangeAnd` and `RangeOr`, except that the results are of type `interface{}` instead of `bool`, and the results are combined using another function that is explicitly passed as a parameter. `RangeReduceFloat64`, `RangeReduceInt`, and `RangeReduceString` can be used in case it is known that the result types are `float64`, `int`, or `string` respectively.
The `Product` and `Sum` variants can be used in case it is known that the partial results should be combined with the `*` or `+` operator respectively.
`parallel.Range` and its variants wait for all spawned tasks to terminate before returning.
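Combining subrange results can be sketched the same way. The helper `rangeReduceIntSum` is illustrative, not the package's implementation: each subrange task computes a partial sum, and the partial sums are combined with `+` after all tasks have finished:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// rangeReduceIntSum divides [low, high) into n subranges, computes a
// partial result for each with reduce in its own goroutine, and adds
// the partial results together once all goroutines have finished.
func rangeReduceIntSum(low, high, n int, reduce func(low, high int) int) int {
	if n <= 0 {
		n = runtime.GOMAXPROCS(0) // a stand-in for the library's default
	}
	size := high - low
	partial := make([]int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		i := i
		lo := low + i*size/n
		hi := low + (i+1)*size/n
		wg.Add(1)
		go func() {
			defer wg.Done()
			partial[i] = reduce(lo, hi)
		}()
	}
	wg.Wait()
	sum := 0
	for _, p := range partial {
		sum += p
	}
	return sum
}

func main() {
	data := []int{3, 1, 4, 1, 5, 9, 2, 6}
	total := rangeReduceIntSum(0, len(data), 0, func(lo, hi int) int {
		s := 0
		for i := lo; i < hi; i++ {
			s += data[i]
		}
		return s
	})
	fmt.Println(total) // 31
}
```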
The `speculative` package additionally provides the following variants:

- `speculative.Range`, `speculative.RangeReduce`, `speculative.RangeReduceFloat64`, `speculative.RangeReduceInt`, and `speculative.RangeReduceString` terminate early if any of the spawned tasks returns true as a second return value.
- `speculative.RangeAnd` and `speculative.RangeOr` terminate early if the final return value is known early (if any of the spawned predicates returns `false` for `RangeAnd`, or `true` for `RangeOr`).
- The `Product` and `Sum` variants do not have speculative counterparts.
All of the functions from the `parallel` package described above also take care of dealing with panics: If one or more of the tasks spawned by a `parallel` function panics, the panic is recovered in the corresponding goroutine, and the `parallel` function then panics with the left-most of the recovered panic values. The handling of panics is done left to right to ensure that the behavior is semantically similar to a corresponding sequential left-to-right execution.
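This panic-propagation scheme can be sketched with the standard library. The helper `doWithPanics` is illustrative, not the package's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// doWithPanics recovers panics inside each goroutine, waits for all
// tasks to finish, and then re-raises the left-most recovered panic
// value, mimicking a sequential left-to-right execution.
func doWithPanics(tasks ...func()) {
	panics := make([]interface{}, len(tasks))
	var wg sync.WaitGroup
	for i, task := range tasks {
		wg.Add(1)
		i, task := i, task
		go func() {
			defer wg.Done()
			defer func() {
				panics[i] = recover() // nil if the task didn't panic
			}()
			task()
		}()
	}
	wg.Wait()
	for _, p := range panics {
		if p != nil {
			panic(p) // the left-most panic wins
		}
	}
}

func main() {
	defer func() {
		fmt.Println("recovered:", recover()) // recovered: first
	}()
	doWithPanics(
		func() { panic("first") },
		func() { panic("second") },
	)
}
```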
However, functions in the `speculative` package may not propagate panics from one task in case they terminate early because of a known return value or a non-`nil` error value originating from a different task. See the documentation of the `speculative` functions for more precise details of the semantics.