
Commit 1b5df49

Clean up concurrency docs
1) move high-level APIs to the top (partially done)
2) use less speculative language ("would")
3) supply output along with examples
4) start with simpler examples
5) more links, including to external sources for general topics
1 parent 330d4c9 commit 1b5df49

lib/Language/concurrency.pod

Lines changed: 149 additions & 124 deletions
@@ -5,13 +5,15 @@
55
=SUBTITLE Concurrency and Asynchronous Programming
66
77
In common with most modern programming languages, Perl 6 is designed
8-
to support concurrency (allowing more than one thing to happen at the
8+
to L<support concurrency|http://en.wikipedia.org/wiki/Concurrent_computing>
9+
(allowing more than one thing to happen at the
910
same time) and asynchronous programming (sometimes called event-driven
1011
or reactive programming; that is, an event or change in some part of a
1112
program may lead to an event or change in some other part of the program
1213
asynchronously to the program flow).
1314
14-
The aim of the Perl concurrency design is to provide a consistent
15+
The aim of the Perl concurrency design is to provide a high-level,
16+
composable, consistent
1517
interface regardless of how a virtual machine may implement it for a
1618
particular operating system, through layers of facilities as described
1719
below.
@@ -26,129 +28,45 @@ hyper-operators, autothreading junctions?
2628
2729
Additionally, certain Perl features may implicitly operate in an asynchronous
2830
fashion, so in order to ensure predictable interoperation with these features
29-
user code should, where possible, avoid the lower level concurrency APIs
31+
user code should, where possible, avoid the lower level concurrency APIs
3032
(i.e. L<Thread> and L<Scheduler>) and use the higher-level interfaces.
3133
32-
=head2 Threads
33-
34-
The lowest level interface for concurrency is provided by L<Thread>. A
35-
thread can be thought of as a piece of code that may eventually be run
36-
on a processor, the arrangement for which is made almost entirely by the
37-
virtual machine and/or operating system. Threads should be considered,
38-
for all intents and purposes, largely unmanaged, and their direct use should be
39-
avoided in user code.
40-
41-
A thread can either be created and then actually run later:
42-
43-
my $thread = Thread.new(code => { for 1 .. 10 -> $v { say $v }});
44-
# ...
45-
$thread.run;
46-
47-
Or it can be created and run in a single invocation:
48-
49-
my $thread = Thread.start({ for 1 .. 10 -> $v { say $v }});
50-
51-
In both cases the completion of the code encapsulated by the L<Thread>
52-
object can be waited on with the C<finish> method, which will block until
53-
the thread completes:
54-
55-
$thread.finish;
56-
57-
Beyond that there are no further facilities for synchronization or resource
58-
sharing, which is largely why it should be emphasised that threads are unlikely
59-
to be useful directly in user code.
60-
61-
62-
=head2 Schedulers
63-
64-
The next level of the concurrency API is that supplied by classes that
65-
provide the interface defined by the role L<Scheduler>. The intent
66-
of the scheduler interface is to provide a mechanism to determine which
67-
resources to use to run a particular task and when to run it. The majority
68-
of the higher level concurrency APIs are built upon a scheduler and it
69-
may not be necessary for user code to use them at all, although some
70-
methods such as those found in L<Proc::Async>, L<Promise> and L<Supply>
71-
allow you to explicitly supply a scheduler.
72-
73-
The current default global scheduler is available in the variable
74-
C<$*SCHEDULER>.
75-
76-
The primary interface of a scheduler (indeed the only method required
77-
by the L<Scheduler> interface) is the C<cue> method:
78-
79-
method cue(:&code, Instant :$at, :$in, :$every, :$times = 1; :&catch)
80-
81-
This will schedule the L<Callable> in C<&code> to be executed in the
82-
manner determined by the adverbs (as documented in L<Scheduler>) using
83-
the execution scheme as implemented by the scheduler. For example:
84-
85-
my $i = 0;
86-
my $cancellation = $*SCHEDULER.cue({ say $i++}, every => 2 );
87-
sleep 20;
88-
89-
Assuming that the C<$*SCHEDULER> hasn't been changed from the default,
90-
this will print the numbers 0 to 10 approximately (i.e. with operating system
91-
scheduling tolerances) every two seconds. In this case the code will
92-
be scheduled to run until the program ends normally; however, the method
93-
returns a L<Cancellation> object which can be used to cancel the scheduled
94-
execution before normal completion:
95-
96-
my $i = 0;
97-
my $cancellation = $*SCHEDULER.cue({ say $i++}, every => 2 );
98-
sleep 10;
99-
$cancellation.cancel;
100-
sleep 10;
101-
102-
This should only output 0 to 5.
103-
104-
Despite the apparent advantage the L<Scheduler> interface provides over
105-
that of L<Thread>, all of its functionality is available through higher-level
106-
interfaces and it shouldn't be necessary to use a scheduler directly,
107-
except perhaps in the cases mentioned above where a scheduler can be
108-
supplied explicitly to certain methods.
109-
110-
A library may wish to provide an alternative scheduler implementation if
111-
it has special requirements; for instance, a UI library may want all code
112-
to be run within a single UI thread, or some custom priority mechanism
113-
may be required. However, the implementations provided as standard and
114-
described below should suffice for most user code.
115-
116-
=head3 ThreadPoolScheduler
117-
118-
The L<ThreadPoolScheduler> is the default scheduler; it maintains a pool
119-
of threads that are allocated on demand, creating new ones as necessary up
120-
to a maximum number given as a parameter when the scheduler object was created
121-
(the default is 16). If the maximum is exceeded then C<cue> may queue the
122-
code until such time as a thread becomes available.
123-
124-
Rakudo allows the maximum number of threads in the default scheduler
125-
to be set by the environment variable C<RAKUDO_MAX_THREADS> at the time
126-
the program is started.
127-
128-
=head3 CurrentThreadScheduler
129-
130-
The L<CurrentThreadScheduler> is a very simple scheduler that will always
131-
schedule code to be run straight away on the current thread. The implication
132-
is that C<cue> on this scheduler will block until the code finishes
133-
execution, limiting its utility to certain special cases such as testing.
34+
=head1 High-level APIs
13435
13536
=head2 Promises
13637
137-
A L<Promise> can be thought of as encapsulating the result of the execution
138-
of some code that may not have completed or even started at the time the
139-
promise is obtained. They provide much of the functionality that user code
140-
will need to operate in a concurrent or asynchronous manner.
38+
A L<Promise|/type/Promise> (also called I<future> in other programming
39+
environments) encapsulates the result of a computation
40+
that may not have completed or even started at the time the
41+
promise is obtained. It provides much of the functionality that user code
42+
needs to operate in a concurrent or asynchronous manner.
14143
142-
At simplest promises can be thought of as a mechanism for asynchronously
143-
chaining the results of various callable code:
44+
=begin code
45+
my $p1 = Promise.new;
46+
say $p1.status; # Planned
47+
$p1.keep('result');
48+
say $p1.status; # Kept
49+
say $p1.result; # result
14450
145-
my $promise1 = Promise.new();
146-
my $promise2 = $promise1.then(-> $v { say $v.result; "Second Result"});
147-
$promise1.keep("First Result");
148-
say $promise2.result;
51+
my $p2 = Promise.new;
52+
$p2.break('oh no');
53+
say $p2.status; # Broken
54+
say $p2.result; # dies with "oh no"
55+
=end code
14956
150-
Here the C<then> schedules code to be executed when the first L<Promise>
151-
is kept or broken, itself returning a new L<Promise> which will be kept
57+
Promises gain much of their power by being composable, for example by
58+
chaining:
59+
60+
my $promise1 = Promise.new();
61+
my $promise2 = $promise1.then(
62+
-> $v { say $v.result; "Second Result"}
63+
);
64+
$promise1.keep("First Result");
65+
say $promise2.result; # First Result \n Second Result
66+
67+
Here the L<then|/type/Promise#method_then> schedules code to be executed
68+
when the first L<Promise> is kept or broken, itself returning a new
69+
L<Promise> which will be kept
15270
with the result of the code when it is executed (or broken if the code
15371
fails). C<keep> changes the status of the promise to C<Kept>, setting
15472
the result to the positional argument. C<result> blocks the current
@@ -161,7 +79,7 @@ latter behaviour is illustrated with:
16179
my $promise2 = $promise1.then(-> $v { say "Handled but : "; say $v.result});
16280
$promise1.break("First Result");
16381
try $promise2.result;
164-
say $promise2.cause;
82+
say $promise2.cause; # Handled but : \n First Result
16583
16684
Here the C<break> will cause the code block of the C<then> to throw an
16785
exception when it calls the C<result> method on the original promise
@@ -301,12 +219,6 @@ events. Calling C<done> on the supply object will similarly call the
301219
C<done> callback that may be specified for any taps but will not prevent any
302220
further events being emitted to the stream, or taps receiving them.
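For example, a minimal sketch using the C<Supply.new>, C<emit> and C<tap>
interface described in this section, with the output shown in comments (the
C<done> callback is supplied to C<tap> as a named argument, as mentioned above):

    my $supply = Supply.new;
    $supply.tap( -> $v { say "received : $v" },
        done => { say "no more events" });
    $supply.emit($_) for 1 .. 3;
    $supply.done;
    # received : 1
    # received : 2
    # received : 3
    # no more events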
303221
304-
=begin comment
305-
306-
I couldn't think of a non-contrived but succinct example for a use-case
307-
308-
=end comment
309-
310222
The method C<interval> will return a new supply which will automatically
311223
emit a new event at the specified interval; the data that is emitted will
312224
be an integer starting at 0 that will be incremented for each event. The
@@ -357,6 +269,119 @@ to the C<map> will be emitted:
357269
358270
359271
272+
=head1 Low-level APIs
273+
274+
=head2 Threads
275+
276+
The lowest level interface for concurrency is provided by L<Thread>. A
277+
thread can be thought of as a piece of code that may eventually be run
278+
on a processor, the arrangement for which is made almost entirely by the
279+
virtual machine and/or operating system. Threads should be considered,
280+
for all intents and purposes, largely unmanaged, and their direct use should be
281+
avoided in user code.
282+
283+
A thread can either be created and then actually run later:
284+
285+
my $thread = Thread.new(code => { for 1 .. 10 -> $v { say $v }});
286+
# ...
287+
$thread.run;
288+
289+
Or it can be created and run in a single invocation:
290+
291+
my $thread = Thread.start({ for 1 .. 10 -> $v { say $v }});
292+
293+
In both cases the completion of the code encapsulated by the L<Thread>
294+
object can be waited on with the C<finish> method, which will block until
295+
the thread completes:
296+
297+
$thread.finish;
298+
299+
Beyond that there are no further facilities for synchronization or resource
300+
sharing, which is largely why it should be emphasised that threads are unlikely
301+
to be useful directly in user code.
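Putting these pieces together, a minimal sketch that starts two threads and
waits for both to complete; the relative order of the two "hello" lines is not
guaranteed, since the threads run independently:

    my $t1 = Thread.start({ say "thread 1 says hello" });
    my $t2 = Thread.start({ say "thread 2 says hello" });
    $t1.finish;
    $t2.finish;
    say "both threads have finished";
    # thread 1 says hello
    # thread 2 says hello
    # both threads have finished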
302+
303+
304+
305+
=head2 Schedulers
306+
307+
The next level of the concurrency API is that supplied by classes that
308+
provide the interface defined by the role L<Scheduler>. The intent
309+
of the scheduler interface is to provide a mechanism to determine which
310+
resources to use to run a particular task and when to run it. The majority
311+
of the higher level concurrency APIs are built upon a scheduler and it
312+
may not be necessary for user code to use them at all, although some
313+
methods such as those found in L<Proc::Async>, L<Promise> and L<Supply>
314+
allow you to explicitly supply a scheduler.
315+
316+
The current default global scheduler is available in the variable
317+
C<$*SCHEDULER>.
318+
319+
The primary interface of a scheduler (indeed the only method required
320+
by the L<Scheduler> interface) is the C<cue> method:
321+
322+
method cue(:&code, Instant :$at, :$in, :$every, :$times = 1; :&catch)
323+
324+
This will schedule the L<Callable> in C<&code> to be executed in the
325+
manner determined by the adverbs (as documented in L<Scheduler>) using
326+
the execution scheme as implemented by the scheduler. For example:
327+
328+
my $i = 0;
329+
my $cancellation = $*SCHEDULER.cue({ say $i++}, every => 2 );
330+
sleep 20;
331+
332+
Assuming that the C<$*SCHEDULER> hasn't been changed from the default,
333+
this will print the numbers 0 to 10 approximately (i.e. with operating system
334+
scheduling tolerances) every two seconds. In this case the code will
335+
be scheduled to run until the program ends normally; however, the method
336+
returns a L<Cancellation> object which can be used to cancel the scheduled
337+
execution before normal completion:
338+
339+
my $i = 0;
340+
my $cancellation = $*SCHEDULER.cue({ say $i++}, every => 2 );
341+
sleep 10;
342+
$cancellation.cancel;
343+
sleep 10;
344+
345+
This should only output 0 to 5.
346+
347+
Despite the apparent advantage the L<Scheduler> interface provides over
348+
that of L<Thread>, all of its functionality is available through higher-level
349+
interfaces and it shouldn't be necessary to use a scheduler directly,
350+
except perhaps in the cases mentioned above where a scheduler can be
351+
supplied explicitly to certain methods.
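For example, a sketch of supplying a scheduler explicitly to C<Promise.start>
(here simply the default C<$*SCHEDULER>; the C<:scheduler> named argument is
assumed to be accepted as described in L<Promise>):

    my $promise = Promise.start(
        { [+] 1 .. 10 },
        scheduler => $*SCHEDULER,  # the default, passed explicitly for illustration
    );
    say $promise.result; # 55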
352+
353+
A library may wish to provide an alternative scheduler implementation if
354+
it has special requirements; for instance, a UI library may want all code
355+
to be run within a single UI thread, or some custom priority mechanism
356+
may be required. However, the implementations provided as standard and
357+
described below should suffice for most user code.
358+
359+
=head3 ThreadPoolScheduler
360+
361+
The L<ThreadPoolScheduler> is the default scheduler; it maintains a pool
362+
of threads that are allocated on demand, creating new ones as necessary up
363+
to a maximum number given as a parameter when the scheduler object was created
364+
(the default is 16). If the maximum is exceeded then C<cue> may queue the
365+
code until such time as a thread becomes available.
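A pool with a different maximum can also be constructed directly and used
wherever a scheduler is accepted; a sketch, assuming the maximum is given with
a C<max_threads> named argument:

    my $pool = ThreadPoolScheduler.new( max_threads => 2 ); # 'max_threads' assumed
    $pool.cue({ say "running on the small pool" }) for 1 .. 4;
    sleep 1; # give the cued blocks time to run; prints the message four times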
366+
367+
Rakudo allows the maximum number of threads in the default scheduler
368+
to be set by the environment variable C<RAKUDO_MAX_THREADS> at the time
369+
the program is started.
370+
371+
=head3 CurrentThreadScheduler
372+
373+
The L<CurrentThreadScheduler> is a very simple scheduler that will always
374+
schedule code to be run straight away on the current thread. The implication
375+
is that C<cue> on this scheduler will block until the code finishes
376+
execution, limiting its utility to certain special cases such as testing.
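A minimal sketch of that blocking behaviour; the cued code runs to completion
before the next statement is reached:

    my $scheduler = CurrentThreadScheduler.new;
    $scheduler.cue({ say "this runs first, on the calling thread" });
    say "and this is only printed afterwards";
    # this runs first, on the calling thread
    # and this is only printed afterwards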
377+
378+
=begin comment
379+
380+
I couldn't think of a non-contrived but succinct example for a use-case
381+
382+
=end comment
383+
384+
360385
=begin comment
361386
362387
=head3 tap
