Manual
This manual is for Fibers (version 1.3.1, updated 30 May 2023)
Copyright 2016 Andy Wingo
Copyright 2021 Maxime Devos
Copyright 2022,2023 Aleix Conchillo Flaqué
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
Introduction
Fibers is a facility for lightweight concurrency in Guile.
A brief history of language facilities for concurrency
Modern machines have the raw capability to serve hundreds of thousands of simultaneous long-lived connections, but it’s often hard to manage this at the software level. Fibers tries to solve this problem in a nice way. Before discussing the approach taken in Fibers, it’s worth spending some time on history to see how we got here.
One of the most dominant patterns for concurrency these days is “callbacks”, notably in the Twisted library for Python and the Node.js run-time for JavaScript. The basic observation in the callback approach to concurrency is that the efficient way to handle tens of thousands of connections at once is with low-level operating system facilities like poll, epoll or kqueue. By default, Fibers will try to use the native events system facility (e.g. epoll on Linux systems), but libevent is also supported for portability reasons. You add all of the file descriptors that you are interested in to a “poll set” and then ask the operating system which ones are readable or writable, as appropriate. Once the operating system says “yes, file descriptor 7145 is readable”, you can do something with that socket; but what? With callbacks, the answer is “call a user-supplied closure”: a callback, representing the continuation of the computation on that socket.
Building a network service with a callback-oriented concurrency system means breaking the program into little chunks that can run without blocking. Wherever a program could block, instead of just continuing the program, you register a callback. Unfortunately this requirement permeates the program, from top to bottom: you always pay the mental cost of inverting your program’s control flow by turning it into callbacks, and you always incur the run-time cost of closure creation, even when the particular I/O could proceed without blocking. It’s a somewhat galling requirement, given that this contortion is demanded of the programmer when it could be done by the compiler. We Schemers demand better abstractions than manual, obligatory continuation-passing-style conversion.
Callback-based systems also encourage unstructured concurrency, as in practice callbacks are not the only path for data and control flow in a system: usually there is mutable global state as well. Without strong patterns and conventions, callback-based systems often exhibit bugs caused by concurrent reads and writes to global state.
Some of the problems of callbacks can be mitigated by using “promises” or other library-level abstractions; if you’re a Haskell person, you can think of this as lifting all possibly-blocking operations into a monad. If you’re not a Haskeller, that’s cool, neither am I! But if your typey spidey senses are tingling, it’s for good reason: with promises, your whole program has to be transformed to return promises-for-values instead of values anywhere it would block.
An obvious solution to the control-flow problem of callbacks is to use threads. In the most generic sense, a thread is a language feature which denotes an independent computation. Threads are created by other threads, but fork off and run independently instead of returning to their caller. In a system with threads, there is implicitly a scheduler somewhere that multiplexes the threads so that when one suspends, another can run.
In practice, the concept of threads is often conflated with a particular implementation, kernel threads. Kernel threads are very low-level abstractions that are provided by the operating system. The nice thing about kernel threads is that they can use any CPU that the kernel knows about. That’s an important factor in today’s computing landscape, where Moore’s law seems to be giving us more cores instead of more gigahertz.
However, as a building block for a highly concurrent system, kernel threads have a few important problems.
One is that kernel threads simply aren’t designed to be allocated in huge numbers, and instead are more optimized to run in a one-per-CPU-core fashion. Their memory usage is relatively high for what should be a lightweight abstraction: some 10 kilobytes at least and often some megabytes, in the form of the thread’s stack. There are ongoing efforts to reduce this for some systems but we cannot expect wide deployment in the next 5 years, if ever. Even in the best case, a hundred thousand kernel threads will take at least a gigabyte of memory, which seems a bit excessive for book-keeping overhead.
Kernel threads can be a bit irritating to schedule, too: when one thread suspends, it’s for a reason, and it can be that user-space knows a good next thread that should run. However because kernel threads are scheduled in the kernel, it’s rarely possible for the kernel to make informed decisions. There are some “user-mode scheduling” facilities that are in development for some systems, but again only for some systems.
The other significant problem is that building non-crashy systems on top of kernel threads is hard to do, not to mention “correct” systems. It’s an embarrassing situation. For one thing, the low-level synchronization primitives that are typically provided with kernel threads, mutexes and condition variables, are not composable. Also, as with callback-oriented concurrency, one thread can silently corrupt another via unstructured mutation of shared state. It’s worse with kernel threads, though: a kernel thread can be interrupted at any point, not just at I/O. And though callback-oriented systems can theoretically operate on multiple CPUs at once, in practice they don’t. This restriction is sometimes touted as a benefit by proponents of callback-oriented systems, because in such a system, the callback invocations have a single, sequential order. With multiple CPUs, this is not the case, as multiple threads can run at the same time, in parallel.
Kernel threads can work. The Java virtual machine does at least manage to prevent low-level memory corruption and to do so with high performance, but still, even Java-based systems that aim for maximum concurrency avoid using a thread per connection because threads use too much memory.
In this context it’s no wonder that there’s a third strain of concurrency: shared-nothing message-passing systems like Erlang. Erlang isolates each thread (called a process in the Erlang world), giving each its own heap and “mailbox”. Processes can spawn other processes, and the concurrency primitive is message-passing. A process that tries to receive a message from an empty mailbox will “block”, from its perspective. In the meantime the system will run other processes. Message sends never block, oddly; instead, sending to a process with many messages pending makes it more likely that Erlang will pre-empt the sending process. It’s a strange trade-off, but it makes sense when you realize that Erlang was designed for network transparency: the same message send/receive interface can be used to send messages to processes on remote machines as well.
No network is truly transparent, however. At the most basic level, network sends are much slower than local sends. Whereas a message sent to a remote process has to be written out byte-by-byte over the network, there is no need to copy immutable data within the same address space. The complexity of a remote message send is O(n) in the size of the message, whereas a local immutable send is O(1). This suggests that hiding the different complexities behind one operator is the wrong thing to do. And indeed, given byte read and write operators over sockets, it’s possible to implement remote message send and receive as a process that serializes and parses messages between a channel and a byte sink or source. In this way we get cheap local channels, and network shims are under the programmer’s control. This is the approach that the Go language takes, and is the one we use in Fibers.
Structuring a concurrent program as separate threads that communicate over channels is an old idea that goes back to Tony Hoare’s work on “Communicating Sequential Processes” (CSP). CSP is an elegant tower of mathematical abstraction whose layers form a pattern language for building concurrent systems that you can still reason about. Interestingly, it does so without any concept of time at all, instead representing a thread’s behavior as a trace of instantaneous events. Threads themselves are like functions that unfold over the possible events to produce the actual event trace seen at run-time.
This view of events as instantaneous happenings extends to communication as well. In CSP, one communication between two threads is modelled as an instantaneous event, partitioning the traces of the two threads into “before” and “after” segments.
Practically speaking, this has ramifications in the Go language, which was heavily inspired by CSP. You might think that a channel is just an asynchronous queue that blocks when writing to a full queue, or when reading from an empty queue. That’s a bit closer to the Erlang conception of how things should work, though as we mentioned, Erlang simply slows down writes to full mailboxes rather than blocking them entirely. However, that’s not what Go and other systems in the CSP family do; sending a message on a channel will block until there is a receiver available, and vice versa. The threads are said to “rendezvous” at the event.
Unbuffered channels have the interesting property that you can select between sending a message on channel a or channel b, and in the end only one message will be sent; nothing happens until there is a receiver ready to take the message. In this way messages are really owned by threads and never by the channels themselves. You can of course add buffering if you like, simply by making a thread that waits on either sends or receives on a channel, and which buffers sends and makes them available to receives. It’s also possible to add explicit support for buffered channels, as Go does, which can reduce the number of context switches as there is no explicit buffer thread.
Whether to buffer or not to buffer is a tricky choice. It’s possible to implement singly-buffered channels in a system like Erlang via an explicit send/acknowledge protocol, though it seems difficult to implement completely unbuffered channels. As we mentioned, it’s possible to add buffering to an unbuffered system by the introduction of explicit buffer threads. In the end though, in Fibers we follow CSP’s lead so that we can implement the nice select behavior that we mentioned above.
As a final point, select is OK but is not a great language abstraction. Say you call a function and it returns some kind of asynchronous result which you then have to select on. It could return this result as a channel, and that would be fine: you can add that channel to the other channels in your select set and you are good. However, what if what the function does is receive a message on a channel, then do something with the message? In that case the function should return a channel, plus a continuation (as a closure or something). If select results in a message being received over that channel, then we call the continuation on the message. Fine. But, what if the function itself wanted to select over some channels? It could return multiple channels and continuations, but that becomes unwieldy.
What we need is an abstraction over asynchronous operations, and that is the main idea of a CSP-derived system called “Concurrent ML” (CML). Originally implemented as a library on top of Standard ML of New Jersey by John Reppy, CML provides this abstraction, which in Fibers is called an operation. Calling put-operation on a channel returns an operation, which is just a value. Operations are like closures in a way; a closure wraps up code in its environment, which can be later called many times or not at all. Operations likewise can be performed many times or not at all; performing an operation is like calling a function. The interesting part is that you can compose operations via the wrap-operation and choice-operation combinators. The former lets you bundle up an operation and a continuation. The latter lets you construct an operation that chooses over a number of operations. Calling perform-operation on a choice operation will perform one and only one of the choices. Performing an operation will call its wrap-operation continuation on the resulting values.
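To make this concrete, here is a minimal sketch using the operation constructors documented later in this manual; the channels a and b are placeholders:

(define op
  (choice-operation
   (wrap-operation (get-operation a) (lambda (msg) (cons 'from-a msg)))
   (wrap-operation (get-operation b) (lambda (msg) (cons 'from-b msg)))))
;; Blocks until a sender rendezvous on a or b; exactly one receive happens.
(perform-operation op)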
While it’s possible to implement Concurrent ML in terms of Go’s channels and baked-in select statement, it’s more expressive to do it the other way around, as that also lets us implement other operation types besides channel send and receive, for example timeouts and condition variables.
Fibers design
In Fibers, the unit of computation is the fiber, a lightweight thread managed by Guile. A fiber communicates with the outside world via normal Guile ports: get-bytevector, put-string, and all that. Within a single Guile process fibers communicate by sending and receiving Scheme values over channels.
Whenever a fiber tries to read but no data is available, or tries to write but no data can be written, Guile will suspend the fiber and arrange for it to be resumed when the port or channel operation can proceed. In the meantime, Guile will run other fibers. When no fiber is runnable, Guile will use efficient system facilities to sleep until input or output can proceed.
When a fiber would block, it suspends to the scheduler from the current thread. The scheduler will arrange to re-start the fiber when the port or channel becomes readable or writable, as appropriate. For ports, the scheduler adds the file descriptor associated with the port as a pending event. In either case, the scheduler remembers which fibers are waiting and for what, so that the user can inspect the state of their system.
Currently in Fibers there is no ambient scheduler running; an error is signalled if a user calls spawn-fiber while not inside a run-fibers invocation. However it is possible to communicate with fibers via channels or other Concurrent ML-like operations, even outside of a run-fibers invocation. If an operation would block, it suspends the entire kernel thread until the operation can proceed.
On the Scheme level, a fiber is a delimited continuation. When a scheduler runs a fiber, it does so within a prompt; when the fiber suspends, it suspends to the prompt. The scheduler saves the resulting continuation as part of the fiber’s state. In this way the per-fiber computational state overhead is just the size of the pending stack frames of the fiber, which can be just a handful of words.
By default, Fibers takes advantage of all available cores on your system. See Parallelism, for full details.
Ports are how fibers communicate with the world; channels are how fibers communicate with each other. Channels are meeting places between fibers, or between threads. A fiber or thread that goes to send a message over a channel will block until there is a fiber or thread ready to receive the message, and vice versa. Once both parties are ready, the message is exchanged and both parties resume. There can be multiple fibers and threads waiting to read and write on a channel, allowing channels to express not only pipelines but also common concurrency patterns such as fan-in and fan-out.
Unlike Erlang mailboxes, channels in Fibers are purely local and do not attempt to provide the illusion of network transparency. This has the advantage that we are able to provide better backpressure support than Erlang, blocking when no receiver is available to handle a message instead of letting the sender keep sending many messages.
To avoid starvation, a fiber can only run once within a “turn”. Each turn starts with a poll on file descriptors of interest and marks the associated fibers as runnable. If no fiber is runnable at the start of the poll, the poll call will ask the kernel to wait for a runnable descriptor. Otherwise the poll call will still check for runnable file descriptors, but also ask the kernel to return immediately. There is an additional FD added to the poll set that is used to interrupt a blocking poll, for example if a fiber becomes runnable due to I/O on a channel from a separate kernel thread while the first scheduler was still polling.
If a fiber runs for too long (by default, 10 milliseconds), it will be preempted: interrupted and rescheduled for the next turn. The preemption frequency can be tuned by the user or turned off for a fully cooperative scheduling model.
To enable expressive cross-kernel-thread communications, channel sends and receives are atomic and thread-safe.
Parallelism
By default, Fibers will take advantage of all CPU cores available to it. The degree of parallelism is controlled by the #:parallelism keyword argument to run-fibers, which defaults to (current-processor-count). See Threads in Guile Reference Manual, for more information on current-processor-count. Pass a different argument to #:parallelism to choose a different degree of parallelism, for example 1 for single-threaded operation. To allocate specific cores to a Guile process, use the taskset command-line utility.
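For example, a minimal sketch of forcing single-threaded operation:

;; Run the body on exactly one scheduler thread; omit #:parallelism
;; (or pass a larger count) to use more cores.
(run-fibers (lambda ()
              (display "hello from a single-threaded fibers world\n"))
            #:parallelism 1)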
A newly spawned fiber will be scheduled on the kernel thread in which it was created, unless #:parallel? #t was passed to the spawn-fiber invocation, in which case its initial kernel thread will be selected at random. In this way the default is to preserve locality of memory access and minimize cross-thread coordination.
Additionally, after a scheduler has exhausted its run queue for the current turn, if it has nothing scheduled for the next turn it will try to steal work from other schedulers. This work stealing allows a set of parallel schedulers to automatically rebalance and burn through the current global run queue as fast as possible.
After processing its current run queue, possibly including stolen work if its next run queue was empty, a scheduler will then ask the operating system for any file descriptors that have pending activity. The scheduler puts a time limit on this sleep phase if there are pending timeouts, but otherwise the sleep will only wake up when a file descriptor becomes readable or writable, or if another thread wakes up the scheduler. Schedulers that are sleeping do not participate in work stealing. For this reason there is another source of work rebalancing in Fibers, work sharing. As mentioned above, to schedule a fiber on a random remote scheduler, use spawn-fiber with the #:parallel? #t keyword argument.
The specifics of the scheduling algorithm may change, and it may be that there is no global “best scheduler”. We look forward to experimenting and finding not only a good default algorithm, but also a library that you can use to find your own local maximum in the scheduling space.
As far as performance goes, we have found that computationally intensive tasks parallelize rather well. Expect near-linear speedup as you make more cores available to fibers.
On the other hand, although allocation rate improves with additional cores, it currently does not scale linearly, and works best when all cores are on the same NUMA node. This is due to details about how Guile manages its memory.
In general there may be many bottlenecks that originate in Guile, Fibers, and in your application, and these bottlenecks constrain the ability of an application to scale linearly.
Probably the best way to know if Fibers scales appropriately for your use case is to make some experiments. To restrict the set of cores available to Guile, run Guile from within taskset -c. See taskset’s manual page. For machines with multiple sockets you will probably want to use numactl --membind as well. Then to test scalability on your machine, run ./env guile tests/speedup.scm from within your Fibers build directory, or benchmark your application directly. In time we should be able to develop some diagnostic facilities to help the Fibers user determine where a scaling bottleneck is in their application.
API reference
Fibers is a library built on Guile. It consists of a public interface, base support for asynchronous operations, implementations of operations for channels and timers, and an internals interface.
- Using Fibers
- Operations
- Channels
- Timers
- Conditions
- Port Readiness
- REPL Commands
- Schedulers and Tasks
Using Fibers
The public interface of Fibers right now is quite minimal. To use it, import the (fibers) module:
(use-modules (fibers))
To create a new fibers scheduler and run it in the current Guile thread, use run-fibers.
- Function: run-fibers [init-thunk=#f] [#:install-suspendable-ports?=#t] [#:scheduler=#f] [#:parallelism=(current-processor-count)] [#:cpus=(getaffinity 0)] [#:hz=100] [#:drain?=#f] ¶
Run init-thunk within a fiber in a fresh scheduler, blocking until init-thunk returns. Return the value(s) returned by the call to init-thunk.
For example:
(run-fibers (lambda () 1)) ⇒ 1
(run-fibers (lambda () (spawn-fiber (lambda () (display "hey!\n")))) #:drain? #t) -| hey!
Calling run-fibers will ensure that Guile’s port implementation allows fibers to suspend if a read or a write on a port would block. See Non-Blocking I/O in Guile Reference Manual, for more details on suspendable ports. If for some reason you want port reads or writes to prevent other fibers from running, pass #f as the #:install-suspendable-ports? keyword argument.

By default, run-fibers will create a fresh scheduler, and destroy it after run-fibers finishes. If you happen to have a pre-existing scheduler (because you used the low-level scheduler interface to create one), you can pass it to run-fibers using the #:scheduler keyword argument. In that case the scheduler will not be destroyed when run-fibers finishes.

run-fibers will return when the init-thunk call returns. To make it additionally wait until there are no more runnable fibers or pending timeouts, specify the #:drain? #t keyword argument.

If run-fibers creates a scheduler on your behalf, it will arrange for a number of “peer” schedulers to also be created, up to a total scheduler count controlled by the parallelism keyword argument. These peer schedulers will be run in separate threads and will participate in work rebalancing. The fibers will be run on the CPUs specified by cpus. See Parallelism.

By default hz is 100, indicating that running fibers should be preempted 100 times per second of CPU time (not wall-clock time). Note that preemption will only occur if the fiber can actually be suspended; see Barriers, for more information. Pass 0 for hz to disable preemption, effectively making scheduling fully cooperative.
- Function: spawn-fiber thunk [scheduler=(require-current-scheduler)] [#:parallel?=#f] ¶
Spawn a new fiber that will run thunk. Return the new fiber. The new fiber will run concurrently with other fibers.

The fiber will be added to the current scheduler, which is usually what you want. It’s also possible to spawn the fiber on a specific scheduler, which is useful to ensure that the fiber runs on a different kernel thread. In that case, pass the optional scheduler argument.

If parallel? is true, the fiber will be started not (necessarily) on scheduler, but on a random member of the peer set of scheduler. See Parallelism. Note that every scheduler is a member of its own peer set.
The fiber will inherit the fluid–value associations (the dynamic state) in place when spawn-fiber is called. Any fluid-set! or parameter set within the fiber will not affect fluid or parameter bindings outside the fiber.
- Function: sleep seconds ¶
Wake up the current fiber after seconds of wall-clock time have elapsed. This definition will replace the binding for sleep in the importing module, effectively overriding Guile’s “core” definition.
Operations
Operations are first-class abstractions for asynchronous events. There are primitive operation types, such as waiting for a timer (see Timers) or waiting for a message on a channel (see Channels). Operations can also be combined and transformed using the choice-operation and wrap-operation combinators from this module:
(use-modules (fibers operations))
- Function: wrap-operation op f ¶
Given the operation op, return a new operation that, if and when it succeeds, will apply f to the values yielded by performing op, and yield the result as the values of the wrapped operation.
- Function: choice-operation . ops ¶
Given the operations ops, return a new operation that, if and when it succeeds, will succeed with one and only one of the sub-operations ops.
Finally, once you have an operation, you can perform it using perform-operation.
- Function: perform-operation op ¶
Perform the operation op and return the resulting values. If the operation cannot complete directly, block until it can complete.
See Introduction, for more on the “Concurrent ML” system that introduced the concept of the operation abstraction. In the context of Fibers, “blocking” means to suspend the current fiber, or to suspend the current kernel thread if the operation is performed outside of a fiber.
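As a sketch of how these pieces compose, the following waits for a message on a hypothetical channel ch but gives up after five seconds, using sleep-operation from the Timers section:

;; Yields either the received message or the symbol 'timeout,
;; whichever rendezvous happens first.
(perform-operation
 (choice-operation
  (wrap-operation (get-operation ch) (lambda (msg) msg))
  (wrap-operation (sleep-operation 5) (lambda () 'timeout))))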
There is also a low-level constructor for other modules that implement primitive operation types:
- Function: make-base-operation wrap-fn try-fn block-fn ¶
Make a fresh base operation.
This is a low-level constructor, though; if you ever feel the need to call make-base-operation, make sure you’re familiar with the Concurrent ML literature. Godspeed!
Channels
Channels are the way to communicate between fibers. To use them, load the channels module:
(use-modules (fibers channels))
- Function: make-channel ¶
Make a fresh channel.
- Function: channel? obj ¶
Return #t if obj is a channel, or #f otherwise.
- Function: put-operation channel message ¶
Make an operation that if and when it completes will rendezvous with a receiving operation to send message over channel.
- Function: get-operation channel ¶
Make an operation that if and when it completes will rendezvous with a sending operation to receive one value from channel.
- Function: put-message channel message ¶
Send message on channel, and return zero values. If there is already a receiver waiting to receive a message on this channel, give it our message and continue. Otherwise, block until a receiver becomes available.
Equivalent to:
(perform-operation (put-operation channel message))
- Function: get-message channel ¶
Receive a message from channel and return it. If there is already a sender waiting to send a message on this channel, take its message directly. Otherwise, block until a sender becomes available.
Equivalent to:
(perform-operation (get-operation channel))
Channels are thread-safe; you can use them to send and receive values between fibers on different kernel threads.
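Putting the above together, a minimal sketch of two fibers rendezvousing over a channel:

(use-modules (fibers) (fibers channels))

(run-fibers
 (lambda ()
   (let ((ch (make-channel)))
     (spawn-fiber
      (lambda ()
        (put-message ch "hello from a fiber")))
     ;; Rendezvous: blocks the main fiber until the spawned fiber sends.
     (display (get-message ch))
     (newline))))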
Timers
Timers are a kind of operation that, you guessed it, let you sleep until a certain time.
(use-modules (fibers timers))
- Function: sleep-operation seconds ¶
Make an operation that will succeed with no values when seconds have elapsed.
- Function: timer-operation expiry ¶
Make an operation that will succeed when the current time is greater than or equal to expiry, expressed in internal time units. The operation will succeed with no values.
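For example, a sketch of an operation that expires one second from now, computing expiry with Guile’s internal time facilities:

;; get-internal-real-time and internal-time-units-per-second are
;; core Guile bindings.
(timer-operation
 (+ (get-internal-real-time) internal-time-units-per-second))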
- Function: sleep seconds ¶
Block the calling fiber or kernel thread until seconds have elapsed.
Conditions
Condition variables are a simple one-bit form of concurrent communication. A condition variable has two states: it starts in the unsignalled state and later may transition to the signalled state. When a condition becomes signalled, any associated waiting operations complete.
(use-modules (fibers conditions))
- Function: make-condition ¶
Make a new condition variable.
- Function: condition? obj ¶
Return #t if obj is a condition variable, or #f otherwise.
- Function: signal-condition! cvar ¶
Signal cvar, notifying all waiting fibers and preventing blocking of future fibers waiting on this condition.
- Function: wait-operation cvar ¶
Make an operation that will succeed with no values when cvar becomes signalled.
- Function: wait cvar ¶
Block the calling fiber or kernel thread until cvar becomes signalled. Equivalent to (perform-operation (wait-operation cvar)).
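A minimal sketch, assuming it runs within run-fibers: a condition variable used to tell a worker fiber to shut down.

(let ((stop (make-condition)))
  (spawn-fiber
   (lambda ()
     (wait stop)                        ; suspends until signalled
     (display "worker shutting down\n")))
  (signal-condition! stop))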
Port Readiness
These two operations can be used on file ports to wait until they are readable or writable. Spurious wake-ups are possible. This is complementary to Guile’s suspendable ports.
(use-modules (fibers io-wakeup))
- Function: wait-until-port-readable-operation port ¶
Make an operation that will succeed with no values when the input port port becomes readable. For passive sockets, this operation succeeds when a connection becomes available.
- Function: wait-until-port-writable-operation port ¶
Make an operation that will succeed with no values when the output port port becomes writable.
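For example, a sketch that suspends the current fiber until a hypothetical socket sock has input available:

(perform-operation (wait-until-port-readable-operation sock))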
REPL Commands
Fibers implements some basic extensions to the Guile command-line interface (its Read-Eval-Print Loop, or the REPL). Prefix these commands with a comma (,) to run them at the REPL; see ,help fibers for full details, once you have loaded the (fibers) module of course.
- REPL Command: scheds ¶
Show a list of all schedulers.
- REPL Command: spawn-sched ¶
Create a new scheduler for fibers, and run it on a new kernel thread.
- REPL Command: kill-sched name ¶
Shut down the scheduler named name. Use
,scheds
to list scheduler names.
- REPL Command: spawn-fiber exp [sched] ¶
Spawn a new fiber that runs exp. If sched is given, the fiber will be spawned on the given scheduler.
Schedulers and Tasks
Internally, fibers are built on top of schedulers and tasks.
A scheduler runs tasks. A task is just a thunk (a function of no arguments) whose return value is ignored. A scheduler runs tasks in batches, once per turn. Each turn, a scheduler takes all tasks from its “next” run-queue and adds them to its “current” run-queue, and then runs the tasks on the current run-queue in order. The scheduler then goes to the next turn, unless its “finished?” function returns true.
Both the “next” and the “current” run-queues are public atomic data structures. Scheduling a task adds it to the “next” run-queue. Scheduling a task is a thread-safe operation; it can be done by any thread. Scheduling a task on a scheduler running on a remote thread will ensure that the thread wakes up from any sleeping operation it might be currently engaged in.
There is normally just one scheduler for each kernel thread that runs fibers. Several schedulers can be made aware of each other so that they can spread out the load when spawning tasks that should run in parallel. Also, before moving to the next turn, a scheduler will try to steal work from other schedulers that it knows about, popping off an item from the remote scheduler’s “current” run-queue.
There are two additional sources of tasks for a scheduler: file descriptor events and timers. When gathering tasks to schedule for the next turn, a scheduler will call the native events system facility implementation to be notified of activity on file descriptors of interest. If there are no pending tasks on the next run-queue, the call to the events system facility will sleep until the scheduler is woken up, or until a timer expires.
The capability that allows fibers to be built on schedulers is that tasks can suspend. Suspending a task calls a user-supplied after-suspend handler that is passed the continuation of the task. The user can then schedule that continuation at some later time. In this way a fiber starts as a single task run by a scheduler, but each time it suspends and is resumed, a fresh task consisting of the fiber’s continuation is scheduled. The fibers layer also uses other Guile mechanisms to isolate fibers from each other, such as dynamic states.
All interfaces in this module are thread-safe except where marked otherwise.
(use-modules (fibers scheduler))
- Function: make-scheduler [#:parallelism=#f] [#:prompt-tag=(make-prompt-tag "fibers")] ¶
Make a new scheduler in which to run fibers. If parallelism is true, it should be an integer indicating the number of schedulers to make. The resulting schedulers will all share the same prompt tag and will steal and share out work from among themselves.
- Function: run-scheduler sched finished? ¶
Run sched until calling the supplied finished? thunk returns true. Return zero values. Signal an error if sched is already running in some other kernel thread.
- Function: current-scheduler ¶
Return the current scheduler, or #f if no scheduler is current.
- Function: scheduler-kernel-thread sched ¶
Return the kernel thread that sched is running on, or #f if it is not currently running.
- Function: scheduler-runcount sched ¶
Return the number of tasks that have been run on sched, modulo 2^32. This interface is useful as a lightweight check to see if a remote scheduler is making progress.
- Function: scheduler-remote-peers sched ¶
Return a list of peer schedulers of sched, not including sched itself.
- Function: scheduler-work-pending? sched ¶
Return #t if sched has any work pending: any runnable tasks or any pending timeouts.
- Function: choose-parallel-scheduler sched ¶
Return a random scheduler from sched’s peer set. Note that sched’s peer set includes sched itself.
- Function: destroy-scheduler sched ¶
Release any resources associated with sched.
- Function: schedule-task sched task ¶
Arrange to run task, a procedure of no arguments, on the next turn of sched. If sched is remote and sleeping, it will be woken up.
- Function: schedule-task-when-fd-readable sched fd task ¶
Arrange to schedule task when the file descriptor fd becomes readable. Not thread-safe.
- Function: schedule-task-when-fd-writable sched fd task ¶
Arrange to schedule task on sched when the file descriptor fd becomes writable. Not thread-safe.
- Function: schedule-task-at-time sched expiry task ¶
Arrange to schedule task on sched when the absolute real time is greater than or equal to expiry, expressed in internal time units. Not thread-safe.
- Function: suspend-current-task after-suspend ¶
Suspend the current task to the current scheduler. After suspending, call the after-suspend callback with two arguments: the current scheduler, and the continuation of the current task. The continuation passed to the after-suspend handler is the continuation of the suspend-current-task call.
- Function: yield-current-task ¶
Yield control to the current scheduler. Like calling (suspend-current-task schedule-task) except that it avoids suspending if the current continuation isn’t suspendable. Returns #t if the yield succeeded, or #f otherwise.
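Tying these interfaces together, a minimal sketch that runs a one-off task on a fresh scheduler and then tears it down:

(use-modules (fibers scheduler))

(let ((sched (make-scheduler))
      (done? #f))
  (schedule-task sched (lambda () (set! done? #t)))
  ;; run-scheduler loops over turns until the finished? thunk
  ;; returns true.
  (run-scheduler sched (lambda () done?))
  (destroy-scheduler sched))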
Pitfalls
Running Guile code within a fiber mostly “just works”. There are a few pitfalls to be aware of though.
When you run a program under fibers, the fibers library arranges to make it so that port operations can suspend the fiber instead of block. This generally works, with some caveats.
- The port type has to either never block, or support non-blocking I/O. Currently the only kind of port in Guile that supports non-blocking I/O is the file port (including sockets), so for them this condition is fulfilled. Notably, non-blocking I/O is not supported for custom binary I/O ports, not yet anyway. If you need this, get it fixed in Guile :)
- You have to make sure that any file port you operate on is opened in nonblocking mode. See Non-Blocking I/O in Guile Reference Manual, for the obscure fcntl incantation to use on your ports (sketched just after this list).
- You have to avoid any operation on ports that is not supported yet in Guile for non-blocking I/O. Since non-blocking I/O is new in Guile, only some I/O operations are expressed in terms of the primitive operations. Notably, Scheme read, display, and write are still implemented in C, which prevents any fiber that uses them from suspending and resuming correctly; the call blocks instead of suspending. If you find a situation like this, talk to Guile developers to get it fixed :)
- You can enable non-blocking I/O for local files, but Linux at least will always say that the local file is ready for I/O even if it has to page in data from a spinning-metal device. This is a well-known limitation for which the solution is apparently to do local I/O via a thread pool. We could implement this in Fibers, or in Guile... not sure what the right thing is!
You also have to avoid any other library or system calls that would block. One common source of blocking is getaddrinfo and related network address resolution library calls. Again, apparently the solution is thread pools? Probably in Fibers we should implement a thread-pooled address resolver.
The (fibers) module exports a sleep replacement. Code that sleeps should import the (fibers) module to be sure that it isn’t using Guile’s sleep function.
Finally, a fiber itself has to avoid blocking other fibers; it must reach a “yield point” some time. A yield point includes a read or write on a port or a channel that would block, or a sleep. Other than that, nothing will pre-empt a fiber, at least not currently. If you need to yield to the scheduler, then at least do a (sleep 0) or something.
When a fiber suspends, Fibers uses abort-to-prompt to save the fiber’s continuation, saving each pending computation in that fiber to the heap. When the fiber resumes, Fibers invokes the saved continuation, effectively replaying these saved stack frames back onto the current stack. For this operation to succeed, the saved continuation needs to be suspendable: a suspendable continuation is one that can be resumed after the call to abort-to-prompt.
Most continuations in Guile are suspendable. However, not all of them are. It’s possible to explicitly instate a continuation barrier (see Continuation Barriers in Guile Reference Manual) that will allow the continuation to be aborted but not reinstated:
;; If put-message suspends, we will never resume!
(run-fibers
 (lambda ()
   (with-continuation-barrier
    (lambda ()
      (put-message channel 42)))))
If the put-message call can’t succeed directly, then the fiber will suspend. However when the fiber becomes runnable again, it can’t be rewound because of the barrier. Because this is the case, when Fibers goes to suspend a computation but realizes that the suspended fiber could never be resumed, it throws an error instead.
with-continuation-barrier is the only function in Guile that establishes a continuation barrier on purpose. However there are a number of other functions that accidentally establish a continuation barrier by recursing into C code and then back to Scheme. (Guile can only rewind the state of a saved computation if Guile created the corresponding stack frame, and that’s not the case for the intermediate stack frame created by the C compiler.)
Accidental continuation barriers are bugs, and the Guile developers have been working on removing them over the years. By now, most of the high-priority accidental barriers are gone. Those that are left include:
- The body thunk of call-with-blocked-asyncs
- GOOPS methods attached to a primitive-generic like + or equal?
- Dynwind entry/exit handlers, but only when called due to nonlocal entry or exit
- R6RS custom binary port callbacks
- Legacy “soft port” callbacks
- R5RS “delay” callbacks
- Many module system callbacks (module transformers, etc.)
- SRFI-13 string and character-set callbacks
- Callbacks from some SRFI-1 functions
- Callbacks from sort
- Custom hash table assoc functions
- Calls to load-from-path (though, oddly, not load)
- Object printers, e.g. custom record printers
- call-with-vm
- array-map and related array functions
This set will be reduced over time as more of libguile is rewritten in Scheme.
Finally, for port operations, see Non-Blocking I/O in Guile Reference Manual. When Guile tries to read from a file descriptor and nothing is available, normally it would call the current read waiter, which Fibers customizes to suspend the fiber and run another one in the meantime. However for procedures that have not been rewritten in terms of the “suspendable port operations”, notably including read, write, and display, the nothing-to-read condition is handled in C, not Scheme, so Guile cannot create a resumable continuation. In this case, instead of erroring, Guile will wait until the file descriptor is readable or writable (as appropriate) and then continue. However in the meantime, which may be forever, this blocks other fibers from running. Therefore Fibers users sometimes have to be aware of the state of Guile’s rewrite of port operations in terms of suspendable-port primitives, and to help out if things aren’t moving fast enough :)
Although code run within fibers looks like normal straight-up Scheme, it runs concurrently with other fibers. This means that if you mutate shared state and other fibers mutate the same shared state using normal Scheme procedures like set!, vector-set!, or the like, then probably you’re going to have a bad time. This is especially the case considering that the default is to run as many schedulers in parallel as your machine has cores, and also to preempt fibers at any time.
Even if you explicitly choose a cooperative scheduling mode by disabling interrupts and parallelism, multi-step transactions may be suspended if your code reaches a yield point in the middle of performing the transaction.
The best way around this problem is to avoid unstructured mutation, and to instead share data by communicating over channels. Using channels to communicate data produces much more robust, safe systems.
If you need to mutate global data, the best way is to use an atomic variable. If that is not possible, then consider spawning a fiber to manage the mutable data, and communicating with that fiber over channels. Mutexes are also an option but are difficult to use correctly; see the considerations from the following section.
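As a sketch of the atomic-variable approach, here is a shared counter kept in an atomic box from Guile’s (ice-9 atomic) module rather than behind a mutex:

(use-modules (ice-9 atomic))

(define counter (make-atomic-box 0))

(define (increment!)
  ;; Retry the compare-and-swap until no other fiber or thread races us;
  ;; atomic-box-compare-and-swap! returns the box's previous value.
  (let loop ((old (atomic-box-ref counter)))
    (let ((prev (atomic-box-compare-and-swap! counter old (1+ old))))
      (unless (eqv? prev old)
        (loop prev)))))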
Mutexes are low-level synchronization primitives provided by Guile. Used properly, they can serve to build concurrent systems that access shared data without corrupting it.
See Mutexes and Condition Variables in Guile Reference Manual, for some reasons why mutexes aren’t so great for Guile in general.
Guile’s mutexes are an even worse solution in a Fibers system. It is a bad idea for a fiber to grab a Guile mutex, because if the mutex is not available, Guile will suspend not just the fiber that is running but the entire kernel thread. If the mutex is available, the fiber obtains it, cool; but if the fiber suspends while holding a mutex, that’s bad news. Any fiber trying to acquire a mutex while a suspended fiber from the same thread already holds it will result in an error: as Guile thinks that the mutex has already been acquired by the current thread, it detects recursion and bails out.
With condition variables, similar problems arise: waiting on a condition variable will block indefinitely, if the condition can only be signalled by another fiber in the current kernel thread.
The root of this problem is that Guile associates mutexes with kernel threads, not fibers. It would be possible to make a Fibers-appropriate implementation of mutexes, but we suggest that users try atomic boxes or channels instead. If you do use mutexes, make sure you disable preemption (possibly by a local call to call-with-blocked-asyncs), and take care never to suspend a fiber while it owns any mutex.
Examples
Here are some small examples to get you started.
More examples would be great, especially demonstrating interesting things that can be done with channels.
Ping
This simple server listens on a TCP port, echoing lines back to any user that connects. This file can be found in examples/ping-server.scm, and can be run from the build dir as ./env guile examples/ping-server.scm.
First, we use some standard Guile modules, and the fibers module.
(use-modules (rnrs bytevectors)
             (fibers)
             (ice-9 textual-ports)
             (ice-9 rdelim)
             (ice-9 match))
We run the server within a run-fibers call.
(define* (run-ping-server #:key
                          (host #f)
                          (family AF_INET)
                          (addr (if host
                                    (inet-pton family host)
                                    INADDR_LOOPBACK))
                          (port 11211)
                          (socket (make-default-socket family addr port)))
  (listen socket 1024)
  (sigaction SIGPIPE SIG_IGN)
  (socket-loop socket (make-hash-table)))

(run-fibers run-ping-server)
Up to here, all good. Perhaps we should look at how to open a socket; here’s a helper that appears often in applications that use suspendable ports. See Non-Blocking I/O in Guile Reference Manual, for full details.
(define (make-default-socket family addr port)
  (let ((sock (socket PF_INET SOCK_STREAM 0)))
    (setsockopt sock SOL_SOCKET SO_REUSEADDR 1)
    (fcntl sock F_SETFD FD_CLOEXEC)
    (fcntl sock F_SETFL (logior O_NONBLOCK (fcntl sock F_GETFL)))
    (bind sock family addr port)
    sock))
We hope to make this easier in the future; it’s a bit too much ceremony. Now, the main dish is the server loop, which simply accepts new connections, forking off a fiber for each connection:
(define (socket-loop socket store)
  (let loop ()
    (match (accept socket SOCK_NONBLOCK)
      ((client . addr)
       (spawn-fiber (lambda () (client-loop client addr store)))
       (loop)))))
Finally, the loop to handle a single client:
(define (client-loop port addr store)
  (setvbuf port 'block 1024)
  ;; Disable Nagle's algorithm.  We buffer ourselves.
  (setsockopt port IPPROTO_TCP TCP_NODELAY 1)
  (let loop ()
    ;; TODO: Restrict read-line to 512 chars.
    (let ((line (read-line port)))
      (cond
       ((eof-object? line)
        (close-port port))
       (else
        (put-string port line)
        (put-char port #\newline)
        (force-output port)
        (loop))))))
This ping server is fairly straightforward, and is only flawed in a couple of ways: it doesn’t limit the line size, and so is vulnerable to memory exhaustion if the client gives it a very, very big line, and additionally, it does not time out clients after inactivity, so the poll set could get quite large.
In practice the limit for the number of connections is imposed by the system in the form of a file descriptor limit. Use ulimit -n to increase this limit on the console, or setrlimit to increase it from Guile, within the hard limits imposed by the system.
The client is similar to the server; see examples/ping-client.scm for full details. It can be run as ./env guile examples/ping-client.scm N M, to make N concurrent connections to the server and make a series of M requests on each connection. It spawns a fiber per connection, and then uses a normal Guile loop to make the serial requests.
(define (run-ping-test num-clients num-connections)
  ;; The getaddrinfo call blocks, unfortunately.  Call it once before
  ;; spawning clients.
  (let ((addrinfo (car (getaddrinfo "localhost" "11211"))))
    (let lp ((n 0))
      (when (< n num-clients)
        (spawn-fiber
         (lambda ()
           (client-loop addrinfo n num-connections)))
        (lp (1+ n))))))
Running this on a laptop from 2015 yields results more or less like this:
$ time ./env guile examples/ping-client.scm 1000 100

real	0m1.647s
user	0m2.176s
sys	0m0.816s
That’s a throughput of somewhere around 60000 fiber switches per second on each side, which is none too shabby.
Memcached
Similarly to the echo server, Fibers includes an example memcached server and client. Run the server like this:
$ ./env guile examples/memcached-server.scm
Run the client as with the ping client:
$ time ./env guile examples/memcached-client.scm 1000 100

real	0m6.343s
user	0m9.868s
sys	0m1.808s
Here we see a throughput of around 16000 puts plus 16000 gets per second on this 2-core, 2-thread-per-core machine; pretty OK for a basic implementation.
Web Server Backend
Fibers includes a “backend” for Guile’s built-in web server that uses non-blocking fibers to read requests and write responses. Fibers also includes a standalone web server that uses Guile’s HTTP facilities, but not its web server framework. See Concurrent Web Server, for more on the standalone web server.
To run a web server that serves each client from a fiber, specify the fibers backend when running your web server:
(use-modules (web server))

(define (handler request body)
  (values '((content-type . (text/plain)))
          "Hello, World!"))

(run-server handler 'fibers)
Performance seems to be about 60% of the standard web server backend implementation shipped with Guile, though it is not as battle-hardened.
The fibers web server backend is an interesting case because it uses channels to invert the inversion-of-control imposed on the backend by the web server framework. The web server wants the backend to provide “read-request” and “write-response” callbacks, whereas in fibers we usually want to run dedicated REPL-like fibers over the client sockets. The fibers backend enables this by passing a callback to the per-client loops:
(define (have-request response-channel request body)
  (put-message request-channel
               (vector response-channel request body))
  (match (get-message response-channel)
    (#(response body)
     (values response body))))

(let loop ()
  (match (accept socket SOCK_NONBLOCK)
    ((client . sockaddr)
     ;; ...
     (spawn-fiber (lambda ()
                    (client-loop client have-request))
                  #:parallel? #t)
     (loop))))
From the perspective of the client-loop fiber, have-request is a normal function that takes a request and returns a response, and the client-loop fiber is in control. But from the perspective of the web server, the server-read and server-write callbacks are straightforward and idiomatic too:
(define (server-read server)
  (match (get-message (server-request-channel server))
    (#(response-channel request body)
     (let ((client response-channel))
       (values client request body)))))

(define (server-write server client response body)
  (let ((response-channel client))
    (put-message response-channel (vector response body)))
  (values))
This takes advantage of the fact that we can use get-message, put-message, and other CML operations both inside and outside of fibers, and it mostly just does the right thing.
Note also the #:parallel? #t on the spawn-fiber invocation. The natural unit of parallelism in a web server is the request (or the client), so it’s at this point that we introduce work sharing to our system, allowing us to naturally take advantage of multiple cores on our server.
Concurrent Web Server
Guile’s web server framework single-threads all web request handling. The handler procedure can be passed a number of additional “state” arguments, and is expected to return a corresponding number of additional values to use as the next state. This is sometimes what you want, but it does limit concurrency; instead it would be nice to have not only the input and output running concurrently, but the handlers too.
For this reason, Fibers includes a simple standalone web server that uses Guile’s HTTP facilities, but not its web server framework. To run a standalone web server, use the (fibers web server) module:
(use-modules (fibers web server))

(define (handler request body)
  (values '((content-type . (text/plain)))
          "Hello, World!"))

(run-server handler)
Compared to the Fibers web server backend (see Web Server Backend), using the standalone fibers web server enables more parallelism, as the handlers can run in parallel when you have multiple cores. Single-core performance of the standalone server is slightly better than the web server backend, and unlike the backend it scales with the number of cores available.
Project status
Fibers is feature-complete and ready to go! It’s early days but things are solid enough to say without embarrassment or misgiving that Guile now has a solid concurrency story. Use fibers, incorporate it directly into your project, fork it, improve it: what happens now is up to you. Happy hacking and godspeed!