docs/dev/events.pod - Design Notes for Events
This document describes the current implementation, which might not be the final design. Parrot has to deal with asynchronous events from timers, signals, asynchronous IO, notifications, and so on.
As there is currently no good test for whether a threading library is included at link time, it is assumed that platforms defining PARROT_HAS_HEADER_PTHREAD link against libpthread.
On construction of the first interpreter (the one with no parent_interpreter) two threads are started: the event_thread, which manages the static global event_queue, and the io_thread, which is responsible for signal- and IO-related events.
Events are either timed (they become due after some elapsed time) or untimed. For the former there is one API call: Parrot_new_timer_event.
The event_thread first acquires the event_queue mutex. When there is no entry in the event_queue, the event_thread waits on the event condition until an event arrives. When the next entry is a timed event, a timed wait until its due time is performed instead. (Waiting on the condition releases the mutex, so that other threads can insert events into the event_queue.)
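The wait loop above can be sketched as follows. This is an illustrative Python analogue of the C/pthread code, not the actual implementation; the class name EventQueue and its methods are invented for the sketch.

```python
import heapq
import threading
import time

class EventQueue:
    """Sketch of the event_thread wait loop: a mutex-protected queue
    plus a condition variable; timed entries trigger a timed wait."""

    def __init__(self):
        self.cond = threading.Condition()   # wraps the queue mutex
        self.heap = []                      # (due_time, event) pairs

    def push(self, event, delay=0.0):
        # Other threads insert events here and wake the event_thread.
        with self.cond:
            heapq.heappush(self.heap, (time.monotonic() + delay, event))
            self.cond.notify()

    def pop_due(self):
        with self.cond:
            while True:
                now = time.monotonic()
                if not self.heap:
                    self.cond.wait()        # releases the mutex while waiting
                elif self.heap[0][0] > now:
                    # timed wait until the earliest entry is due
                    self.cond.wait(self.heap[0][0] - now)
                else:
                    return heapq.heappop(self.heap)[1]
```

A caller would do `q.push("tick", 0.05)` from one thread and `q.pop_due()` from the event_thread; the timed wait wakes either on notify or when the entry becomes due.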
When an event arrives (or the timeout is reached), the event_thread pops all due events off the queue and places them into the interpreter's task_queue. This also enables event checking in the interpreter's run-core.
When a popped entry is a timed event with a repeat interval, the entry is duplicated and reinserted with the interval added to the current time.
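The reinsertion step amounts to copying the entry and giving it a new absolute due time. A minimal sketch, assuming invented field names (interval, abs_time) for the entry:

```python
import time

def requeue_if_repeating(entry, push):
    """After an entry has been popped and dispatched, push a copy back
    onto the queue if it carries a repeat interval."""
    if entry.get("interval"):
        copy = dict(entry)
        # new absolute due time = current time + repeat interval
        copy["abs_time"] = time.monotonic() + entry["interval"]
        push(copy)
        return copy
    return None
```

Entries without an interval fire once and are simply dropped after handling.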
All signals that should be handled inside Parrot are blocked in all threads and only enabled in the io_thread. The signal handler just sets an atomic flag indicating that the signal arrived and returns; this in turn interrupts the select(2) loop in the io_thread.
The io_thread sleeps in a select(2) loop, which is interrupted either when a signal arrives or when one of the file descriptors becomes ready. Additionally, the file descriptor set contains the read end of an internal pipe, which other threads use to communicate with the io_thread.
Signal events like SIGINT are broadcast to all running interpreters, which then throw an appropriate exception.
We cannot interrupt the interpreter at arbitrary points and run different code (e.g. a PASM subroutine handling timer events). Instead, when an event is put into the interpreter's task_queue, the interpreter's opcode dispatch table is changed.
Plain function cores get a function table with all entries pointing to the check_events__ opcode. This opcode pops events off the task_queue and finally handles them. The same scheme works for the CGOTO core, where the address table is replaced. The switched core does an explicit check whether events are to be handled.
Prederefed cores, and especially the CGP core, don't have an opcode dispatch table that is consulted while opcodes are running. When an event is scheduled there, the event handler replaces backward branches in the opcode image with the check_events__ opcode.
After all events are popped off and handled, the opcode dispatch table is restored to its original state, and check_events__ re-executes the same instruction, which is now the real one, so normal execution flow continues.
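The dispatch-table trick for the plain function core can be illustrated with a tiny interpreter loop. This is a Python sketch with invented opcodes (op_inc, op_halt); the real tables hold C function pointers, but the mechanics are the same: every entry is pointed at the event checker, which handles pending events, restores the table, and re-runs the same pc.

```python
def make_interp():
    events = []

    def op_inc(state, pc):
        state["acc"] += 1
        return pc + 1

    def op_halt(state, pc):
        return None

    real_table = {0: op_inc, 1: op_halt}
    table = dict(real_table)            # active dispatch table

    def check_events(state, pc):
        # Handle all pending events, restore the real table, then
        # return the *same* pc so the real opcode runs next.
        while events:
            state["handled"].append(events.pop(0))
        table.update(real_table)
        return pc

    def schedule(ev):
        events.append(ev)
        # Point every entry at check_events, so the very next
        # opcode boundary becomes an event-check point.
        for op in table:
            table[op] = check_events

    def run(code):
        state = {"acc": 0, "handled": []}
        pc = 0
        while pc is not None:
            pc = table[code[pc]](state, pc)
        return state

    return run, schedule
```

Scheduling an event before `run([0, 0, 1])` makes the first dispatch go through check_events, after which the two inc opcodes and the halt execute normally.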
This scheme has zero overhead in the absence of scheduled events for all cores except switched.
Sync events could be placed directly into the interpreter's task_queue.
That probably depends on the underlying OS, i.e. on whether it provides async IO or we have to do it ourselves.