66 changes: 66 additions & 0 deletions src/internal/task/interruptqueue.go
@@ -0,0 +1,66 @@
package task

import "runtime/volatile"

// InterruptQueue is a specialized version of Queue, designed for working with interrupts.
// It can be safely pushed to from an interrupt (assuming that a memory reference to the task remains elsewhere), and popped outside of an interrupt.
// It cannot be pushed to outside of an interrupt or popped inside an interrupt.
type InterruptQueue struct {
Member:

Do you have a reference for this scheme, or is this something you came up with yourself? I believe you once said it was based on interrupt/scheduler communication in another RTOS.
If it is based on an existing system, a reference would be very useful to understand the code.

Member Author:

An older implementation of InterruptQueue was based on https://arxiv.org/pdf/0709.4558.pdf

However, it was complicated, and we can do better on platforms with atomics, so I just decided to swap it with a double buffer.

The scheme of separating the interrupt queue from the runqueue was something I came up with myself, and it seemed to make more sense given the constraints involved in TinyGo.

	// This implementation uses a double-buffer of queues.
	// bufSelect contains the index of the queue currently available for pop operations.
	// The opposite queue is available for push operations.
	bufSelect volatile.Register8
Member:

tl;dr: volatile is probably fine (no change needed) but atomics might be more appropriate.

I wonder if this should use atomics instead? (See #1113). Atomics have somewhat different guarantees and are intended for synchronization (where volatile is intended for memory-mapped I/O). In practice it will usually have the same effect, especially on microcontrollers.

Specifically:

  1. Volatile guarantees there is exactly one load or store instruction. This is necessary for some memory-mapped I/O but prevents certain (probably irrelevant) compiler optimizations where values are speculatively loaded when they might not be needed. It does not guarantee the value is loaded or stored in a single operation (although, unofficially, you can rely on there being only one operation if the hardware has a load/store instruction for the given bit width).
  2. Atomics guarantee that (from the perspective of other threads or interrupts) the value is loaded/stored atomically. This means, for example, that a 64-bit value is loaded/stored atomically even if the processor does not support such an operation. In the case of Go (and usually C++ too), the operations happen in the expected order relative to similar atomic operations (but not necessarily relative to regular load/store instructions). This is mostly relevant to out-of-order and/or multicore systems.

I think volatile will be fine in this case, but just wanted to warn for this particular (somewhat semantic) difference. No change is needed from my POV.

Member:

Also, maybe something like popSelect or receiverSelect would be clearer? Because apparently it's the queue used by the receiver, and the sender (pusher) uses the other one. This is not immediately clear from the name bufSelect.

(Just a suggestion, not necessary to change).

Member Author:

So this intentionally does not use atomics. On platforms with a hardware atomic CAS, this can be done a lot more efficiently, so I wanted to keep this separate to allow a different approach there.

That said, I could probably get this to work with atomic loads and stores instead of volatile if we added an atomic 8-bit load/store, as this is what is needed to function efficiently on AVR.

Member:

Ah I see. Note that with #1113, atomics are supported in most places. Not yet on AVR, but that should not be very difficult to implement (although it might be inefficient). 32-bit load/store atomic instructions on Cortex-M are basically free so they can be used.

	queues [2]Queue
}

// Push a task onto the queue.
// This can only be safely called from inside an interrupt.
func (q *InterruptQueue) Push(t *Task) {
	// Avoid nesting interrupts inside here.
	var nest nonest
	nest.Lock()
	defer nest.Unlock()
Member:

I think it would be better to avoid defer here. defer might allocate heap memory (hopefully not in this case!) and the heap is currently not interrupt-safe and likely will never be.

Member:

Also, I wonder if this could be written more clearly like so?

var mask uintptr
if interrupt.SupportsNesting {
    // If the system allows nested interrupts, disable them during this operation to make it atomic.
    mask = interrupt.DisableInterrupts()
}
q.queues[1-q.bufSelect.Get()].Push(t)
if interrupt.SupportsNesting {
    interrupt.EnableInterrupts(mask)
}

That's a bit more verbose and of course needs some additions to the runtime/interrupts package, but it does look more obvious to me. What do you think @jaddr2line?
One thing in particular that makes me a little bit nervous is that the nonest mechanism essentially has a blacklist of architectures that allow nesting, but it is certainly possible more get added in the future (for example, it's possible to have nested interrupts in RISC-V although the current PLIC makes it a bit cumbersome).


	// Push to inactive queue.
	q.queues[1-q.bufSelect.Get()].Push(t)
}

// Check if the queue is empty.
// This will return false if any tasks were pushed strictly before this call.
// If any pushes occur during the call, the queue may or may not be marked as empty.
Member:

> the queue may or may not be marked as empty.

Could be me, but 'marked' sounds like it changes the queue, while it doesn't (at least, not in a relevant way). It only returns the current state of the queue.

// This cannot be safely called inside an interrupt.
func (q *InterruptQueue) Empty() bool {
	// Check currently active queue.
	active := q.bufSelect.Get() & 1
	if !q.queues[active].Empty() {
		return false
	}

	// Swap to other queue.
	active ^= 1
	q.bufSelect.Set(active)

	// Check other queue.
	return q.queues[active].Empty()
}

// Pop removes a single task from the queue.
// This will return nil if the queue is empty (with the same semantics as Empty).
// This cannot be safely called inside an interrupt.
func (q *InterruptQueue) Pop() *Task {
	// Select non-empty queue if one exists.
	if q.Empty() {
		return nil
	}

	// Pop from active queue.
	return q.queues[q.bufSelect.Get()&1].Pop()
}

// AppendTo pops all tasks from this queue and pushes them to another queue.
// This operation has the same semantics as repeated calls to pop.
func (q *InterruptQueue) AppendTo(other *Queue) {
	for !q.Empty() {
		q.queues[q.bufSelect.Get()&1].AppendTo(other)
	}
}
20 changes: 20 additions & 0 deletions src/internal/task/nonest_arm.go
@@ -0,0 +1,20 @@
// +build arm,baremetal,!avr

package task

import "device/arm"

// nonest is a sync.Locker that blocks nested interrupts while held.
type nonest struct {
	state uintptr
}

//go:inline
func (n *nonest) Lock() {
	n.state = arm.DisableInterrupts()
}

//go:inline
func (n *nonest) Unlock() {
	arm.EnableInterrupts(n.state)
}
10 changes: 10 additions & 0 deletions src/internal/task/nonest_none.go
@@ -0,0 +1,10 @@
// +build !arm !baremetal avr

package task

// nonest is a sync.Locker that blocks nested interrupts while held.
// On platforms without interrupt masking support (non-ARM, non-baremetal, or AVR), this is a no-op.
type nonest struct{}

func (n nonest) Lock() {}
func (n nonest) Unlock() {}
10 changes: 10 additions & 0 deletions src/internal/task/queue.go
@@ -37,6 +37,11 @@ func (q *Queue) Pop() *Task {
	return t
}

// Empty checks if there are any tasks in the queue.
func (q *Queue) Empty() bool {
	return q.head == nil
}

// Append pops the contents of another queue and pushes them onto the end of this queue.
func (q *Queue) Append(other *Queue) {
	if q.head == nil {
@@ -48,6 +53,11 @@ func (q *Queue) Append(other *Queue) {
	other.head, other.tail = nil, nil
}

// AppendTo pops the contents of this queue and pushes them onto another queue.
func (q *Queue) AppendTo(other *Queue) {
	other.Append(q)
}

// Stack is a LIFO container of tasks.
// The zero value is an empty stack.
// This is slightly cheaper than a queue, so it can be preferable when strict ordering is not necessary.