
[WIP][RFC] A single macro to declare tasks and resources and a new mechanism to access resources #31

Closed

@japaric japaric commented Jun 28, 2017

A complete example

Core idea

In the initial implementation of cortex-m-rtfm, resources (and task local data)
had global visibility, and an elaborate mechanism of type level integers and
tokens was used to prevent a task from using a resource it had no access to
under the SRP (Stack Resource Policy).

This new implementation gets rid of the global visibility and instead uses
scopes to enforce SRP semantics: a resource (and task local data) is only
visible to the tasks that are allowed to access it under the SRP.

Design

The rtfm! macro

This new macro is used to declare both tasks and resources, and looks like this:

rtfm! {
    // svd2rust device crate
    device: stm32f103xx,

    // peripherals to be used as resources.
    // The value after '@' is the ceiling associated to each resource
    peripherals: {
        DWT @ 1,
        ITM @ 1,
        TIM2 @ 1,
        USART1 @ 1,
    },

    // list of all resources and their initial values and ceilings
    resources: {
        SLEEP_CYCLES: u32 = 0; @ 1,
    },

    // initialization routine
    //
    // - `body`: a path (e.g. foo::bar) to the initialization function.
    // - `peripherals`: a list of peripherals this function has access to.
    //   The peripherals here don't have to be in the top peripherals section.
    init: {
        body: init,
        peripherals: [
            AFIO,
            DWT,
            GPIOA,
            GPIOC,
            RCC,
            TIM2,
            USART1,
        ],
    },

    // idle loop
    //
    // - `body`: a path (e.g. foo::bar) to the idle function
    // - `local`: takes a list of static variables local to this function.
    // - `peripherals`: a list of peripherals this function has access to.
    //   The peripherals in this list must appear in the top peripherals list.
    // - `resources`: a list of resources this function has access to.
    //   The resources in this list must appear in the top resources list.
    idle: {
        body: idle,
        local: {
            data: u8 = 0;
        },
        peripherals: [
            DWT,
        ],
        resources: [
            SLEEP_CYCLES,
        ],
    },

    // exceptions (system handlers) as tasks
    //
    // the syntax of the items in this list is similar to the items in the
    // interrupts section (see below) with the exception of the 'enabled' field
    // which is not present in these items.
    exceptions: {},

    // interrupts as tasks
    //
    // each item in this list has the following fields:
    //
    // - `body`: a path to the interrupt handler function
    // - `priority`: the priority of this task
    // - `enabled`: whether this interrupt will be enabled or disabled between
    //   the end of `init` and the start of `idle`
    // - `local`: a list of static variables local to this function.
    // - `peripherals`: a list of peripherals this function has access to.
    //   The peripherals in this list must appear in the top peripherals list.
    // - `resources`: a list of resources this function has access to.
    //   The resources in this list must appear in the top resources list.
    interrupts: {
        TIM2: {
            body: blinky,
            priority: 1,
            enabled: true,
            local: {
                snapshot: Option<u32> = None;
                state: bool = false;
            },
            peripherals: [
                DWT,
                ITM,
                TIM2,
            ],
            resources: [
                SLEEP_CYCLES,
            ],
        },
    },
}

Functions

The rtfm! macro will generate a bunch of code that will override the main
function. The user will have to fill in the init, idle and task functions as
shown below:

fn init(_p: init::Peripherals, _r: init::Resources) {
    (..)
}

fn idle(
    _t: &Threshold,
    _l: &'static mut idle::Local, // NOTE: 'static lifetime
    _p: &idle::Peripherals,
    _r: &mut idle::Resources,
) -> ! {
    (..)
}

fn blinky(
    _t: &Threshold,
    _l: &mut TIM2::Local, // NOTE: non-'static lifetime
    _p: &TIM2::Peripherals,
    _r: &mut TIM2::Resources,
) {
    (..)
}

fn loopback(
    _t: &Threshold,
    _l: &mut USART1::Local,
    _p: &USART1::Peripherals,
    _r: &mut USART1::Resources,
) {
    (..)
}

The init function runs within a global critical section (rtfm::atomic) and
has direct access to all the peripherals and resources. No need to claim any
resource in this context.

The idle function has a threshold token with value 0, exclusive access to its
local data, and access to the peripherals and resources that were associated
with it in the rtfm! macro. The idle function must never return / end.

Task functions look very similar to the idle function except that they have to
return.

The Local, Peripherals and Resources structs are auto generated from the
rtfm! declaration and are simply a list of all the declared items.

mod init {
    pub struct Peripherals<'a> {
        pub AFIO: &'a ::stm32f103xx::AFIO,
        pub DWT: &'a ::stm32f103xx::DWT,
        (..)
    }

    pub struct Resources<'a> {
        pub SLEEP_CYCLES: &'a mut Static<u32>,
    }
}

mod idle {
    (..)

    pub struct Peripherals {
        pub DWT: super::_peripheral::DWT,
    }

    pub struct Resources {
        pub SLEEP_CYCLES: super::_resource::SLEEP_CYCLES,
    }
}

mod TIM2 {
    pub struct Local {
        pub snapshot: Static<Option<u32>>,
        pub state: Static<bool>,
    }

    pub struct Resources {
        pub SLEEP_CYCLES: super::_resource::SLEEP_CYCLES,
    }

    pub struct Peripherals {
        pub DWT: super::_peripheral::DWT,
        pub ITM: super::_peripheral::ITM,
        (..)
    }
}

claim

Instead of the old mechanism (Threshold.raise / rtfm::atomic plus access), a
single claim method is provided on all resources. Usage looks like this:

fn task(t: &Threshold, r: &mut Resources, ..) {
    r.R1.claim(t, |r1, t| {
        // `r1` is a reference to R1's data, so mutate through it
        *r1 += 1;

        r.R2.claim(t, |r2, t| {
            *r2 += 1;
        });
    });
}

The claim method will provide direct access to the resource data if the
threshold is high enough. Otherwise it will raise the threshold (create a
critical section) to match the resource ceiling for the span of the closure.
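
For intuition, this is roughly the decision claim is described as making. The
sketch below is not the actual implementation: Threshold::value,
Threshold::new, MAX_PRIORITY and the *_threshold / global_critical_section
helpers are placeholder names assumed for the example.

// Conceptual sketch only; every name below is a placeholder, not the real API.
fn claim<T, R, F>(data: &T, ceiling: u8, t: &Threshold, f: F) -> R
where
    F: FnOnce(&T, &Threshold) -> R,
{
    if t.value() >= ceiling {
        // 1. the running threshold already covers the ceiling:
        //    direct, lockless access
        f(data, t)
    } else if ceiling < MAX_PRIORITY {
        // 2. raise the threshold (a BASEPRI write on ARMv7-M) to the ceiling
        //    for the span of the closure, then restore the previous value
        let old = current_threshold();
        raise_threshold(ceiling);
        let r = f(data, &Threshold::new(ceiling));
        restore_threshold(old);
        r
    } else {
        // 3. the ceiling is the maximum priority (or the target is ARMv6-M,
        //    see the Pros section): fall back to a global critical section
        global_critical_section(|| f(data, &Threshold::new(MAX_PRIORITY)))
    }
}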

Borrow checker

The borrow checker problem that plagued the initial implementation is solved
using proxy objects. Instead of handing a reference to the actual Resource to
tasks a reference to a proxy object is used.

mod _resource {
    // the actual resource
    // NOTE private
    static R1: Resource<Foo> = ..;

    // proxy object
    pub struct R1 { _0: () }

    impl R1 {
        pub fn claim<..>(&self, ..) -> R
        where
            F: FnOnce(&Foo, &Threshold) -> R,
        {
            R1.claim(..)
        }

        pub fn claim_mut<..>(&mut self, ..) -> R
        where
            F: FnOnce(&mut Foo, &Threshold) -> R,
        {
            unsafe { R1.claim_mut(..) }
        }
    }
}

mod EXTI0 {
    pub struct Resources {
        pub R1: super::_resource::R1,
    }
}

fn task(
    t: &Threshold,
    // the R1 binding has type `&mut _resource::R1`
    &mut EXTI0::Resources { ref mut R1 }: &mut EXTI0::Resources,
) {
    // OK
    R1.claim(t, |r1, t| {
        R1.claim(t, |r1, _| {
            // ..
        });
    });

    // not OK
    R1.claim_mut(t, |r1, t| {
        R1.claim_mut(t, |r1, _| {
        //~^ error: cannot borrow `R1` as mutable more than once at a time
        });
    });
}

The actual Resource has an unsafe claim_mut(&self) because it's up to the
caller to enforce Rust's borrowing rules ("no two mutable references to the same
data may exist at the same time", etc.). The proxy makes claim_mut safe by
enforcing Rust's borrowing rules within a task.

Guarantees

  • The data race free access and deadlock free execution guarantees are
    maintained.

  • The macro enforces that ceilings and priorities are correctly selected. The
    ceiling of a resource must be equal to or greater than the priority of any
    task it's used in (e.g. a resource used by tasks with priorities 1 and 2
    must have a ceiling of at least 2).

    • However, you can "oversize" the ceiling, i.e. select a value greater than
      what's necessary. This results in more critical sections than necessary.

Pros

  • Solves an outstanding borrow checker limitation. RefCells / Cells are no
    longer required to mutate resource data.

  • Drops all the type level integers. This should make generic programming
    easier (fewer or no trait bounds are required).

  • The claim method should make code easier to refactor. Changes in task
    priorities won't require changing calls to claim.

  • We now fully support Cortex-M0(+) microcontrollers. For this target claim
    will use rtfm::atomic under the hood when a critical section is needed.

Cons

  • The claim method hides whether accessing the resource is direct (lockless)
    or done through a critical section.

  • Accessing several resources requires nesting claims, which introduces
    rightward drift. This could be solved by adding some function / macro to
    perform several claims at once (a sketch of such a helper follows below).
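
As an illustration of what such a helper could look like, here is a
hypothetical claims! macro (not part of this proposal). It merely expands to
the nested form shown earlier and, to keep the sketch short, it does not hand
the innermost Threshold to the body.

// Hypothetical helper, shown only to illustrate the idea: claim several
// resources in one call by expanding into the nested `claim` form.
macro_rules! claims {
    // base case: a single resource left
    ($t:expr, ($last:expr), |$v:ident| $body:expr) => {
        $last.claim($t, |$v, _| $body)
    };
    // recursive case: claim the first resource, then recurse on the rest
    ($t:expr, ($head:expr, $($rest:expr),+), |$v:ident, $($vs:ident),+| $body:expr) => {
        $head.claim($t, |$v, t| claims!(t, ($($rest),+), |$($vs),+| $body))
    };
}

// usage would then look like this:
//
// claims!(t, (r.R1, r.R2), |r1, r2| {
//     // both resources are accessible here, without rightward drift
// });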

Extensions

Automatic ceiling derivation

With the above macro it's already possible to compute the ceilings of resources
and peripherals from the list of tasks and their resources and priorities, so
that the user would not have to declare them.

This functionality has not been included at this time because of deficiencies
in const context evaluation support in rustc / LLVM. Including the
functionality resulted in the ceilings being computed at runtime, which made
claim operations O(N) (N = number of resources) at runtime rather than O(1).

If you are curious this is what the ceiling computation that was not included
looks like:

Given:

rtfm! {
     resources: {
         R1: u32 = 0;
         R2: u32 = 0;
     },

     interrupts: {
         EXTI0: {
             (..)
             priority: 1,
             resources: [
                 R1,
             ],
             (..)
         },

         EXTI1: {
             (..)
             priority: 2,
             resources: [
                 R1,
                 R2,
             ],
             (..)
         },
     },
}

The macro expands into:

#[derive(Clone, Copy, PartialEq)]
enum _Resource { R1, R2 }

const _PRIORITIES: &[(_Resource, u8)] = &[
    (_Resource::R1, 1), // from EXTI0
    (_Resource::R1, 2), // from EXTI1
    (_Resource::R2, 2), // from EXTI1
];

mod R1 {
    fn ceiling() -> u8 {
        let mut ceil = 0;

        // take the maximum priority over all tasks that use R1
        for &(res, prio) in super::_PRIORITIES {
            if res != super::_Resource::R1 { continue }

            if prio > ceil { ceil = prio }
        }

        ceil
    }
}

R1::ceiling() can be computed at compile time because all the involved values
are known at compile time, but LLVM isn't able to perform the computation at
compile time so it generates code that computes the ceiling at runtime. This
might be fixable with better const fn support: changing fn ceiling() -> u8 to
const fn ceiling() -> u8 should force the computation to be done at compile
time.
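
For completeness, the const fn version would look roughly like the sketch
below. It assumes a rustc where loops and branching are allowed inside const
fn (not the case at the time of writing); the cast-based comparison sidesteps
the need for PartialEq in a const context, and ceiling_of takes the resource as
a parameter instead of living in a per-resource module.

// Sketch only: assumes const-evaluable loops and branches.
const fn ceiling_of(resource: _Resource) -> u8 {
    let mut ceil = 0;
    let mut i = 0;

    while i < _PRIORITIES.len() {
        let (res, prio) = _PRIORITIES[i];
        if res as u8 == resource as u8 && prio > ceil {
            ceil = prio;
        }
        i += 1;
    }

    ceil
}

// used in a const context the result is guaranteed to be computed at compile
// time, e.g. `const R1_CEILING: u8 = ceiling_of(_Resource::R1);`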

Local peripherals

Resources are meant to be used to share data between tasks. The current
implementation exposes peripherals to tasks as resources even if a particular
peripheral is only going to be used in a single task. The idea of task local
data could be extended to cover peripherals. With "local peripherals" claim
wouldn't be necessary, and specifying a ceiling wouldn't be required either.

Possible syntax could look like this:

rtfm! {
    interrupts: {
        EXTI0: {
            body: foo,
            (..)
            local: {
                data: {
                    counter: u32 = 0;
                },
                peripherals: [
                    USART1, // no ceiling required
                ],
            },
            (..)
        },
    }
}

The macro would have to check that no two tasks declare the same peripheral as
local. The peripheral would then appear as a field of the Local struct.

fn foo(t: &Threshold, l: &mut EXTI0::Local, ..) {
    l.USART1.dr.write(0);
}

Optional fields / structs

It'd be great if e.g. the resources field could be omitted from an
interrupts item if that task won't make use of resources. Likewise, if a task
doesn't use any resources it would be great to not include the Resources
argument in its signature. Sadly my macro-foo level is low and I don't know how
to implement such a thing.

Unresolved problems

Composability

This new design makes the composability problem more obvious. Since the first
implementation of cortex-m-rtfm it has not been possible to create resources
in dependencies in a way that doesn't reduce the composability of those
crates.

The root of the problem is that each resource has a ceiling (constant value)
associated with it. Ideally library and application writers should never need
to pick a ceiling for a resource; the compiler should pick one for them
automatically. Computing the ceiling automatically requires whole program,
potentially cross crate, analysis: "the ceiling must equal the maximum priority
of all the tasks that may claim the resource" is the problem to solve, and the
solution requires looking at the complete call graph of the program.

The initial implementation of cortex-m-rtfm allowed the user to specify the
ceiling value of a resource as a type parameter. By doing so the crate author
limited the applications where their crate could be used. If they picked a low
value their API couldn't be used in as many tasks as possible (i.e. not usable
in high priority tasks). If they picked a high value their API would impose more
task blocking (critical sections) than required, making it less appealing to
use.

This new implementation "deals" with the problem by making it impossible to
declare resources in dependencies. All tasks and resources must be declared in
the top crate. By doing so all the parts required for the whole program analysis
are in the top crate; this removes the cross crate element from the analysis and
makes the problem solvable with macros / build scripts.

Unresolved questions

In the current implementation ..

  • The init function has exclusive access to all the declared resources. Should
    we add an init.resources field to limit the number of resources it has access
    to?

  • All the exceptions with configurable priorities can be used as tasks. Should
    we reduce the list? The list currently includes:

pub enum Exception {
    /// Memory management.
    MEN_MANAGE,
    /// Pre-fetch fault, memory access fault.
    BUS_FAULT,
    /// Undefined instruction or illegal state.
    USAGE_FAULT,
    /// System service call via SWI instruction
    SVCALL,
    /// Pendable request for system service
    PENDSV,
    /// System tick timer
    SYS_TICK,
}

Acknowledgments

I want to thank @glaebhoerl for all their great ideas in this exchange. Most
of their ideas made it into this implementation 👍.

cc @whitequark
cc @pftbest the idea of local peripherals may interest you. Also I think it should be possible to implement RTFM for MSP430 in its current form. You can use global critical sections (temporarily disable interrupts) in claim like this implementation does for ARMv6M. Does the MSP430 support several priority levels for interrupts (i.e. nested interrupts)? If not, you could have two priority levels: 0 = main / idle, 1 = interrupt / task (see the sketch below).
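
For the two-level scheme, claim would degenerate into something like the
sketch below (placeholder names; interrupt::free stands for whatever global
"disable interrupts" primitive the target crate provides):

// Sketch of `claim` for a target with only two priority levels
// (0 = main / idle, 1 = interrupt / task); all names are placeholders.
fn claim<T, R, F>(data: &T, t: &Threshold, f: F) -> R
where
    F: FnOnce(&T, &Threshold) -> R,
{
    if t.value() >= 1 {
        // already running at task priority: tasks can't preempt each other,
        // so the access is direct and lockless
        f(data, t)
    } else {
        // running in `idle` (priority 0): disable interrupts for the span of
        // the closure so no task can observe the resource mid-update
        interrupt::free(|_| f(data, &Threshold::new(1)))
    }
}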

whitequark commented Jun 29, 2017

My comments:

  • Nit: idle::Peripherals, idle::Resources but idle::Local? Should be Locals.

  • Declaring locals in a place completely divorced from their actual use is, to put it mildly, unergonomic. Imagine a 1000-line file--you'd have to constantly jump from the rtfm! macro to the implementation to even see what the type is (unless you use RLS, and even then, RLS doesn't really work with xargo well).

    Can we add a macro like rtfm_locals! that would be callable from the function body?

  • In

    fn task(t: &Threshold, ...) {
      R1.claim(t, |r1, t| { 
        R2.claim(t, |r2, t| {
           ...
    

    there is a certain discipline involved in passing the &Threshold token around. What happens if you instead write:

    fn task(t1: &Threshold, ...) {
      R1.claim(t1, |r1, t| { 
        R2.claim(t1, |r2, t| {
           ...
    

    Could probably be fixed by switching to &mut Threshold. Do we need this token at all? Isn't the threshold always stored in the NVIC register anyway? It used to be a type level thing, but now it's fundamentally just a global quantity and no improvement in clarity is achieved by explicitly passing it around.

  • Accessing several resources requires nesting claims which introduces
    rightward drift. This could be solved by adding some function / macro to
    perform several claims at once.

    I propose a radical solution. Simply prohibit unwinding.

    Unwinding on Cortex-M microcontrollers is technically possible in a subset of cases and I know someone who did this (in C++) but there's no fundamental reason RTFM should support unwinding. It would in fact be highly nontrivial because no unwinder I am aware of can unwind an interrupt, and it's not even clear that this is in general possible in presence of user mode code, because then the unwinder would have to traverse through that, and emulate loading the EXC_RETURN bit patterns into pc. Even if it is implementable, unwinding consumes a lot of flash space for DWARF tables, a lot of stack memory for keeping its state on (inflating your worst-case projection), and isn't exactly a speed demon.

    I maybe see a reason for permitting unwinding to be used together with cortex-m. I do not see any good argument for permitting unwinding to be used together with cortex-m-rtfm.

    Without unwinding I believe we could use drop guards around claim, so long as mem::forget()ting a claim guard can result in, at most, deadlocks. As far as I can tell the existing implementation of claim already satisfies that condition.

  • but LLVM isn't capable of performing the computation
    at compile time so it generates code to compute the ceiling at runtime

    You really should not rely on LLVM to perform constant folding. LLVM's contract involves no guarantee that it will fold as many constants as possible. It is well within its rights to give up early even on -O3 if it thinks that it's not profitable or will take too much time, and some passes do indeed give up on functions larger than a certain threshold.

  • What you refer to as "better const fn support" requires miri to be merged cc @eddyb.

  • Basically all of your problems with the rtfm! macro could be solved today by making it a procedural macro. These aren't stable right now but well on the way there.

  • The init function has exclusive access to all the declared resources. Should
    we add a init.resources field to limit the number of resources it has access
    to?

    No! The init function often needs far more resources than the rest of the firmware. Setting up resets, clocking, putting unused GPIO banks into a particular state... There's no benefit to limiting this artificially, there's enough ceremony as it is.

  • All the exceptions with configurable priorities can be used as tasks. Should
    we reduce the list? The list currently includes:

    It doesn't seem very useful. I would personally take faults off that list, since most people would want to trap the hard fault alone--it's rare that specific faults would be handled differently--and we'd serve users better by figuring out how to make handling hard fault, and by extension all four, ergonomically.


japaric commented Jun 29, 2017

@whitequark Thanks for the input.

Nit: idle::Peripherals, idle::Resources but idle::Local? Should be Locals.

I've been thinking of Local as "local data" (since there's only a single
static variable, which may have one or several fields, per task). Locals
makes more sense if you think of it as "local variables". Either name works for
me.

Can we add a macro like rtfm_locals! that would be callable from the function
body?

Sounds complicated. In particular, preventing rtfm_locals! from being used from
something that's not a task seems hard.

What happens if you instead write:

You end up with more reads / writes to BASEPRI but the semantics are unchanged.
We use msr BASEPRI_MAX to write to BASEPRI so you can never lower the threshold
(that would open the door to data races) even if you use the wrong threshold
token.
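
For reference, the hardware rule for writes through BASEPRI_MAX can be modeled
like the sketch below (conceptual model, not real code; remember that on
Cortex-M a smaller non-zero BASEPRI value means a higher masking priority):

// Conceptual model of `msr BASEPRI_MAX, r`: the write is ignored unless it
// raises the masking level, which is why a stale threshold token can only
// over-mask, never lower the threshold.
fn basepri_max_write(basepri: &mut u8, new: u8) {
    // 0 means "no masking"; smaller non-zero values mask more interrupts
    if new != 0 && (*basepri == 0 || new < *basepri) {
        *basepri = new;
    }
}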

Could probably be fixed by switching to &mut Threshold

I'd have to test but I think that may not work well with the closures since the
compiler would consider the use of &mut Threshold in an inner closure as a
second mutable borrow of the Threshold token.

Do we need this token at all? Isn't the threshold always stored in the NVIC
register anyway?

I think, but have to check, that it should be possible to drop the token and
just read the NVIC without breaking semantics, but it would definitely impact
performance and code size. The claim implementation has three branches in it;
with the threshold token LLVM can always eliminate the two unused branches (in
release mode), but without the token LLVM doesn't have enough information to
determine which branch will be taken and has to include all three branches per
claim call.

Without unwinding I believe we could use drop guards around claim, so long as
mem::forget()ting a claim guard can result in, at most, deadlocks.

With drop guards you open the door to data races in "safe" code. mem::forget is
not even required; you just have to drop the guards in the wrong order (not in
LIFO order). Example here (see code snippets).
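
To make the failure mode concrete, here is a sketch with a hypothetical
guard-based API (none of these names exist in the crate):

// Hypothetical guard API, shown only to illustrate the hazard: each guard
// raises the threshold on creation and restores the previous value on drop.
fn task(t: &Threshold, r: &mut Resources) {
    let g1 = r.R1.claim(t);   // threshold raised to R1's ceiling, say 2
    let g2 = r.R2.claim(&g1); // threshold raised to R2's ceiling, say 4

    drop(g1); // restores the threshold to its pre-g1 value (below 4)...

    // ...but `g2` is still alive, so R2's data is reachable here while the
    // threshold no longer masks the priority 3 and 4 tasks that also use R2:
    // one of them can preempt us mid-access -> a data race in "safe" code
    g2.touch();

    drop(g2);
}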

What you refer to as "better const fn support" requires miri to be merged

I'm aware, and I'm happy that there's a solution on the horizon.

Basically all of your problems with the rtfm! macro could be solved today by
making it a procedural macro.

Yes, but I have been avoiding them because neither I nor, I expect, RTFM users
want to deal with constant breakage (cf. Zinc's demise). I don't know if the
frequency of breakage has gone down since macros 1.1 though. Maybe I can make an
alternate implementation that uses proc macros, but also provide a more stable
macro_rules! version.

I wonder if proc macros are enough to solve the problem of creating resources in
dependencies. In that scenario information has to flow backwards; the top
crate selects the ceiling values of resources created in dependencies. I'm
afraid solving this problem may require metadata / compiler integration; I hope
not.


perlindgren commented Jun 29, 2017 via email

whitequark commented Jun 30, 2017

I don't know if the frequency of breakage has reduced after macros 1.1 though.

There shouldn't be (almost) any breakage, since the compiler interface is now a token stream one way and a token stream the other way. So long as the Rust code you emit still compiles there's no reason the crate will break.

I mean, custom derives are already procedural macros, and are stable. The only thing that may break here is the interface rustc uses to invoke function-like proc macros, and it's just not large enough to cause a lot of pain.

I wonder if proc macros are enough to solve the problem of creating resources in
dependencies. In that scenario information has to flow backwards; the top
crate selects the ceiling values of resources created in dependencies. I'm
afraid solving this problem may require metadata / compiler integration; I hope
not.

I don't see how procedural macros help here. You don't get any access to the crate dependencies at all. The best option I see is to concoct something in build scripts, but anything I could come up with so far is fragile.

Full ACK on the other points.

@japaric japaric mentioned this pull request Jul 4, 2017

japaric commented Jul 7, 2017

Closing in favor of #34.

@japaric japaric closed this Jul 7, 2017
andrewgazelka pushed a commit to andrewgazelka/cortex-m-rtic that referenced this pull request Nov 3, 2021
they were leftover from rebasing

closes rtic-rs#31
@AfoHT AfoHT deleted the v2 branch January 26, 2023 22:26