# [RFC] user defined monotonic timer (#200)

## Comments
> Can you explain why …

> @jonas-schievink Hmm, actually I don't think the trait needs to be …

> I have made two amendments to the RFC: …

> I quite like this. Looking into a traits crate could be a good option. Are there other traits we'd like to add? When it comes to a default timer, I'd leave it explicit, to make it easier to find issues when …

> Yes, good idea. Do you think a tuple …

> Likely one more that lets you sub the SysTick with a user defined timeout timer. There's also the … Instead of creating a … @TeXitoi think we can move forward with this? (I'm going to set up the tickbox thing in any case) Let's FCP merging this. Note that we still need to decide on the return type for the …

> OK for me.

> I think a …

> 🎉 This RFC, with the …
In the original work on RTFM a generic timer implementation was discussed, and initial experiments were conducted to verify feasibility: http://ceur-ws.org/Vol-1464/ewili15_16.pdf

There is discussion on using multiple timers / timer queues to reduce priority inversion. In short, a postponed task with a priority of x will cause priority inversion if it is handled (put in a queue) by a timer with a priority higher (more important) than x. So ideally we should associate a timer with each priority level that postponed tasks hold, in order to minimize priority inversion.

One possible approach is to give RTFM a set of "free" timers (implementing the "dispatch" trait). (We already have a similar approach for letting RTFM know about the available interrupt handlers for the "soft" tasks: the somewhat "ugly" `extern C` block.) If the number of available timers is not sufficient (more priorities of postponed tasks than available timers), we can think about some heuristics for allocation (by default) or have it guided by annotations (in the free-timer list the user could say that they want to associate a timer with the queue handled at a particular priority). A sensible heuristic might be to distribute the timers top down (priority-wise), so at least the highest-priority tasks won't suffer from priority inversion.

Ultimately, for a hard real-time system it boils down to response-time analysis (and overall schedulability). Priority inversion (due to the timer queue handling / dispatch) is one piece of the puzzle. To that end, it makes full sense to introduce the timers into the RTFM model, and perform the system-wide analysis on the complete model (including the timer tasks). The message queues become just shared resources (here, lock-free implementations are of great help in reducing blocking, so we are in a particularly good spot with RTFM).

"Well, why bother? RTFM works fine as is, and my application does not have any hard real-time constraints."

While this is in general true (RTFM is indeed the most efficient real-time scheduler out there), and your application logic may not have any explicit timing constraints, hard constraints typically trickle in from the interaction with the underlying hardware (hidden by HALs, drivers, etc.). E.g., we assume the interrupt handler (task) for the UART, etc., handles communication without overflowing the input buffer. If that task is exposed to excessive interference by the message-passing mechanism, the task misses its deadline and data is lost. We can prevent that with "multiple timer" message passing.

In the Rust RTFM re-implementation, priority assignment is currently manual; in the original RTFM model, priorities were assigned based on task deadline information (which allows reasoning about timing / response times, etc.). For I/O interaction those deadlines can be derived from the inter-arrival time of events (e.g., in the case of a UART, it would relate to the baud rate).

A bit off topic perhaps, but wouldn't it be great with a bit more const evaluation of init, having deadlines derived automatically by the HAL/driver implementations? I fear, however, that Rust is not yet powerful enough for that type of const evaluation during proc-macro execution, but perhaps an extension to HAL could be possible with declarative-style init (accessible to the proc macro... just dreaming).

In the short term, let's focus on generic timers and a reasonable heuristic for timer allocation.
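The "top down (priority-wise)" allocation heuristic described above can be sketched as follows. This is an illustrative model only, not RTFM code; the function name and representation are invented here:

```rust
/// Illustrative sketch (not RTFM code) of the top-down timer allocation
/// heuristic: given the priority levels that have postponed tasks and
/// `n_timers` free timers, assign each level a timer index, highest
/// priority first. Levels beyond the available timers share the last
/// (lowest-priority) timer and may therefore suffer priority inversion.
fn allocate_timers(priorities: &[u8], n_timers: usize) -> Vec<(u8, usize)> {
    assert!(n_timers > 0, "need at least one timer");

    // Distinct priority levels, highest (most important) first.
    let mut levels: Vec<u8> = priorities.to_vec();
    levels.sort_unstable_by(|a, b| b.cmp(a));
    levels.dedup();

    levels
        .into_iter()
        .enumerate()
        .map(|(i, prio)| (prio, i.min(n_timers - 1)))
        .collect()
}
```

With two free timers and postponed tasks at priorities 1, 2 and 3, the two highest levels each get their own timer and priority 1 shares the second one, so only the lowest level is exposed to inversion.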
205: rtfm-syntax refactor + heterogeneous multi-core support r=japaric a=japaric

This PR implements RFCs #178, #198, #199, #200, #201, #203 (only the refactor part), #204, #207, #211 and #212.

Most cfail tests have been removed because the test suite of `rtfm-syntax` already tests what was being tested here. The `rtfm-syntax` crate also has tests for the analysis pass, which we didn't have here; that test suite contains a regression test for #183. The remaining cfail tests have been upgraded into UI tests so we can more thoroughly check / test the error message presented to the end user. The cpass tests have been converted into plain examples.

EDIT: I forgot, there are some examples of the multi-core API for the LPC541xx in [this repository](https://github.com/japaric/lpcxpresso54114). People that would like to try out this API but have no hardware can try out the x86_64 [Linux port] which also has multi-core support.

[Linux port]: https://github.com/japaric/linux-rtfm

closes #178 #198 #199 #200 #201 #203 #204 #207 #211 #212
closes #163

cc #209 (documents how to deal with errors)

Co-authored-by: Jorge Aparicio <jorge@japaric.io>
Done in PR #205
## Current behavior

The `schedule` API internally makes use of two timers: the `DWT_CYCCNT` (the cycle counter) as a monotonic timer / counter, and the `SysTick` (system timer) to generate timeouts. Currently, these can't be changed, and the result is that the `schedule` API can't be used on ARMv6-M, where `DWT_CYCCNT` doesn't exist.

## Proposal

Require that the user specify the monotonic timer that will be used to implement the `schedule` API. The user will still be able to use `DWT_CYCCNT` as the monotonic timer, but this will not be the default; the user must specify this timer, or some other timer, in their application.

This RFC does not propose a mechanism to substitute a different timer for the `SysTick` timer.
## Rationale

The `DWT_CYCCNT` is not an appropriate monotonic timer for multi-core applications: each Cortex-M core has its own `DWT` peripheral and its own cycle counter, so it's not possible to synchronize these counters (there's no register to do this); furthermore, the counters may be operating at different frequencies, resulting in one core's `Instant` having a different meaning than another core's `Instant`. Also, as was previously mentioned, the `DWT_CYCCNT` is not available on ARMv6-M cores; this limits the devices on which one can use the `schedule` API.

In multi-core applications it's better to use a device-specific, constant-frequency timer visible to all cores as the monotonic timer. And in single-core applications one may want to use a 64-bit timer or a prescaled 32-bit timer as the monotonic timer; this way one can `schedule` tasks with long periods, e.g. on the order of seconds (cc @jamesmunns).

Lastly, using a device-specific timer as the monotonic timer lets programmers use the `schedule` API on single-core ARMv6-M devices.

## Detailed design
### The `Monotonic` trait

The following trait will be added to the `cortex-m-rtfm` crate. This trait represents a monotonic timer. The trait is meant to be implemented on global singleton ZSTs.

Also, although it's not shown in the trait interface, the RTFM runtime expects that subtracting two `Monotonic::Instant`s produces a value that implements the `TryInto<u32>` trait. This fallible conversion must return a number of `Monotonic` clock cycles that, when multiplied by `Monotonic::ratio()`, produce `SysTick` clock cycles. The conversion is fallible to allow for `Monotonic` implementations that use 64-bit counters / timers.
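The trait definition itself did not survive this page capture. A minimal, host-testable sketch of the shape the surrounding prose describes (the method names and exact signatures here are inferred, not the crate's actual definition):

```rust
use core::convert::TryInto;
use core::ops::Sub;

/// Sketch of the proposed trait; names are inferred from the RFC prose.
pub trait Monotonic {
    /// A point in time as read from the timer. Subtracting two `Instant`s
    /// must yield something that fallibly converts into `u32` timer cycles.
    type Instant: Copy + Ord + Sub;

    /// Multiplier that turns this timer's cycles into SysTick cycles.
    fn ratio() -> u32;

    /// Reads the current timer value.
    fn now() -> Self::Instant;
}

/// Example: a 64-bit monotonic timer (backed by a constant here so the
/// sketch runs on a host). Its `Instant` differences are `u64`, so the
/// conversion to `u32` is fallible -- which is why the RFC asks for
/// `TryInto<u32>` rather than `Into<u32>`.
pub struct Timer64;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct Instant64(pub u64);

impl Sub for Instant64 {
    type Output = u64; // `u64` implements `TryInto<u32>` fallibly
    fn sub(self, rhs: Self) -> u64 {
        self.0 - rhs.0
    }
}

impl Monotonic for Timer64 {
    type Instant = Instant64;
    fn ratio() -> u32 {
        1 // assume the timer runs at the SysTick frequency
    }
    fn now() -> Instant64 {
        Instant64(0) // a real implementation would read a hardware counter
    }
}

/// Converts an `Instant` difference into SysTick cycles, roughly as the
/// runtime would: fallible narrowing, then scaling by `ratio()`.
pub fn to_systick_cycles(diff: u64) -> Option<u32> {
    let cycles: u32 = diff.try_into().ok()?;
    cycles.checked_mul(Timer64::ratio())
}
```

Differences that fit in 32 bits convert and scale; a difference larger than `u32::MAX` makes the conversion fail, which the runtime can surface as an out-of-range `schedule`.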
### The `monotonic` argument

The `#[rtfm::app]` attribute will gain a `monotonic` argument that takes a path to a `struct` that implements the `Monotonic` trait. This struct must be a public unit struct, a struct with no fields. Also, the runtime expects that the application initializes the specified timer during the `init`-ialization phase.
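For illustration, app-side usage might look like the fragment below. This is a sketch based on the RFC text, not a confirmed surface syntax; it requires the `cortex-m-rtfm` crate plus a device (PAC) crate, so it does not compile standalone, and `lm3s6965` is just an example device:

```rust
// Illustrative fragment only. The path given to `monotonic` must name a
// public unit struct implementing the `Monotonic` trait;
// `rtfm::cyccnt::CYCCNT` is the ARMv7-M implementation this RFC proposes.
#[rtfm::app(device = lm3s6965, monotonic = rtfm::cyccnt::CYCCNT)]
const APP: () = {
    #[init]
    fn init(cx: init::Context) {
        // Per the RFC, the application must initialize the chosen monotonic
        // timer here (for CYCCNT: enable the cycle counter) before any
        // `schedule` call relies on it.
    }
};
```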
### The `CYCCNT` implementation

For ARMv7-M, the `cortex-m-rtfm` crate will provide an implementation of the `Monotonic` trait that uses the cycle counter. The proposal is to place all this API, which is basically today's `Instant` + `Duration` API, in a module named `cyccnt`.

Also note that `U32Ext` will no longer be automatically imported in RTFM applications, so one will need to manually import `U32Ext` to use the `cycles` method / constructor on `u32` integers.
### The `MultiCore` marker trait

Not all `Monotonic` implementations behave correctly in multi-core contexts. The cycle counter (`CYCCNT`) doesn't, for example, because each core has its own cycle counter, and these counters are not synchronized and may even be running at different frequencies.

To accommodate this fact we'll also provide a `MultiCore` marker trait in the `cortex-m-rtfm` crate. This marker trait should be implemented for monotonic timers that can be used in a multi-core context. The `CYCCNT` type will not implement this marker trait.

When the `schedule` API is used in more than one core, the `#[app]` DSL will check at compile time that the `monotonic` argument specified by the user implements the `MultiCore` trait. This way we'll prevent using `CYCCNT` in a multi-core context.
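The compile-time gating can be sketched in plain Rust. Everything below is illustrative (the marker trait's real definition is not shown on this page, and the type names are invented):

```rust
/// Sketch of the proposed marker trait (inferred from the prose).
pub trait MultiCore {}

/// Hypothetical device timer visible to, and shared by, all cores:
/// safe to mark as multi-core capable.
pub struct SharedDeviceTimer;
impl MultiCore for SharedDeviceTimer {}

/// Stand-in for the per-core cycle counter: deliberately NOT `MultiCore`,
/// mirroring the RFC's decision for `CYCCNT`.
pub struct Cyccnt;

/// Roughly the check the `#[app]` DSL would emit when `schedule` is used
/// from more than one core: this function can only be instantiated with a
/// timer type that implements `MultiCore`.
pub fn require_multi_core<T: MultiCore>() {}
```

`require_multi_core::<SharedDeviceTimer>()` compiles, while `require_multi_core::<Cyccnt>()` is rejected by the compiler, which is exactly the behavior the DSL check is after.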
### Migrating existing code

Existing users of the `schedule` API can migrate to the new API with just a few changes:
### An example

Applications are likely to implement the `Monotonic` trait "at the top" because clock frequencies are selected by the application, not the HAL. Thus `Monotonic::ratio` can't be known until the application is written. Implementing `Monotonic` at the top also lets one use the `core::time::Duration` API.
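Since the application picks the clock tree, the value behind `Monotonic::ratio` ends up being a function of application-chosen frequencies. A hypothetical helper (not part of the RFC; the name and the integer-ratio restriction are assumptions of this sketch) showing how an application might derive it:

```rust
/// Illustrative helper, not part of the RFC: derive the value returned by
/// `Monotonic::ratio()` from the application's chosen clock frequencies.
/// The ratio turns monotonic-timer cycles into SysTick cycles, so it is
/// the SysTick frequency divided by the monotonic timer frequency. This
/// sketch assumes the timer runs at an exact integer fraction of the
/// SysTick clock (e.g. a prescaled 32-bit timer).
fn monotonic_ratio(systick_hz: u32, monotonic_hz: u32) -> u32 {
    assert!(
        monotonic_hz != 0 && systick_hz % monotonic_hz == 0,
        "this sketch only handles exact integer ratios"
    );
    systick_hz / monotonic_hz
}
```

For example, a 1 MHz prescaled timer under a 64 MHz SysTick clock yields a ratio of 64: each monotonic tick is worth 64 SysTick cycles.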
## Unresolved questions

Should we place the `Monotonic` trait in a separate crate (e.g. `cortex-m-rtfm-traits`) so that HAL authors can implement it without pulling in the whole `cortex-m-rtfm` crate, which has lots of dependencies?

Should we default to the `CYCCNT` implementation when no `monotonic` argument is given and the `schedule` API is used? Note that this will not work when the target is ARMv6-M, and we won't be able to provide a good error message, because it's not possible to know the compilation target at macro expansion time.
cc #115