Conversation
With RFC-20 (seL4/rfcs#33) the system initialiser is now the right place to initialise the domain schedule. Add a "domains" section to capDL-tool parsing and printing that contains a domain start index and a list of number pairs for the schedule, to be passed as-is to the kernel. This commit does not yet update the C output of capDL-tool.

Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
@Kswin01 and @Ivan-Velickovic do these look fine from your side for Microkit and the Rust loader? The domain schedule is optional and the start index is optional as well. The intention is that the initialisation sequence will be:

configure_domains();
start_threads();
set_start_index();

(This will be in a separate PR; this one here just updates the data model and text spec syntax)
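For illustration, that sequence could be modelled roughly as follows. This is a toy Python sketch, not real seL4 or loader APIs; the `Kernel` class and its methods are made-up stand-ins mirroring the pseudo-calls above.

```python
# Toy model of the intended loader sequence: configure the domain schedule
# first, start threads, and set the start index last, since a domain
# switch + schedule reload only happens after the final step.

class Kernel:
    def __init__(self, num_slots=8):
        # (domain, length) pairs; (0, 0) acts as an end marker
        self.schedule = [(0, 0)] * num_slots
        self.start_index = 0
        self.started = False

    def configure_domains(self, entries, index_shift=0):
        # copy the supplied schedule into the kernel array at an offset
        for i, entry in enumerate(entries):
            self.schedule[index_shift + i] = entry

    def set_start_index(self, index):
        # must be the last thing the initialiser does
        self.start_index = index
        self.started = True

kernel = Kernel()

kernel.configure_domains([(1, 10), (2, 10)], index_shift=1)  # 1. configure_domains()
threads = ["root_task"]                                      # 2. start_threads()
kernel.set_start_index(1)                                    # 3. set_start_index()
```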
Yep, that initialisation sequence is what I followed in my draft for the Rust Cap-DL changes. In testing this seems to work well.
Mostly Word64 missing I think, the rest seems fine, but I don't know capDL, nor Haskell.
I really don't understand what the point of index_shift is; just combine it with start_index.
However, having either non-zero only makes sense if the domain cap is passed on so that the other entries can be used somehow. If this doesn't happen, then there is no need for any offset either and both start_index and index_shift can always be zero.
I initially also thought so, but that does not work. If index_shift is 0, you will always be overwriting the domain entry that the initialiser is running in. If you don't want that, you could specify a schedule that happens to have the same value, but then you need to construct that every time and need to know what that value is, i.e. it's no longer the kernel that defines it. So index_shift = 1 is useful. Microkit is probably going to set that to 1 by default, and I'm planning to do the same for CAmkES.

For index_shift > 1 I indeed see less use, but if you do pass on the domain cap at some point (currently the loader doesn't), then you might want to set up your initial schedule somewhere at the end so that your later user-level stuff can have indices from 0 for simplicity. Overall, people don't need to use it; the default of 0 just comes out to the standard kernel API. But if you do want to avoid writing to the currently running index 0, you need something like it.
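A small sketch of this point, under assumed semantics (the kernel's boot schedule has the initialiser running in entry 0, and the spec schedule is copied in starting at index_shift; the `install` helper is hypothetical):

```python
def install(kernel_schedule, spec_schedule, index_shift=0):
    """Copy the spec schedule into the kernel array at offset index_shift."""
    out = list(kernel_schedule)
    for i, entry in enumerate(spec_schedule):
        out[index_shift + i] = entry
    return out

# Assumed boot state: the initialiser runs in entry 0, the rest are
# (0, 0) end markers.
boot = [(0, 1)] + [(0, 0)] * 7
spec = [(1, 10), (2, 10)]

clobbered = install(boot, spec, index_shift=0)
preserved = install(boot, spec, index_shift=1)

assert clobbered[0] == (1, 10)   # the entry the initialiser runs in is overwritten
assert preserved[0] == (0, 1)    # index_shift = 1 keeps it intact
```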
But that's okay, you don't need it after init and domain switch + schedule reload only happens after SetStart (which must be the last thing the initialiser does).
None of this explains why it's not sufficient to use start_index alone. My second point is: if the goal is no constraints, then you do need to pass the domain cap and also have no artificial restrictions. Then why not follow the original API and add an explicit schedule index per schedule entry instead of one offset?
You don't know that, you might want to run more user-level initialisation stuff after the capDL loader.
Agreed, yes.
Because you might want start_index to point into the middle or end of the schedule(s) you set up. It's not the index at which the schedule array starts to be copied, but the index in the schedule at which the kernel starts running.
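To illustrate the distinction with a toy model (assuming the copy-at-offset semantics discussed in this thread; the variable names are illustrative, not real loader code):

```python
# index_shift: where the entries are *written* in the kernel array.
# start_index: where the kernel *starts running* -- it may point into
# the middle or end of the installed schedule.

kernel_array = [(0, 0)] * 8
spec = [(0, 10), (1, 10), (0, 0), (2, 2)]
index_shift = 0
start_index = 3

for i, entry in enumerate(spec):
    kernel_array[index_shift + i] = entry

# The kernel begins executing at start_index: domain 2 for 2 ticks.
assert kernel_array[start_index] == (2, 2)
```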
I do agree with that, and I want to change the loader to allow passing the domain cap on (there is no reason not to be able to do that if specified), but that's a separate issue.
We could do that, but it would make normal schedules annoying to write and read, because you have information in there that is in most cases redundant.
I don't see any remaining limitations on the actual schedule that is being set up. You can achieve all kernel schedules without knowing kernel internals. You might have to pad with end markers, but that is not a limitation, that is just input format. Having to rotate the schedule is an actual limitation on what ends up in the kernel.
Force-pushed 2ee4015 to 9789538.
Then you defer the SetStart call, or, if you want to keep domain 0 alive, configure it at entry 0. Or move entry 0 to the end and re-use the now free first slot.
But that's not how the human brain works: people tend to write things out in sequential order. If you want a schedule that starts somewhere in the middle, having to write it rotated fights that. So I guess I'm actually arguing for retaining start_index.
Agreed. But the same is true for having both start_index and index_shift.
Agreed, except for index_shift.
In a way that is semantically equivalent, so not in a limiting way. Which is the whole point, as it simplifies things.

First, it seems simple enough to keep both start_index and index_shift. If domain schedules are only defined via capDL, then all that's needed are the schedule entries and we can have hard-coded values for the rest. But if loaders want to play (probably unnecessary) complicated games during init or rewrite user configs, then they may want to fiddle with index_shift.

The only good argument for having index_shift is that loader use case; I guess an argument for having start_index is switching between schedules. I suppose to me it feels like unnecessary complexity.
}

domains {
    schedule: [(0, 10), (1, 10), (0, 0), (2, 2)]
Should this be considered an invalid schedule because it does not end with an end marker?
That is, should we enforce: schedule: [(0, 10), (1, 10), (0, 0), (2, 2), (0, 0)]?
My feeling is yes.
Also, as start_index is 3, schedule entries 0 and 1 will never be executed.
Should this be considered an invalid schedule because it does not end with an end marker?
That is, should we enforce:
schedule: [(0, 10), (1, 10), (0, 0), (2, 2), (0, 0)]?
My feeling is yes.
I don't agree, because that would be redundant. Wherever you can legally write it, there will always automatically be an end marker after the last entry in the kernel. So we'd be forcing notational redundancy and runtime redundancy.
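A sketch of this redundancy point, under assumed semantics (the kernel's schedule array is larger than the spec schedule and zero-filled, with (0, 0) acting as an end marker; the `install` helper is hypothetical):

```python
KERNEL_SLOTS = 16

def install(spec, index_shift=0):
    """Copy a spec schedule into a fresh, zero-filled kernel array."""
    arr = [(0, 0)] * KERNEL_SLOTS        # zero-filled = all end markers
    for i, entry in enumerate(spec):
        arr[index_shift + i] = entry
    return arr

# No trailing (0, 0) in the spec, yet an end marker automatically
# follows the last copied entry in the kernel array.
arr = install([(0, 10), (1, 10), (0, 0), (2, 2)])
assert arr[3] == (2, 2)
assert arr[4] == (0, 0)
```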
Also, as start_index is 3, schedule entries 0 and 1 will never be executed.
Yes, if there is no domain cap available, that is correct. The plan is to change that, though, and allow passing on the domain cap. Then it becomes exactly the kind of scenario that we want to support (two schedules that a monitor/manager with the domain cap can switch between).
Sure, you can do all that, and might want to and should be able to, but you should not be forced to.
I'm not proposing to rotate schedules, I'm just making a kernel feature available that already exists. Of course most schedules will start at the beginning, that's why the default is 0. Like the shift, people who don't need it don't even need to know it exists, but if you do need it, it's there.
You're assuming a user scenario that doesn't have any other constraints where you are free to set up schedules any way you like. Maybe that is the case, maybe not. I don't see any reason to force a specific one.
Ok, it looks like the naming is causing a lot of the confusion and I think I see it myself now. Any suggestions for a better name for the start index? Maybe
They are both optional -- normal input specs don't need to mention them (the C data structure does, but that is not for human consumption). They just exist so that we don't force specific scenarios on the user when we don't need to.
Interesting. I had not thought about that case and you are right, you might want to keep it at actual 0 even though you have specified a shift. That is a restriction and should not be there. I'll keep it absolute.
That's probably also confusing. I don't have a problem with keeping it absolute, I just hadn't thought about the case where you want to keep it at absolute 0.
It is definitely limiting :-): if you have some other constraint that says you can't rotate it for some reason you can no longer achieve the schedule you need. I don't know any specific scenario that has that, but compared to an option that does not have that constraint, there is a limitation.
The schedules that we're so far expecting from Microkit and CAmkES are all going to be fairly short lists with no provided start index (starting at the beginning of the provided schedule), and probably a shift index of 1 (the latter is not strictly necessary, but it makes it easier to reason that nothing has interfered with the loader until it is finished).
The standard loaders won't be doing that kind of stuff, but I don't want to write something now and later get feature requests for breaking infrastructure changes because we have hard-coded policy in the thing that is supposed to be the mechanism for allowing users to set up policy.
So far that is what Microkit wants to do. We could put schedules at the end from the Microkit side, I don't think there is currently any strong requirement in either direction, but I have the feeling people would at least usually expect them at the beginning (not that it matters too much).
The plan was to always call SetStart when there is a schedule provided. There should probably be a clearer distinction between "not provided" (= default) and "do not call SetStart". I'd still like the default case, when no schedule is provided, to keep its current behaviour.
Depending on implementation, the spec is already a runtime argument to the loader (in Rust it can be, in C it currently isn't). Adding a proper interface for second-stage initialisation would indeed be something to look into, I agree, but one step at a time :-)
An alternative is to keep it relative, but give a way to not call SetStart: because the only possible continuation is to keep running with the current domain, as nothing else has been set up yet that could be used. The only sensible call would be within the configured schedule, and making it relative does make sense then.
I got confused before and thought schedules can be rotated, but they can't: the start index is used every time the schedule is reset, not only initially. So it is not possible to get the effect of a rotated schedule b,c,a by configuring a,b,c with start index 1; the only way to achieve that is by configuring the schedule as b,c,a directly.

But if the intention is to sometimes run a,b,c and sometimes just b,c, and switch between them by changing the starting index, then the above example would make sense, because that's the intention. So agreed that removing start_index would be limiting, although for very obscure use cases.
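A toy simulation of the semantics described above (assumed behaviour: on wrap or end marker the kernel goes back to start_index, not to 0):

```python
def run(schedule, start_index, steps):
    """Simulate which entries execute; reset returns to start_index."""
    ran, i = [], start_index
    for _ in range(steps):
        ran.append(schedule[i])
        i += 1
        if i == len(schedule) or schedule[i] == (0, 0):
            i = start_index            # reset goes to start_index, not 0
    return ran

a, b, c = (1, 1), (2, 1), (3, 1)

# With start_index 1, entry a never runs: the loop is b,c -- not b,c,a.
assert run([a, b, c], 1, 6) == [b, c, b, c, b, c]
# With start_index 0, the full schedule loops.
assert run([a, b, c], 0, 6) == [a, b, c, a, b, c]
```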
But that would add unnecessary complexity and make it seem like it does matter, while it's perfectly safe to overwrite entry 0. Then this misconception will spread and people will think it's necessary.
For its own purposes or to expose it to user space? I disagree with the own purposes use case, that shouldn't be necessary and would clash with user control.
When multiple entities try to control and manage the same resources, you have to be very clear how that's done. So I'm convinced the loader should not use the domain schedule for its own purposes.
Agreed.
Okay, I think I've changed my mind about a few things. My current opinion is:
I think it would also be fine to not support user-space-managed domain schedules together with capDL-defined domains: the added value is very low, and overall complexity is lower if all domain management happens in one place, either capDL or user space. Considering this, I think it would be fine to not have index_shift; the reduced complexity is another argument against it.
Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Use mempty for the None case in prettyDomains and make sure that all sections end with a newline and start with an empty line. This should give us consistent newlines at the end of each section and at the end of the file. Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Add an index_shift declaration so that supplied domain schedules do not have to start at index 0 in the kernel, but at index_shift instead. Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Provide an option to not start the domain schedule even when one is provided. Intended for situations where further initialisation is running in domain 0 after the capDL loader finishes. This scenario would need the domain cap to be passed on to user-space, which the loader currently does not do yet, but the idea is to allow that later. Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Ok, I think we agree and have converged enough to merge this. I've done the following:
How Microkit and CAmkES will use this interface is mostly up to those frameworks. They may want to provide/force policy, but the loader should still be as general as we can support without too much pain. I think that is achieved. For the CAmkES update, I'll be using the defaults, i.e. overwriting from index 0 and starting the schedule at index 0. I agree that using a default shift of 1 would just create confusion about whether that is safe or not.
On this part:
Eventually maybe yes, but currently the domain cap is not yet passed on and the current tools don't expect one. So for now the index_shift is still useful to avoid bloat in the capDL spec if people want to put their schedule at the end or somewhere else than the beginning, so I left it in.
But if the domain cap is not passed on, no one else can configure schedules, so there is no need for this. Or do you mean other loader code using it?
Not being able to pass on the domain cap is a current limitation that I'm intending to lift. I need to look a bit deeper into that to figure out whether this needs a kernel change or not. Currently the control caps are unique (cannot be copied), which makes it awkward for the loader to pass them on if the loader also needs to use them late, like the domain cap.
The same is true for the IRQ Control cap, can you have a look at that too while you're at it? Domain cap should be trivial though, as it can't be badged or anything.
I'll have a look at both, yes.
Should not be too hard, it's just that the proof so far was allowed to take a bunch of shortcuts, which I will have to remove.
Add domain schedule specifications to the capDL-tool parsing and printing process. Does not yet change the C loader or C output of the capDL-tool (left for a separate PR).
Add docs for syntax and intended semantics of those spec elements; bump minor spec version number to 1.1.
Add missing architecture tags in the docs in a separate commit.