
add domain schedules to capDL-tool#81

Merged
lsf37 merged 7 commits into master from dyn-domains
Mar 20, 2026

Conversation

@lsf37
Member

@lsf37 lsf37 commented Mar 13, 2026

  • Add domain schedule specifications to the capDL-tool parsing and printing process. Does not yet change the C loader or C output of the capDL-tool (left for a separate PR).

  • Add docs for syntax and intended semantics of those spec elements; bump minor spec version number to 1.1.

  • Add missing architecture tags in the docs in a separate commit.
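Purely as an illustration (the authoritative syntax is in the PR's doc changes; apart from `schedule`, which appears in the examples below, the field name here follows the `domain_set_start` naming settled on later in this thread and may differ from the final grammar), a spec using the new section might look like:

```
domains {
    schedule: [(0, 10), (1, 10), (0, 0), (2, 2)]
    domain_set_start: 3
}
```

Here `(0, 0)` acts as an end marker, so this sets up two schedules, with the kernel initially running the one starting at entry 3.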

With RFC-20 (seL4/rfcs#33) the system
initialiser is now the right place to initialise the domain schedule.

Add a "domains" section to capDL-tool parsing and printing that contains
a domain start index and a list of number pairs for the schedule to be
passed as is to the kernel.

This commit does not yet update the C output of capDL-tool.

Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
@lsf37 lsf37 requested review from Indanz and corlewis March 13, 2026 00:13
@lsf37
Member Author

lsf37 commented Mar 13, 2026

@Kswin01 and @Ivan-Velickovic do these look fine from your side for Microkit and the Rust loader? The domain schedule is optional and the start index is optional as well. The intention is that the initialisation sequence will be:

  configure_domains();
  start_threads();
  set_start_index();

(This will be in a separate PR; this one just updates the data model and the text spec syntax.)

@Kswin01

Kswin01 commented Mar 13, 2026

Yep, that initialisation sequence is what I followed in my draft of the Rust capDL changes. In testing this seems to work well.

Comment thread capDL-tool/CapDL/Parser.hs Outdated
Comment thread capDL-tool/doc/capDL.md Outdated
Comment thread capDL-tool/doc/capDL.md Outdated
Comment thread capDL-tool/CapDL/PrintModel.hs Outdated
Contributor

@Indanz Indanz left a comment


Mostly Word64 missing I think, the rest seems fine, but I don't know capDL, nor Haskell.

I really don't understand what the point of index_shift is, just combine it with start_index.

However, having either non-zero only makes sense if the domain cap is passed on so that the other entries can be used somehow. If this doesn't happen, then there is no need for any offset either and both start_index and index_shift can always be zero.

Comment thread capDL-tool/CapDL/AST.hs Outdated
Comment thread capDL-tool/CapDL/AST.hs
Comment thread capDL-tool/CapDL/Model.hs Outdated
Comment thread capDL-tool/CapDL/Parser.hs Outdated
Comment thread capDL-tool/CapDL/PrintC.hs
Comment thread capDL-tool/CapDL/PrintModel.hs Outdated
Comment thread capDL-tool/CapDL/PrintXml.hs Outdated
Comment thread capDL-tool/doc/capDL.md Outdated
Comment thread capDL-tool/example-arm.cdl Outdated
@lsf37
Member Author

lsf37 commented Mar 16, 2026

I really don't understand what the point of index_shift is, just combine it with start_index.

However, having either non-zero only makes sense if the domain cap is passed on so that the other entries can be used somehow. If this doesn't happen, then there is no need for any offset either and both start_index and index_shift can always be zero.

I initially also thought so, but that does not work. If index_shift is 0, you will always be overwriting the domain entry that the initialiser is running in. If you don't want that, you could specify a schedule that happens to have the same value, but then you need to construct that every time and need to know what that value is, i.e. it's no longer the kernel that defines that. So index_shift = 1 is useful. Microkit is probably going to set that to 1 by default and I'm planning to do that to CAmkES as well.

For index_shift > 1 I indeed see less use, but if you do pass on the domain cap at some point (currently it doesn't), then you might want to set up your initial schedule somewhere at the end so that your later user-level stuff can have indices from 0 for simplicity.

Overall, people don't need to use it, the default of 0 will just come out to the standard kernel API, but if you do want to avoid writing to the currently running index 0, you need something like it.
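As a sketch of that semantics (Python stand-in with invented names; the real loaders are C and Rust), the shift just offsets where the spec's entries are written into the kernel's domain schedule array, so a shift of 1 leaves entry 0 — the one the initialiser is running in — untouched:

```python
def apply_schedule(kernel_sched, spec_sched, index_shift=0):
    """Copy spec entries into the kernel's schedule array at an offset.

    kernel_sched: list of (domain, length) pairs, the kernel's array.
    spec_sched:   entries from the capDL "domains" section.
    index_shift:  kernel index at which the first spec entry lands.
    """
    for i, entry in enumerate(spec_sched):
        kernel_sched[index_shift + i] = entry
    return kernel_sched

# The kernel boots running the initialiser in schedule entry 0.
kernel = [(0, 1), (0, 0), (0, 0), (0, 0)]

# With index_shift = 1 the spec schedule lands at entries 1..2,
# so the currently running entry 0 is never overwritten.
apply_schedule(kernel, [(1, 10), (2, 10)], index_shift=1)
```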

@Indanz
Contributor

Indanz commented Mar 16, 2026

I really don't understand what the point of index_shift is, just combine it with start_index.
However, having either non-zero only makes sense if the domain cap is passed on so that the other entries can be used somehow. If this doesn't happen, then there is no need for any offset either and both start_index and index_shift can always be zero.

I initially also thought so, but that does not work. If index_shift is 0, you will always be overwriting the domain entry that the initialiser is running in.

But that's okay, you don't need it after init and domain switch + schedule reload only happens after SetStart (which must be the last thing the initialiser does).

For index_shift > 1 I indeed see less use, but if you do pass on the domain cap at some point (currently it doesn't), then you might want to set up your initial schedule somewhere at the end so that your later user-level stuff can have indices from 0 for simplicity.

None of this explains why it's not sufficient to use the start_index value as implicit index_shift. That was my first point.

My second point is: if the goal is no constraints, then you do need to pass the domain cap and also have no artificial restrictions. Then why not follow the original API and add an explicit schedule index per schedule entry instead of one offset? Adding start_index and index_shift only partially reduces constraints, but it still adds limitations.

@lsf37
Member Author

lsf37 commented Mar 16, 2026

I initially also thought so, but that does not work. If index_shift is 0, you will always be overwriting the domain entry that the initialiser is running in.

But that's okay, you don't need it after init

You don't know that, you might want to run more user-level initialisation stuff after the capDL loader.

and domain switch + schedule reload only happens after SetStart (which must be the last thing the initialiser does).

Agreed, yes.

For index_shift > 1 I indeed see less use, but if you do pass on the domain cap at some point (currently it doesn't), then you might want to set up your initial schedule somewhere at the end so that your later user-level stuff can have indices from 0 for simplicity.

None of this explains why it's not sufficient to use the start_index value as implicit index_shift. That was my first point.

Because you might want start_index to point into the middle or end of the schedule(s) you set up. It's not the index at which copying into the schedule array starts, but the index in the schedule at which the kernel starts running.

My second point is: If the goal is no constraints, then you do need to pass the domain cap

I do agree with that, and I want to change the loader to allow passing the domain cap on (there's no reason not to be able to do that if specified), but that's a separate issue.

and also have no artificial restrictions. Then why not follow the original API and add explicit schedule index per schedule entry instead of one offset?

We could do that, but it would make normal schedules annoying to write and read, because you have information in there that is in most cases redundant.

Adding start_index and index_shift only partially reduces constraints, but it still adds limitations.

I don't see any remaining limitations on the actual schedule that is being set up. You can achieve all kernel schedules without knowing kernel internals. You might have to pad with end markers, but that is not a limitation, that is just input format. Having to rotate the schedule is an actual limitation on what ends up in the kernel.

@lsf37 lsf37 force-pushed the dyn-domains branch 3 times, most recently from 2ee4015 to 9789538, March 17, 2026 02:47
@Indanz
Contributor

Indanz commented Mar 17, 2026

I initially also thought so, but that does not work. If index_shift is 0, you will always be overwriting the domain entry that the initialiser is running in.

But that's okay, you don't need it after init

You don't know that, you might want to run more user-level initialisation stuff after the capDL loader.

Then you defer the SetStart call, or, if you want to keep domain 0 alive, configure it at entry 0.

Or move entry 0 to the end and re-use the now free first slot.

None of this explains why it's not sufficient to use the start_index value as implicit index_shift. That was my first point.

Because you might want start_index to point into the middle or end of the schedule(s) you set up. It's not the index at which the schedule array start to copy, but the index in the schedule at which the kernel starts running.

But that's not how the human brain works: People tend to write things out in sequential order.
Being able to rotate a schedule around in nonsensical ways is only a theoretical advantage.

If you want schedule [c, a, b] with starting index 1, most people would write that out
as [a, b, c] and set starting index to 0. Allowing the former does not add more expressive
value, it only adds complexity.

So I guess I'm actually arguing for retaining index_shift and hard-coding start_index to 0
(relative to index_shift), but to me calling it start_index makes more sense.

and also have no artificial restrictions. Then why not follow the original API and add explicit schedule index per schedule entry instead of one offset?

We could do that, but it would make normal schedules annoying to write and read, because you have information in there that is in most cases redundant.

Agreed. But the same is true for having both start_index and index_shift, although to a lesser extent.

Adding start_index and index_shift only partially reduces constraints, but it still adds limitations.

I don't see any remaining limitations on the actual schedule that is being set up. You can achieve all kernel schedules without knowing kernel internals. You might have to pad with end markers, but that is not a limitation, that is just input format.

Agreed, except for start_index being relative to index_shift, which strongly implies it must be within range of the schedule and can't stay at absolute 0 or something else. I suppose I don't like start_index being relative, as I find that confusing and it feels limiting, though it's not if negative and out-of-bound values are allowed.

Having to rotate the schedule is an actual limitation on what ends up in the kernel.

In a way that is semantically equivalent, so not in a limiting way. Which is the whole point, as it simplifies things.

First, it seems simple enough to keep both start_index and index_shift, so all this isn't a big deal. But to me it seems there is some friction between what is needed for capDL and what loaders may want to do.

If domain schedules are only defined via capDL, then all that's needed are the schedule entries, and we can have a hard-coded start_index and index_shift of 0 (or shifted to the end of the domain schedule array).

But if loaders want to play (probably unnecessary) complicated games during init or rewriting user configs, then they may want to fiddle with index_shift and start_index. I still haven't heard any concrete example where this would be necessary and I fear that loaders will abuse index_shift for their own purposes instead of just exposing it to users.

The only good argument for having index_shift is that user space may want to configure their own schedules at the beginning instead of the end.

I guess an argument for having start_index is that if it's not set, SetStart isn't called by the capDL loader? But then how does the loader know which value to use for SetStart?

I suppose to me it feels like index_shift should be an argument given to the runtime capDL loader instead of configured in capDL directly. And if we do have a start_index, its value should be returned to the caller of capDL, instead of the capDL loader calling SetStart itself.

Review thread excerpt:

```
}

domains {
schedule: [(0, 10), (1, 10), (0, 0), (2, 2)]
```
Contributor


Should this be considered an invalid schedule because it does not end with an end marker?

That is, should we enforce: schedule: [(0, 10), (1, 10), (0, 0), (2, 2), (0, 0)]?

My feeling is yes.

Also, as start_index is 3, schedule entries 0 and 1 will never be executed.

Member Author


Should this be considered an invalid schedule because it does not end with an end marker?

That is, should we enforce: schedule: [(0, 10), (1, 10), (0, 0), (2, 2), (0, 0)]?

My feeling is yes.

I don't agree, because that would be redundant. Wherever you can legally write it, there will always automatically be an end marker after the last entry in the kernel. So we'd be forcing notational redundancy and runtime redundancy.

Also, as start_index is 3, schedule entries 0 and 1 will never be executed.

Yes, if there is no domain cap available, that is correct. The plan is to change that, though, and allow passing on the domain cap. Then it becomes exactly the kind of scenario that we want to support (two schedules that a monitor/manager with the domain cap can switch between).

@lsf37
Member Author

lsf37 commented Mar 19, 2026

I initially also thought so, but that does not work. If index_shift is 0, you will always be overwriting the domain entry that the initialiser is running in.

But that's okay, you don't need it after init

You don't know that, you might want to run more user-level initialisation stuff after the capDL loader.

Then you defer the SetStart call, or, if you want to keep domain 0 alive, configure it at entry 0.

Or move entry 0 to the end and re-use the now free first slot.

Sure, you can do all that, and might want to and should be able to, but you should not be forced to.

None of this explains why it's not sufficient to use the start_index value as implicit index_shift. That was my first point.

Because you might want start_index to point into the middle or end of the schedule(s) you set up. It's not the index at which the schedule array start to copy, but the index in the schedule at which the kernel starts running.

But that's not how the human brain works: People tend to write things out in sequential order. Being able to rotate a schedule around in nonsensical ways is only a theoretical advantage.

I'm not proposing to rotate schedules, I'm just making a kernel feature available that already exists. Of course most schedules will start at the beginning, that's why the default is 0. Like the shift, people who don't need it don't even need to know it exists, but if you do need it, it's there.

If you want schedule [c, a, b] with starting index 1, most people would write that out as [a, b, c] and set starting index to 0. Allowing the former does not add more expressive value, it only adds complexity.

You're assuming a user scenario that doesn't have any other constraints where you are free to set up schedules any way you like. Maybe that is the case, maybe not. I don't see any reason to force a specific one.

So I guess I'm actually arguing for retaining index_shift and hard-coding start_index to 0 (relative to index_shift), but to me calling it start_index makes more sense.

Ok, it looks like the naming is causing a lot of the confusion and I think I see it myself now. Any suggestions for a better name for the start index? Maybe run_from? Or just domain_set_start? That would indicate what kernel API it is targeting.

and also have no artificial restrictions. Then why not follow the original API and add explicit schedule index per schedule entry instead of one offset?

We could do that, but it would make normal schedules annoying to write and read, because you have information in there that is in most cases redundant.

Agreed. But the same is true for having both start_index and index_shift, although to a lesser extent.

They are both optional -- normal input specs don't need to mention them (the C data structure does, but that is not for human consumption). They just exist so we don't force specific scenarios on the user unnecessarily.

Adding start_index and index_shift only partially reduces constraints, but it still adds limitations.

I don't see any remaining limitations on the actual schedule that is being set up. You can achieve all kernel schedules without knowing kernel internals. You might have to pad with end markers, but that is not a limitation, that is just input format.

Agreed, except for start_index being relative to index_shift, which strongly implies it must be within range of the schedule and can't stay at absolute 0 or something else.

Interesting. I had not thought about that case and you are right, you might want to keep it at actual 0 even though you have specified a shift. That is a restriction and should not be there. I'll keep it absolute.

I suppose I don't like start_index being relative, as I find that confusing and it feels limiting, though it's not if negative and out-of-bound values are allowed.

That's probably also confusing. I don't have a problem with keeping it absolute, I just hadn't thought about the case where you want to keep it at absolute 0.

Having to rotate the schedule is an actual limitation on what ends up in the kernel.

In a way that is semantically equivalent, so not in a limiting way. Which is the whole point, as it simplifies things.

It is definitely limiting :-): if you have some other constraint that says you can't rotate it for some reason you can no longer achieve the schedule you need. I don't know any specific scenario that has that, but compared to an option that does not have that constraint, there is a limitation.

First, it seems simple enough to keep both start_index and index_shift, so all this isn't a big deal. But to me it seems there is some friction between what is needed for capDL and what loaders may want to do.

The schedules that we're so far expecting from Microkit and CAmkES are all going to be fairly short lists with no provided start index (starting at the beginning of the provided schedule), and probably a shift index of 1 (the latter is not strictly necessary, but it makes it easier to reason that nothing has interfered with the loader until it is finished).

If domain schedules are only defined via capDL, then all that's needed are the schedule entries, and we can have a hard-coded start_index and index_shift of 0 (or shifted to the end of the domain schedule array).

But if loaders want to play (probably unnecessary) complicated games during init or rewriting user configs, then they may want to fiddle with index_shift and start_index. I still haven't heard any concrete example where this would be necessary and I fear that loaders will abuse index_shift for their own purposes instead of just exposing it to users.

The standard loaders won't be doing that kind of stuff, but I don't want to write something now and later get feature requests for breaking infrastructure changes because we have hard-coded policy in the thing that is supposed to be the mechanism for allowing users to set up policy.

The only good argument for having index_shift is that user space may want to configure their own schedules at the beginning instead of the end.

So far that is what Microkit wants to do. We could put schedules at the end from the Microkit side, I don't think there is currently any strong requirement in either direction, but I have the feeling people would at least usually expect them at the beginning (not that it matters too much).

I guess an argument for having start_index is that if it's not set, SetStart isn't called by the capDL loader? But then how does the loader know which value to use for SetStart?

The plan was to always call SetStart when there is a schedule provided. There should probably be a clearer distinction between "not provided" (=default) and "do not call SetStart".

I'd still like the behaviour of the default case when no domain_set_start is provided to be that the schedule starts running at its own initial index, because that is what all applications so far want. It would be good if that had the minimum spec overhead. So I guess the spec side should have a construct for saying "don't do any domain_set_start" to handle that case explicitly. In the C loader the field is always present anyway, so -1 there could mean don't start, and any other index will mean a set_start from there. In Rust it'd just be an option type like in Haskell. I think that'd work.
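A toy sketch of that encoding (Python stand-in for the Haskell/Rust option type and the C field; all names hypothetical):

```python
from typing import Optional

NO_SET_START = -1  # C-side sentinel: do not call DomainSetStart


def encode_set_start(start: Optional[int]) -> int:
    """Map the spec-level option onto the always-present C field.

    None means "do not call DomainSetStart"; any index means call it
    with that index. The spec default (field absent) would map to
    encode_set_start(0): start the schedule at its own first entry.
    """
    return NO_SET_START if start is None else start


def loader_calls_set_start(field: int) -> bool:
    return field != NO_SET_START
```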

I suppose to me it feels like index_shift should be an argument given to the runtime capDL loader instead of configured in capDL directly. And if we do have a start_index, its value should be returned to the caller of capDL, instead of the capDL loader calling SetStart itself.

Depending on implementation, the spec is already a runtime argument to the loader (in Rust it can be, in C it currently isn't). Adding a proper interface for second-stage initialisation would indeed be something to look into, I agree, but one step at a time :-)

@Indanz
Contributor

Indanz commented Mar 19, 2026

So I guess I'm actually arguing for retaining index_shift and hard-coding start_index to 0 (relative to index_shift), but to me calling it start_index makes more sense.

Ok, it looks like the naming is causing a lot of the confusion and I think I see it myself now. Any suggestions for a better name for the start index? Maybe run_from? Or just domain_set_start? That would indicate what kernel API it is targeting.

domain_set_start would be good. But officially it should be domainset_schedule_set_start. ;-)

domain_schedule_start?

Agreed, except for start_index being relative to index_shift, which strongly implies it must be within range of the schedule and can't stay at absolute 0 or something else.

Interesting. I had not thought about that case and you are right, you might want to keep it at actual 0 even though you have specified a shift. That is a restriction and should not be there. I'll keep it absolute.

An alternative is to keep it relative, but give a way to not call SetStart: the only possible continuation then is to keep running in the current domain, as nothing else has been set up that can be used yet. The only sensible call would be within the configured schedule, and making it relative does make sense in that case.

It is definitely limiting :-): if you have some other constraint that says you can't rotate it for some reason you can no longer achieve the schedule you need. I don't know any specific scenario that has that, but compared to an option that does not have that constraint, there is a limitation.

I got confused before and thought schedules can be rotated, but they can't: The start index is used every time the schedule is reset, not only initially.

So it is not possible to have a schedule [a, b, c] and start at b and get: b, c, a, b, c, ... execution.
If you do SetStart on b, a will never be executed.

The only way to achieve that is to configure the schedule as [b, c, a] with starting index 0, or to call SetStart at the right moment.

But if the intention is to sometimes run a,b,c and sometimes just b,c, and switch between them by changing the starting index, then the above example would make sense, because that's the intention. So agreed that removing start_index would be limiting, although for very obscure use cases.
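That reset behaviour can be sketched as a toy simulation in Python, assuming the semantics described here (run from the start index until the end of the array or an end marker `(0, 0)`, then wrap back to the start index, never to absolute index 0):

```python
def run_schedule(schedule, start_index, steps):
    """Return the domains executed by a schedule that always wraps
    back to start_index when it reaches the end of the array or an
    end marker (0, 0)."""
    out = []
    i = start_index
    for _ in range(steps):
        domain, _length = schedule[i]
        out.append(domain)
        i += 1
        if i >= len(schedule) or schedule[i] == (0, 0):
            i = start_index
    return out

# [a, b, c] with SetStart on b: a is never executed.
sched = [("a", 1), ("b", 1), ("c", 1)]
print(run_schedule(sched, 1, 6))  # ['b', 'c', 'b', 'c', 'b', 'c']
```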

probably a shift index of 1 (the latter is not strictly necessary, but it'll be easier to reason that nothing has interfered with the loader until it is finished).

But that would add unnecessary complexity and make it seem like it does matter, while it's perfectly safe to overwrite entry 0. Then this misconception will spread and people will think it's necessary.

The only good argument for having index_shift is that user space may want to configure their own schedules at the beginning instead of the end.

So far that is what Microkit wants to do.

For its own purposes or to expose it to user space? I disagree with the own purposes use case, that shouldn't be necessary and would clash with user control.

We could put schedules at the end from the Microkit side, I don't think there is currently any strong requirement in either direction, but I have the feeling people would at least usually expect them at the beginning (not that it matters too much).

When multiple entities try to control and manage the same resources, you have to be very clear how that's done.
I think it's good to be very rigid in this. In this case it's relatively easy because capDL only runs at boot and user space only runs later.

So I'm convinced index_shift is needed to not limit this use case. But it should be fully user configurable, not something Microkit adjusts behind the user's back, because then the user loses overview.

I'd still like the behaviour of the default case when no domain_set_start is provided to be that the schedule starts running at its own initial index, because that is what all applications so far want. It would be good if that had the minimum spec overhead.

Agreed.

I suppose to me it feels like index_shift should be an argument given to the runtime capDL loader instead of configured in capDL directly. And if we do have a start_index, its value should be returned to the caller of capDL, instead of the capDL loader calling SetStart itself.

Depending on implementation, the spec is already a runtime argument to the loader (in Rust it can be, in C it currently isn't). Adding a proper interface for second-stage initialisation would indeed be something to look into, I agree, but one step at a time :-)

Okay, I think I've changed my mind about a few things. My current opinion is:

  • index_shift can stay, but only defined by users, never used by e.g. Microkit or other intermediates for their own shenanigans. This because the use case is that user space wants to manage domains at run time and then it needs to know which indices it can use.
  • start_index should stay, and probably stay relative and fall within the configured schedule, as that's more user friendly.
  • There is no need to specify that SetStart should not be called. The only use case for this is when user space wants to manage its own domains and start with its own schedule, while also having capDL configure some other schedule that may be used later. That's not very useful: user space can either configure all schedules itself or retain entry 0 to achieve this. That said, it's probably trivial to add support for this anyway.

I think it would also be fine to not support user space managed domain schedules together with capDL defined domains: the added value is very low, and overall complexity is lower if all domain management happens in one place, either capDL or user space. Considering this, I think it would be fine to not have index_shift.

Another argument against index_shift: if user space does manage domains and wants to retain the capDL schedule, it needs to know where that schedule is anyway; once it does, it doesn't matter much where it is. If user space doesn't want to retain the capDL schedule, it can just overwrite it with its own.

lsf37 added 3 commits March 20, 2026 11:52
Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Use mempty for the None case in prettyDomains and make sure that all
sections end with a newline and start with an empty line. This should
give us consistent newlines at the end of each section and at the end of
the file.

Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Add an index_shift declaration so that supplied domain schedules do not
have to start at index 0 in the kernel, but at index_shift instead.

Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
lsf37 added 2 commits March 20, 2026 13:45
Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
Provide an option to not start the domain schedule even when one is
provided. Intended for situations where further initialisation is
running in domain 0 after the capDL loader finishes.

This scenario would need the domain cap to be passed on to user-space,
which the loader currently does not do yet, but the idea is to allow
that later.

Signed-off-by: Gerwin Klein <gerwin.klein@proofcraft.systems>
@lsf37
Member Author

lsf37 commented Mar 20, 2026

Ok, I think we agree and have converged enough to merge this. I've done the following:

  • renamed to domain_set_start
  • kept domain_set_start relative to the provided schedule
  • have added an option anyway to not call DomainSetStart, even though I agree that it is not going to be used in the current state. I've added it, because it's easy to do and it does remove the last limitation that I can see when the domain cap can eventually be passed on.

How Microkit and CAmkES will use this interface is mostly up to those frameworks. They may want to provide/force policy, but the loader should still be as general as we can support without too much pain. I think that is achieved.

For the CAmkES update, I'll be using the defaults, i.e. overwriting from index 0 and starting the schedule at index 0. I agree that using a default shift of 1 would just create confusion about whether that is safe or not.

@lsf37
Member Author

lsf37 commented Mar 20, 2026

On this part:

I think it would also be fine to not support user space managed domain schedules together with capDL defined domains: the added value is very low, and overall complexity is lower if all domain management happens in one place, either capDL or user space. Considering this, I think it would be fine to not have index_shift.

Eventually maybe yes, but currently the domain cap is not yet passed on and the current tools don't expect one. So for now index_shift is still useful to avoid bloat in the capDL spec if people want to put their schedule at the end, or somewhere other than the beginning, so I left it in.

@lsf37 lsf37 merged commit d633ff5 into master Mar 20, 2026
11 checks passed
@lsf37 lsf37 deleted the dyn-domains branch March 20, 2026 03:00
@Indanz
Contributor

Indanz commented Mar 20, 2026

On this part:

I think it would also be fine to not support user space managed domain schedules together with capDL defined domains: the added value is very low, and overall complexity is lower if all domain management happens in one place, either capDL or user space. Considering this, I think it would be fine to not have index_shift.

Eventually maybe yes, but currently the domain cap is not yet passed on and the current tools don't expect one. So for now index_shift is still useful to avoid bloat in the capDL spec if people want to put their schedule at the end, or somewhere other than the beginning, so I left it in.

But if the domain cap is not passed on, no one else can configure schedules, so there is no need for this. Or do you mean other loader code using seL4_CapDomain directly?

@lsf37
Member Author

lsf37 commented Mar 20, 2026

But if the domain cap is not passed on, no one else can configure schedules, so there is no need for this. Or do you mean other loader code using seL4_CapDomain directly?

Not being able to pass on the domain cap is a current limitation that I'm intending to lift. I need to look a bit deeper into that to figure out whether it needs a kernel change or not. Currently the control caps are unique (cannot be copied), which makes it awkward for the loader to pass them on if the loader also needs to use them late, as with the domain cap.

@Indanz
Contributor

Indanz commented Mar 21, 2026

Currently the control caps are unique (cannot be copied), which makes it awkward for the loader to pass them on if the loader also needs to use them late like the domain cap.

The same is true for the IRQ Control cap; can you have a look at that too while you're at it?

Domain cap should be trivial though, as it can't be badged or anything.

@lsf37
Member Author

lsf37 commented Mar 22, 2026

Currently the control caps are unique (cannot be copied), which makes it awkward for the loader to pass them on if the loader also needs to use them late like the domain cap.

The same is true for the IRQ Control cap, can you have a look at that too while you're at it?

I'll have a look at both, yes.

Domain cap should be trivial though, as it can't be badged or anything.

Should not be too hard, it's just that the proof so far was allowed to take a bunch of shortcuts, which I will have to remove.

