Conversation
Having the extra configs doesn't scale well; it's already getting a bit out of hand with the SMP ones. I think the domain functionality should be part of the existing configurations.
If you're always running with NUM_DOMAINS > 1 and don't set up a schedule, you will get the default schedule, which wraps around once 2^56-1 ticks have passed. With a 1 GHz timer tick that happens after about 834 days. The impact is negligible; I just wanted to point out that if you are using this for a very long-term deployment with very precise timing requirements, you would see a blip every few years.
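To make that magnitude concrete, here is a quick back-of-the-envelope check (the 2^56-1 tick limit and the 1 GHz tick rate are taken from the comment above):

```python
# How long until the default domain schedule's tick counter wraps?
MAX_TICKS = 2**56 - 1          # 56-bit tick counter, per the comment above
TICK_RATE_HZ = 1_000_000_000   # assume a 1 GHz timer tick

seconds = MAX_TICKS / TICK_RATE_HZ
days = seconds / (60 * 60 * 24)
print(f"wraps after ~{days:.0f} days")  # ~834 days, i.e. a blip every few years
```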
Some comments:
When we move to a clock-based MCS API instead of a time-based one, we'll run into the same issues as for domains. On x86, the timing info is passed on via the bootinfo. The problem with that is that it's hard to use from stand-alone or library code, as how to get that information depends on the system. That's why I proposed a new syscall to get the same info (it could take either a domain cap or a SchedContext/SchedControl cap). Even on Arm and RISC-V it depends on the system used whether those kernel configs are passed on or not.
The maximum number of domains has no performance impact at all as long as it fits in 8 bits. You can have NUM_DOMAINS=256 and only use 4 of them. I do agree that you don't want to actually use 256 domains, which would indeed have a performance impact, but there is no reason to pick a particularly low number for the maximum.
That is the only one that actually does cost a little. The kernel default is 100, but I don't think 256 is a problem.
I agree; people have been asking for a shorter minimum duration for non-MCS as well.
That's not true: all the scheduling queues are duplicated per domain, and they are often some of the largest data in the kernel image aside from the kernel page table structures. It's probably not a performance impact so much as a memory-usage one, though. (But yes, I broadly agree with you.)
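As a rough sketch of that memory cost, the numbers here are assumptions (256 priority levels, a 16-byte head/tail pointer pair per ready queue on a 64-bit build), not taken from the actual seL4 sources:

```python
# Back-of-the-envelope estimate of per-domain ready-queue memory.
# Assumed layout: one queue per (domain, priority) pair, each queue a
# 16-byte head/tail pointer pair on a 64-bit build.
NUM_PRIORITIES = 256
QUEUE_BYTES = 16

def ready_queue_bytes(num_domains: int) -> int:
    return num_domains * NUM_PRIORITIES * QUEUE_BYTES

print(ready_queue_bytes(1))    # 4096 bytes (4 KiB) with a single domain
print(ready_queue_bytes(256))  # 1048576 bytes (1 MiB) with 256 domains
```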
You're right, there is indeed a memory impact.
I do explicitly mention this just below the XML snippet: And I also mention something similar in the
I've switched the default maximums to 64 domains, and 128 domain schedule entries. Would this be sufficient? |
Signed-off-by: Krishnan Winter <krishnan.winter@unsw.edu.au>
This PR adds support for defining domain schedules in Microkit, and relies on the run-time domain scheduler changes due to be merged into seL4.
It also depends on the following branch of rust-sel4: https://github.com/au-ts/rust-sel4/tree/domain_set?branch=domain_set
Changes for users
In `build_sdk.py` we define two new configurations, `debug_domains` and `release_domains`. These configurations build the kernel with a maximum of 256 domains and 256 entries in the domain schedule; users can change these values as they see fit. I have separated these from the regular configs as users may not wish to pay the extra memory overhead (although it is mostly negligible). I can merge these configs into one if that is more desirable.

A domain schedule is defined in the SDF as:
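(The XML snippet itself did not survive in this thread. Purely as a hypothetical illustration — the element and attribute names below are my assumptions and may not match the actual SDF syntax — a schedule might look something like:)

```xml
<!-- Hypothetical illustration only: element/attribute names are assumptions. -->
<domain_schedule>
    <!-- lengths are in milliseconds, per the description below -->
    <domain name="trusted" length="10" />
    <domain name="untrusted" length="5" />
</domain_schedule>
```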
The length is defined in milliseconds; we read `TIMER_FREQUENCY` from the kernel config to convert it to the timer ticks that the kernel requires. This works for aarch64 and riscv64. On x86, however, we don't have a static definition of the timer frequency; it is instead set at runtime by the kernel. I'm not sure of the best way to handle this, e.g. whether we should do the conversion in the capDL initialiser instead.

The above will insert the schedule using the new `DomainSet` invocations, beginning at index 0 by default. We also provide capDL a start index of 0 by default. Users can optionally set these values like so:
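(Again the snippet is missing from this capture; hypothetically, with assumed attribute names, the optional values might be expressed as:)

```xml
<!-- Hypothetical illustration only: attribute names are assumptions. -->
<domain_schedule start_index="1" shift="10">
    <domain name="trusted" length="10" />
    <domain name="untrusted" length="5" />
</domain_schedule>
```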
The start index is relative to the schedule list that the user provides, not an absolute index, and it must be within the bounds of the schedule that the user defines. We can also have a domain shift, which means we will start inserting the schedule at that shifted index (in the above case, 10). The shift plus the length of the schedule must be less than the kernel's configured maximum number of domain schedule entries.
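Putting those rules together, the millisecond-to-tick conversion and the bounds checks might look roughly like this (a sketch only; the function and parameter names are mine, not Microkit's actual API):

```python
# Sketch of the conversion and validation described above; names are
# illustrative, not Microkit's actual API.

def ms_to_ticks(length_ms: int, timer_frequency_hz: int) -> int:
    # TIMER_FREQUENCY is in Hz, so ticks = ms * Hz / 1000.
    return length_ms * timer_frequency_hz // 1000

def check_schedule(num_entries: int, start_index: int, shift: int,
                   max_entries: int) -> None:
    # The start index is relative to the user-provided schedule.
    if not 0 <= start_index < num_entries:
        raise ValueError("start index out of bounds of the user's schedule")
    # Shift + schedule length must be less than the kernel's configured
    # maximum number of domain schedule entries (per the rule above).
    if shift + num_entries >= max_entries:
        raise ValueError("shifted schedule exceeds the kernel's maximum")

print(ms_to_ticks(10, 62_500_000))  # 10 ms at 62.5 MHz -> 625000 ticks
check_schedule(num_entries=2, start_index=1, shift=10, max_entries=128)
```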
We also now build one monitor per domain. The monitor is responsible for handling faults and for making threads passive if requested. The decision to create a monitor per domain was made so that these faults/requests can be serviced within that domain's timeslice, instead of having to wait for a single monitor's domain to be scheduled again.
TODOs before merging
- Update `Cargo.toml` to point to main after "capdl: add support for domain set" (rust-sel4#324) is merged.
- Update `sdf.rs` to use domain indexes rather than names.