
scx_rustland: introduce SMT support #90

Merged 1 commit into main on Jan 16, 2024
Conversation

@arighi (Collaborator) commented on Jan 16, 2024

Introduce basic support for CPU topology awareness. With this change, the scheduler prioritizes dispatching tasks to idle CPUs with fewer busy SMT siblings, then proceeds to CPUs with more busy SMT siblings, in ascending order.

To implement this, introduce a new CoreMapping abstraction that provides a mapping of the available core IDs in the system to their corresponding lists of CPU IDs. This, coupled with the get_cpu_pid() method from the BpfScheduler abstraction, allows the user-space scheduler to enforce the policy outlined above and improve performance on SMT systems.

Keep in mind that this improvement is relevant only when the number of tasks running in the system is less than the number of CPUs. As soon as the number of running tasks grows beyond that, they will be distributed across all available CPUs and cores, negating the advantage of SMT isolation.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
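
As a rough illustration of the selection policy described above, here is a minimal sketch of how a CoreMapping-style structure could drive it. This is not the actual scx_rustland code: the struct layout, the select_idle_cpu() helper, and the assumption that get_cpu_pid() (modeled here by the cpu_pid closure) returns 0 for an idle CPU are all illustrative.

```rust
use std::collections::HashMap;

/// Hypothetical mirror of the CoreMapping abstraction described above:
/// each core ID maps to the list of logical CPU IDs (SMT siblings) it owns.
struct CoreMapping {
    cores: HashMap<usize, Vec<usize>>,
}

impl CoreMapping {
    /// Pick an idle CPU, preferring cores with fewer busy SMT siblings.
    /// `cpu_pid(cpu)` stands in for BpfScheduler::get_cpu_pid(); here we
    /// assume it returns the pid running on `cpu`, or 0 if the CPU is idle.
    fn select_idle_cpu(&self, cpu_pid: impl Fn(usize) -> u32) -> Option<usize> {
        let mut best: Option<(usize, usize)> = None; // (busy_siblings, cpu)
        for cpus in self.cores.values() {
            // Count how many SMT siblings on this core are currently busy.
            let busy = cpus.iter().filter(|&&c| cpu_pid(c) != 0).count();
            for &cpu in cpus {
                if cpu_pid(cpu) == 0 && best.map_or(true, |(b, _)| busy < b) {
                    best = Some((busy, cpu));
                }
            }
        }
        best.map(|(_, cpu)| cpu)
    }
}
```

Ties within a core don't matter here: any idle sibling of the least-busy core satisfies the policy.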
@Byte-Lab (Contributor) commented on Jan 16, 2024, on this hunk of the new topology code:

```rust
use std::collections::HashMap;
use std::fs;

/// scx_rustland: CPU topology.
```


Not necessary for this change, but this could be useful to provide as a Rust crate as well. In scx_rusty we collect the host's topology to determine which domains we should create for the scheduler. If we had a single utility for mapping logical CPUs -> physical cores -> LLC packages -> NUMA nodes, I think that would be quite handy.
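
A sketch of what such a shared utility's types might look like. Everything here (the Topology struct, its nesting, and the core_id_of() helper) is hypothetical, not an existing scx_utils API, though the sysfs path it reads is the standard Linux one.

```rust
use std::fs;

/// Hypothetical shape for a shared host-topology crate: each level nests
/// the one below it (NUMA node -> LLC package -> physical core -> logical CPU).
struct Topology { nodes: Vec<Node> }
struct Node { id: usize, llcs: Vec<Llc> }
struct Llc { id: usize, cores: Vec<Core> }
struct Core { id: usize, cpus: Vec<usize> }

/// Read the physical core ID of a logical CPU from sysfs.
fn core_id_of(cpu: usize) -> std::io::Result<usize> {
    let path = format!("/sys/devices/system/cpu/cpu{cpu}/topology/core_id");
    Ok(fs::read_to_string(path)?.trim().parse().unwrap_or(0))
}
```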

@arighi (Collaborator, Author) replied:

@Decave yes, there's a lot of generic stuff that we can reuse in other schedulers (the CPU topology, the allocator, the BPF abstraction). It would be really nice to have them in the scx_utils crate. I'll start looking at that and see if we can migrate some of these "generic building blocks" in there.

@Byte-Lab merged commit b8687a0 into main on Jan 16, 2024 (2 checks passed).

@Byte-Lab deleted the scx-rustland-smt branch on March 14, 2024.