
layered: Use TLS map instead of hash map #200

Merged
merged 1 commit into main on Mar 28, 2024
Conversation


@Byte-Lab (Contributor) commented on Mar 28, 2024

In scx_layered, we're using a BPF_MAP_TYPE_HASH map (indexed by pid) rather than a BPF_MAP_TYPE_TASK_STORAGE map to track local storage for a task. As far as I can tell, there's no reason we need to be doing this: we never access the map from user space, and we're even passing a struct task_struct * to a helper subprog to look up the task context rather than doing it only by pid.

Using a hashmap is error-prone for this because we end up having to manually track the lifecycle of each entry in the map rather than relying on BPF to do it for us; with task local storage, by contrast, BPF automatically frees a task's entry when the task exits. Let's just use TLS here rather than a hashmap to avoid these issues (e.g. we've observed the scheduler getting evicted because we were accessing a stale map entry after a task had been destroyed).
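For readers unfamiliar with the map type being switched to, the following is a minimal, hedged sketch of BPF task local storage in C. It is not the actual scx_layered code; the struct fields and map name are illustrative, but the map type, the `BPF_F_NO_PREALLOC` requirement, and the `bpf_task_storage_get()` helper are standard BPF:

```c
/* Hedged sketch of BPF task local storage (illustrative names,
 * not the actual scx_layered definitions). */
struct task_ctx {
	u32 layer;	/* example per-task state */
};

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);	/* required for task storage */
	__type(key, int);
	__type(value, struct task_ctx);
} task_ctxs SEC(".maps");

/* Look up (or create) a task's context directly from its
 * task_struct pointer -- no pid-keyed hash lookup needed, and
 * the kernel frees the entry automatically when the task exits,
 * which is exactly the lifecycle problem described above. */
static struct task_ctx *lookup_task_ctx(struct task_struct *p)
{
	return bpf_task_storage_get(&task_ctxs, p, NULL,
				    BPF_LOCAL_STORAGE_GET_F_CREATE);
}
```

With a pid-keyed hash map, a deleted task's entry can linger and be read after the task is gone; task local storage ties the entry's lifetime to the task itself.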

Reported-by: Valentin Andrei <vandrei@meta.com>

Signed-off-by: David Vernet <void@manifault.com>
@htejun merged commit 3409380 into main on Mar 28, 2024 (1 check passed)
@htejun deleted the layered_delete branch on March 28, 2024 03:09
ptr1337 pushed a commit to ptr1337/scx that referenced this pull request May 4, 2024
We may end up stalling for too long in fcg_dispatch() if
try_pick_next_cgroup() doesn't find another valid cgroup to pick. This
can be quite risky, considering that we are holding the rq lock in
dispatch().

This condition can be reproduced easily in our CI, where we can trigger
stalling softirq works:

[    4.972926] NOHZ tick-stop error: local softirq work is pending, handler sched-ext#200!!!

Or rcu stalls:

[   47.731900] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[   47.731900] rcu:     1-...!: (0 ticks this GP) idle=b29c/1/0x4000000000000000 softirq=2204/2204 fqs=0
[   47.731900] rcu:     3-...!: (0 ticks this GP) idle=db74/1/0x4000000000000000 softirq=2286/2286 fqs=0
[   47.731900] rcu:     (detected by 0, t=26002 jiffies, g=6029, q=54 ncpus=4)
[   47.731900] Sending NMI from CPU 0 to CPUs 1:

To mitigate this issue, reduce the number of try_pick_next_cgroup()
retries from BPF_MAX_LOOPS (8M) to CGROUP_MAX_RETRIES (1024).

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
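The mitigation above is a bounded-retry pattern: cap the iteration count passed to `bpf_loop()` so the dispatch path cannot spin for millions of iterations while holding the rq lock. The following is a hedged sketch of that pattern, not the actual scx_flatcg code; `CGROUP_MAX_RETRIES` and `try_pick_next_cgroup()` are named in the commit message, while the callback wrapper and context struct are illustrative assumptions:

```c
/* Hedged sketch of bounding try_pick_next_cgroup() retries with
 * bpf_loop(); the callback and context struct are hypothetical. */
#define CGROUP_MAX_RETRIES	1024	/* down from BPF_MAX_LOOPS (8M) */

struct pick_loop_ctx {
	bool picked;	/* set once a runnable cgroup is found */
};

static int pick_cb(u32 idx, void *data)
{
	struct pick_loop_ctx *ctx = data;

	if (try_pick_next_cgroup()) {	/* assumed signature */
		ctx->picked = true;
		return 1;	/* non-zero return stops bpf_loop() early */
	}
	return 0;	/* keep retrying, up to the loop bound */
}

void BPF_STRUCT_OPS(fcg_dispatch, s32 cpu, struct task_struct *prev)
{
	struct pick_loop_ctx ctx = { .picked = false };

	/* Previously bounded only by BPF_MAX_LOOPS; the tighter cap
	 * keeps time spent under the rq lock short enough to avoid
	 * the softirq and RCU stalls shown above. */
	bpf_loop(CGROUP_MAX_RETRIES, pick_cb, &ctx, 0);
}
```

The trade-off is that dispatch may occasionally give up without picking a cgroup, which is far cheaper than stalling softirqs or RCU for the full 8M-iteration bound.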