
scx_rustland: lowlatency improvements #59

Merged (4 commits into sched-ext:main), Dec 31, 2023

Conversation

@arighi (Collaborator) commented Dec 31, 2023

Low-latency improvements:

  • scx_rustland: always use dispatch_on_cpu() when possible
  • scx_rustland: bypass user-space scheduler for short-lived kthreads

Reduce user-space overhead:

  • scx_rustland: enable SCX_OPS_ENQ_LAST

UI improvement:

  • scx_rustland: show the CPU where the scheduler is running

Make sure the scheduler is not activated if we are dealing with the
last task running.

This consistently reduces scx_rustland CPU usage on systems that are
mostly idle (and avoids unnecessary power consumption).

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Use dispatch_on_cpu() when possible, so that all tasks dispatched by the
user-space scheduler get the same priority, instead of having some of
them dispatched to the global DSQ and others dispatched to the per-CPU
DSQs.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Bypass the user-space scheduler for kthreads that still have more than
half of their runtime budget.

As they are likely to release the CPU soon, granting them a substantial
priority boost can enhance the overall system performance.

In the event that one of these kthreads turns into a CPU hog, it will
deplete its runtime budget and therefore it will be scheduled like
any other normal task through the user-space scheduler.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
In the scheduler statistics reported periodically to stdout, instead of
showing "pid=0" for the CPU where the scheduler is running (like an idle
CPU), show "[self]".

This helps to identify exactly where the user-space scheduler is running
(when and where it migrates, etc.).

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
/*
 * No scheduling required if it's the last task running.
 */
if (enq_flags & SCX_ENQ_LAST)
	return true;
Contributor:

I don't quite understand what difference this commit makes given that the default behavior w/o SCX_ENQ_LAST is auto-enq on local dsq. Does this make any behavior difference?

Collaborator Author:

> I don't quite understand what difference this commit makes given that the default behavior w/o SCX_ENQ_LAST is auto-enq on local dsq. Does this make any behavior difference?

For some reason, with this in place the CPU utilization of scx_rustland drops significantly when the system is idle (to around 0.3-0.5%); without this change I can see CPU usage spikes of up to 5-10% when the system is idle, but I'm not sure why it's happening... My theory was that with SCX_ENQ_LAST in place we could save some unnecessary usersched invocations.

Contributor:

That's surprising given that the two essentially are doing the same thing. Maybe there's something timing dependent going on or I'm just confused.

Collaborator Author:

> That's surprising given that the two essentially are doing the same thing. Maybe there's something timing dependent going on or I'm just confused.

I'll run some tests with and without that and collect more info, now I want to understand what's going on exactly :)

Collaborator Author:

@htejun it's SCX_OPS_ENQ_LAST set in sched_ext_ops.flags that seems to make the difference; the check in is_task_cpu_available() doesn't make any difference and can apparently be dropped.
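For context, the flag in question is set in the scheduler's struct_ops definition. An illustrative fragment, not the exact scx_rustland source (struct name and callback list are placeholders): with SCX_OPS_ENQ_LAST set, sched_ext invokes ops.enqueue() with SCX_ENQ_LAST for the last runnable task instead of auto-dispatching it to the local DSQ, which lets the BPF side short-circuit and skip a user-space scheduler wakeup.

```c
/* Illustrative struct_ops fragment (not the exact scx_rustland code). */
SEC(".struct_ops.link")
struct sched_ext_ops rustland_ops = {
	/* ... enqueue/dispatch/etc. callbacks ... */
	.flags	= SCX_OPS_ENQ_LAST,
	.name	= "rustland",
};
```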

@htejun htejun merged commit 70803d5 into sched-ext:main Dec 31, 2023
1 check passed
2 participants