ionice support #14151
Comments
It would be nice (hehe ;) ) if you'd use some more real-world benchmark. I suspect that …
I've noticed when using ZFS, certain programs can hog all the IO, and there's no fairness at all. It kind of sucks when there's BIG IO going on in the background that ends up locking up all the foreground programs. I checked my Linux IO schedulers, I'm setting all of them to …
Disregard, I'm dumb. Replied thinking this was the systemd issue. :)
Describe the feature you would like to see added to OpenZFS
Hello,
the CFQ and BFQ Linux I/O schedulers are capable of managing I/O classes and priorities with the `ionice` tool. Using this, one can control how the scheduler handles processes. Long-running bulk copy jobs, background backups, maintenance tasks, and similar things can be put into the `-c3` (idle) class, so they won't interfere with more interactive loads. Latency-sensitive processes like databases can be put into a higher priority, maybe even the realtime class. Services can be distinguished based on their priority and their need for interactivity versus throughput. Basically, it's `nice`, but for I/O. It's super useful.

I'm very surprised to find no issues or discussion regarding ionice in ZFS. It's obviously not using CFQ/BFQ, but its own ZIO scheduler (and leaving the vdev ones at noop/deadline). It does not speak ionice, wasting this precious opportunity. Is there a reason for this? Was it ever considered? Why/why not?
How will this feature improve OpenZFS?
The same way it improves other filesystems running on disks with the CFQ/BFQ scheduler. By prioritizing processes, latency and throughput can be greatly improved in mixed-workload cases. Useful on the desktop and the server.
Additional context
Here's a simple test case: run `stress -i 3 -d 4`, and in parallel `ionice -c3 stress -i 3 -d 4`.
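As a sketch, the comparison above could be scripted like this (assuming the `stress` tool is installed; the `dd` probe is a hypothetical stand-in for an interactive foreground workload):

```shell
# Background I/O load at normal priority, then probe foreground write latency
stress -i 3 -d 4 &
BG=$!
time dd if=/dev/zero of=/tmp/probe bs=4k count=256 oflag=sync
kill "$BG"

# Same background load demoted to the idle class; on CFQ/BFQ the probe
# should now complete noticeably faster -- on ZFS it currently makes no difference
ionice -c3 stress -i 3 -d 4 &
BG=$!
time dd if=/dev/zero of=/tmp/probe bs=4k count=256 oflag=sync
kill "$BG"
rm -f /tmp/probe
```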
Thanks a lot!