
ionice support #14151

Open
haarp opened this issue Nov 6, 2022 · 3 comments
Labels
Type: Feature Feature request or new feature Type: Performance Performance improvement or performance problem

Comments


haarp commented Nov 6, 2022

Describe the feature you would like to see added to OpenZFS

Hello,

The CFQ and BFQ Linux IO schedulers can manage IO classes and priorities via the ionice tool. Using it, one can control how the scheduler treats individual processes. Long-running bulk copies, background backups, maintenance tasks and the like can be put into the -c3 (idle) class so they won't interfere with more interactive loads, while latency-sensitive processes such as databases can be given a higher priority, perhaps even the realtime class.

Services can be distinguished by their priority and their need for interactivity versus throughput. Basically, nice but for IO. It's super useful.
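(For reference, the class/priority pairs that ionice sets are packed into a single value consumed by the kernel's ioprio_set(2) syscall. A minimal sketch of that encoding, with constants as defined in linux/ioprio.h:)

```python
# Sketch of the ioprio encoding used by ionice / ioprio_set(2).
# An ioprio value packs the IO class into the top bits and the
# priority level (0-7) into the low bits (see linux/ioprio.h).
IOPRIO_CLASS_SHIFT = 13
IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE = 0, 1, 2, 3

def ioprio_value(ioclass: int, level: int) -> int:
    """Pack an IO class and level the way ioprio_set(2) expects."""
    return (ioclass << IOPRIO_CLASS_SHIFT) | level

# ionice -c3 (idle class) corresponds to:
print(ioprio_value(IOPRIO_CLASS_IDLE, 0))  # -> 24576
# ionice -c2 -n7 (best-effort, lowest level within the class):
print(ioprio_value(IOPRIO_CLASS_BE, 7))    # -> 16391
```

A filesystem that wanted to honor ionice would read this value for the issuing thread and feed it into its own queueing decisions; that is the hook ZFS's ZIO scheduler currently ignores.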

I'm very surprised to find no issues or discussion regarding ionice in ZFS. ZFS obviously isn't using CFQ/BFQ, but its own ZIO scheduler (leaving the vdev block-device schedulers at noop/deadline). It does not honor ionice, wasting this precious opportunity. Is there a reason for this? Was it ever considered? Why or why not?

How will this feature improve OpenZFS?

The same way it improves other filesystems running on disks with the CFQ/BFQ scheduler: by prioritizing processes, latency and throughput can be greatly improved under mixed workloads. Useful on the desktop and the server alike.

Additional context

Here's a simple test case.

  • Copy a large file from a (mechanical) zpool to /dev/null
  • Run stress -i 3 -d 4 in parallel
  • Watch the copy speed drop to very low numbers
  • Repeat with ionice -c3 stress -i 3 -d 4
  • If ZIO supported ionice, the copy speed would not be noticeably impacted

Thanks a lot!

@haarp haarp added the Type: Feature Feature request or new feature label Nov 6, 2022

amotin commented Nov 7, 2022

It would be nice (hehe ;) ) if you'd use a more real-world benchmark. I suspect that the -i and -d combination in your stress command creates a heavy stream of synchronous writes, and since ZFS is very serious about sync guarantees, those requests are propagated straight to the disk, which just dies under such a workload.
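(To illustrate the point: stress -i spins workers on sync() and stress -d on bulk file writes. A rough, hypothetical Python analogue of one such synchronous-write iteration — the path and size here are arbitrary — shows the kind of request ZFS must push all the way to stable storage rather than merely schedule:)

```python
import os
import tempfile

def sync_write_once(path: str, size: int = 1 << 20) -> int:
    """Write `size` bytes and fsync them: a rough analogue of one
    `stress -d` write combined with the sync() pressure of `stress -i`.
    The fsync forces the data down ZFS's synchronous path; the call
    cannot return until the data is on stable storage."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        written = os.write(fd, b"\0" * size)
        os.fsync(fd)  # blocks until the data reaches stable storage
        return written
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    print(sync_write_once(os.path.join(d, "probe")))  # -> 1048576
```

No IO class can soften such a stream much: the sync guarantee means each request must hit the disk promptly regardless of priority.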

@behlendorf behlendorf added the Type: Performance Performance improvement or performance problem label Nov 7, 2022
@CMCDragonkai

I've noticed that when using ZFS, certain programs can hog all the IO, with no fairness at all. It kind of sucks when big IO going on in the background ends up locking up all the foreground programs.

I checked my Linux IO schedulers and am setting all of them to none to see what happens. They were previously mq-deadline for the rpool and none for the NVMe drive that holds the ZIL + L2ARC.


CrackerJackMack commented Dec 19, 2023

> I've noticed when using ZFS, certain programs can hog all the IO, and there's no fairness at all. It kind of sucks when there's BIG IO going on in the background that ends up locking up all the foreground programs.
>
> I checked my Linux IO schedulers, I'm setting all of them to none to see what happens now. They were previously doing mq-deadline for rpool and none for the NVME drive that does ZIL + L2ARC.

ionice has no effect on ZFS, which is why the IO scheduler is set to none on disks used by ZFS: it uses its own scheduler. ionice, as I understand it, only affects the CFQ scheduler.

Disregard, I'm dumb. Replied thinking this was the systemd issue. :)

Projects
None yet
Development

No branches or pull requests

5 participants