ZEN: INTERACTIVE: Increase block layer queue depth to 512
4.9: In a surprising turn of events, while benchmarking and testing hierarchical scheduling with BFQ + writeback throttling, it turns out that raising the number of requests in the queue _actually_ improves responsiveness and completely eliminates the random stalls that would normally occur without hierarchical scheduling.

To make this test more intense, I used the following workload:

Rotational disk 1: rsync -a /source/of/data /target/to/disk1
Rotational disk 2: rsync -a /source/of/data /target/to/disk2

And periodically attempted to write super fast with:

dd if=/dev/zero of=/target/to/disk1/block bs=4096

This wrote 10 GB incredibly fast to writeback, and I encountered zero stalls through this entire test of 10-15 minutes. My suspicion is that with cgroups, BFQ is better able to sort requests among multiple drives, reducing the chance of a starved process. This plus writeback throttling completely eliminates any outstanding bugs with high writeback ratios, letting the user enjoy low-latency writes (the application thinks they're already done) and super high throughput due to batched writes in writeback.

Please note, however, that without the following configuration I cannot guarantee you will not get stalls:

CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_BFQ=y
CONFIG_SCSI_MQ_DEFAULT=n

Special thanks to h2, author of smxi and inxi, for providing evidence that a configuration specific to Debian did not cause the stalls found in the Liquorix kernels under heavy IO load. That configuration turned out to be hierarchical scheduling on CFQ (and thus BFQ as well).
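The required options above can be checked against a kernel config file (commonly /boot/config-$(uname -r), or a decompressed /proc/config.gz). A minimal sketch; check_cfg is a hypothetical helper written for this illustration, not part of any kernel tooling:

```shell
#!/bin/sh
# Hypothetical helper: verify that each required CONFIG_ line appears
# verbatim in a kernel config file; print what's missing and return
# non-zero if anything is absent.
check_cfg() {
    cfg="$1"; shift
    missing=0
    for opt in "$@"; do
        # -x matches the whole line, so CONFIG_IOSCHED_BFQ=y does not
        # also match CONFIG_IOSCHED_BFQ=m
        grep -qx "$opt" "$cfg" || { echo "missing: $opt"; missing=1; }
    done
    return $missing
}
```

On a live system this would be pointed at the running kernel's config, e.g. check_cfg /boot/config-"$(uname -r)" CONFIG_BLK_CGROUP=y CONFIG_BFQ_GROUP_IOSCHED=y.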
9646e76
Are you able to confirm similar results when testing on a solid-state drive?
@damentz
@damentz @heftig
This patch may be completely unused since 2017
https://patchwork.kernel.org/patch/9822639/
nr_requests picks the smallest value between BLKDEV_MAX_RQ and (queue_depth * 2)
Confirmed on Samsung EVO 860
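If the capping rule described above is right, the arithmetic can be sketched as follows (a hypothetical illustration of min(BLKDEV_MAX_RQ, queue_depth * 2), not the actual kernel code, which lives in the linked patch):

```shell
# Hypothetical sketch of the capping rule described in the comment above:
# effective nr_requests = min(BLKDEV_MAX_RQ, queue_depth * 2).
effective_nr_requests() {
    max_rq=$1     # BLKDEV_MAX_RQ (512 with this commit, 128 in mainline)
    depth=$2      # hardware queue depth (e.g. 32 for SATA NCQ)
    cap=$(( depth * 2 ))
    if [ "$cap" -lt "$max_rq" ]; then echo "$cap"; else echo "$max_rq"; fi
}

# A SATA drive with NCQ depth 32 ends up at 64 requests either way,
# so raising BLKDEV_MAX_RQ from 128 to 512 changes nothing for it.
effective_nr_requests 512 32    # -> 64
effective_nr_requests 128 32    # -> 64
```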
Thanks @Alt37, I just reverted this customization here: 99a7502