ANDROID: block: Improve shared tag set performance
Remove the code for fair tag sharing because it significantly hurts
performance for UFS devices. Removing this code is safe because the
legacy block layer worked fine without any equivalent fairness
algorithm.

This algorithm hurts performance for UFS devices because UFS devices
have multiple logical units. One of these logical units (the WLUN) is
used to submit control commands, e.g. START STOP UNIT. If any request
is submitted to the WLUN, the fair sharing algorithm reduces the queue
depth for the data LUNs from 31 to 15 or lower.

See also https://lore.kernel.org/linux-scsi/20221229030645.11558-1-ed.tsai@mediatek.com/
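
For reference, the check being removed divides the tag depth evenly over
the active queues (see hctx_may_queue() in the diff below). The following
is a minimal, standalone sketch of that arithmetic — user-space C with an
illustrative tag count of 31, not values read from a real device — showing
how a single active WLUN roughly halves the depth available to a data LUN:

#include <stdio.h>

#define MIN_TAGS 4U	/* lower bound used by the removed code ("allow at least some tags") */

/*
 * Fair share per active queue, as computed by the removed
 * hctx_may_queue(): the depth divided evenly over the active
 * users, rounded up, but never fewer than MIN_TAGS.
 */
static unsigned int fair_share(unsigned int depth, unsigned int users)
{
	unsigned int share;

	if (users == 0)
		return depth;
	share = (depth + users - 1) / users;
	return share > MIN_TAGS ? share : MIN_TAGS;
}

int main(void)
{
	unsigned int users;

	/*
	 * With 31 tags, a single active data LUN may use all 31.
	 * Once the WLUN also becomes active (users == 2), each
	 * queue's share roughly halves, and every further active
	 * logical unit shrinks it again.
	 */
	for (users = 1; users <= 4; users++)
		printf("active queues: %u, per-queue depth: %u\n",
		       users, fair_share(31, users));
	return 0;
}

With 31 tags this prints per-queue depths of 31, 16, 11 and 8 as the
number of active queues grows from 1 to 4.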

Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Ed Tsai <ed.tsai@mediatek.com>
Change-Id: Ia6d75917d533f32fffc68348b52fd3d972c9074c
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Bug: 281845090
Link: https://lore.kernel.org/linux-block/20230103195337.158625-1-bvanassche@acm.org/
Signed-off-by: Bart Van Assche <bvanassche@google.com>
Bart Van Assche authored and YumeMichi committed Oct 15, 2023
1 parent 1d3d0cc commit 78fd3cd
Showing 1 changed file with 0 additions and 34 deletions.
block/blk-mq-tag.c
@@ -62,43 +62,9 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 	blk_mq_tag_wakeup_all(tags, false);
 }
 
-/*
- * For shared tag users, we track the number of currently active users
- * and attempt to provide a fair share of the tag depth for each of them.
- */
-static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
-				  struct sbitmap_queue *bt)
-{
-	unsigned int depth, users;
-
-	if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_SHARED))
-		return true;
-	if (!test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
-		return true;
-
-	/*
-	 * Don't try dividing an ant
-	 */
-	if (bt->sb.depth == 1)
-		return true;
-
-	users = atomic_read(&hctx->tags->active_queues);
-	if (!users)
-		return true;
-
-	/*
-	 * Allow at least some tags
-	 */
-	depth = max((bt->sb.depth + users - 1) / users, 4U);
-	return atomic_read(&hctx->nr_active) < depth;
-}
-
 static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,
 			    struct sbitmap_queue *bt)
 {
-	if (!(data->flags & BLK_MQ_REQ_INTERNAL) &&
-	    !hctx_may_queue(data->hctx, bt))
-		return -1;
 	if (data->shallow_depth)
 		return __sbitmap_queue_get_shallow(bt, data->shallow_depth);
 	else
