RDMA/mlx5: Delay emptying a cache entry when a new MR is added to it recently

[ Upstream commit d6793ca ]

Fix a typo that causes a cache entry to shrink immediately after new MRs are
added to it, whenever the entry size exceeds the high limit.  As a result, the
cache misses its purpose of preventing the creation of new mkeys at runtime by
reusing the cached ones.

Fixes: b9358bd ("RDMA/mlx5: Fix locking in MR cache work queue")
Link: https://lore.kernel.org/r/fcb546986be346684a016f5ca23a0567399145fa.1627370131.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
aharonl-nvidia authored and gregkh committed Aug 12, 2021
1 parent 1242ca9 commit e74551b
Showing 1 changed file with 2 additions and 2 deletions.
drivers/infiniband/hw/mlx5/mr.c:

@@ -526,8 +526,8 @@ static void __cache_work_func(struct mlx5_cache_ent *ent)
 		 */
 		spin_unlock_irq(&ent->lock);
 		need_delay = need_resched() || someone_adding(cache) ||
-			     time_after(jiffies,
-					READ_ONCE(cache->last_add) + 300 * HZ);
+			     !time_after(jiffies,
+					 READ_ONCE(cache->last_add) + 300 * HZ);
 		spin_lock_irq(&ent->lock);
 		if (ent->disabled)
 			goto out;
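
To see why the negation is the right fix, recall the semantics of the kernel's
time_after() macro: time_after(jiffies, READ_ONCE(cache->last_add) + 300 * HZ)
becomes true only once more than 300 seconds have passed since the last MR was
added to the cache.  The buggy condition therefore postponed shrinking only for
long-idle entries and shrank an entry right after fresh additions; with the
'!', need_delay is set while the entry has seen a recent addition, which is
what the commit subject describes.  Below is a minimal userspace sketch, not
the kernel code: HZ, the timestamps, and the two printed scenarios are made up
for illustration, and jiffies is modeled as a plain unsigned long using the
same wraparound-safe comparison as time_after().

/* Sketch only: illustrates the old vs. fixed need_delay condition. */
#include <stdbool.h>
#include <stdio.h>

#define HZ 100UL /* assumed tick rate for this illustration */

/* Same comparison as the kernel's time_after(a, b): true if a is after b. */
static bool time_after(unsigned long a, unsigned long b)
{
	return (long)(b - a) < 0;
}

int main(void)
{
	unsigned long last_add = 1000UL;            /* when the last MR was cached */
	unsigned long jiffies = last_add + 5 * HZ;  /* 5 seconds later: recent add */

	/* Old (buggy) condition: only true once the cache has been idle for
	 * more than 300 seconds, i.e. never right after an addition. */
	bool buggy_delay = time_after(jiffies, last_add + 300 * HZ);

	/* Fixed condition: true while an MR was added within the last 300
	 * seconds, so recently used entries are not emptied. */
	bool fixed_delay = !time_after(jiffies, last_add + 300 * HZ);

	printf("5s after an add:   buggy need_delay=%d, fixed need_delay=%d\n",
	       buggy_delay, fixed_delay);

	jiffies = last_add + 600 * HZ;              /* 10 minutes of idleness */
	printf("600s after an add: buggy need_delay=%d, fixed need_delay=%d\n",
	       time_after(jiffies, last_add + 300 * HZ),
	       !time_after(jiffies, last_add + 300 * HZ));
	return 0;
}

Running the sketch prints need_delay=1 for the fixed condition in the
"recent add" case and need_delay=0 after 600 seconds of idleness, matching the
intended behavior of delaying the cache-shrinking work only while the entry is
in active use.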
