linux — Commit 301670d
jtlayton authored and gregkh committed
sunrpc: fix cache_request leak in cache_release
commit 17ad31b upstream.

When a reader's file descriptor is closed while in the middle of reading a cache_request (rp->offset != 0), cache_release() decrements the request's readers count but never checks whether it should free the request.

In cache_read(), when readers drops to 0 and CACHE_PENDING is clear, the cache_request is removed from the queue and freed along with its buffer and cache_head reference. cache_release() lacks this cleanup. The only other path that frees requests with readers == 0 is cache_dequeue(), but it runs only when CACHE_PENDING transitions from set to clear. If that transition already happened while readers was still non-zero, cache_dequeue() will have skipped the request, and no subsequent call will clean it up.

Add the same cleanup logic from cache_read() to cache_release(): after decrementing readers, check if it reached 0 with CACHE_PENDING clear, and if so, dequeue and free the cache_request.

Reported-by: NeilBrown <neilb@ownmail.net>
Fixes: 1da177e ("Linux-2.6.12-rc2")
Cc: stable@kernel.org
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent d6efaa5 commit 301670d

File tree: 1 file changed (+21, -5 lines)


net/sunrpc/cache.c

@@ -1049,24 +1049,40 @@ static int cache_release(struct inode *inode, struct file *filp,
 	struct cache_reader *rp = filp->private_data;
 
 	if (rp) {
+		struct cache_request *rq = NULL;
+
 		spin_lock(&queue_lock);
 		if (rp->offset) {
 			struct cache_queue *cq;
-			for (cq= &rp->q; &cq->list != &cd->queue;
-			     cq = list_entry(cq->list.next, struct cache_queue, list))
+			for (cq = &rp->q; &cq->list != &cd->queue;
+			     cq = list_entry(cq->list.next,
+					     struct cache_queue, list))
 				if (!cq->reader) {
-					container_of(cq, struct cache_request, q)
-						->readers--;
+					struct cache_request *cr =
+						container_of(cq,
+							struct cache_request, q);
+					cr->readers--;
+					if (cr->readers == 0 &&
+					    !test_bit(CACHE_PENDING,
+						      &cr->item->flags)) {
+						list_del(&cr->q.list);
+						rq = cr;
+					}
 					break;
 				}
 			rp->offset = 0;
 		}
 		list_del(&rp->q.list);
 		spin_unlock(&queue_lock);
 
+		if (rq) {
+			cache_put(rq->item, cd);
+			kfree(rq->buf);
+			kfree(rq);
+		}
+
 		filp->private_data = NULL;
 		kfree(rp);
-
 	}
 	if (filp->f_mode & FMODE_WRITE) {
 		atomic_dec(&cd->writers);