
Commit 4af2e62

CFSworks authored and gregkh committed
net: stmmac: Prevent NULL deref when RX memory exhausted
[ Upstream commit 0bb05e6 ]

The CPU receives frames from the MAC through conventional DMA: the CPU allocates buffers for the MAC, then the MAC fills them and returns ownership to the CPU. For each hardware RX queue, the CPU and MAC coordinate through a shared ring array of DMA descriptors: one descriptor per DMA buffer. Each descriptor includes the buffer's physical address and a status flag ("OWN") indicating which side owns the buffer: OWN=0 for CPU, OWN=1 for MAC. The CPU is only allowed to set the flag and the MAC is only allowed to clear it, and both must move through the ring in sequence; thus the ring is used for both "submissions" and "completions."

In the stmmac driver, stmmac_rx() bookmarks its position in the ring with the `cur_rx` index. The main receive loop in that function checks for rx_descs[cur_rx].own=0, gives the corresponding buffer to the network stack (NULLing the pointer), and increments `cur_rx` modulo the ring size. After the loop exits, stmmac_rx_refill(), which bookmarks its position with `dirty_rx`, allocates fresh buffers and rearms the descriptors (setting OWN=1). If it fails any allocation, it simply stops early (leaving OWN=0) and will retry where it left off when next called.

This means descriptors have a three-stage lifecycle (terms my own):

- `empty` (OWN=1, buffer valid)
- `full` (OWN=0, buffer valid and populated)
- `dirty` (OWN=0, buffer NULL)

But because stmmac_rx() only checks OWN, it confuses `full`/`dirty`. In the past (see 'Fixes:'), there was a bug where the loop could cycle `cur_rx` all the way back to the first descriptor it dirtied, resulting in a NULL dereference when mistaken for `full`. The aforementioned commit resolved that *specific* failure by capping the loop's iteration limit at `dma_rx_size - 1`, but this is only a partial fix: if the previous stmmac_rx_refill() didn't complete, then there are leftover `dirty` descriptors that the loop might encounter without needing to cycle fully around.
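The three-stage lifecycle above, and why an OWN-only test is insufficient, can be sketched in miniature. This is a hypothetical, simplified model for illustration only; the struct and function names are not the driver's actual types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for one RX DMA descriptor (illustrative only). */
struct rx_desc {
	bool own;   /* OWN bit: true = MAC owns, false = CPU owns */
	void *buf;  /* DMA buffer pointer; NULLed when handed to the stack */
};

enum desc_state { DESC_EMPTY, DESC_FULL, DESC_DIRTY };

/* Classify a descriptor using BOTH the OWN bit and buffer validity. */
static enum desc_state desc_state(const struct rx_desc *d)
{
	if (d->own)
		return DESC_EMPTY;  /* OWN=1, buffer valid: MAC may fill it */
	return d->buf ? DESC_FULL   /* OWN=0, populated: safe to process */
		      : DESC_DIRTY; /* OWN=0, buffer NULL: must NOT touch */
}

/* A check on OWN alone -- as the pre-fix receive loop does -- cannot
 * distinguish `full` from `dirty`. */
static bool own_only_says_processable(const struct rx_desc *d)
{
	return !d->own;
}
```

A `dirty` descriptor passes the OWN-only test yet has no buffer, which is exactly the NULL-dereference hazard the commit describes.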
The current code therefore panics (see 'Closes:') when stmmac_rx_refill() is memory-starved long enough for `cur_rx` to catch up to `dirty_rx`. Fix this by explicitly checking, before advancing `cur_rx`, whether the next entry is dirty, and exiting the loop if so. This prevents processing of the final, used descriptor until stmmac_rx_refill() succeeds, but fully prevents the `cur_rx == dirty_rx` ambiguity as the previous bugfix intended, so remove the clamp as well.

Since stmmac_rx_zc() is a copy-paste-and-tweak of stmmac_rx() and the code structure is identical, any fix to stmmac_rx() will also need a corresponding fix for stmmac_rx_zc(). Therefore, apply the same check there.

In stmmac_rx() (not stmmac_rx_zc()), a related bug remains: after the MAC sets OWN=0 on the final descriptor, it will be unable to send any further DMA-complete IRQs until it's given more `empty` descriptors. Currently, the driver simply *hopes* that the next stmmac_rx_refill() succeeds, risking an indefinite stall of the receive process if not. But this is not a regression, so it can be addressed in a future change.

Fixes: b6cb454 ("net: stmmac: avoid rx queue overrun")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=221010
Cc: stable@vger.kernel.org
Suggested-by: Russell King <linux@armlinux.org.uk>
Signed-off-by: Sam Edwards <CFSworks@gmail.com>
Link: https://patch.msgid.link/20260422044503.5349-1-CFSworks@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
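The shape of the fix -- peek at the next ring index and refuse to advance into the not-yet-refilled region -- can be sketched as a standalone model. The ring size, helper names, and return convention here are hypothetical, chosen only to mirror the STMMAC_NEXT_ENTRY modular-advance pattern described above:

```c
#include <assert.h>
#include <stdbool.h>

#define DMA_RX_SIZE 8u  /* illustrative ring size, not the driver's */

/* Advance an index modulo the ring size, as STMMAC_NEXT_ENTRY does. */
static unsigned int next_entry_of(unsigned int cur)
{
	return (cur + 1) % DMA_RX_SIZE;
}

/* Hypothetical helper modeling the fix: compute the next index first,
 * and stop (returning false) rather than step onto the first `dirty`
 * descriptor that stmmac_rx_refill() has not yet rearmed. */
static bool try_advance(unsigned int *cur_rx, unsigned int dirty_rx)
{
	unsigned int next = next_entry_of(*cur_rx);

	if (next == dirty_rx)
		return false;  /* would enter dirty territory: exit loop */

	*cur_rx = next;
	return true;
}
```

Because the loop now breaks strictly before `cur_rx` can reach `dirty_rx`, the two bookmarks can never become ambiguous, which is why the old `dma_rx_size - 1` clamp becomes unnecessary.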
1 parent 9d1774b commit 4af2e62

1 file changed

Lines changed: 12 additions & 7 deletions

File tree

drivers/net/ethernet/stmicro/stmmac/stmmac_main.c

@@ -5282,9 +5282,12 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
 			break;
 
 		/* Prefetch the next RX descriptor */
-		rx_q->cur_rx = STMMAC_NEXT_ENTRY(rx_q->cur_rx,
-						 priv->dma_conf.dma_rx_size);
-		next_entry = rx_q->cur_rx;
+		next_entry = STMMAC_NEXT_ENTRY(rx_q->cur_rx,
+					       priv->dma_conf.dma_rx_size);
+		if (unlikely(next_entry == rx_q->dirty_rx))
+			break;
+
+		rx_q->cur_rx = next_entry;
 
 		if (priv->extend_desc)
 			np = (struct dma_desc *)(rx_q->dma_erx + next_entry);
@@ -5422,7 +5425,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 
 	dma_dir = page_pool_get_dma_dir(rx_q->page_pool);
 	bufsz = DIV_ROUND_UP(priv->dma_conf.dma_buf_sz, PAGE_SIZE) * PAGE_SIZE;
-	limit = min(priv->dma_conf.dma_rx_size - 1, (unsigned int)limit);
 
 	if (netif_msg_rx_status(priv)) {
 		void *rx_head;
@@ -5478,9 +5480,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 		if (unlikely(status & dma_own))
 			break;
 
-		rx_q->cur_rx = STMMAC_NEXT_ENTRY(rx_q->cur_rx,
-						 priv->dma_conf.dma_rx_size);
-		next_entry = rx_q->cur_rx;
+		next_entry = STMMAC_NEXT_ENTRY(rx_q->cur_rx,
+					       priv->dma_conf.dma_rx_size);
+		if (unlikely(next_entry == rx_q->dirty_rx))
+			break;
+
+		rx_q->cur_rx = next_entry;
 
 		if (priv->extend_desc)
 			np = (struct dma_desc *)(rx_q->dma_erx + next_entry);
