
In the Linux kernel, the following vulnerability has been resolved: powerpc/kasan: Fix addr error caused by page alignment

Unreviewed • Published Apr 3, 2024 to the GitHub Advisory Database • Updated Jun 26, 2024

Package

No package listed

Affected versions

Unknown

Patched versions

Unknown

Description

In the Linux kernel, the following vulnerability has been resolved:

powerpc/kasan: Fix addr error caused by page alignment

In kasan_init_region(), when k_start is not page aligned, k_cur = k_start & PAGE_MASK at the beginning of the for loop is less than k_start, so va = block + k_cur - k_start is less than block. The address va is invalid, because the memory range from va up to block was not allocated by memblock_alloc() and therefore is not reserved by memblock_reserve() later; it can end up being used elsewhere.

As a result, memory overwriting occurs.

For example:
int __init __weak kasan_init_region(void *start, size_t size)
{
	[...]
	/* if say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
		/* at the beginning of the for loop
		 * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
		 * va(dcd96c00) is less than block(dcd97000), so va is invalid
		 */
		void *va = block + k_cur - k_start;
		[...]
	}
	[...]
}
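
To make the arithmetic concrete, the following is a minimal, self-contained sketch (ordinary userspace C, not kernel code) that reproduces the calculation with the addresses from the excerpt above. PAGE_SIZE and PAGE_MASK are redefined locally as 4 KiB stand-ins for illustration only.

#include <stdio.h>
#include <stdint.h>

/* Illustration only: local stand-ins for the kernel's 4 KiB page constants. */
#define PAGE_SIZE 0x1000UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
	/* Addresses taken from the commit message example. */
	uintptr_t block   = 0xdcd97000;              /* returned by memblock_alloc() */
	uintptr_t k_start = 0xfeef7400;              /* not page aligned             */
	uintptr_t k_cur   = k_start & PAGE_MASK;     /* 0xfeef7000                   */
	uintptr_t va      = block + k_cur - k_start; /* 0xdcd96c00                   */

	printf("k_cur = %#lx (k_start = %#lx)\n", (unsigned long)k_cur, (unsigned long)k_start);
	printf("va    = %#lx (block   = %#lx)\n", (unsigned long)va, (unsigned long)block);

	/* va is 0x400 bytes below block: writes through va land outside the
	 * memblock_alloc() allocation and overwrite whatever sits there. */
	return va < block;
}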

Therefore, page alignment is performed on k_start before
memblock_alloc() to ensure the validity of the VA address.
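
The upstream patch itself is not quoted in this advisory. The following is a minimal sketch of the change described by the sentence above, reusing the structure of the earlier excerpt: aligning k_start down before the allocation so that the computed va can never fall below block. The exact upstream diff may differ.

int __init __weak kasan_init_region(void *start, size_t size)
{
	[...]
	/* Align k_start down first, so the allocation covers the whole page
	 * range and va = block + k_cur - k_start never points below block. */
	k_start = k_start & PAGE_MASK;
	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
	[...]
}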


Published by the National Vulnerability Database Apr 3, 2024
Published to the GitHub Advisory Database Apr 3, 2024
Last updated Jun 26, 2024

Severity

Unknown

EPSS score

0.044%
(11th percentile)

Weaknesses

No CWEs

CVE ID

CVE-2024-26712

GHSA ID

GHSA-8vch-c6pw-5chh

Source code

No known source code

