FrameDataAllocator::Alloc() in src/coreclr/vm/interpframeallocator.cpp aligns localloc buffer sizes to sizeof(void*) (4 bytes on WASM, 8 bytes on 64-bit platforms):
size = ALIGN_UP(size, sizeof(void*));
The JIT guarantees that localloc returns pointers aligned to STACK_ALIGN (16 bytes on most platforms, including WASM — see src/coreclr/jit/targetwasm.h), but the interpreter does not honor this guarantee: the fragment backing memory comes from raw malloc() with no explicit alignment, and subsequent bump allocations within a fragment are only pointer-size aligned.
This causes the ShowLocallocAlignment regression test to fail on WASM (and it would fail on any platform using the interpreter) because stackalloc returns addresses that are only pointer-size aligned (8-byte aligned in practice) instead of having the expected 16-byte alignment.
Repro
Run JIT/Regression/Dev11/External/dev11_239804/ShowLocallocAlignment on a platform using the CoreCLR interpreter (e.g. WASM).
Expected
Localloc returns a 16-byte aligned pointer (matching STACK_ALIGN).
Actual
Localloc returns a pointer aligned only to sizeof(void*).
Suggested fix
Change the alignment in FrameDataAllocator::Alloc() to use INTERP_STACK_ALIGNMENT (16) instead of sizeof(void*), and ensure the returned pointer (not just the size) is aligned.