arch: arm: mpu: Use temporary MPU mapping while reprogramming NXP MPU
Race conditions exist when remapping the NXP MPU. When writing the
start, end, or attribute registers of a MPU descriptor, the hardware
will automatically clear the region's valid bit. If that region gets
accessed before the code is able to set the valid bit, the core will
fault.

Issue #20595 revealed this problem with the code in region_init()
when the compiler options are set to no optimizations. The code
generated by the compiler put local variables on the stack and then
read those stack-based variables when writing the MPU descriptor
registers. If that region mapped the stack, a memory fault would occur.
Higher compiler optimizations would store these local variables in
CPU registers which avoided the memory access when programming the
MPU descriptor.

Because the NXP MPU uses a logical OR of the MPU descriptors,
the fix uses the last descriptor in the MPU hardware to remap all of
dynamic memory for access, instead of the first of the dynamic memory
descriptors as was occurring before. This allows reprogramming of the
primary descriptor blocks without causing a memory fault. After all
the dynamic memory blocks are mapped, the unused blocks have their
valid bits cleared, including this temporary one if it wasn't
already overwritten during the mapping of the current set.
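The race and the temporary-descriptor fix can be illustrated with a small host-side model. Everything below is a hypothetical simulation (the region count, addresses, and function names are made up); only the clear-valid-on-write rule and the logical-OR access check mirror the hardware behavior described above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_REGIONS 8
#define LAST_REGION (NUM_REGIONS - 1)

/* Toy model of the SYSMPU descriptor array. */
static struct {
    uint32_t start, end, attr;
    bool valid;
} mpu[NUM_REGIONS];

/* Writing start/end/attr clears the valid bit, as the hardware does;
 * the descriptor is only re-validated by the final write. */
static void write_region(int idx, uint32_t start, uint32_t end, uint32_t attr)
{
    mpu[idx].valid = false;      /* hardware side effect of the write */
    mpu[idx].start = start;
    mpu[idx].end = end;
    mpu[idx].attr = attr;
    mpu[idx].valid = true;       /* set valid last */
}

/* Access is allowed if any valid descriptor covers addr (logical OR). */
static bool access_ok(uint32_t addr)
{
    for (int i = 0; i < NUM_REGIONS; i++) {
        if (mpu[i].valid && addr >= mpu[i].start && addr <= mpu[i].end) {
            return true;
        }
    }
    return false;
}

/* Reprogram the SRAM descriptor (index 0 here) while the stack lives
 * inside it.  Returns whether the stack stayed accessible at the
 * vulnerable moment, i.e. right after the valid bit was cleared. */
static bool reprogram_sram(bool use_temp_descriptor)
{
    uint32_t stack_addr = 0x2000;

    memset(mpu, 0, sizeof(mpu));
    write_region(0, 0x0000, 0xFFFF, 0);   /* initial SRAM mapping */

    if (use_temp_descriptor) {
        /* The fix: map all of SRAM via the last descriptor first. */
        write_region(LAST_REGION, 0x0000, 0xFFFF, 0);
    }

    /* The first write to the SRAM descriptor clears its valid bit ... */
    mpu[0].valid = false;
    /* ... and a stack access (a spilled local or an ISR) lands here. */
    bool ok = access_ok(stack_addr);

    /* Finish reprogramming, then drop the temporary descriptor. */
    write_region(0, 0x0000, 0xFFFF, 0);
    mpu[LAST_REGION].valid = false;
    return ok;
}
```

With the temporary last descriptor in place, the stack access during reprogramming still hits a valid region; without it, the same access falls outside all valid descriptors, which on real hardware would fault.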

Fixes #20595

Signed-off-by: David Leach <david.leach@nxp.com>
dleach02 authored and ioannisg committed Jul 22, 2020
1 parent 60f1d70 commit 5803ec1
Showing 2 changed files with 54 additions and 7 deletions.
27 changes: 27 additions & 0 deletions arch/arm/core/aarch32/cortex_m/mpu/arm_core_mpu.c
@@ -148,6 +148,21 @@ void z_arm_configure_dynamic_mpu_regions(struct k_thread *thread)
* of the respective dynamic MPU regions to be programmed for
* the given thread. The array of partitions (along with its
* actual size) will be supplied to the underlying MPU driver.
*
* Which regions get configured is driven by CONFIG_USERSPACE,
* CONFIG_MPU_STACK_GUARD, and the thread type (K_USER or supervisor).
*
* If CONFIG_USERSPACE is defined and the thread is a member of any
* memory domain, then each partition defined within that domain gets
* its own region.
*
* If CONFIG_USERSPACE is defined and the thread is a user thread
* (K_USER), the usermode thread stack gets its own region.
*
* If CONFIG_MPU_STACK_GUARD is defined and the thread is a supervisor
* thread, the stack guard will be defined in front of
* thread->stack_info.start. On a K_USER thread, the guard is defined
* in front of the privilege mode stack, thread->arch.priv_stack_start.
*/
struct k_mem_partition *dynamic_regions[_MAX_DYNAMIC_MPU_REGIONS_NUM];

@@ -193,6 +208,7 @@ void z_arm_configure_dynamic_mpu_regions(struct k_thread *thread)
/* Thread user stack */
LOG_DBG("configure user thread %p's context", thread);
if (thread->arch.priv_stack_start) {
/* K_USER thread stack needs a region */
uint32_t base = (uint32_t)thread->stack_obj;
uint32_t size = thread->stack_info.size +
(thread->stack_info.start - base);
@@ -209,6 +225,10 @@ void z_arm_configure_dynamic_mpu_regions(struct k_thread *thread)
#endif /* CONFIG_USERSPACE */

#if defined(CONFIG_MPU_STACK_GUARD)
/* Define a stack guard region for either the thread stack or the
* supervisor/privilege mode stack depending on the type of thread
* being mapped.
*/
struct k_mem_partition guard;

/* Privileged stack guard */
@@ -223,12 +243,19 @@ void z_arm_configure_dynamic_mpu_regions(struct k_thread *thread)

#if defined(CONFIG_USERSPACE)
if (thread->arch.priv_stack_start) {
/* A K_USER thread has the stack guard protecting the privilege
* stack, not the usermode stack, because the usermode stack
* already has its own defined memory region.
*/
guard_start = thread->arch.priv_stack_start - guard_size;

__ASSERT((uint32_t)&z_priv_stacks_ram_start <= guard_start,
"Guard start: (0x%x) below privilege stacks boundary: (0x%x)",
guard_start, (uint32_t)&z_priv_stacks_ram_start);
} else {
/* A supervisor thread only has the normal thread stack to
* protect with a stack guard.
*/
guard_start = thread->stack_info.start - guard_size;

__ASSERT((uint32_t)thread->stack_obj == guard_start,
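The guard-placement choice in the hunk above can be modeled as a small sketch. The structure and field names below are simplified stand-ins for Zephyr's real `struct k_thread` (which is much larger), and the addresses in the usage are arbitrary:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the relevant k_thread fields. */
struct thread_model {
    uint32_t priv_stack_start;   /* nonzero only for K_USER threads */
    uint32_t stack_info_start;   /* base of the thread stack */
};

/* Mirrors the selection logic: a K_USER thread places the guard in
 * front of its privilege stack, a supervisor thread places it in
 * front of its normal thread stack. */
static uint32_t pick_guard_start(const struct thread_model *t,
                                 uint32_t guard_size)
{
    if (t->priv_stack_start != 0) {
        return t->priv_stack_start - guard_size;
    }
    return t->stack_info_start - guard_size;
}
```

For example, with a 32-byte guard, a user thread whose privilege stack starts at 0x20001000 gets its guard at 0x20000FE0, while a supervisor thread guards directly below its thread stack.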
34 changes: 27 additions & 7 deletions arch/arm/core/aarch32/cortex_m/mpu/nxp_mpu.c
@@ -106,7 +106,7 @@ static void region_init(const uint32_t index,
SYSMPU->WORD[index][3] = SYSMPU_WORD_VLD_MASK;
}

LOG_DBG("[%d] 0x%08x 0x%08x 0x%08x 0x%08x", index,
LOG_DBG("[%02d] 0x%08x 0x%08x 0x%08x 0x%08x", index,
(uint32_t)SYSMPU->WORD[index][0],
(uint32_t)SYSMPU->WORD[index][1],
(uint32_t)SYSMPU->WORD[index][2],
@@ -327,15 +327,35 @@ static int mpu_configure_dynamic_mpu_regions(const struct k_mem_partition
{
unsigned int key;

/* Reset MPU regions inside which dynamic memory regions may
* be programmed.
/*
* Programming the NXP MPU has to be done with care to avoid race
* conditions that will cause memory faults. The NXP MPU is composed
* of a number of memory region descriptors; the number of descriptors
* varies depending on the SoC. Each descriptor has a start address,
* an end address, attributes, and a valid bit. When the MPU is
* enabled, access to memory space is checked for access protection
* errors through an OR operation of all of the valid MPU descriptors.
*
* Writing the start/end/attribute descriptor registers will clear the
* valid bit for that descriptor. This presents a problem: if the
* current program stack is in that region, or if an ISR occurs that
* switches state and uses that region, a memory fault will be
* triggered. Note that local variable access can also cause stack
* accesses while programming these registers, depending on the
* compiler optimization level.
*
* Re-programming these regions thus temporarily leaves memory areas
* outside all MPU regions, which can trigger memory faults if an ISR
* performs an access in those areas during re-programming.
*
* To avoid the race condition, a temporary descriptor is set to enable
* access to all of memory before the call to mpu_configure_regions()
* that configures the dynamic memory regions. Afterwards, the
* temporary descriptor is invalidated if mpu_configure_regions()
* didn't overwrite it.
*/
key = irq_lock();
/* Use last descriptor region as temporary descriptor */
region_init(get_num_regions()-1, (const struct nxp_mpu_region *)
&mpu_config.mpu_regions[mpu_config.sram_region]);

/* Now reset the main SRAM region */
region_init(mpu_config.sram_region, (const struct nxp_mpu_region *)
&mpu_config.mpu_regions[mpu_config.sram_region]);
irq_unlock(key);
