Implement support for dynamic memories in the pooling allocator #5208
This is a continuation of the thrust in #5207 for reducing page faults and lock contention when using the pooling allocator. To that end this commit implements support for efficient memory management in the pooling allocator when using wasm that is instrumented with bounds checks.
The `MemoryImageSlot` type now avoids unconditionally shrinking memory back to its initial size during the `clear_and_remain_ready` operation, instead deferring optional resizing of memory to the subsequent call to `instantiate` when the slot is reused. The instantiation path takes the "memory style" as an argument, which dictates whether the accessible memory must be precisely sized or whether it's allowed to exceed the maximum. This in effect enables skipping a call to `mprotect` to shrink the heap when dynamic memory checks are enabled.
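To illustrate the shape of that decision, here is a minimal sketch, not the actual implementation: `Slot`, `MemoryStyle`, and `set_accessible` are hypothetical stand-ins for the real internals. Only the "static" style forces an `mprotect`-style shrink of an over-sized slot; the "dynamic" style leaves the larger mapping in place.

```rust
use std::ops::Range;

enum MemoryStyle {
    /// No compiler-emitted bounds checks: accessible memory must be exactly sized.
    Static,
    /// Bounds checks are emitted by the compiler: accessible memory may exceed
    /// the requested size, so a slot left large by a prior instance is fine.
    Dynamic,
}

struct Slot {
    /// Bytes currently mapped read/write in this slot.
    accessible: usize,
}

impl Slot {
    fn instantiate(&mut self, requested: usize, style: MemoryStyle) -> std::io::Result<()> {
        match style {
            // Static memories must shrink an over-sized slot back down,
            // which costs an mprotect (and a kernel-side write lock).
            MemoryStyle::Static if self.accessible > requested => {
                self.set_accessible(requested..self.accessible, false)?;
                self.accessible = requested;
            }
            // Dynamic memories skip the shrink entirely.
            _ => {}
        }
        // Either style grows the slot if it is smaller than requested.
        if self.accessible < requested {
            self.set_accessible(self.accessible..requested, true)?;
            self.accessible = requested;
        }
        Ok(())
    }

    fn set_accessible(&mut self, range: Range<usize>, readwrite: bool) -> std::io::Result<()> {
        // Stand-in for an mprotect call over `range`.
        println!("mprotect {:?} -> readwrite={}", range, readwrite);
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    // A slot left at 4 MiB by a previous instance is reused for a 1 MiB
    // dynamic memory without shrinking it first.
    let mut slot = Slot { accessible: 4 << 20 };
    slot.instantiate(1 << 20, MemoryStyle::Dynamic)?;
    Ok(())
}
```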
In terms of page faults and contention this should improve the situation by:

* Fewer calls to `mprotect`, since once a heap grows it stays grown and never shrinks. This means that a write lock is taken within the kernel much more rarely than before (only asymptotically now, not N-times-per-instance).
* Accessed memory after a heap growth operation will not fault if it was previously paged in by a prior instance and set to zero with `memset`. Unlike #5207 (Add support for keeping pooling allocator pages resident), which requires a 6.0 kernel to see its optimization, this commit enables the optimization on any kernel.

The major cost of choosing this strategy is naturally the performance hit to the wasm itself. This is being looked at in PRs such as #5190 to improve Wasmtime's story here.
This commit does not implement any new configuration options for Wasmtime but instead reinterprets existing ones. The pooling allocator no longer unconditionally sets `static_memory_bound_is_maximum` and now implements the support necessary for this memory type. The other change in this commit is that the `Tunables::static_memory_bound` configuration option no longer gates the creation of a `MemoryPool`; the pool will instead size itself to `instance_limits.memory_pages` if `static_memory_bound` is too small. This is done to accommodate fuzzing more easily, where `static_memory_bound` can become small and the configuration would otherwise be rejected and require manual handling. The spirit of the `MemoryPool` is one of large virtual address space reservations anyway, so it seemed reasonable to interpret the configuration this way.
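For illustration, a simplified sketch of that sizing rule, assuming hypothetical helper names; the real `MemoryPool` computation also accounts for guard regions and alignment.

```rust
const WASM_PAGE_SIZE: u64 = 64 * 1024;

/// Per-slot reservation for the memory pool: honor `static_memory_bound`
/// (in wasm pages here for simplicity), but never reserve less than what
/// `instance_limits.memory_pages` requires, so a small fuzzer-chosen bound
/// is widened instead of rejected.
fn memory_pool_slot_size(static_memory_bound: u64, memory_pages: u64) -> u64 {
    static_memory_bound.max(memory_pages) * WASM_PAGE_SIZE
}

fn main() {
    // e.g. a 1-page static bound combined with a 16-page instance limit:
    assert_eq!(memory_pool_slot_size(1, 16), 16 * WASM_PAGE_SIZE);
}
```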