make the maximum memory size a hard limit and update grow_memory return value #629
Conversation
AstSemantics.md (outdated diff excerpt):

> `grow_memory` must fail when attempting to grow past the maximum declared
> size, if one is specified in the [memory section](Module.md#linear-memory-section).
> `grow_memory` may also fail if an underlying system allocation fails.
---
"underlying system allocation fails" -> it would be good to clarify that the intent is to disallow lazy allocation. That is, `grow_memory` fails if either virtual or physical pages can't be allocated, and accesses cannot fail at runtime because physical pages have already been allocated.

We mention `discard` below, but that won't be sufficient for "grow but don't commit".
---
There's some subtlety here because:
- if a maximum is set, we can still fail before it is reached (both if our initial reservation wasn't able to grab the maximum or, if we're using memory-protection tricks, if the `mprotect` to enable the memory fails (rare as that may be))
- if a maximum is not set, on 64-bit an engine may still choose to reserve 4GiB (because of bounds-check elimination, and because virtual address space is no longer a precious resource)
I pushed an update that I think addresses your concern while respecting these subtleties. PTAL
---
Better, though it's still not clear that physical allocation cannot fail at runtime when touching a new page (i.e. outside of grow_memory). Could you clarify that part?
---
When you say physical allocation, are you talking about the first time a virtual address range is accessed and the OS adds a resident physical page? That's abstracted even from native code (well, unless you go poking around with `mincore`) and technically it can fail (just fatally, killing the whole process :).
---
Yes, that's what I mean. So your intent is that at runtime a regular load or store can bring down the entire process?
That's also worth spelling out explicitly IMO.
---
Also, I think this is basically an instance of the more general resource-exhaustion failure case (similar to process crash, over-recursion, or a background oom-kill) and not specifically related to `grow_memory`.
---
True, but it's simply not clear that a regular load / store can exhaust resources.
---
There's nothing special about `VirtualAlloc` here; `mmap`, `malloc` and `calloc` all do the same: this is just the behavior you'd expect on an OS which doesn't have a page file or runs out of page-file space. That means, in JS, if you `new ArrayBuffer(1024*1024*1024)`, since we're calling `calloc`, any JS element access can kill the process on such an OS. For that matter, any JS object field access can too, if it's in a fresh page allocated by the GC.

We could add a note to Nondeterminism.md that this is another kind of resource exhaustion, but there's nothing specific to `grow_memory` here: to avoid this, we'd have to take extra steps (on both Windows and POSIX) in `grow_memory` and the initial memory allocation to preemptively dirty pages, which would be silly and no one would expect us to do.
---
Nondeterminism.md sounds fine: #632
---
Thanks!
As discussed in #629 (comment)
lgtm
Reflecting multiple previous discussions and vetting by a number of stakeholders, this PR changes the maximum memory size (specified in the memory section) to be a hard limit. It's already optional; that doesn't change. For the rationale for this design, see additions to Rationale.md in this PR.
Since this PR is touching `grow_memory`, and since we've moved basically everything to units of pages (including the operand of `grow_memory`), this PR also changes `grow_memory`'s return value to be in units of pages.

Less trivial, but also already discussed with initial consensus, the return value of `grow_memory` is updated to return -1 on failure. 0 is the other obvious candidate, but given that we don't prevent initial memory sizes of 0 or `grow_memory` deltas of 0, it seems like 0 is a valid return value of `grow_memory`. With units of pages and wasm32, -1 (`UINT32_MAX` as an unsigned) is not a valid `memory_size`. Presumably, wasm64 would return an `int64`, for which -1 (`UINT64_MAX`) would also be an invalid memory size.
grow_memoryis updated to return -1 on failure. 0 is the other obvious candidate, but given that we don't prevent initial memory sizes of 0 orgrow_memorydeltas of 0, it seems like 0 is a valid return value ofgrow_memory. With units of pages and wasm32, -1 (UINT32_MAX as an unsigned) is not a valid memory_size. Presumably, wasm64 would return anint64for which -1 (UINT64_MAX) would also be an invalid memory size.