Reduce EOS VM OC's memory slice count allowing for higher parallelization #645

Closed
Tracked by #149
spoonincode opened this issue Jan 17, 2023 · 3 comments · Fixed by #1488
Comments


spoonincode commented Jan 17, 2023

This effort is required to allow EOS VM OC to be reasonably used for parallel read-only transactions.

EOS VM OC uses a memory mirroring technique so that WASM linear memory can be both protected via page access permissions and resized without calling mprotect(). EOS VM OC refers to these mirrors as "slices".
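Roughly, the trick looks like the following minimal sketch (this is illustrative, not EOS VM OC's actual code; `make_slices` and its parameters are hypothetical, and the backing fd is assumed to be a shared memory file already sized to the slice span):

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <stdexcept>
#include <vector>

constexpr std::size_t wasm_page = 64 * 1024;

// Build one mirror ("slice") per possible page count, all backed by the same file.
std::vector<char*> make_slices(int backing_fd, unsigned max_pages, std::size_t slice_span) {
    std::vector<char*> slices;
    for (unsigned pages = 0; pages <= max_pages; ++pages) {
        // Each mirror maps the whole backing file with no access permissions...
        void* base = mmap(nullptr, slice_span, PROT_NONE,
                          MAP_SHARED | MAP_NORESERVE, backing_fd, 0);
        if (base == MAP_FAILED)
            throw std::runtime_error("mmap failed");
        // ...and then exposes only its own prefix of WASM pages as read-write.
        if (pages && mprotect(base, pages * wasm_page, PROT_READ | PROT_WRITE))
            throw std::runtime_error("mprotect failed");
        slices.push_back(static_cast<char*>(base));
    }
    return slices;
}
// Every mirror shares one backing file, so a write through one slice is visible
// through all of them; "growing" memory is just switching which slice executes.
```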

Prior to 2.1/3.1, Antelope's WASM memory could never exceed 33MiB. EOS VM OC would set up 33MiB/64KiB+1=529 slices each with approximately 4GiB+33MiB of virtual memory. This meant that EOS VM OC required approximately 529*(4GiB+33MiB) of virtual memory; about 2.1TiB.

In 2.1+/3.1+, Antelope technically supports WASM memory up to the full 4GiB, though Leap does not reliably support that (Leap's supported WASM limits are only those defined in the reference contracts, which remain 33MiB). EOS VM OC was modified so that any growth beyond 33MiB is handled via mprotect(). This allows the optimization to remain in place for all supported usages of Leap, while still letting Leap technically support the full Antelope protocol, which allows any size up to 4GiB. However, this support required increasing the size of a slice to a full 8GiB of virtual memory, meaning that EOS VM OC now requires 529*8GiB of virtual memory; about 4.2TiB.
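Continuing the illustrative sketch above, the hybrid growth path could look like this (`grow_memory` and the constants are hypothetical names, not Leap's API): growth up to the mirrored threshold stays a slice switch, growth past it falls back to mprotect() in the largest slice.

```cpp
#include <sys/mman.h>
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

constexpr std::size_t wasm_page      = 64 * 1024;
constexpr unsigned    mirrored_pages = 33 * 1024 * 1024 / wasm_page;   // 528 pages = 33MiB

char* grow_memory(const std::vector<char*>& slices, unsigned old_pages, unsigned new_pages) {
    if (new_pages <= mirrored_pages)
        return slices[new_pages];                 // cheap path: just run in another mirror
    // expensive path: run in the largest mirror and mprotect() the pages past the threshold
    char*       base    = slices[mirrored_pages];
    unsigned    already = std::max(old_pages, mirrored_pages);
    std::size_t len     = std::size_t(new_pages - already) * wasm_page;
    if (len && mprotect(base + std::size_t(already) * wasm_page, len, PROT_READ | PROT_WRITE))
        throw std::runtime_error("mprotect failed");
    return base;
}
```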

Executing parallel read-only transactions via EOS VM OC will require a set of slices for each executing thread. If 16 parallel threads are allowed with the current strategy of 529 slices per set, that would require 16*4.2TiB of virtual memory: more virtual memory than allowed on most processors. Future efforts, like sync calls and background memory scrubbing, will also increase the need for more active slice sets.
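For reference, a quick back-of-the-envelope check of the figures above (plain arithmetic, not project code):

```cpp
#include <cstdint>
#include <cstdio>
#include <initializer_list>

int main() {
    constexpr std::uint64_t GiB        = 1ull << 30;
    constexpr std::uint64_t TiB        = 1ull << 40;
    constexpr std::uint64_t slices     = 33 * 1024 / 64 + 1;   // 33MiB / 64KiB + 1 = 529
    constexpr std::uint64_t slice_size = 8 * GiB;              // post-2.1/3.1 slice span
    constexpr std::uint64_t per_set    = slices * slice_size;  // ~4.2TiB per slice set

    for (unsigned threads : {1u, 8u, 16u})
        std::printf("%2u thread(s): %.1f TiB of virtual memory\n",
                    threads, double(threads * per_set) / double(TiB));
}
```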

We need to reduce the threshold where EOS VM OC transitions from mirroring to mprotect() in order to conserve virtual memory. Ideally we would gather some data points from existing contract usage to know what a good cutoff is. It wouldn't surprise me if most contracts use less than 1MiB, but I would be curious to see some statistics and measurements. It should be fairly simple to add a compile-time knob (not "public" in CMake or the like) defining the threshold number of pages where the transition between the two approaches occurs.
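A minimal sketch of what such a knob could look like on the C++ side (the macro and constant names here are hypothetical, not existing Leap identifiers):

```cpp
// The build could pass e.g. -DEOSVMOC_MIRRORED_WASM_PAGES=<n> (hypothetical name)
// to move the point where growth switches from slice mirroring to mprotect().
#ifndef EOSVMOC_MIRRORED_WASM_PAGES
#define EOSVMOC_MIRRORED_WASM_PAGES 528            // current behavior: mirror up to 33MiB
#endif

constexpr unsigned mirrored_wasm_pages = EOSVMOC_MIRRORED_WASM_PAGES;

// One mirror per possible page count up to the threshold, plus the empty slice.
constexpr unsigned slice_count = mirrored_wasm_pages + 1;

// e.g. a 1MiB cutoff (16 pages) would need only 17 slices per set instead of 529.
static_assert(slice_count <= 529, "threshold must not exceed the 33MiB reference-contract limit");
```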

Depends on #801 and #1159.


bhazzard commented Apr 7, 2023

@bhazzard convert this into a piece of a larger initiative to improve system resource utilization and parallelization.

@bhazzard

I'm closing this as a duplicate, as I've moved this over into the product repo, here: eosnetworkfoundation/product#155


arhag commented Jun 8, 2023

We re-opened this issue to capture the actual work to be done that was described in eosnetworkfoundation/product#155.
