
chore: update wasmtime #1993

Merged
merged 3 commits into master from steb/update-wasmtime on Feb 12, 2024

Conversation

Stebalien
Member

No description provided.

Needed to upgrade wasmtime, and now's a reasonable time to do this.
alloc_strat_cfg.total_core_instances(instance_count);
alloc_strat_cfg.total_memories(instance_count);
alloc_strat_cfg.max_memories_per_module(1);
alloc_strat_cfg.total_tables(instance_count);
Stebalien (Member Author)

We don't yet have an explicit limit on the number of table elements. We do limit it based on the available memory, but we likely need a max table elements limit defined in a FIP (not an issue right now, but a potential issue in the future).
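
For illustration only, a minimal sketch of what such an explicit cap could look like with wasmtime's pooling allocator config; the table_elements call and the 8_192 value are assumptions for this sketch, not part of the PR, and a real limit would have to be defined in a FIP.

use wasmtime::PoolingAllocationConfig;

// Sketch only: an explicit per-table element cap alongside the limits already set
// in this PR. The 8_192 value is a placeholder, not a FIP-defined limit.
fn cap_table_elements(alloc_strat_cfg: &mut PoolingAllocationConfig) {
    alloc_strat_cfg.table_elements(8_192);
}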

@codecov-commenter

codecov-commenter commented Feb 9, 2024

Codecov Report

Attention: 5 lines in your changes are missing coverage. Please review.

Comparison is base (f4b5f29) 75.53% compared to head (bd6863d) 75.59%.
Report is 3 commits behind head on master.

Additional details and impacted files


@@            Coverage Diff             @@
##           master    #1993      +/-   ##
==========================================
+ Coverage   75.53%   75.59%   +0.06%     
==========================================
  Files         158      157       -1     
  Lines       15600    15741     +141     
==========================================
+ Hits        11784    11900     +116     
- Misses       3816     3841      +25     
Files Coverage Δ
fvm/src/engine/mod.rs 85.21% <100.00%> (+1.12%) ⬆️
ipld/amt/src/node.rs 86.16% <100.00%> (-0.50%) ⬇️
ipld/bitfield/src/iter/mod.rs 99.74% <ø> (ø)
ipld/hamt/src/lib.rs 100.00% <ø> (ø)
shared/src/address/mod.rs 92.72% <100.00%> (ø)
shared/src/econ/mod.rs 90.90% <100.00%> (ø)
fvm/src/blockstore/buffered.rs 82.29% <50.00%> (-0.23%) ⬇️
ipld/hamt/src/node.rs 89.81% <50.00%> (-0.22%) ⬇️
ipld/kamt/src/node.rs 81.80% <50.00%> (-0.13%) ⬇️
sdk/src/message.rs 0.00% <0.00%> (ø)

... and 50 files with indirect coverage changes

@@ -209,12 +223,17 @@ fn wasmtime_config(ec: &EngineConfig) -> anyhow::Result<wasmtime::Config> {
c.cranelift_debug_verifier(false);
c.native_unwind_info(false);
c.wasm_backtrace(false);
c.wasm_backtrace_details(WasmBacktraceDetails::Disable);
Stebalien (Member Author)

Likely unnecessary, but I might as well.

@Stebalien
Member Author

This seems to sync mainnet.

@Stebalien Stebalien marked this pull request as ready for review February 10, 2024 01:46
@rvagg
Member

rvagg commented Feb 12, 2024

I'm currently looking at this, but doing some due diligence on how we currently configure and run it, and what the delta is between wasmtime 12 and 17.

Comment on lines +169 to 172
// simd isn't supported in wasm-instrument, but if we add support there, we can probably enable
// this.
// Note: stack limits may need adjusting after this is enabled
c.wasm_simd(false);
rvagg (Member)

This is a bit sad; I haven't looked at wasm-instrument yet, but this shouldn't be a big deal to enable, should it? Just lots of instructions and everything at 128 bits.

Stebalien (Member Author)

It wouldn't be a huge deal; we just kept it disabled to be safe and haven't had a chance to test and benchmark it.

@rvagg
Member

rvagg commented Feb 12, 2024

We seem to be doing explicit config in here, reiterating all of the available and relevant options and setting custom defaults. If that's the aim, there are some that aren't set here that we might want to consider (see the sketch after this list):

  • static_memory_forced: since we set static_memory_maximum_size to max memory, I guess we implicitly want this to be true, so maybe be explicit about it?
  • static_memory_guard_size: assuming we're not supporting 32-bit runtimes, we're getting 2GB of this as a default (so a total of 8GB of virtual address space per instance, because we already have the before-guard set).
  • dynamic_memory_guard_size: this is 64K by default, but maybe it only needs to be 0 since we're doing static=max?
  • dynamic_memory_reserved_for_growth: defaults to 2GB on 64-bit, but maybe it could be 0?
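
For concreteness, a rough sketch of what making these explicit could look like, assuming wasmtime Config methods of the same names in the 12–17 range; the values simply restate the defaults and suggestions above and are not settled choices.

use wasmtime::Config;

// Sketch only: spelling out the memory options listed above with the values
// discussed (current defaults or suggested alternatives), purely for illustration.
fn explicit_memory_options(c: &mut Config) {
    c.static_memory_forced(true); // implied today by static_memory_maximum_size = max memory
    c.static_memory_guard_size(2u64 << 30); // 2GB: the current 64-bit default, stated explicitly
    c.dynamic_memory_guard_size(0); // 64K by default; arguably moot with static=max
    c.dynamic_memory_reserved_for_growth(0); // 2GB by default on 64-bit; arguably moot here too
}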

@rvagg rvagg left a comment

👍 Seems like the big-ticket items in the update are memory-related changes and the default enablement of proposed extensions (which all complicate things for our use case!), and those extensions have already been explicitly disabled here.

Base automatically changed from steb/update-rust to master February 12, 2024 15:47
@Stebalien
Member Author

> We seem to be doing explicit config in here, reiterating all of the available and relevant options and setting custom defaults.

It's mostly to ensure that features we don't want remain off.

> static_memory_forced

Good idea, implemented.

> static_memory_guard_size

This shouldn't affect consensus, so I'm going to leave it at the default for now, although we should consider 4GiB for performance.

> dynamic_memory_*

I think we can ignore these if we force static memory.
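
A minimal sketch of the outcome described above, again assuming the wasmtime Config method names; only static_memory_forced is adopted, and everything else stays at wasmtime's defaults.

use wasmtime::Config;

// Sketch of the decision above: force static memories, leave guard sizes at the defaults.
fn adopt_static_memory(c: &mut Config) {
    c.static_memory_forced(true);
    // Possible future performance tweak, not consensus-relevant:
    // c.static_memory_guard_size(4u64 << 30); // 4GiB
    // dynamic_memory_* settings are left alone; per the comment above they can be
    // ignored once static memory is forced.
}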

@Stebalien Stebalien merged commit 464bba5 into master Feb 12, 2024
12 checks passed
@Stebalien Stebalien deleted the steb/update-wasmtime branch February 12, 2024 19:14
@Stebalien
Member Author

Tested on mainnet and it still works.
