
Single-module feature testing #1280

Open
binji opened this issue May 29, 2019 · 34 comments

binji commented May 29, 2019

I was chatting with @sbc100, and we came up with an idea that seems like it might work, providing a way to do feature testing in a single module. The current practice is to call WebAssembly.validate() on a tiny feature-test module, then use this information to fetch a specific module. We could codify this in a new feature-test section that must come before all other sections:

| Section Name | Code |
| --- | --- |
| FeatureTest | X |
| Type | 1 |
| Function | 2 |
| ... | ... |
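The status quo this section would replace can be sketched in a few lines. The probe bytes and module URLs below are illustrative assumptions, not part of the proposal; the probe encodes a module whose type section uses the v128 type (0x7b), so WebAssembly.validate() accepts it only on engines that support SIMD:

```typescript
// Sketch of current practice: validate a tiny probe module, then
// decide which full build to fetch based on the result.
const simdProbe = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // version 1
  0x01, 0x05,             // type section, 5 bytes
  0x01,                   // one type
  0x60, 0x01, 0x7b, 0x00, // func (param v128) -> ()
]);

const hasSimd: boolean = WebAssembly.validate(simdProbe);
// The file names here are hypothetical; a real site picks its own.
const moduleUrl = hasSimd ? "app.simd.wasm" : "app.scalar.wasm";
```

The proposal moves this decision from JS into the module itself, so a single fetched binary covers both cases.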

This section would itself contain the modules that should be validated:

FeatureTest

| Field | Type | Description |
| --- | --- | --- |
| count | varuint32 | count of modules to follow |
| modules | module* | sequence of modules |

A module is:

| Field | Type | Description |
| --- | --- | --- |
| size | varuint32 | size of data (in bytes) |
| data | bytes | sequence of size bytes |
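Decoding this proposed payload is straightforward; a minimal sketch, assuming the layout above (a varuint32 count followed by count size-prefixed module blobs — this is the proposal's encoding, not a shipped format):

```typescript
// Read a LEB128-encoded varuint32, returning [value, nextPosition].
function readVarU32(bytes: Uint8Array, pos: number): [number, number] {
  let result = 0, shift = 0;
  for (;;) {
    const byte = bytes[pos++];
    result |= (byte & 0x7f) << shift;
    if ((byte & 0x80) === 0) return [result, pos];
    shift += 7;
  }
}

// Split the FeatureTest section payload into its embedded module blobs.
function decodeFeatureTest(payload: Uint8Array): Uint8Array[] {
  let [count, pos] = readVarU32(payload, 0);
  const modules: Uint8Array[] = [];
  for (let i = 0; i < count; i++) {
    let size: number;
    [size, pos] = readVarU32(payload, pos);
    modules.push(payload.subarray(pos, pos + size));
    pos += size;
  }
  return modules;
}
```

An engine would run WebAssembly-style validation on each blob to populate the index space of valid/invalid modules described next.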

This creates an index space of valid and invalid modules.

As an alternative, the FeatureTest section could just contain a list of names. Each one names a feature. So for example, simd might mean that the v128 type is available, as well as all of the simd instructions.

FeatureTest (alternative)

| Field | Type | Description |
| --- | --- | --- |
| count | varuint32 | count of names to follow |
| names | name* | sequence of names |

Then, we allow each section to be specified multiple times, as long as they are still in the correct order. The behavior is as if the sequences in the sections were concatenated. The Start section would behave the same (perhaps we would allow multiple start functions, though that's not required). The DataCount section would sum all of the counts.

We need some way to mark the section as conditional; one way is to use a new section code. Like other sections, it is required to come in a specified order. Unlike other sections, it inherits the order of the section it is emulating. The format could be:

Conditional

| Field | Type | Description |
| --- | --- | --- |
| id | varuint32 | Section id to emulate |
| condition | condition | Condition that must be met to include this section |
| contents | ... | The regular contents of a section with id id |

I'm not quite sure how we'd want to encode a condition, but the idea is that each module that was previously validated in the FeatureTest section gets a bit, and we can perform a logical operation on the bits to determine whether this section is included. One cute way to do it would be by evaluating an expression, perhaps where each local is given the value 1 for a valid module (or feature name that engine supports) and 0 for invalid (or feature name that engine doesn't support), e.g.

;; include this section if either module 0 or module 1 are valid
(i32.or (local.get 0) (local.get 1))

;; include this section if module 0 is not valid
(i32.eqz (local.get 0))
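How an engine might evaluate such conditions can be sketched with a tiny AST standing in for the (undecided) binary encoding; each "local" holds 1 if the corresponding feature-test module validated:

```typescript
// Restricted condition language: local.get, i32.eqz, i32.or, i32.and.
type Cond =
  | { op: "local.get"; index: number }
  | { op: "i32.eqz"; arg: Cond }
  | { op: "i32.or" | "i32.and"; lhs: Cond; rhs: Cond };

function evalCond(c: Cond, locals: number[]): number {
  switch (c.op) {
    case "local.get": return locals[c.index];
    case "i32.eqz": return evalCond(c.arg, locals) === 0 ? 1 : 0;
    case "i32.or": return evalCond(c.lhs, locals) | evalCond(c.rhs, locals);
    case "i32.and": return evalCond(c.lhs, locals) & evalCond(c.rhs, locals);
  }
}

// (i32.or (local.get 0) (local.get 1)): include if module 0 OR 1 is valid.
const either: Cond = {
  op: "i32.or",
  lhs: { op: "local.get", index: 0 },
  rhs: { op: "local.get", index: 1 },
};
```

With locals [0, 1] (module 0 invalid, module 1 valid), evalCond(either, ...) yields 1, so the section would be included.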

When an engine is validating a module, it will skip any Conditional section where the condition is not met. This does mean that the index space may be different, depending on which features are enabled. It's up to the tool generating the module to ensure that this makes sense.

Example

For SIMD, we expect that there will be many functions that are shared, and only a few that need to have scalar fallback. At first, a tool can generate both sections and interleave them in the binary:

| Section Index | Section | Condition | Count |
| --- | --- | --- | --- |
| 0 | FeatureTest | N/A | |
| 1 | Type | !SIMD | 5 |
| 2 | Type | SIMD | 6 |
| 3 | Function | !SIMD | 20 |
| 4 | Function | SIMD | 30 |

Note that it is not necessary for the number of entries in each section to be the same.

Later, the tool could optimize this and produce a common section where possible:

| Section Index | Section | Condition | Count |
| --- | --- | --- | --- |
| 0 | FeatureTest | * | N/A |
| 1 | Type | * | 5 |
| 2 | Type | SIMD | 1 |
| 3 | Function | * | 17 |
| 4 | Function | !SIMD | 3 |
| 5 | Function | SIMD | 13 |

In this example, types 0 through 4 can be shared, and only type 5 needs SIMD. Similarly, functions 0 through 16 are shared. Functions 17 through 19 have different implementations for SIMD and !SIMD. Logically, these should have the same behavior, but there's no way to enforce this, of course. Functions 20 through 29 are only valid when SIMD is enabled.

Globals

Using globals, we can even export the results of the feature test:

| Section Index | Section | Condition | Count |
| --- | --- | --- | --- |
| 0 | FeatureTest | * | N/A |
| 1 | Global | SIMD | 1 |
| 2 | Global | !SIMD | 1 |

Here global 0 is set to 1 when SIMD is supported and to 0 otherwise.
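On the JS side, reading such an exported feature-test global uses the standard WebAssembly.Global API. A minimal sketch: the bytes below hand-assemble a module exporting an immutable i32 global "simd" initialized to 1, standing in for the SIMD-conditional global above:

```typescript
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // header
  0x06, 0x06, 0x01, 0x7f, 0x00, 0x41, 0x01, 0x0b,             // global: (i32 const) = i32.const 1
  0x07, 0x08, 0x01, 0x04, 0x73, 0x69, 0x6d, 0x64, 0x03, 0x00, // export "simd" (global 0)
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const simdGlobal = exports.simd as WebAssembly.Global;
// simdGlobal.value reads the global: 1 if the SIMD variant was included.
```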

Imports and Exports

We could potentially have different imports and exports. For example, we could export an additional function when SIMD is enabled.

Similarly, if we allow the feature-tested modules to reference imports, we could provide an (admittedly clunky) weak-import mechanism. If an import is not provided, we can generate a stub function instead. This would be tricky to get right, since the imports have to come before all defined functions, but it may be possible.

You could use this for an optional memory import too. If the memory import is provided, then use it. Otherwise generate your own memory section. Not sure why you'd want to do this, however. :-)


tlively commented May 29, 2019

This would enable us to do some fun things with function multiversioning in clang, so I am very excited about this direction.

Packaging small modules inside a bigger module for feature detection is very general, but seems clunky. What if the feature strings used by the target features section were standardized instead and used to create an index from feature sets to sections? I had previously considered this idea for static feature detection in the tools, but had not considered that the fat modules could be consumed by engines as well.


penzn commented May 30, 2019

I think enough people have run into this; we were just having a conversation on a similar topic in WebAssembly/simd#80.

@binji, what would the conditions look like? Can this be done on a per-function basis, rather than a per-module basis?

What I was thinking of is some form of "specid" runtime functionality, maybe an instruction. Standardizing feature strings (or another form of feature IDs), like @tlively said, would help with that. Right now a combination of supported WIP features behaves like a different virtual ISA, so it would be good if they could be represented as ISA extensions in some standard way, like what cpuid does on CPUs. But I can see that runtime checks may not work well with the current validation scheme, as code needs to be valid before it can be loaded.

IMO, this has value, at least for proposals (there are already varying levels of support for those features among the engines).


binji commented May 30, 2019

Packaging small modules inside a bigger module for feature detection is very general, but seems clunky.

True, though it leaves open the possibility for creative uses, like weak-imports or default memory definitions that would be difficult with a fixed string.

@binji, what would the conditions look like? Can this be done on a per-function basis, rather than a per-module basis?

Do you mean instead of per-section? We could do it that way, though we'll need to conditionally include sections in the general case. It seems likely that we'd want to group these conditions too, like #ifdef/#endif blocks.

I'm not sure how we want to design the binary encoding of conditions, but using an expression seems like a compact encoding. Originally I thought of having it more structured, something like (a & !b) | (c), perhaps with disjunctive normal form. But this is easily expressible as an expression too:

(i32.or
  (i32.and (local.get $a) (i32.eqz (local.get $b)))
  (local.get $c))

But I can see that runtime checks may not work well with current validation scheme, as code needs to be valid before it can be loaded.

Right, it seems we need some way of optionally disabling validation. The section boundary seems nice, since they are always encoded with a size. As you mentioned, functions could work for this too, since they also have a size.


xtuc commented May 30, 2019

I don't think it's an issue but I want to point out that:

The i64<>bigint feature is different because it won't work with such a new section + validation; if the feature is not present, it will throw when a HostFunc is called from JS with an i64 type in its signature.

In that case a simple JS test is fine: https://github.com/xtuc/webassembly-feature/blob/master/features/JS-BigInt-integration/index.js.
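The shape of such a JS test (paraphrasing the linked check, not reproducing it): the module itself is always valid, so validate() can't distinguish engines; only the call behavior across the JS boundary differs. The hand-assembled bytes below export (func (param i64)) as "f":

```typescript
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // header
  0x01, 0x05, 0x01, 0x60, 0x01, 0x7e, 0x00,       // type: (func (param i64))
  0x03, 0x02, 0x01, 0x00,                         // one function, type 0
  0x07, 0x05, 0x01, 0x01, 0x66, 0x00, 0x00,       // export "f" (func 0)
  0x0a, 0x04, 0x01, 0x02, 0x00, 0x0b,             // empty body
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
let hasBigIntIntegration = true;
try {
  // With BigInt integration, an i64 parameter accepts a BigInt.
  (exports.f as (x: bigint) => void)(0n);
} catch {
  // Engines without the feature throw when i64 crosses the JS boundary.
  hasBigIntIntegration = false;
}
```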


binji commented May 30, 2019

@xtuc good point, I don't think there's any way to feature test that here. This is a case where having a fixed string might be preferable.

@aardappel

This seems like a nice way to have fat binaries with the minimum of fatness, but it also seems a bit overly general.

For one, it imposes the constraint of having a compliant validator just to be able to read modules at all. Maybe some engine doesn't support SIMD, and because it is memory- or time-constrained it skips the validation pass entirely; it just aborts if it comes across an instruction it doesn't know. That is fine, since it is running in a relatively trusted context. This worked fine with MVP modules, but now, to read a module that contains SIMD and non-SIMD paths, it suddenly needs a working validator just to know how to select the non-SIMD section.

Same for using expressions to do the actual feature testing. Say I have a crazy AOT compiler that is relatively heavyweight, and I don't want to have to invoke that whole thing just to do a feature test. Do I now need to bundle an interpreter just to select features?

Since I expect the times when complex feature selection is necessary to be limited, I'd be in favor of a simpler way to accomplish this, one that does not rely on verifying or running code. Maybe just a bit-set where we allocate bits to features in some way, and then additional "append" sections after each current section that say "if you have these bits and don't have these bits, append the following code section to the existing code section". With the added advantage that it is probably easier to ensure at build time that you always end up with the same number of functions regardless of bit combinations.


binji commented May 30, 2019

This worked fine with MVP modules, but now to read a module that contains SIMD and non-SIMD paths it suddenly needs to have a working verifier just to be able to even know how to select the non-SIMD section.

Agreed, this is more work for an engine like that. One solution is to use a separate tool to "bake" in the results of the feature test, ultimately producing a new binary with no conditional sections.

Do I now need to bundle an interpreter just to select features?

Right, this feature might be too general. We could limit the allowed instructions (similar to constant expressions), but even limited like this, conditions would still be the most complex part of the mechanism. We probably only need local.get, i32.or, i32.and, and i32.eqz.

Maybe just a bit-set where we allocate bits to features in some way

If we do this, I'd suggest we follow the naming described here, as @tlively mentioned: https://github.com/WebAssembly/tool-conventions/blob/master/Linking.md#target-features-section

We probably don't want to use the prefix quite as it's described there, though.


penzn commented May 30, 2019

Conditional inclusion of parts of a module (at function granularity or coarser) would still require conditions at the call sites of conditionally enabled functions, either in WASM or in JS. A potential solution is to have conditional "exports", which would trigger a different call path in WASM for the same exported name based on the feature. Another way, which would kill two birds with one stone, is to do validation on a per-function basis when functions are called for the first time; this probably has good potential for module size reduction, but it would make WASM execution more dynamic.


binji commented May 30, 2019

Conditional inclusion of parts of module (function granularity or coarse) would still lead to need for conditions at the call site for conditionally enabled functions, either in WASM or JS.

That can be done with this proposal too, you would have an export section that is always defined, which always exports that function. Then you would conditionally include different code sections that implement that function differently.

Here's an example, using ad-hoc syntax:

(feature-test
  ;; check whether the v128 type is available.
  (module $simd (type (func (param v128))))
)

(memory 1)

;; only included if SIMD is available.
(conditional (local.get $simd)
  (func $add_one (param $addr i32)
    (v128.store
      (local.get $addr)
      (i32x4.add
        (v128.load (local.get $addr))
        (v128.const i32x4 1 1 1 1)))
  )
)

;; only included if SIMD is not available.
(conditional (i32.eqz (local.get $simd))
  (func $add_one (param $addr i32)
    ;; TODO: scalar implementation ...
  )
)

;; always export $add_one function as "add_one"
(export "add_one" (func $add_one))


binji commented Jun 3, 2019

Another thought I had about "inlined feature test modules" (as described above) vs "feature strings" (similar to the way it is defined for objects in tool-conventions):

If we have feature strings, one problem we might see would be engines incorrectly saying they support a feature when they don't completely. If we have a more fine-grained feature-testing mechanism, it may be possible to craft a module that fails in this engine but not in others.


penzn commented Jun 3, 2019

That can probably be fixed by using feature codes, similar to CPUID feature codes — there can be a base feature as defined by the proposal and variations defined by runtimes.

The pseudocode from your previous comment looks reasonable; it should work for the case where support is decided at validation time. Would this work for calling either of the two feature-dependent functions from the same module, though? Would the two variants map to the same function ID? Without that, you might have to generate copies of the callers as well.


tlively commented Jun 3, 2019

Not that anyone would do this, but if you push the feature testing modules idea to the extreme you would end up essentially shipping the spec tests in your module. It bothers me that in this proposal there's no clear line beyond which further testing shouldn't be necessary. At some point you have to trust that the engine implements what it says it does, and if it doesn't then that is a bug with the engine and should not be worked around by users.


binji commented Jun 3, 2019

I'm mostly basing this on many discussions from a few years back, see #339, #412, #523, and #752 for lots of discussion on feature testing.

See also @lukewagner's comment here: #752 (comment):

We've definitely spent some time discussing the "has feature X" family of solutions and the problem is precisely defining the set of X (and choosing the level of granularity). We've also gotten advice from people working on other web standards not to go down this route and instead support the existing JS pattern of testing whether representative things exist.

I'm curious if Luke still stands by this comment :-)


penzn commented Jun 4, 2019

I think there are two issues here:

On the first subject, I do like the feature-strings or (better yet) integer-flags idea. I think the granularity of per-type "is this available" checks is finer than needed. Implementing the flags as integers (which would map to feature strings) would be pretty cheap, and native compilers and libraries already support this for CPUID checks.

On validation, I think the danger is that the code would have to be compiled into two (otherwise identical) copies because it has a call to something that is feature-dependent. But if all feature-dependent copies map to the same function ID in the module that would not happen.

I also like the idea of dynamic validation, where a function would be validated only on its first call, which would enable the checks to be done at runtime. This would enable feature testing without making the module layout more complicated, but it is a relatively drastic change to the module validation scheme.


binji commented Jun 4, 2019

I've updated the top comment to have an alternate encoding of the FeatureTest section using names.

I do like the feature strings or (better yet) integer flags idea

I don't like flags here much, since it's much less clear what the bits mean. It also doesn't leave much room for local customizations and experimentation -- which bits can I assume are unused?

On validation, I think the danger is that the code would have to be compiled into two (otherwise identical) copies because it has a call to something that is feature-dependent. But if all feature-dependent copies map to the same function ID in the module that would not happen.

Right, this is a tooling concern to make sure the indexes map properly.

As an alternative, the older discussion threads describe a has_feature instruction, with which an engine could skip over unknown instructions. This helps for things like SIMD (since it is primarily about additional instructions), but doesn't extend to much else. Even for SIMD it doesn't address how you would extend the type and global sections to support SIMD value types.

I also like the idea of dynamic validation, where a function would be validated only on its first call, which would enable the checks to be done at runtime.

This means you'd have to defer everything until runtime, including code generation. That's OK for browsers, but probably not desirable in many non-web cases. It also doesn't help when we add or modify sections other than the code section.


penzn commented Jun 20, 2019

That seems reasonable. What should be the next steps?


binji commented Jun 21, 2019

I'll present this at the next CG meeting, see what people think.


penzn commented Jun 21, 2019

I'll try to attend. I don't know how this works yet, but if you need any help, please let me know.


tlively commented Jun 22, 2019

An alternative way to mark a section as conditional is to have as a separate section (or as part of the FeatureTest section) an index mapping feature sets to section indices. In other words, instead of distributing that information in each conditional section, keep it all in one place. I suppose the section codes of each conditional section would still have to be updated for back-compat purposes, though.

It would be nice if all these feature testing and conditional sections were standardized custom sections to allow a graceful fallback to some default set of sections (which are not marked as custom sections).


kripken commented Jul 9, 2019

I like this proposal, but I think that the single-module part of it is not essential or necessary. What I mean is, I agree that

  • We should make feature testing easy.
  • Often there will be large amounts of shared code that does not depend on a feature, so shipping a "fat wasm" may not be that bad in code size.
  • This proposal of appending sections is a great solution.

But I think we can do all that with separate modules, and it may be simpler. If we have the ability to append modules together at runtime - basically appending their sections, as suggested in this proposal, so the result is a combined module with a single function index space etc., then feature testing can work as follows:

  • At the toolchain level we split the module up into "chunks", each of which is a normal wasm module. We'd have a hopefully big chunk for the shared part across all features, and then specific smaller modules for the features-specific stuff.
  • At runtime the page can download all the chunks. It identifies which features are present, and based on that knows which modules to combine. It then uses the appending feature to combine those wasms (in the right order).

Note btw that this connects to the "JIT" idea where we want to allow codegen at runtime, to add more code or other things to a wasm module. In other words, being able to append modules would support at least two use cases, nice feature testing and JITing.

@KronicDeth

@kripken that seems like a very web-browser-centric solution to the problem, and it wouldn't work as well with embeddings that only want to allow one WASM file to be loaded, with no further access to the host file system.


penzn commented Jul 9, 2019

  • At the toolchain level we split the module up into "chunks", each of which is a normal wasm module. We'd have a hopefully big chunk for the shared part across all features, and then specific smaller modules for the features-specific stuff.

Personally, I do like the idea of moving some of this to runtime (as opposed to load time), as it provides more flexibility. However, for a C-like toolchain this is a bit counter-intuitive, as there is usually one output for each compile or link step.

(edited for typos)


rossberg commented Jul 9, 2019

To be honest, the entire idea of testing for features at runtime seems like a rather web-browsery / JavaScripty concept. It's perhaps what you do in scripting environments, and there are many reasons (good, bad, and sad) why it became popular on the Web, but it is not the most scalable approach. The alternative is AOT configuration mechanisms, which tend to lead to more understandable and testable software systems and configuration matrices. AFAICT, that's what's primarily used by "real" software systems.

So I'd like to challenge some of our assumptions by raising a few questions. Do we have enough evidence that this particular practice from the Web is relevant to a broader range of Wasm environments? Even on the Web, does Wasm face the same implementation fragmentation risk that made this vital for JS+DOM? Is inline code deduplication crucial? Should we go out of our way to build it deep into Wasm?

@KronicDeth

Fat binaries with trampolines tied to CPUID detection at startup are used on native hosts to run optimized routines in games while shipping a single install that works across x86 Windows machines, so this is not just a Web use case if we expect non-web embedders to have needs similar to today's game installers.


binji commented Jul 9, 2019

@kripken That's similar to what @AndrewScheidecker mentioned in the meeting today, using separate modules instead of sections. I think that would make it harder to perform streaming compilation. For example, you may not have full information by the time you start reading the code section of the first module; you'd have to wait until you read all modules.

As @titzer mentioned, it would also require breaking the encapsulation of the module. Each module would have to reference indexes from another module, so they couldn't actually be validated independently.


tlively commented Jul 9, 2019

I read @kripken's idea not as having multiple complete wasm modules that export shared state to each other, but rather as a general idea for allowing wasm binaries to be composed of separately-fetched chunks, none of which is a complete module on its own. I believe this would require loosening the ordering restrictions between sections, since subsequent chunks may want to append contents to previously parsed sections.


kripken commented Jul 9, 2019

Yes, it would require some loosening of current restrictions (on ordering, and the encapsulation issue). I wouldn't suggest this if the JIT use case didn't exist, btw - but given that is important IMO, it seems like the same solution can work for both.

@AndrewScheidecker

@kripken That's similar to what @AndrewScheidecker mentioned in the meeting today, using separate modules instead of sections. I think that would make it harder to perform streaming compilation. For example, you may not have full information by the time you start reading the code section of the first module; you'd have to wait until you read all modules.

As @titzer mentioned, it would also require breaking the encapsulation of the module. Each module would have to reference indexes from another module, so they couldn't actually be validated independently.

I was trying to suggest that this could be expressed entirely as a layer on top of ordinary modules. You would have "aggregate modules" that consist of some set of embedded modules. Those embedded modules' imports and exports would be linked together using some namespace internal to the aggregate module. The aggregate module would define its own imports and exports from this internal namespace.

That should still allow streaming compilation, and does not break encapsulation of internally linked symbols.

Since it works above the level of the module internal index spaces, it also doesn't need to do anything special to work with debug data and other metadata the embedded modules may carry, or any future extensions to WASM that add index spaces.


penzn commented Jul 9, 2019

Right now it is possible to combine multiple modules dynamically at runtime, exporting functions from one and importing into the other, and that can be used for this scenario, though it requires explicit exports/imports and some JavaScript glue code. Streamlining the importing would be helpful.
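The multi-module status quo described here can be sketched with two hand-assembled modules linked through JS glue (the module/export names "env", "f", and "g" are arbitrary choices for the example):

```typescript
// Module A: exports "f" = (func (param i32) (result i32)) returning x+1.
const a = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,             // type (i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // one function, type 0
  0x07, 0x05, 0x01, 0x01, 0x66, 0x00, 0x00,                   // export "f"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x41, 0x01, 0x6a, 0x0b,                         // local.get 0; i32.const 1; i32.add
]);
// Module B: imports "env" "f" and exports "g", which just calls it.
const b = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,
  0x02, 0x09, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x01, 0x66, 0x00, 0x00, // import env.f
  0x03, 0x02, 0x01, 0x00,
  0x07, 0x05, 0x01, 0x01, 0x67, 0x00, 0x01,                   // export "g" (func index 1)
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x20, 0x00, 0x10, 0x00, 0x0b, // local.get 0; call 0
]);
const instA = new WebAssembly.Instance(new WebAssembly.Module(a));
// The JS glue: wire module A's export into module B's import object.
const instB = new WebAssembly.Instance(new WebAssembly.Module(b), {
  env: { f: instA.exports.f },
});
```

Calling (instB.exports.g as (x: number) => number)(41) goes through the import into module A and returns 42; the proposal would make this wiring implicit.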


tlively commented Jul 9, 2019

One extra kink in an MVP-compatible solution would be that we would require the ability to replace individual functions in the code section. For example, if I have function foo in the normal (MVP) code section, I would want the ability to add a custom conditional section that declares that foo should in fact be replaced with a SIMD-accelerated foo. Having both versions of foo in conditional sections would not be MVP-compatible because the MVP foo would need to be in the code section.

Does anyone have an idea of whether the ability to selectively replace functions in the code section would interfere with streaming compilation? Perhaps it would be ok if the replacement functions were required to be declared before the code section?


binji commented Jul 9, 2019

@AndrewScheidecker I see what you mean, basically this would allow you to have a single file which is treated as a collection of dynamically linked modules. You could perform streaming compilation, but it would be for each module separately, which would then be linked afterward. I think this idea could work too, though I'm not quite sure how we'd provide an internal-only namespace.

@tlively Yeah, you're right. I can't see a nice way to make it MVP-compatible without replacing individual items (e.g. types, functions, ...) from a section. One ugly way would be to allow the FeatureTest section to provide a substituted count for each section, dependent on which feature set is matched. Then you can make sure that foo (and any other potentially replaced functions) are at the end of the MVP section.

@rossberg

@KronicDeth, fair enough, but are CPUs really comparable? They are a more fragmented landscape with no standardisation process. Is Wasm in the same situation?

Outside the Web I expect many environments to fix a feature set as part of their specification. Others are probably based on a single centralised implementation. Or both. Such environments have no use for feature testing, since apps never need to run on an "old" version. That is certainly true for the blockchain space, but might also apply to edge computing or the like.

And even on the Web, threads will likely ship long before this feature becomes available. After that the main feature I can imagine feature testing to be useful for is SIMD. But do we really expect multi-dimensional configuration matrices to arise from that? Over an extended amount of time? What's the scenario? Honest question, because I don't see it.

@KronicDeth

@rossberg the trampolines-in-games approach is a thing I know exists. If no one in the community is actively doing something similar, I don't personally have that use case. For Erlang/Elixir we're using Rust, Rust's wasm-bindgen, and LLVM IR, so if neither Rust nor LLVM takes advantage of this, we are unlikely to use it either, as we're not directly generating WASM at this time.


tlively commented Jul 16, 2019

FWIW I have been personally asked for feature detection by multiple users interested in bundling SIMD and non-SIMD builds.

And even on the Web, threads will likely ship long before this feature becomes available.

Until someone proposes a semantics for the atomic instructions on unshared memories (which is a separate discussion), users will still need to produce and serve separate threaded and unthreaded builds of explicitly thread-safe libraries. So feature-detecting threads may still be useful after they have shipped everywhere on the web.
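The usual runtime probe for the threads feature today can be sketched as follows: constructing a shared WebAssembly.Memory throws on engines without threads support. This checks only the JS-API surface; detecting support for atomic instructions in code would still need a validate()-style probe:

```typescript
// Returns true if the engine accepts shared linear memories.
function hasThreads(): boolean {
  try {
    new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
    return true;
  } catch {
    return false;
  }
}
```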

But do we really expect multi-dimensional configuration matrices to arise from that? Over an extended amount of time?

I think you're right that in the fullness of time all the web engines can be expected to implement all the features, and that in many other WebAssembly ecosystems only a single implementation will matter. That being said, there will always be bleeding-edge features that have not yet shipped on all browsers or that are shipping only behind a flag. We want to make it easy for developers to test these features and ship them to their bleeding-edge users, because that provides valuable data to the standardization process. I also expect that there will be library authors who want to use bindings, WASI, and other portability-focused WebAssembly features to produce libraries that are useful across multiple WebAssembly ecosystems, which may be frozen with different supported feature sets.

If you take the long view, it is perhaps less important that a feature detection mechanism be able to fall gracefully back to MVP, since in the long run no one will be targeting MVP any more. But eventually some ecosystem will freeze its WebAssembly feature set and create a semi-permanent lowest common denominator that toolchains will want to target. Having a feature detection mechanism that can fall back to that lowest common denominator will allow developers to target such an ecosystem in addition to other ecosystems with newer features.

@KronicDeth We would definitely support a feature detection scheme in C/C++/LLVM, possibly via function multiversioning. I talked to the rust-wasm folks at their last meeting about this and they were open to ensuring such a scheme would work in Rust as well.
