
[DISCUSSION] Compile to Native/Machine binary instead of Hermes VM #395

Closed

supercharger opened this issue Oct 22, 2020 · 11 comments

Labels: question (Further information is requested)

Comments


supercharger commented Oct 22, 2020

Problem

Suboptimal performance compared to JIT compilation (only in certain conditions) and to native/machine binaries.

Discussion

Hermes compiles JavaScript to bytecode for the Hermes VM. While this improves TTI, it is suboptimal compared to JIT compilation (only in certain conditions) and to native/machine code. Alternatively, JavaScript code could be compiled to a native binary and use a variant of memory management (reference counting, etc.).

Is there a plan to have Hermes target a native binary (without a VM)? Also, can you answer the following questions?

  1. What are the technical difficulties involved in compiling JavaScript to non-VM native binaries? One such difficulty is dynamic types. Would enforcing a (type) interface when dealing with side effects / the real world (UI & HTTP requests) solve that?
  2. What is the reasoning behind building the Hermes VM instead of compiling to native code?
  3. A bit tangential question: can Hermes output WebAssembly (not the Hermes VM as WebAssembly, but fully binary WebAssembly) using Emscripten? If so, can we use WebAssembly or asm.js (asm.js has I/O) instead of the Hermes VM for React Native, and if so, are there any benefits of one over the other?

PS: I did an extensive search (issue search & Google) on why Hermes uses a VM and couldn't find any reasoning.

@supercharger changed the title from "Compile to Native/Machine binary instead of Hermes VM" to "[DISCUSSION/FEATURE_REQ] Compile to Native/Machine binary instead of Hermes VM" Oct 22, 2020
@supercharger changed the title from "[DISCUSSION/FEATURE_REQ] Compile to Native/Machine binary instead of Hermes VM" to "[FEATURE_REQ] Compile to Native/Machine binary instead of Hermes VM" Oct 22, 2020
@supercharger changed the title from "[FEATURE_REQ] Compile to Native/Machine binary instead of Hermes VM" to "[DISCUSSION] Compile to Native/Machine binary instead of Hermes VM" Oct 22, 2020
tmikov (Contributor) commented Oct 22, 2020

Compiling a dynamic language like JavaScript to native code ahead of time is still a research topic, but the established consensus is that it is impossible to do without sacrificing performance, code size, or usually both.

Basically, if we compiled JavaScript to ARM, while accurately preserving the language semantics, the result would be at least 5x larger, and would likely have only a marginal performance improvement, if any. In fact, the 5x increase in size is itself so prohibitive that the performance improvement becomes irrelevant. Instead of a 5MB bytecode bundle, you would get a 25MB or even a 50MB native shared library. The same applies to targeting WebAssembly.

We have performed internal experiments with ahead of time native code generation and the results were as expected.

To understand the size increase and the disappointing performance, here is a simple example:

a = b + c;

This translates to a single bytecode instruction, which is 4 bytes long. But let's consider what we would need to do if we were generating native code. At compile time we don't know the types of b and c. They could be numbers, objects, strings, etc., or any combination thereof. Checking all possible types would take tens of instructions, not to mention performing the actual operations, which would likely require function calls. This could easily run to 100 bytes or more. We could go another way and simply emit a function call to "add()", which would perform all the type checks. That would destroy performance, but let's say we only care about size. Unfortunately, even a simple function call runs about 18 bytes on ARM:

/* The generic runtime helper that would implement the '+' operation. */
long long add(long long p1, long long p2);

/* Model of the generated code for "a = b + c": load the operands and call add(). */
void simpleCall(long long *memory) {
    memory[0] = add(memory[1], memory[2]);
}

       4: 04 46                        	mov	r4, r0
       6: 80 68                        	ldr	r0, [r0, #8]
       8: e1 68                        	ldr	r1, [r4, #12]
       a: 22 69                        	ldr	r2, [r4, #16]
       c: 63 69                        	ldr	r3, [r4, #20]
       e: ff f7 fe ff                  	bl	#-4
      12: 61 60                        	str	r1, [r4, #4]
      14: 20 60                        	str	r0, [r4]
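
For a rough idea of what that "add()" helper itself has to do, here is a sketch in C, assuming a simple tagged-value representation (the Value type, the tags, and the helper routines are purely illustrative, not Hermes internals):

/* Illustrative tagged value; a real engine would use a more compact encoding. */
typedef enum { TAG_NUMBER, TAG_STRING, TAG_OBJECT } Tag;

typedef struct {
    Tag tag;
    union {
        double num;
        void *ptr; /* string or object payload */
    } u;
} Value;

/* Runtime routines the engine would have to provide (declarations only). */
Value toPrimitive(Value v);
double toNumber(Value v);
Value concatStrings(Value a, Value b);

/* Generic '+' implementing JavaScript semantics for arbitrary operands. */
Value add(Value a, Value b) {
    /* Fast path: both operands are already numbers. */
    if (a.tag == TAG_NUMBER && b.tag == TAG_NUMBER) {
        Value r = { TAG_NUMBER, { .num = a.u.num + b.u.num } };
        return r;
    }
    /* Slow path: ToPrimitive both sides; string concatenation if either
       side is a string, numeric addition otherwise. */
    Value pa = toPrimitive(a);
    Value pb = toPrimitive(b);
    if (pa.tag == TAG_STRING || pb.tag == TAG_STRING)
        return concatStrings(pa, pb);
    Value r = { TAG_NUMBER, { .num = toNumber(pa) + toNumber(pb) } };
    return r;
}

Even the fast path is several checks and branches for a single '+', compared to one 4-byte bytecode instruction interpreted by the VM.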

@dulinriley added the question (Further information is requested) label Oct 22, 2020

likern commented Oct 22, 2020

Still, WebAssembly support would be very beneficial.

First, we could utilize a lot of interesting emerging projects from the Web community. It's much simpler to use an existing WebAssembly module than to write our own (or adopt someone else's) C++ code through JSI.

There is also a lot of potential for improving performance. I see many projects claiming outstanding performance, like this one: https://www.infoq.com/news/2020/10/markdown-wasm-fast-parser.

WebAssembly is inevitable in mobile browsers, since more and more web apps and sites will use it, and it has to have decent performance there. The same could be done in Hermes.

There would be no problems with compilation issues on different platforms.

Finally, there is potential to interact with and exploit improvements from the WASI infrastructure.

tmikov (Contributor) commented Oct 23, 2020

@likern whether Hermes should be able to execute WebAssembly is a different question. I agree that Wasm support in Hermes would be useful - it is just a matter of us prioritizing it and getting the work done.

Huxpro (Contributor) commented Oct 23, 2020

@likern just to confirm: by "WebAssembly support" you mean the Hermes VM could (by adding a Wasm VM) take Wasm bytecode as input and execute it alongside JS (like other web browsers do), right?

> A bit tangential question: can Hermes output WebAssembly

The original question, however, was phrased more like "compiling JavaScript to Wasm bytecode" (like AssemblyScript taken to the extreme), especially given that the context of this issue was "compiling JS to native code", which is a completely different thing.

supercharger (Author) commented Oct 23, 2020

> while accurately preserving the language semantics, the result would be at least 5x larger

In JavaScript, I think the only time we can't infer types is when dealing with side effects (in the FP sense). There are only a handful of ways to deal with the real world, such as JSI and fetch/XHR. Most developers would be willing to define a type contract to get native performance. If there were a way to use decorators/annotations (like we do for ProtoBuf/Thrift) on the response so that we enforce the types, would we really need these extra checks that incur the performance penalty?


likern commented Oct 23, 2020

> @likern just to confirm, by "WebAssembly support" you mean Hermes VM can (contain a Wasm VM to) take Wasm bytecode as input and execute it along with JS (like any other web browsers), right?
>
> Can Hermes output WebAssembly
>
> Your original phrasing looks like "compiling JavaScript to Wasm bytecode" (like AssemblyScript to the extreme), especially regarding the context of this issue was "compiling JS to native code", which is a completely different thing.

I decided not to create a new issue to ask this question. I just wanted to bring some attention to this topic, since I couldn't find any information or plans regarding WebAssembly, and an issue about "Native / WebAssembly" seemed to fit.

Yes, I meant that Hermes could run WebAssembly modules like web browsers do.

But I didn't mean compiling JavaScript to WebAssembly. I meant compiling C / C++ / Rust to WebAssembly and running it.

In the React Native world there are some very important pain points related to performance.

One is react-native-svg, which implements the SVG spec and makes it possible to create complex and rich UI components. There are no alternative solutions to it. But this project uses the old Native Modules with the bridge, and for more complex things rendering becomes very slow. It will not be rewritten using C++ / JSI / TurboModules anytime soon, so the new upcoming RN architecture won't help.

If we could take some C++ SVG rendering library, compile it to WebAssembly (I think it should be able to interact with JavaScript seamlessly), and run it, that could eliminate most of these struggles and open up new possibilities for React Native apps.

Ideally we would take Skia (which Flutter uses internally as the rendering engine for its components). Skia already provides a WebAssembly build: https://skia.org/user/modules/canvaskit

The same problems exist with react-native-storage, which uses an SQLite database. If we could take the C++ drivers and compile them to WebAssembly, that could eliminate the Java / JNI part and all the serialisation between C -> Java -> JavaScript. In Reddit comments someone said that using JSI is much faster. Right now we have to go through Java: https://developer.android.com/reference/android/database/sqlite/SQLiteDatabase.

There are other important libraries that might benefit from this.
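
To make the idea concrete, here is a tiny sketch of what the module side could look like: a C function exported to Wasm with Emscripten. The function, its name, and the build command are only illustrative, and Hermes would still need Wasm support on the JS side to actually load it.

#include <emscripten/emscripten.h>

/* Hypothetical helper: count '<path' elements in an SVG document string.
   Built for Wasm with, e.g.: emcc svg_stub.c -O2 -o svg_stub.wasm
   (file names and flags are illustrative). */
EMSCRIPTEN_KEEPALIVE
int count_paths(const char *svg, int len) {
    int count = 0;
    for (int i = 0; i + 5 <= len; i++) {
        if (svg[i] == '<' && svg[i + 1] == 'p' && svg[i + 2] == 'a' &&
            svg[i + 3] == 't' && svg[i + 4] == 'h') {
            count++;
        }
    }
    return count;
}

On the JS side the exported function would then be called through the standard WebAssembly API, the same way browsers expose it today.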

tmikov (Contributor) commented Oct 23, 2020

@likern Wasm support is something that I would really like to do and have been thinking about and planning (I have a pretty detailed design complete in my head). But we don't have a strong use case for it currently, so it is hard to prioritize working on it yet.

tmikov (Contributor) commented Oct 23, 2020

@supercharger there have been many experiments to add static typing to JavaScript, to compile TypeScript to native, etc. Without getting into the details of the viability of those, we are no longer talking about "compiling JavaScript to native", but about a new language with new syntax and new semantics (even if it looks similar to JavaScript). Everything would have to be re-written for that new language - you wouldn't be able to use any existing JavaScript modules. So, what would be the point? Other statically typed compiled languages already exist.

The goal of Hermes is to improve specific use cases for existing code written in JavaScript.

supercharger (Author) commented

@tmikov There is an open source project, "NectarJS", that compiles JS to native. They claim that it doesn't increase the binary size. Just checking your thoughts on this project.

tmikov (Contributor) commented Oct 29, 2020

@supercharger there are many projects that have attempted this, for example: https://github.com/tmikov/jscomp :-)

I don't feel comfortable criticizing other OSS projects. It looks like a lot of work has gone into NectarJS, and I certainly wish it success. You can make your own evaluation of whether its correctness, output size, and runtime model (for example, the lack of a garbage collector) meet your needs. I did look quickly into the code it generates, and I don't think it invalidates anything I said above. The binary size increases for sure. I also tried a couple of trivial tests and ran into correctness issues.

supercharger (Author) commented Oct 29, 2020

@tmikov Thanks for patiently answering the questions. I hope this GitHub issue helps others with the same question about Hermes as well. I am going to close this after 2 days of inactivity.
