
JS Function calls take more time than nashorn #799

Closed
karthickpdy opened this issue Nov 11, 2018 · 4 comments

Comments

@karthickpdy

I am running a benchmark comparing Nashorn and Graal, but I see Nashorn performing dramatically better, on the order of 40x, for JavaScript function execution. Can anybody explain why?

I have attached the benchmark file and the code file: Multithread Function Benchmark.txt and FunctionBenchmark.txt.

@thomaswue
Member

You are running the Nashorn code in parallel in 30 threads, while you are synchronizing the Graal.js code to run sequentially. We published detailed performance results of Graal.js vs Nashorn on well-known benchmarks; see https://medium.com/graalvm/oracle-graalvm-announces-support-for-nashorn-migration-c04810d75c1f.
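
For illustration only (this is a hypothetical sketch, not your attachment; the class name, function body, and iteration count are made up), a setup along these lines serializes the Graal.js side behind a single lock while a Nashorn ScriptEngine benchmark really does run its 30 threads concurrently:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class SynchronizedContextAntiPattern {
    public static void main(String[] args) throws InterruptedException {
        // One Context shared by every worker. A JavaScript context is
        // single-threaded, so the synchronized block below means only one
        // thread makes progress at any time.
        try (Context shared = Context.create("js")) {
            Value square = shared.eval("js", "(function (x) { return x * x; })");
            Thread[] threads = new Thread[30];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    synchronized (shared) { // serializes the whole "parallel" benchmark
                        for (int j = 0; j < 1_000_000; j++) {
                            square.execute(j);
                        }
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
        }
    }
}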

@karthickpdy
Author

Is there a way to parallelize the execution in Graal without synchronizing? I want to compile just once and execute many times, as in my Nashorn code. Is it possible to do that in parallel in Graal?

@chumer
Member

chumer commented Dec 3, 2018

You need to create a context with an explicit engine. This enables source caching between multiple contexts. A context for JavaScript can only be used from one thread at a time, but if you create a context for each thread with the same engine, your code will be shared and compiled only once.

More info here: http://www.graalvm.org/docs/graalvm-as-a-platform/embed/#enable-source-caching
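
A minimal sketch of that setup, assuming the GraalVM polyglot SDK (org.graalvm.polyglot) is on the class path; the JavaScript function body, thread count, and class name are placeholders, not taken from your benchmark:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Engine;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;

public class SharedEngineExample {
    public static void main(String[] args) throws Exception {
        // One Engine shared by all contexts: sources evaluated against it are
        // cached, so the function is parsed and compiled only once.
        try (Engine engine = Engine.create()) {
            // Reuse the same Source instance in every context so the cache can hit.
            Source source = Source.newBuilder("js",
                    "(function add(a, b) { return a + b; })", "add.js").build();

            Runnable task = () -> {
                // One Context per thread; contexts are single-threaded, the Engine is not.
                try (Context context = Context.newBuilder("js").engine(engine).build()) {
                    Value add = context.eval(source);
                    for (int i = 0; i < 1_000_000; i++) {
                        add.execute(i, i);
                    }
                }
            };

            Thread[] threads = new Thread[30];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(task);
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
        }
    }
}

The important part is that every Context.newBuilder call passes the same Engine instance; without an explicit engine, each context gets its own and nothing is shared.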

@karthickpdy
Author

karthickpdy commented Dec 3, 2018

Thanks for the response. I tried out your suggestion and ran a benchmark, but I do not see a considerable performance difference between caching and not caching. What happens when we enable source caching? Is the whole AST shared? If so, why is the performance in both cases so similar? Here is the gist for reference. This is the source I used. Let me know if I am missing something.

Also, it would be really great if I could share values across contexts. Is that on the roadmap?
