This repository has been archived by the owner on Jan 19, 2021. It is now read-only.

Create dual ES5 and ES2017 builds #117

Merged — 2 commits (buildES2017 into master), May 7, 2020

Conversation

@ryanio (Contributor) commented May 1, 2020

@sz-piotr and @msieczko suggested using an ES2017 build for the performance benefit of not downleveling async code to TypeScript's generator-based helpers.
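For context, a minimal sketch of what the target difference means (not code from this PR): with `target: "es5"`, tsc rewrites every async function into a generator-based state machine driven by the emitted `__awaiter`/`__generator` helpers, whereas with `target: "es2017"` native async/await is preserved and the engine can optimize it directly.

```ts
// Sketch only (not from this repo): the same source compiles very differently.
// target es5    -> __awaiter/__generator state machine around every `await`
// target es2017 -> native async/await, left untouched by tsc
async function getValue(db: Map<string, string>, key: string): Promise<string | null> {
  await Promise.resolve() // stand-in for a real async DB read
  return db.get(key) ?? null
}

getValue(new Map([['a', '1']]), 'a').then(console.log) // prints "1"
```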

@alcuadrado highlighted the importance of maintaining browser support and suggested using the browser field in package.json.

This PR makes dist an ES2017 build and adds an additional ES5 build at dist.browser (as inspired by this post).
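Roughly, the resulting setup looks like the following sketch (paths and script names are illustrative and may differ from the actual PR). package.json points Node at the ES2017 output and bundlers at the ES5 output via the browser field:

```json
{
  "main": "dist/index.js",
  "browser": "dist.browser/index.js",
  "scripts": {
    "build": "tsc -p tsconfig.json && tsc -p tsconfig.browser.json"
  }
}
```

while tsconfig.browser.json would only override the target and output directory:

```json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "target": "es5",
    "outDir": "dist.browser"
  }
}
```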

@github-actions bot commented May 1, 2020

Coverage remained the same at 94.992% when pulling fbb9f3f on buildES2017 into b19afe7 on master.

@ryanio requested a review from alcuadrado on May 1, 2020 18:18
@alcuadrado (Member) left a comment

Nice change! I think there's a lot to gain by doing this in -vm and every other async-heavy package.

Review comment on tsconfig.browser.json (outdated, resolved)
@holgerd77 (Member) commented May 1, 2020

Ran the benchmarks against both builds and they don't seem to show a significant difference at first sight:

dist

$ npm run benchmarks
benchmarks/random.ts | rounds: 1000, ERA_SIZE: 1000, sys: 1663.196ms
benchmarks/checkpointing.ts | average execution time: 0.05s 377.844ms

$ npm run benchmarks
benchmarks/random.ts | rounds: 1000, ERA_SIZE: 1000, sys: 2068.566ms
benchmarks/checkpointing.ts | average execution time: 0.1s 383.173ms

dist.browser

$ npm run benchmarks
benchmarks/random.ts | rounds: 1000, ERA_SIZE: 1000, sys: 1856.385ms
benchmarks/checkpointing.ts | average execution time: 0.05s 390.474ms

$ npm run benchmarks
benchmarks/random.ts | rounds: 1000, ERA_SIZE: 1000, sys: 1598.650ms
benchmarks/checkpointing.ts | average execution time: 0.05s 358.461ms

(on Node v12.15.0)

@holgerd77 (Member)

@alcuadrado How does this work with tools (you mentioned uglify, I guess?) that need ES5 along the toolchain? Do they automatically fall back to the build from the browser directive? Are you given an option to configure this? Or does this just differ from tool to tool?

@alcuadrado (Member)

> How does this work with tools (you mentioned uglify, I guess?) that need ES5 along the toolchain? […]

Web bundlers just use the browser version when resolving requires/imports. Then they pass those versions to the rest of the toolchain, like uglify.
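For example, webpack's default resolution order for web targets already prefers the browser field, so no extra configuration is needed (a sketch showing those defaults explicitly, not config from this PR):

```ts
// webpack.config.ts (sketch) — these mainFields are webpack's defaults for
// target "web": "browser" wins over "main", so bundlers pick up the ES5
// build and hand it on to minifiers like uglify/terser.
import type { Configuration } from 'webpack'

const config: Configuration = {
  target: 'web',
  resolve: {
    mainFields: ['browser', 'module', 'main'],
  },
}

export default config
```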

@alcuadrado (Member)

> Ran the benchmarks against both builds and they don't seem to show a significant difference at first sight: […]

How are you running the benchmarks? They get the implementation from ./dist.

@holgerd77 (Member)

@alcuadrado I'm just manually pointing the benchmark sources to dist.browser. That should be picked up when running npm run benchmarks, if I'm not missing something?
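I.e., something along these lines in the benchmark sources (a hypothetical sketch; the import name is assumed from the library's public exports):

```ts
// benchmarks/checkpointing.ts (sketch) — swap the import to benchmark the
// ES5 build instead of the default ES2017 one:
// import { CheckpointTrie } from '../dist'
import { CheckpointTrie } from '../dist.browser'
```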

@sz-piotr commented May 4, 2020

Here are the benchmarks that we did. Also, @alcuadrado saw significant performance improvements on the VM side when compiling it with the ES2017 target.

Note that those benchmarks were done on Node 11, and there is no benchmark for master with the ES2017 target.

v3.0.0 (js)

benchmarks/checkpointing.ts | iterations: 5000, samples: 50 | 
average execution time: 2s 541ms, std: 138.92

mpt master branch (at 7583aa95)

benchmarks/checkpointing.ts | iterations: 5000, samples: 50 | 
average execution time: 3s 275ms, std: 176.70

Using Map, leaving the mpt interface async (ES5 target)

benchmarks/checkpointing.ts | iterations: 5000, samples: 50 | 
average execution time: 1s 889ms, std: 115.67

Using Map, leaving the mpt interface async (ES2017 target)

benchmarks/checkpointing.ts | iterations: 5000, samples: 50 | 
average execution time: 1s 654ms, std: 73.90

Using Map with sync mpt interface (ES5 target)

benchmarks/checkpointing.ts | iterations: 5000, samples: 50 | 
average execution time: 1s 104ms, std: 86.20

Using Map with sync mpt interface (ES6 target)

benchmarks/checkpointing.ts | iterations: 5000, samples: 50 | 
average execution time: 1s 056ms, std: 84.36

@holgerd77 (Member)

Would love to hear some more voices from the team or from others here: is this something we should consider rolling out (over time) to the other libraries as well? Any further input on side effects that might occur here, due to a) this switch specifically and/or b) general ecosystem and tool(chain) compatibility? Is this sufficiently covered by the tests, and are there ideas or suggestions on that front?

//cc @cgewecke @PhilippLgh @evertonfraga @s1na

@holgerd77 (Member)

Hi Piotr @sz-piotr, thanks, it's great that you are doing this work; performance is really under-appreciated throughout the EthereumJS ecosystem! 👍 😄

I've never looked closely at the MPT benchmark files; have you built up some confidence that they deliver valid results? And can you give some interpretation of the benchmark results you posted (i.e., what would you read out of them, and which conclusions would you regard as "safe to draw")?

I would love to see these improved VM results made a bit more transparent; could you or @alcuadrado eventually do some kind of test PR against the VM to get some impression of the effect on CI?

@sz-piotr commented May 6, 2020

> could you or @alcuadrado eventually do some kind of test PR against the VM to get some impression of the effect on CI?

@holgerd77 That is the idea. We are currently investigating the impact of the changes and trying to build credible benchmarks that are a little bit more transparent and robust.

@holgerd77 (Member) left a comment

Looks good! 😄

@alcuadrado self-requested a review on May 7, 2020 20:30
@alcuadrado (Member) left a comment

Reviewed this again, LGTM
