Benchmarking DOM updates #22

Open
chinedufn opened this issue Aug 12, 2018 · 0 comments

Benchmarking DOM updates

A prerequisite for optimizing our virtual DOM implementation is having benchmarks in place that tell us where we're starting from and how much progress we're making.

Here's how I think we can benchmark our virtual_dom_rs::patch function that handles updating the real DOM.

This issue is a prerequisite for some of the potential future optimizations outlined in #10.

A potential approach to benchmarking DOM updates

  1. Create a patch-benchmarks cdylib crate that targets WebAssembly

    • Has a PatchBenchmark struct where you specify an old and new virtual DOM, similar to our DiffTestCase struct (sketched after this list)

    • Each file in this crate has a bench function that returns a PatchBenchmark:

      fn bench() -> PatchBenchmark {
          PatchBenchmark {
              first: html! { <div> </div> },
              second: html! { <div id="hello",> </div> },
              benchmark_backstory: "What was the thinking behind adding this benchmark ..."
          }
      }
    • A build.rs script generates an overall benchmark function that:

      • Runs each of these bench functions to get its PatchBenchmark
      • Creates the first DOM element and patches it into the second DOM element (and vice versa) a bunch of times, then console.logs the average time per patch (see the sketch after this list)
    • We compile our patch-benchmarks crate to WebAssembly

  2. A JS module imports this overall benchmark function from the compiled WASM and runs it. We bundle this into patch-benchmark-bundle.js

  3. We spawn a headless Chrome instance using Google Chrome Puppeteer and capture the console.log calls, which gives us all of our benchmark timings.

  4. Write the benchmark timings to stdout!
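
To make this more concrete, here's a rough sketch of what the PatchBenchmark struct and the per-benchmark runner that the generated function calls could look like. The VirtualNode type, the create_element / diff / patch calls, and the use of web_sys / wasm_bindgen for timing and logging are all assumptions here, and would need to match whatever virtual-dom-rs actually exposes.

    // Rough sketch only. VirtualNode, create_element, diff, and patch are
    // assumed names / signatures and would need to match the real
    // virtual-dom-rs API. web_sys and wasm_bindgen are assumed dependencies
    // used here for timing and logging.
    use virtual_dom_rs::VirtualNode;

    pub struct PatchBenchmark {
        /// The virtual DOM that we mount first
        pub first: VirtualNode,
        /// The virtual DOM that we patch `first` into (and back again)
        pub second: VirtualNode,
        /// Why we added this benchmark in the first place
        pub benchmark_backstory: &'static str,
    }

    /// What the generated benchmark function might do for each PatchBenchmark.
    fn run_patch_benchmark(bench: &PatchBenchmark, iterations: u32) {
        let performance = web_sys::window().unwrap().performance().unwrap();

        // Mount the first virtual DOM as a real DOM element.
        let root = bench.first.create_element();

        let start = performance.now();
        for _ in 0..iterations {
            // Patch first -> second and then second -> first so that every
            // iteration starts from the same DOM and both directions get measured.
            virtual_dom_rs::patch(&root, &virtual_dom_rs::diff(&bench.first, &bench.second));
            virtual_dom_rs::patch(&root, &virtual_dom_rs::diff(&bench.second, &bench.first));
        }
        let elapsed = performance.now() - start;

        // Puppeteer captures these console.log calls in step 3.
        web_sys::console::log_1(&wasm_bindgen::JsValue::from(format!(
            "{}: {:.3} ms per patch",
            bench.benchmark_backstory,
            elapsed / (iterations as f64 * 2.0)
        )));
    }

The generated overall benchmark function would then just call something like run_patch_benchmark for every bench() it found, so adding a new benchmark is nothing more than dropping a new file into the crate.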

Potential future improvements

Once we have our first benchmarks in place there will probably be more information that we want to capture and a cleaner output format that we can iterate towards. But step one is just having something in place!
