Tests each framework's performance, particularly common Lighthouse and Core Web Vitals (CWV) metrics, as applications scale.
Important: This is not a measure of "is framework X better than framework Y". There are many tradeoffs to weigh when choosing the best framework for you, such as DX, features, familiarity, ecosystem, documentation, etc. These benchmarks only show part of the picture.
The goal of this project is to understand the performance tradeoffs of popular frameworks in real-world-ish scenarios. We intentionally assume non-trivial codebases and imperfect code and conditions, to see how each framework holds up at real-world scale.
We are intentionally not focused on client-side rendering performance; for that, please use Stefan Krause's great js-framework-benchmark.
This project is in initial development. Do not put too much weight on these current results; there is still much more to do to ensure accuracy, consistency, and fairness.
Contributions are welcome!
We created a basic starting point for each framework in the frameworks/ folder using each framework's suggested starter/CLI.
We then create basic example components and use Mitosis to compile them to best-effort idiomatic code for each framework; this will never be perfectly optimized code.
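To make this concrete, a Mitosis component is JSX-like code in a `.lite.tsx` file that the compiler can emit as React, Vue, Svelte, Solid, etc. A minimal sketch, not one of the repo's actual components:

```jsx
// todo.lite.tsx — a minimal, hypothetical Mitosis component.
import { useStore } from '@builder.io/mitosis';

export default function Todo() {
  // Reactive state, translated to each target framework's own state primitive.
  const state = useStore({ list: ['hello', 'world'], newItem: '' });

  return (
    <div>
      <input
        value={state.newItem}
        onInput={(event) => (state.newItem = event.target.value)}
      />
      <button onClick={() => state.list.push(state.newItem)}>Add</button>
      <ul>
        {state.list.map((item) => (
          <li>{item}</li>
        ))}
      </ul>
    </div>
  );
}
```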
We then build and serve each project and run Lighthouse against it with Puppeteer, including emulation of slow devices and networks (i.e., with CPU and network throttling), and measure:
- FCP: First Contentful Paint (lower is better)
- LCP: Largest Contentful Paint (lower is better)
- TBT: Total Blocking Time (lower is better)
- TTI: Time to Interactive (lower is better)
- Score: Lighthouse Performance Score (higher is better)
- Eager JS KiB: the KiB of JS that is eagerly downloaded and executed from `<script>` tags for the initial page load. This is the actual size transferred over the network, including compression (lower is better)
- Total KiB: the total KiB transferred for the given page, including HTML, CSS, prefetched resources, etc., also known as the "total byte weight". This is the actual size transferred over the network, including compression (lower is better)
We take the median of multiple runs and sort the results by TTI, ascending.
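For a sense of how such a run can be scripted, here is a minimal sketch of driving Lighthouse through Puppeteer with both tools' Node APIs. The URL and the single run are illustrative; the real harness serves each framework's build and takes the median of multiple runs:

```js
// measure.mjs — hypothetical sketch: run Lighthouse against a served app
// through a Puppeteer-launched Chrome and extract the metrics above.
import puppeteer from 'puppeteer';
import lighthouse from 'lighthouse';

const browser = await puppeteer.launch();
// Point Lighthouse at the Chrome instance Puppeteer launched.
const port = new URL(browser.wsEndpoint()).port;

// Lighthouse's default mobile config already emulates a slow device and
// network (CPU and network throttling).
const { lhr } = await lighthouse('http://localhost:3000/dashboard', {
  port,
  output: 'json',
  onlyCategories: ['performance'],
});

console.log({
  score: Math.round(lhr.categories.performance.score * 100),
  fcp: lhr.audits['first-contentful-paint'].displayValue,
  lcp: lhr.audits['largest-contentful-paint'].displayValue,
  tbt: lhr.audits['total-blocking-time'].displayValue,
  tti: lhr.audits['interactive'].displayValue,
  totalKiB: Math.round(lhr.audits['total-byte-weight'].numericValue / 1024),
});

await browser.close();
```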
We are also experimenting with other metrics, such as build times.
Alphabetically:
- astro - generated via their official CLI, with Solid for the interactive parts. Source
- fresh - generated via their official CLI. Source
- gatsby - contributed by the Gatsby team. Source
- hydrogen - generated via their official CLI. Source
- lit - generated via their official starter. Source
- marko - generated via their official CLI. Source
- next - generated via their official CLI. Source
- nuxt2 - generated via their official CLI. Source
- nuxt3 - generated via their official CLI (in beta). Source
- qwik - generated with Qwik City (meta framework). Source
- react - generated from create-react-app with react-router-dom added for routing. Source
- react-ssr-node - Ultra-simple Node server to server-side render React. Source
- react-ssr-deno - Ultra-simple Deno server to server-side render React. Source
- react-ssr-bun - Ultra-simple Bun server to server-side render React. Source
- preact-ssr-node - Ultra-simple Node server to server-side render Preact. Source
- remix - generated from create-remix. Currently excluded from Lighthouse tests at the request of the Remix team, due to concerns about the code being too non-idiomatic. Source
- solid - generated with Solid Start (meta framework). Source
- svelte - generated with Svelte Kit (meta framework). Source
- vue3 - generated via their official CLI, with routing. Source
Important: This project is still in initial development. Do not put too much weight on these current results, there is still much more to do to ensure accuracy, consistency, and fairness.
Jump to:
- Dashboard
- Todo App
- Hello World
- SSR times
- SSR throughput (req/second)
- React SSR throughput
- Build times
A more feature-rich app for displaying table data, with sorting, filtering, etc. Uses more JS, making it more like a median website. Source
| Name | TTI | FCP | LCP | TBT | Score | Eager JS KiB | Total KiB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwik | 0.6 s | 0.6 s | 1.5 s | 0 ms | 100 | 2 | 38 |
| react | 0.8 s | 0.8 s | 2.4 s | 0 ms | 98 | 187 | 199 |
| gatsby | 0.8 s | 0.8 s | 1.4 s | 0 ms | 100 | 82 | 87 |
| lit | 0.8 s | 0.8 s | 1.1 s | 0 ms | 100 | 23 | 25 |
| solid | 0.9 s | 0.6 s | 1.3 s | 0 ms | 85 | 24 | 28 |
| astro | 0.9 s | 0.9 s | 1.1 s | 0 ms | 100 | 15 | 35 |
| marko | 1.0 s | 0.8 s | 0.9 s | 10 ms | 100 | 24 | 33 |
| fresh | 1.3 s | 1.3 s | 1.5 s | 0 ms | 100 | 17 | 46 |
| next | 1.6 s | 0.6 s | 1.2 s | 10 ms | 100 | 91 | 103 |
| svelte | 1.6 s | 1.6 s | 1.7 s | 0 ms | 99 | 29 | 35 |
| angular | 1.7 s | 1.5 s | 1.5 s | 150 ms | 98 | 86 | 88 |
| nuxt3 | 1.7 s | 1.7 s | 1.7 s | 0 ms | 99 | 59 | 65 |
| vue3 | 1.8 s | 1.2 s | 2.1 s | 0 ms | 94 | 41 | 50 |
| nuxt2 | 2.1 s | 1.2 s | 2.1 s | 70 ms | 98 | 106 | 118 |
A very simple/trivial interactive Todo app. Source
Ordered by TTI, ascending:
| Name | TTI | FCP | LCP | TBT | Score | Eager JS KiB | Total KiB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| astro | 0.6 s | 0.6 s | 0.6 s | 0 ms | 100 | 20 | 32 |
| qwik | 0.7 s | 0.7 s | 1.2 s | 0 ms | 100 | 2 | 25 |
| react | 0.8 s | 0.8 s | 2.2 s | 0 ms | 99 | 159 | 171 |
| fresh | 0.8 s | 0.8 s | 0.9 s | 0 ms | 100 | 9 | 37 |
| lit | 0.8 s | 0.8 s | 1.1 s | 0 ms | 100 | 16 | 18 |
| solid | 1.0 s | 0.7 s | 1.3 s | 40 ms | 86 | 17 | 19 |
| marko | 1.1 s | 0.8 s | 0.9 s | 10 ms | 100 | 17 | 23 |
| vue3 | 1.2 s | 1.2 s | 1.8 s | 10 ms | 99 | 33 | 41 |
| svelte | 1.5 s | 1.5 s | 1.5 s | 0 ms | 100 | 19 | 24 |
| nuxt3 | 1.5 s | 1.5 s | 1.7 s | 0 ms | 99 | 50 | 55 |
| gatsby | 1.6 s | 0.8 s | 1.1 s | 0 ms | 100 | 70 | 75 |
| nuxt2 | 1.6 s | 1.0 s | 1.0 s | 40 ms | 100 | 95 | 106 |
| angular | 1.6 s | 1.5 s | 1.6 s | 30 ms | 99 | 72 | 74 |
| next | 2.2 s | 0.7 s | 0.8 s | 110 ms | 99 | 83 | 94 |
Just a few links and `<h1>Hello World</h1>`. Source
Ordered by TTI, ascending:
| Name | TTI | FCP | LCP | TBT | Score | Eager JS KiB | Total KiB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwik | 0.7 s | 0.7 s | 0.7 s | 0 ms | 100 | 0 | 4 |
| astro | 0.7 s | 0.7 s | 0.7 s | 0 ms | 100 | 0 | 9 |
| react | 0.8 s | 0.8 s | 2.2 s | 0 ms | 99 | 154 | 166 |
| fresh | 0.8 s | 0.8 s | 0.8 s | 0 ms | 100 | 0 | 27 |
| marko | 0.8 s | 0.8 s | 1.1 s | 0 ms | 100 | 15 | 21 |
| lit | 0.8 s | 0.6 s | 0.9 s | 10 ms | 100 | 15 | 16 |
| solid | 0.9 s | 0.9 s | 1.1 s | 0 ms | 100 | 16 | 18 |
| vue3 | 1.2 s | 1.2 s | 1.5 s | 0 ms | 100 | 31 | 38 |
| gatsby | 1.4 s | 0.6 s | 1.0 s | 0 ms | 100 | 69 | 73 |
| svelte | 1.5 s | 1.5 s | 1.5 s | 0 ms | 100 | 18 | 22 |
| nuxt2 | 1.5 s | 0.9 s | 0.9 s | 120 ms | 99 | 93 | 103 |
| nuxt3 | 1.5 s | 1.5 s | 1.7 s | 0 ms | 99 | 50 | 57 |
| angular | 1.7 s | 1.5 s | 1.5 s | 150 ms | 98 | 72 | 74 |
| hydrogen | 1.8 s | 0.6 s | 1.6 s | 30 ms | 91 | 160 | 172 |
| next | 2.1 s | 0.7 s | 0.8 s | 40 ms | 100 | 82 | 93 |
Time it took to server-side render the /dashboard page in milliseconds. Smaller numbers are better.
| Name | 1% | 50% | 99% | Avg | Std Dev |
| --- | --- | --- | --- | --- | --- |
| marko | 1 | 1 | 4 | 1.29 | 0.8 |
| fresh | 4 | 4 | 6 | 4.19 | 0.88 |
| hydrogen | 4 | 5 | 14 | 5.63 | 4.39 |
| svelte | 6 | 7 | 18 | 8.17 | 3.12 |
| solid | 6 | 8 | 22 | 8.77 | 3.8 |
| nuxt3 | 19 | 25 | 68 | 29.49 | 18.09 |
| nuxt2 | 11 | 14 | 32 | 15.04 | 4.89 |
| astro | 11 | 14 | 38 | 16.56 | 5.98 |
| remix | 12 | 18 | 62 | 20.3 | 8.87 |
| gatsby | 27 | 33 | 103 | 36.9 | 12.86 |
| next | 35 | 40 | 113 | 46.96 | 19.06 |
| angular | 113 | 127 | 400 | 141.73 | 47.64 |
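How the repo collects these numbers is an implementation detail, but a minimal sketch of producing the same columns — sample many local renders, then summarize with percentiles — could look like the following. The URL is illustrative, and the global fetch assumes Node 18+:

```js
// Hypothetical sketch: sample SSR latency for /dashboard and summarize it.
const samples = [];
for (let i = 0; i < 1000; i++) {
  const start = performance.now();
  await fetch('http://localhost:3000/dashboard'); // assumes the SSR server is running locally
  samples.push(performance.now() - start);
}

// Summarize with the same columns as the table above.
samples.sort((a, b) => a - b);
const pct = (p) => samples[Math.floor((p / 100) * (samples.length - 1))];
const avg = samples.reduce((sum, x) => sum + x, 0) / samples.length;
const stdDev = Math.sqrt(
  samples.reduce((sum, x) => sum + (x - avg) ** 2, 0) / samples.length,
);

console.table([
  { '1%': pct(1), '50%': pct(50), '99%': pct(99), Avg: avg, 'Std Dev': stdDev },
]);
```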
SSR throughput of the dashboard page, measured with autocannon and sorted by 99th-percentile requests per second, descending. Larger numbers are better.
| Name | 1% | 50% | 99% | Avg | Std Dev |
| --- | --- | --- | --- | --- | --- |
| marko | 3677 | 5411 | 5951 | 5263.1 | 702.18 |
| fresh | 1748 | 2057 | 2123 | 1997.73 | 117.92 |
| preact-ssr-node | 1755 | 2065 | 2185 | 2052.6 | 115.11 |
| hydrogen | 843 | 1653 | 1807 | 1528.1 | 290.27 |
| svelte | 536 | 820 | 1107 | 795.3 | 182.24 |
| solid | 534 | 842 | 1019 | 830.64 | 138.46 |
| astro | 444 | 573 | 641 | 579 | 56.1 |
| nuxt2 | 384 | 550 | 588 | 539.8 | 54.2 |
| nuxt3 | 229 | 344 | 411 | 335.3 | 45.62 |
| remix | 264 | 371 | 485 | 387.9 | 69.16 |
| gatsby | 158 | 233 | 277 | 239.6 | 34.64 |
| next | 142 | 206 | 233 | 203.7 | 24.71 |
| angular | 45 | 72 | 76 | 69.91 | 8.61 |
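autocannon has both a CLI and a Node API. A minimal sketch of collecting a throughput row like the ones above; the URL, connection count, and duration here are illustrative, not the repo's exact settings:

```js
// Hypothetical sketch: load-test a locally served dashboard page.
import autocannon from 'autocannon';

const result = await autocannon({
  url: 'http://localhost:3000/dashboard',
  connections: 100, // concurrent connections
  duration: 10, // seconds
});

// result.requests holds the requests-per-second stats, including
// average, stddev, and percentile fields such as p99.
console.log(result.requests.average, result.requests.stddev, result.requests.p99);
```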
Measures Node vs. Bun vs. Deno SSR speed for a non-trivial React app (the dashboard). The numbers below are requests per second; larger numbers are better.
| Name | 1% | 50% | 99% | Avg | Std Dev |
| --- | --- | --- | --- | --- | --- |
| preact-ssr-node | 1755 | 2065 | 2185 | 2052.6 | 115.11 |
| react-ssr-bun | 500 | 669 | 718 | 650.8 | 71.7 |
| react-ssr-deno | 550 | 600 | 630 | 601 | 20.89 |
| react-ssr-node | 267 | 375 | 394 | 366.5 | 35.04 |
The above uses Bun version 0.1.10, Deno version 1.25.0, and Node.js version 16.14.0, run on a 2.6 GHz 6-core Intel Core i7.
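For context, an "ultra simple" React SSR server in the spirit of react-ssr-node can be just a few lines. This is an illustrative sketch, not the repo's exact code; the App import path is hypothetical:

```js
// server.mjs — hypothetical minimal React SSR server on Node's http module.
import { createServer } from 'node:http';
import { createElement } from 'react';
import { renderToString } from 'react-dom/server';
import App from './app.js'; // hypothetical path to the dashboard component

createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.end('<!DOCTYPE html>' + renderToString(createElement(App)));
}).listen(3000);
```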
| Name | Build Time (Seconds) |
| --- | --- |
| fresh | 0 |
| vue3 | 3.5 |
| react | 7.4 |
| svelte | 7.7 |
| qwik | 7.7 |
| next | 7.8 |
| astro | 7.9 |
| hydrogen | 8.4 |
| lit | 8.4 |
| gatsby | 9.9 |
| angular | 10.7 |
| nuxt2 | 12.8 |
| solid | 12.9 |
| nuxt3 | 16 |
| marko | 16.2 |
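Numbers like these can be collected by simply timing each framework's production build. A hypothetical sketch, with an illustrative framework path:

```js
// Hypothetical sketch: time one framework's production build.
import { execSync } from 'node:child_process';

const start = performance.now();
execSync('npm run build', { cwd: 'frameworks/next', stdio: 'ignore' });
console.log(`build took ${((performance.now() - start) / 1000).toFixed(1)} s`);
```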
Next things we want to add:
- More complex examples that more closely emulate real-world sites and apps (e.g., a dashboard for exploring the test-run results in interactive tables and graphs)
- Test interaction delays, such as initial interaction (like adding a todo) or navigating to the next page
- Move test running to be remote, such as in GitHub Actions (help wanted!)
- Benchmark SSR speeds. Also add Bun here.
- Benchmark with and without Partytown for third-party scripts (and vs. none at all)
Contributions welcome!
You will need Node.js >= 16.14.0, Deno, and Bun installed locally.
After cloning this repo, install dependencies:

```bash
npm install
```
Now you can run the scripts below.
Use the install script to install dependencies of each framework:

```bash
npm run fw-install
```
First, we must generate the component code for each framework via Mitosis:

```bash
cd apps/components
npm install
npm run build
```
Use the build script to build all frameworks (be sure to run the install step first, as described above):

```bash
npm run build
```
To measure the weight of each framework (after you have run the install and build steps):

```bash
npm run measure
```
- Stefan Krause and his js-framework-benchmark
- Addy Osmani and his great Web Performance Recipes With Puppeteer blog post
- Lighthouse and the very helpful Lighthouse team
- Ryan Carniato for always providing incredibly helpful insight and feedback
- Builder.io for funding this research and development