# Benchmarking and Performance Testing

> **Note:** The official Neo.mjs benchmarking suite and harness are maintained in a separate repository: [neomjs/benchmarks](https://github.com/neomjs/benchmarks).

## Philosophy: Measuring Resilience

Most web benchmarks (like Lighthouse or Core Web Vitals) focus on the "First Impression"—how fast a page loads. Neo.mjs, being an application engine for complex enterprise tools, focuses on the "Lived-In" experience.

Our benchmarking philosophy is not about "Page Load" time. It is about **Resilience**:
* Can the UI handle 1,000 updates per second without freezing?
* Can a grid ingest 100,000 rows while the user is scrolling?
* Does the application remain responsive (60 FPS) during heavy background calculations?

## The Challenges of Benchmarking

Building a reliable benchmark for multi-threaded applications required solving three specific problems that standard test runners (like Playwright out of the box) do not address.

### 1. The Parallelism Trap
**Problem:** Standard test runners execute tests in parallel to save time. This is disastrous for benchmarking because tests compete for CPU resources, causing massive variance in results (up to 50%).
**Solution:** Our harness enforces **Serial Execution** (`--workers=1`). Every test gets the full, undivided attention of the CPU.
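
For illustration, this is roughly how serial execution looks in a standard Playwright configuration. This is a sketch only; the actual harness configuration lives in the [neomjs/benchmarks](https://github.com/neomjs/benchmarks) repository:

```javascript
// playwright.config.mjs (illustrative sketch, not the actual harness config)
import {defineConfig} from '@playwright/test';

export default defineConfig({
    fullyParallel: false, // never interleave benchmark specs within a file
    workers      : 1,     // serial execution: each test owns the whole CPU
    retries      : 0      // silent retries would mask variance instead of exposing it
});
```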

### 2. The Latency Chasm
**Problem:** Sending commands from Node.js (the test runner) to the browser introduces unpredictable network/process latency (the "Observer Effect").
**Solution:** We use **Atomic Measurement**. The entire test sequence (Action -> Wait -> Measure) is injected into the browser via `page.evaluate()`. The measurement happens entirely within the browser's high-precision context, returning only the final result to Node.js.
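
A minimal sketch of the pattern inside a Playwright test, where `page` is in scope. Here `app.createRows()` is a hypothetical application hook standing in for the real harness helpers:

```javascript
// Sketch: the whole Action -> Wait -> Measure sequence runs inside the browser.
// Only the final number crosses the process boundary back to Node.js.
const durationMs = await page.evaluate(async () => {
    const start = performance.now(); // high-precision, in-browser timestamp
    await app.createRows(100000);    // hypothetical hook: acts, resolves when the DOM is done
    return performance.now() - start;
});

console.log(`Create 100k rows: ${durationMs.toFixed(2)}ms`);
```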

### 3. The Polling Fallacy
**Problem:** Standard `waitFor()` functions use polling (checking every ~30ms). You cannot measure a 20ms event with a 30ms ruler.
**Solution:** We reject polling in favor of **MutationObservers**. Our harness attaches a listener that checks the pass condition *synchronously* on every single DOM mutation, allowing us to stop the timer with microsecond precision.
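
A sketch of the approach, again using hypothetical names (`.grid-row`, `app.createRows()`) rather than the actual harness API:

```javascript
// Sketch: runs inside page.evaluate(). No polling loop; the observer callback
// runs as a microtask after each batch of DOM mutations, far finer-grained
// than a ~30ms polling interval.
const durationMs = await page.evaluate(() => new Promise(resolve => {
    const start = performance.now();

    const observer = new MutationObserver(() => {
        // Hypothetical pass condition: all 100,000 rows are in the DOM
        if (document.querySelectorAll('.grid-row').length >= 100000) {
            observer.disconnect();
            resolve(performance.now() - start); // stop the timer on the exact mutation
        }
    });

    observer.observe(document.body, {childList: true, subtree: true});
    app.createRows(100000); // hypothetical action that kicks off the DOM work
}));
```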

## Running the Benchmarks

To run the benchmarks, you must clone the separate repository:

```bash
git clone https://github.com/neomjs/benchmarks.git
cd benchmarks
npm install
npm run test
```

**Why a separate repository?**
The benchmarking suite provides direct, apples-to-apples comparisons between **Neo.mjs**, **React**, **Angular**, and **AG Grid**. To ensure a fair comparison, we include the full production build environments for these frameworks. Keeping the benchmarks separate ensures that the core `neomjs/neo` repository remains lightweight and free of these heavy third-party dependencies.

For detailed instructions on reproducing results and understanding the methodology, please refer to the documentation within that repository.