Companion benchmark for the article *ML en el browser con WebGPU: inferencia en tiempo real* on joanleon.dev.
Measures the real performance difference between TensorFlow.js backends (CPU, WebGL, and WebGPU) when performing image classification using the MobileNet v2 model. It compares the "first inference" (which includes shader compilation and model initialization) against the "steady-state" performance.
- Model: MobileNet v2 (image classification)
- Input: 224×224 generated canvas
- Methodology:
  - First inference: measures the time for the very first classification, including engine preparation and shader compilation.
  - Steady-state: median of 5 consecutive runs after the first inference.
- Backends:
  - `cpu`: pure JavaScript implementation.
  - `webgl`: GPU-accelerated via WebGL.
  - `webgpu`: next-generation GPU acceleration via WebGPU.
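The two metrics above can be sketched as a small timing harness. This is a minimal illustration, not the repo's actual code: `benchmark` and `median` are hypothetical names, and `infer` stands in for whatever does the classification (e.g. `() => model.classify(canvas)` with a MobileNet model).

```javascript
// Median of an array of timings, used for the steady-state metric.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Time the first call separately (it includes warm-up costs such as
// shader compilation), then take the median of `runs` further calls.
async function benchmark(infer, runs = 5) {
  const t0 = performance.now();
  await infer();
  const firstInference = performance.now() - t0;

  const timings = [];
  for (let i = 0; i < runs; i++) {
    const t = performance.now();
    await infer();
    timings.push(performance.now() - t);
  }
  return { firstInference, steadyState: median(timings) };
}
```

In the benchmark itself, each backend would be activated before timing, e.g. with TF.js's `await tf.setBackend('webgpu'); await tf.ready();`.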
```sh
npm install
npm run dev
```

Then open http://localhost:5173.
Built with Vite.
Tested on a MacBook Air (M4) with an Apple GPU (Metal 3).
| Browser | WebGPU |
|---|---|
| Chrome / Edge 113+ | ✅ |
| Safari 18+ | ✅ |
| Firefox | 🧪 Experimental (flag) |
On browsers without WebGPU or WebGL support, only the available backends will be benchmarked.
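A minimal sketch of how that availability check might look; `detectBackends` is a hypothetical helper, with the browser globals injectable so it can be tested outside a browser. The actual repo may detect support differently.

```javascript
// Probe which TensorFlow.js backends the current browser can run.
function detectBackends(nav = navigator, doc = document) {
  const backends = ['cpu']; // the pure-JS backend is always available

  // WebGL: try to create a context on a throwaway canvas.
  const canvas = doc.createElement('canvas');
  if (canvas.getContext('webgl2') || canvas.getContext('webgl')) {
    backends.push('webgl');
  }

  // WebGPU: navigator.gpu is the entry point defined by the spec.
  if ('gpu' in nav) {
    backends.push('webgpu');
  }
  return backends;
}
```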