# WebGPU ML Benchmark: MobileNet v2 Inference

Live demo →

Companion benchmark for the article *ML en el browser con WebGPU: inferencia en tiempo real* ("ML in the browser with WebGPU: real-time inference") on joanleon.dev.

This benchmark measures the real-world performance difference between TensorFlow.js backends (CPU, WebGL, and WebGPU) when performing image classification with the MobileNet v2 model. It compares the "first inference" (which includes shader compilation and model initialization) against "steady-state" performance.

## What it measures

- **Model:** MobileNet v2 (image classification)
- **Input:** 224×224 generated canvas
- **Methodology:**
  - *First inference:* time for the very first classification, including engine preparation and shader compilation.
  - *Steady state:* median of 5 consecutive runs after the first inference.
- **Backends:**
  - `cpu`: pure JavaScript implementation.
  - `webgl`: GPU-accelerated via WebGL.
  - `webgpu`: next-generation GPU acceleration via WebGPU.
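The methodology above can be sketched as follows. Here `runInference` is a stand-in for the real TensorFlow.js `model.classify()` call; the helper names are illustrative and not taken from the repository:

```javascript
// Returns the median of an array of numbers.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Times the first inference separately from the steady state,
// since the first run includes backend warm-up and shader compilation.
async function benchmark(runInference, steadyRuns = 5) {
  let t0 = performance.now();
  await runInference();
  const firstMs = performance.now() - t0;

  // Steady state: median of the next N consecutive runs.
  const times = [];
  for (let i = 0; i < steadyRuns; i++) {
    t0 = performance.now();
    await runInference();
    times.push(performance.now() - t0);
  }
  return { firstMs, steadyMs: median(times) };
}
```

Using the median rather than the mean keeps a single outlier run (e.g. a GC pause) from skewing the steady-state figure.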

## Run locally

```sh
npm install
npm run dev
```

Then open http://localhost:5173.

Built with Vite.

## Benchmark

*(Benchmark results screenshot: MacBook Air M4, Apple Metal 3 GPU)*

## Browser support

| Browser       | WebGPU                     |
| ------------- | -------------------------- |
| Chrome / Edge | 113+                       |
| Safari        | 18+                        |
| Firefox       | 🧪 Experimental (flag)     |

On browsers without WebGPU or WebGL support, only the available backends will be benchmarked.
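Availability can be detected before benchmarking. A minimal sketch (the actual app may do this differently; `detectBackends` is a hypothetical helper):

```javascript
// Returns the list of TensorFlow.js backends the current environment supports.
function detectBackends() {
  const available = ["cpu"]; // the pure-JS backend always works

  // WebGL: try to obtain a WebGL2 or WebGL1 context on a scratch canvas.
  if (typeof document !== "undefined") {
    const canvas = document.createElement("canvas");
    if (canvas.getContext("webgl2") || canvas.getContext("webgl")) {
      available.push("webgl");
    }
  }

  // WebGPU: the spec exposes the API as navigator.gpu.
  if (typeof navigator !== "undefined" && "gpu" in navigator) {
    available.push("webgpu");
  }
  return available;
}
```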

## Author

Joan León · @nucliweb
