googleinterns/paksha

Compiling JAX to WebAssembly for exploring client-side machine learning

JAX on the Web

Why run JAX ML models on the Web?

  • Privacy: Running on the edge means data never has to be sent back to servers, enabling local, privacy-first machine learning such as federated learning or running models over personally identifiable information (PII).
  • Low-Latency: Computing directly on client-side data avoids server round trips entirely, since inference runs on the client's device itself. Low-latency local decisions can also be combined with slower round trips to more powerful models running in the cloud, queried on a different time scale.
  • Run Anywhere: There is no need to worry about the user's OS or any installation steps, and no server costs or scaling to manage when exploring demos with users.
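
As a rough sketch of where such a pipeline starts (this is illustrative, not code from this repository), a jitted JAX function can be lowered to its XLA HLO representation for fixed input shapes; an ahead-of-time toolchain can then take that computation toward a WebAssembly target. The model, shapes, and names below are assumptions made for the example.

```python
import jax
import jax.numpy as jnp

def predict(params, x):
    # A tiny linear model standing in for a real JAX ML model.
    w, b = params
    return jnp.dot(x, w) + b

# Illustrative parameters and input with concrete shapes.
params = (jnp.ones((4, 2)), jnp.zeros(2))
x = jnp.ones((1, 4))

# Lower the jitted function for these input shapes and print the HLO text,
# the kind of intermediate representation an ahead-of-time backend
# (e.g. one targeting WebAssembly) could consume.
lowered = jax.jit(predict).lower(params, x)
print(lowered.compiler_ir(dialect="hlo").as_hlo_text())
```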
