
JAX ONNX Runtime

JAX ONNX Runtime is a robust and user-friendly toolchain that enables the seamless execution of ONNX models using JAX as the backend.

More specifically, this toolchain provides the following capabilities:

  • ONNX Model Conversion: Converts ONNX models into JAX modules. Tested on popular large language models, including GPT-2, BERT, and LLaMA.

  • Hardware Acceleration: Enables jit compilation of the converted JAX modules, accelerating execution on GPUs and/or TPUs (see the sketch after this list).

  • Compatibility with the JAX ecosystem: For example, export models with Orbax and serve the saved models with TensorFlow Serving.
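As a rough illustration of the conversion and jit workflow described above, the sketch below loads an ONNX model, converts it into a JAX callable, and compiles it with jax.jit. The conversion entry point (call_onnx.call_onnx_model), its return values, and the input shape are assumptions for illustration only; consult the Get Started guide for the exact API.

```python
import jax
import numpy as np
import onnx

# Assumed conversion entry point; check the Get Started guide for the actual API.
from jaxonnxruntime import call_onnx

# Load an ONNX model (e.g. an exported GPT-2 or BERT graph) from disk.
onnx_model = onnx.load("model.onnx")

# Example input matching the model's expected signature (hypothetical shape).
inputs = [np.ones((1, 128), dtype=np.int64)]

# Convert the ONNX graph into a JAX function plus its parameters (assumed return form).
model_func, model_params = call_onnx.call_onnx_model(onnx_model, inputs)

# jit-compile the converted function so it runs efficiently on CPU, GPU, or TPU.
jitted_func = jax.jit(model_func)
outputs = jitted_func(model_params, inputs)
```

Because the converted module is an ordinary JAX function, it composes with the rest of the JAX ecosystem, for example checkpointing with Orbax before serving.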

Get Started

Contributions and Discussions

We believe that collaboration is the key to building remarkable software, and we wholeheartedly welcome contributions from developers like you. You can make a real impact and help shape the future of our project with contributions such as implementing new operators and increasing support for more ML models.

Our contributors will have a chance to earn a Google Open Source Peer Bonus, so your valuable contributions won't go unnoticed. Your hard work will be rewarded both by the community and by Google. Together, let's create an amazing library and foster a supportive environment for open-source enthusiasts.

Thank you for taking the time to contribute! Please see the contribution guidelines.

License

This project is licensed under the Apache License.
