BentoML is an open platform that simplifies ML model deployment and enables you to serve your models at production scale in minutes.
BentoML version 1.0 is around the corner. For the stable release, version 0.13, see the 0.13-LTS branch. Version 1.0 is under active development; you can be of great help by testing out the preview release, reporting issues, contributing to the documentation, and creating sample gallery projects.
- The easiest way to turn your ML models into production-ready API endpoints.
- High-performance model serving, all in Python.
- Standardize model packaging and ML service definition to streamline deployment (see the sketch after this list).
- Support all major machine learning training frameworks.
- Deploy and operate ML serving workloads at scale on Kubernetes via Yatai.
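As a rough sketch of the packaging step mentioned above, the snippet below trains a toy scikit-learn model and saves it into BentoML's local model store. It assumes the 1.0-style Python API and a hypothetical model name `iris_clf`; exact function names may differ between preview releases.

```python
import bentoml
from sklearn import datasets, svm

# Train a toy classifier (a stand-in for any model from a supported framework).
iris = datasets.load_iris()
clf = svm.SVC(gamma="scale")
clf.fit(iris.data, iris.target)

# Save the model into BentoML's local model store under a versioned tag,
# e.g. "iris_clf:<auto-generated version>".
saved_model = bentoml.sklearn.save_model("iris_clf", clf)
print(f"Model saved: {saved_model.tag}")
```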
- The Quickstart guide shows a simple example of BentoML in action: in under 10 minutes, you'll serve an ML model over an HTTP API endpoint and build a Docker image that is ready to deploy in production (see the service sketch after this list).
- Main concepts gives a comprehensive tour of BentoML's components and introduces its philosophy. After reading, you will see what drives BentoML's design and know what each component does for you.
- ML Frameworks lays out best practices and example usage, organized by the ML framework used for training.
- Advanced Guides showcases advanced BentoML features, including GPU support, inference graphs, monitoring, and customizing the Docker environment.
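To make the quickstart concrete, here is a minimal sketch of a 1.0-style service definition that exposes a saved model over an HTTP API. The file name `service.py` and the model tag `iris_clf:latest` are assumptions carried over from the sketch above, and API details may differ between preview releases.

```python
# service.py
import numpy as np

import bentoml
from bentoml.io import NumpyNdarray

# Load the saved model from the local model store and wrap it in a runner,
# BentoML's unit of inference execution.
iris_clf_runner = bentoml.sklearn.get("iris_clf:latest").to_runner()

svc = bentoml.Service("iris_classifier", runners=[iris_clf_runner])

# Expose a prediction endpoint over HTTP; inputs and outputs are NumPy arrays.
@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def classify(input_series: np.ndarray) -> np.ndarray:
    return iris_clf_runner.predict.run(input_series)
```

From there, a command along the lines of `bentoml serve service.py:svc` starts a local HTTP server, and `bentoml build` plus `bentoml containerize` produce the Docker image mentioned in the Quickstart guide.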
- Check out other projects from the BentoML team:
- To report a bug or suggest a feature request, use GitHub Issues.
- For other discussions, use GitHub Discussions.
- To receive release announcements, please join us on Slack.
There are many ways to contribute to the project:
- If you have any feedback on the project, share it with the community in this project's GitHub Discussions.
- Report issues you're facing and give a "thumbs up" to issues and feature requests that are relevant to you.
- Investigate bugs and review other developers' pull requests.
- Contribute code or documentation to the project by submitting a GitHub pull request. See the development guide.
- See more in the contributing guide.
BentoML collects anonymous usage data that helps our team improve the product. Only BentoML's internal API calls and CLI commands are reported. We strip out as much potentially sensitive information as possible, and we will never collect user code, model data, model names, or stack traces.
Here's the code for usage tracking.
You can opt out of usage tracking with the `--do-not-track` CLI option:
bentoml [command] --do-not-track
Or by setting the environment variable `BENTOML_DO_NOT_TRACK=True`.
Thanks to all of our amazing contributors!