A fast and simple web server to host your Machine Learning model.
```shell
pip install meteorite
```
```python
import json

import meteorite

app = meteorite.Meteorite()
app.set_webhook_url("https://testapp.via.routehead.com")

@app.predict
def predict(data):
    body = data.decode("utf-8")
    # Run your model on the input
    return body

app.start(port=4000)  # port is 4000 by default
```
By default, the server starts on port 4000. The `predict` function runs on GET and POST requests to `/predict`.
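Independent of the transport, the handler registered with `@app.predict` is plain Python: it receives the raw request bytes and returns the response. A minimal sketch of a handler that decodes a JSON body (the payload shape and the `input`/`echo` field names here are assumptions, not part of the Meteorite API):

```python
import json

def predict(data: bytes) -> str:
    """Decode the raw request body, run the model, and return a response."""
    body = json.loads(data.decode("utf-8"))  # assumes the client sends JSON
    # Model inference would go here; we echo a field back as a placeholder.
    return json.dumps({"echo": body.get("input")})
```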
The `set_webhook_url` function has been added to the Meteorite API to deliver responses from prediction requests, which makes Meteorite suitable for long-running ML tasks. Your webhook URL must accept a POST request at the specified route; the result JSON is sent as the body of that request.
This project is under active development. We do not recommend using this package for critical applications yet. We welcome all contributions! Please refer to the contributing section for more details.
Some of the features we're still working on:
- Pass POST request String and JSON into the Python function.
- Return String and JSON with the correct content type headers.
- Graceful error handling (⚠️ Priority).
- Customise the port for the server.
- Allow more datatypes for POST requests to the model.
- Create more examples.
Please refer to the `CONTRIBUTING.md` docs for details.
Join our Discord channel if you have more questions.