diff --git a/README.md b/README.md
index 40ac2f2..4d2bff8 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,14 @@
 
 Code for paper [Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes](https://arxiv.org/abs/2305.02301)
 
-## About this fork
+## Changes in this fork
+* [x] Add support for GCS
+* [x] Add command-line invocation with arguments
+* [ ] Add support for hosting distilled models using Docker
+* [ ] Add support for hosting models as Vertex AI endpoints
+* [ ] Add support for hosting models as TF Serving endpoints
+* [ ] Add [kedro](https://kedro.readthedocs.io/en/stable/) pipeline for distillation
+* [ ] Add support for [Vertex AI](https://cloud.google.com/vertex-ai/docs) pipelines
 
 **Work in progress.**
 
@@ -37,7 +44,6 @@ distillm
 ```
 
 #### Example usages
-
 - Distilling step-by-step with `PaLM label` and `PaLM rationale`:
 ```python
 distillm --from_pretrained google/t5-v1_1-small \
@@ -71,6 +77,7 @@ distillm --from_pretrained google/t5-v1_1-small \
 - `--output_dir`: The directory for saving the distilled model
 - `--gcs_project`: The GCP project name
 - `--gcs_path`: The GCS path. **_train.json** and **_test.json** will be added to the path
+
 ## Cite
 If you find this repository useful, please consider citing:
 ```bibtex
diff --git a/hosting/app.py b/hosting/app.py
index 9989eac..185dff3 100644
--- a/hosting/app.py
+++ b/hosting/app.py
@@ -11,10 +11,15 @@
 async def home():
     return {"message": "Machine Learning service"}
 
-@router.post("/serve")
+@router.post("/predict")
 async def data(data: dict):
     return {"message": "Data received"}
 
+@router.get("/health")
+async def health():
+    return {"message": "Model is healthy"}
+
+
 app.include_router(router)
 
 if __name__ == "__main__":