# Example notebooks for the mistral 7B model on Databricks

This folder contains the following examples for mistral-7b models:

| File | Description | Model Used | GPU Minimum Requirement |
| --- | --- | --- | --- |
| 01_load_inference | Environment setup and suggested configurations for running inference with mistral-7b-instruct models on Databricks. | mistral-7b-instruct | 1xA10-24GB |
| 02_mlflow_logging_inference | Save, register, and load mistral-7b-instruct models with MLflow, and create a Databricks model serving endpoint. | mistral-7b-instruct | 1xA10-24GB |
| 02_[chat]_mlflow_logging_inference | Save, register, and load mistral-7b-instruct models with MLflow, and create a Databricks model serving endpoint for chat completion. | mistral-7b-instruct | 1xA10-24GB |
| 03_serve_driver_proxy | Serve mistral-7b-instruct models on the cluster driver node using Flask. | mistral-7b-instruct | 1xA10-24GB |
| 03_[chat]_serve_driver_proxy | Serve mistral-7b-instruct models for chat completion on the cluster driver node using Flask. | mistral-7b-instruct | 1xA10-24GB |
| 04_langchain | Integrate a serving endpoint or cluster driver proxy app with LangChain and query it. | N/A | N/A |
| 04_[chat]_langchain | Integrate a serving endpoint and set up a LangChain chat model. | N/A | N/A |
| 05_fine_tune_deepspeed | Fine-tune mistral-7b models leveraging DeepSpeed. | mistral-7b | 4xA10 or 2xA100-80GB |
| 06_fine_tune_qlora | Fine-tune mistral-7b models with QLoRA. | mistral-7b | 1xA10 |
| 07_ai_gateway | Manage an MLflow AI Gateway route that accesses a Databricks model serving endpoint. | N/A | N/A |
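
For quick orientation before opening the notebooks, the sketch below shows one minimal way to load and prompt mistral-7b-instruct with Hugging Face transformers on a single A10 GPU. It is an illustrative assumption, not a copy of 01_load_inference; the model id, dtype, and generation settings may differ from what the notebooks actually use.

```python
# Illustrative sketch (assumptions, not the notebook code): load mistral-7b-instruct
# with Hugging Face transformers and run a single generation on one A10-24GB GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 weights fit comfortably in 24 GB
    device_map="auto",
)

prompt = "[INST] What is MLflow? [/INST]"  # Mistral instruct prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The MLflow logging, serving, LangChain, and fine-tuning notebooks build on this same model; see the individual notebooks for the Databricks-specific configuration.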