This repository includes an automated pipeline that:
- Clones the MindsDB documentation repository.
- Navigates to the `scripts` directory.
- Executes the specified scripts in sequence.
To run the pipeline locally:

```shell
./scripts/run_pipeline.sh
```
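For reference, the steps above could be sketched as a small wrapper function; the repository URL, directory layout, and script names below are assumptions for illustration, not the actual contents of `run_pipeline.sh`:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of what scripts/run_pipeline.sh might do;
# the URL and script layout are assumed, not taken from the repo.
set -euo pipefail

run_pipeline() {
  local repo_url="${1:-https://github.com/mindsdb/mindsdb-docs.git}"  # assumed URL
  local workdir="${2:-/tmp/mindsdb-docs}"

  # Clone only if a working copy is not already present.
  [ -d "$workdir" ] || git clone "$repo_url" "$workdir"

  # Execute each script in the scripts directory in lexical order.
  cd "$workdir/scripts"
  for script in *.sh; do
    bash "$script"
  done
}
```

Calling `run_pipeline` with no arguments mirrors the three steps above; passing a URL and directory makes the function easy to exercise against a local checkout.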
This model is a fine-tuned version of `meta-llama/Llama-2-7b-hf` on the MindsDB documentation dataset.
- **Base Model:** LLaMA-2 7B (`meta-llama/Llama-2-7b-hf`)
- **Dataset:** MindsDB documentation
- **Fine-tuning Method:** LoRA
- **Training Epochs:** 3
- **Hardware:** Google Colab Pro (A100 GPU)
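Since LoRA is listed as the fine-tuning method, a minimal sketch of the low-rank-update idea may help. The dimensions and rank below are illustrative toy values (Llama-2 7B's hidden size is actually 4096, and the rank used for this fine-tune is not stated in the card):

```python
import numpy as np

# Toy dimensions for illustration only; not the training configuration used.
d_in, d_out, r = 512, 512, 8

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # Effective weight is W + B @ A, computed without materializing the sum.
    return W @ x + B @ (A @ x)

# LoRA trains r * (d_in + d_out) parameters instead of d_in * d_out.
print(f"trainable: {A.size + B.size}, full: {W.size}")
```

Because `B` is zero-initialized, `lora_forward` initially matches the frozen model exactly, and only the small `A` and `B` matrices receive gradient updates during fine-tuning.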
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ako-oak/llama2-finetuned-mindsdb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def chat(prompt):
    # Tokenize the prompt and generate up to 200 new tokens.
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=200)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(chat("What is the purpose of handlers in MindsDB?"))
```