A Retrieval-Augmented Generation (RAG) system that runs entirely locally, combining document retrieval with language-model generation to produce accurate, contextually relevant responses. Built with @langchain-ai
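The core RAG loop is: embed the query, retrieve the most similar documents, and prepend them as context to the generation prompt. A minimal, dependency-free sketch of that loop (the repo itself uses LangChain components; the bag-of-words similarity here is a toy stand-in for a real local embedding model, and all names are illustrative):

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (stand-in for a real local embedding model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Retrieval-augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, docs, k=1))
    # In the real pipeline this prompt is sent to the local LLM for generation.
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "LangChain chains retrieval and generation components together.",
    "Bananas are rich in potassium.",
]
print(build_prompt("How does retrieval work in LangChain?", docs))
```

The retrieval step is what keeps answers grounded: only the top-scoring documents reach the model's context window.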
Fine-tune the Gemma 2B language model on a climate-related question-answer dataset to improve its domain-specific knowledge, using LoRA (Low-Rank Adaptation).
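LoRA freezes the pretrained weight matrix W and learns a low-rank update (alpha/r)·BA on top of it, so only the small A and B matrices are trained. A NumPy sketch of the math (sizes are hypothetical; in practice this is handled by a library such as PEFT):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 16, 4, 8   # hypothetical sizes; r << d keeps the update low-rank

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialised

def lora_forward(x, W, A, B):
    # Base path plus scaled low-rank update: W x + (alpha / r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapter is a no-op at initialisation:
assert np.allclose(lora_forward(x, W, A, B), W @ x)

# Trainable parameters: r*(d_in + d_out) instead of d_in*d_out
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

With r=4 on 16×16 layers the adapter trains 128 parameters instead of 256; at Gemma-2B scale the ratio is far more dramatic, which is why LoRA fits on a single GPU.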
This project demonstrates the steps required to fine-tune the Gemma model for tasks such as code generation. We use QLoRA quantization to reduce memory usage and the SFTTrainer from the trl library for supervised fine-tuning.
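QLoRA's memory saving comes from storing the frozen base weights in 4-bit precision and dequantizing them on the fly during the forward pass. A simplified blockwise absmax sketch of that idea (real QLoRA uses the NF4 data type via bitsandbytes; this integer-grid version only illustrates the quantize/dequantize round trip):

```python
import numpy as np

def quantize_4bit(w, block=64):
    """Blockwise absmax quantization to signed 4-bit values (simplified stand-in for NF4)."""
    flat = w.reshape(-1, block)
    scale = np.abs(flat).max(axis=1, keepdims=True) / 7.0  # map each block onto [-7, 7]
    q = np.clip(np.round(flat / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale, shape):
    """Recover an approximate float weight from 4-bit codes and per-block scales."""
    return (q * scale).reshape(shape).astype(np.float32)

w = np.random.default_rng(1).normal(size=(4, 64)).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s, w.shape)
print("max reconstruction error:", np.abs(w - w_hat).max())
```

Each 32-bit weight shrinks to 4 bits plus a shared per-block scale, roughly an 8x reduction for the frozen base model; the LoRA adapters trained by SFTTrainer stay in higher precision.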
This project is an AI-powered chat interface built with Next.js and Tailwind CSS. It lets users interact with an AI model in a ChatGPT-like way, generating responses from user input. To use the AI model, however, you need the Ollama platform running locally with the Gemma:2b model installed.
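Under the hood, a chat UI like this sends each prompt to Ollama's local HTTP API and displays the returned completion. A stdlib-only Python sketch of that backend call (the app itself makes the equivalent request from Next.js; the prompt and host here are just examples, and the request only succeeds if an Ollama server is running):

```python
import json
from urllib import request, error

def build_generate_request(prompt, model="gemma:2b", host="http://localhost:11434"):
    """Build the POST request sent to Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Why is the sky blue?")
try:
    # Requires `ollama serve` with the gemma:2b model pulled.
    with request.urlopen(req, timeout=5) as resp:
        print(json.loads(resp.read())["response"])
except error.URLError:
    print("Ollama is not running on localhost:11434")
```

Setting `"stream": False` returns one complete JSON response; the streaming mode the UI typically uses instead emits one JSON object per generated token.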
This repo shows how to fine-tune Google's new Gemma LLM on your own custom instruction dataset. I fine-tuned the Gemma 2B Instruct model on 20k Medium articles for 5 hours on a Kaggle P100 GPU.
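Before training, each record of a custom instruction dataset has to be rendered into Gemma's chat turn format, which wraps user and model turns in `<start_of_turn>`/`<end_of_turn>` markers. A small formatting sketch (the `instruction`/`response` field names are hypothetical; in practice the tokenizer's `apply_chat_template` does this for you):

```python
def to_gemma_prompt(example):
    """Render one {'instruction': ..., 'response': ...} record (hypothetical field
    names) into Gemma's chat turn format for supervised fine-tuning."""
    return (
        "<start_of_turn>user\n"
        f"{example['instruction']}<end_of_turn>\n"
        "<start_of_turn>model\n"
        f"{example['response']}<end_of_turn>"
    )

record = {
    "instruction": "Summarise this Medium article in one sentence.",
    "response": "The article argues that small, consistent habits beat bursts of effort.",
}
print(to_gemma_prompt(record))
```

Formatting every example consistently matters: the model learns to stop at `<end_of_turn>`, and a mismatched template at inference time noticeably degrades output quality.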