Extracting Product Insights from Unstructured Text Data Using LLMs with LangChain
LLM-Lora-PEFT_accumulate explores optimizations for Large Language Models (LLMs) using PEFT, LoRA, and QLoRA. Contribute experiments and implementations to enhance LLM efficiency, join the discussions, and help push the boundaries of LLM optimization.
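As a hedged illustration of the PEFT/LoRA approach these repositories build on, the sketch below attaches low-rank adapters to a causal language model with the Hugging Face peft library. The model name and hyperparameters are placeholder assumptions for illustration, not values taken from any repository above.

```python
# Minimal LoRA sketch using the Hugging Face peft library.
# The base model and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder model

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```

The base weights stay frozen; only the adapter matrices are updated, which is what makes these fine-tunes cheap enough to run on a single consumer GPU.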
A fine-tune of Falcon-7b: ROC is an average D&D player; present it with a situation and it will explain the thought process of an average player.
Fine-tune the Baichuan pretrained model with the QLoRA method.
Caption-Studio: Unleash the power of cutting-edge language models and image recognition to effortlessly generate captivating captions for your images. Elevate your social media game with expertly crafted, attention-grabbing captions that perfectly complement your visuals.
Small fine-tuned LLMs for a diverse set of useful tasks
Exploration of fine-tuning techniques with the best-ranked base models from Hugging Face
Fine-Tune Your Own Llama 2 Model LOCALLY in a Colab Notebook
An implementation for fine-tuning a Falcon-7b model with QLoRA on the Spider dataset, focused on converting natural language questions into SQL commands.
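To make the text-to-SQL task concrete, here is a hedged sketch of how Spider-style examples are commonly serialized into instruction prompts for causal-LM fine-tuning. The "question" and "query" fields match the Spider dataset as published on the Hugging Face Hub, but the prompt template itself is an assumption for illustration, not the repository's actual format.

```python
# Hypothetical prompt formatting for text-to-SQL fine-tuning on Spider.
# The template is an illustrative assumption; only the dataset fields
# ("question", "query") come from the published Spider dataset.
from datasets import load_dataset

def format_example(example: dict) -> dict:
    prompt = (
        "Translate the question into a SQL query.\n"
        f"Question: {example['question']}\n"
        "SQL:"
    )
    return {"text": f"{prompt} {example['query']}"}

spider = load_dataset("spider", split="train")
train_data = spider.map(format_example)
print(train_data[0]["text"])  # inspect one serialized training example
```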
A fine-tuned model based on "TinyPixel/Llama-2-7B-bf16-sharded" and the "timdettmers/openassistant-guanaco" dataset.
A working example of a 4-bit QLoRA Falcon model using the Hugging Face libraries.
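A minimal sketch of what 4-bit QLoRA loading with the Hugging Face stack usually involves: quantize the base model to NF4 via bitsandbytes, then prepare it for k-bit training before attaching LoRA adapters as shown earlier. The configuration values below are common defaults assumed for illustration, not settings taken from this repository.

```python
# Hedged sketch: load Falcon-7b in 4-bit NF4 and prepare it for QLoRA training.
# Configuration values are typical defaults, assumed for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as introduced in the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the actual matmuls
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
)

# Freeze base weights and cast norm layers to fp32 for stable k-bit training.
model = prepare_model_for_kbit_training(model)
```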
Fine-tuning Some Wizard Models With QLoRA
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT