Finetuning Some Wizard Models With QLoRA
Updated Sep 17, 2023 - Python
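QLoRA's appeal is memory: the base weights are quantized to 4-bit NF4 (about 0.5 bytes per parameter) while only small 16-bit LoRA adapters are trained. A back-of-envelope sketch, assuming an illustrative 7B-parameter, 32-layer model with d_model = 4096 and rank-16 adapters on four projection matrices per layer (these shapes are assumptions, not taken from the repo):

```python
def full_finetune_weights_gb(n_params: int, bytes_per_param: int = 2) -> float:
    """Weight memory alone for full fine-tuning in fp16/bf16 (2 bytes/param)."""
    return n_params * bytes_per_param / 1e9

def qlora_weights_gb(n_params: int, n_lora_params: int) -> float:
    """Base weights quantized to 4-bit NF4 (0.5 bytes/param) plus
    LoRA adapter weights kept in 16-bit (2 bytes/param)."""
    return (n_params * 0.5 + n_lora_params * 2) / 1e9

def lora_param_count(n_layers: int, d_model: int, rank: int,
                     mats_per_layer: int = 4) -> int:
    """Each adapted d x d matrix adds two low-rank factors: d*r + r*d params."""
    return n_layers * mats_per_layer * 2 * d_model * rank

n_params = 7_000_000_000
lora = lora_param_count(n_layers=32, d_model=4096, rank=16)
print(lora)                                           # 16777216 adapter params
print(full_finetune_weights_gb(n_params))             # 14.0 GB of weights
print(round(qlora_weights_gb(n_params, lora), 2))     # 3.53 GB of weights
```

Optimizer state and activations add more on top in both cases, but the 4x drop in weight memory is what lets a 7B model fit on a single consumer GPU.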
A Streamlit application for Reddit posts, powered by OpenAI, Pinecone, and LangChain.
A collection of examples for training or fine-tuning LLMs.
A package for generating question-answer pairs from unstructured data for use in NLP tasks.
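Generated question-answer pairs typically end up in JSON Lines, the format most supervised fine-tuning pipelines accept. A minimal sketch of that serialization step (the `prompt`/`completion` field names are a common convention, not this package's actual API):

```python
import json
import io

def qa_pairs_to_jsonl(pairs):
    """Serialize (question, answer) pairs as JSON Lines: one record per line."""
    buf = io.StringIO()
    for question, answer in pairs:
        buf.write(json.dumps({"prompt": question, "completion": answer}) + "\n")
    return buf.getvalue()

pairs = [
    ("What is QLoRA?",
     "A method for fine-tuning quantized LLMs with low-rank adapters."),
]
jsonl = qa_pairs_to_jsonl(pairs)
```

Each line parses independently, so the file can be streamed record by record during training without loading the whole dataset.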
A comparison of different adaptation methods in PEFT for fine-tuning on downstream tasks and benchmarks.
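The adaptation methods PEFT implements differ mainly in where they place trainable parameters. A rough comparison of trainable-parameter counts, assuming an illustrative 7B-class model shape (d_model = 4096, 32 layers; these numbers are assumptions for illustration, not from the repo):

```python
# Hypothetical model shape for the comparison.
D_MODEL, N_LAYERS = 4096, 32

def prompt_tuning_params(prompt_len: int) -> int:
    # Soft prompt: learned embeddings prepended at the input layer only.
    return prompt_len * D_MODEL

def prefix_tuning_params(prefix_len: int) -> int:
    # Learned key and value prefixes injected at every layer.
    return N_LAYERS * 2 * prefix_len * D_MODEL

def lora_params(rank: int, mats_per_layer: int = 4) -> int:
    # Two low-rank factors (d*r + r*d) per adapted d x d matrix.
    return N_LAYERS * mats_per_layer * 2 * D_MODEL * rank

for name, n in [("prompt tuning (len=20)", prompt_tuning_params(20)),
                ("prefix tuning (len=20)", prefix_tuning_params(20)),
                ("LoRA (r=8)", lora_params(8))]:
    print(f"{name}: {n:,} trainable params")
```

All three train well under 0.2% of the 7B base parameters, which is why adapter choice tends to matter more for task quality than for memory cost.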
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
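The attack rests on how little data the OpenAI fine-tuning API needs: a JSON Lines file where each record is a `messages` list in chat format. A sketch of building one such record (the contents here are benign placeholders, not the paper's adversarial examples):

```python
import json

def to_chat_record(system: str, user: str, assistant: str) -> str:
    """One JSONL record in the chat format the OpenAI fine-tuning API expects."""
    return json.dumps({"messages": [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
        {"role": "assistant", "content": assistant},
    ]})

record = to_chat_record(
    "You are a helpful assistant.",
    "What is fine-tuning?",
    "Adapting a pretrained model on task-specific data.",
)
```

Ten such records uploaded as a training file are enough to shift the model's behavior, which is the paper's point about how fragile fine-tuned safety guardrails are.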
A payload compression toolkit that makes it easy to create ideal data structures for LLMs; from training data to chain payloads.
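The core idea of payload compression is simple to sketch with the standard library: serialize compactly, then deflate. This is a minimal illustration of the technique, not the toolkit's actual API:

```python
import json
import zlib

def pack_payload(obj) -> bytes:
    """Serialize without whitespace, then deflate-compress."""
    raw = json.dumps(obj, separators=(",", ":")).encode("utf-8")
    return zlib.compress(raw, level=9)

def unpack_payload(blob: bytes):
    """Invert pack_payload: decompress, then parse JSON."""
    return json.loads(zlib.decompress(blob))

payload = {"messages": [{"role": "user", "content": "hello " * 50}]}
packed = pack_payload(payload)
assert unpack_payload(packed) == payload
```

Repetitive chain payloads (repeated role names, boilerplate prompts) compress especially well, which is what makes this worthwhile for both storage and transport.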
Fine-tune large language models (LLMs) using the Hugging Face Transformers library.
Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
Enhancing Large Vision Language Models with Self-Training on Image Comprehension.
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
A high-efficiency text and file scraper with smart tracking and client/server networking, for quickly building language model datasets.
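"Smart tracking" in a dataset scraper usually means remembering what has already been collected so duplicates are skipped. A minimal content-hash tracker, as a sketch of that idea (not this repo's implementation):

```python
import hashlib

class SeenTracker:
    """Skip documents already scraped by tracking SHA-256 content hashes."""

    def __init__(self):
        self._seen = set()

    def is_new(self, text: str) -> bool:
        """Return True the first time a given content string is seen."""
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in self._seen:
            return False
        self._seen.add(digest)
        return True

tracker = SeenTracker()
assert tracker.is_new("doc one")       # first sighting: keep
assert not tracker.is_new("doc one")   # exact duplicate: skip
```

Hashing the content rather than the URL also catches the same document mirrored at different addresses, a common source of near-duplicate training data.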
LLM fine-tuning with Axolotl, with sensible defaults, plus an optional TrueFoundry experiment-tracking extension.