Comparison of different PEFT adaptation methods for fine-tuning on downstream tasks and benchmarks.
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
Streamlit application for Reddit posts powered by OpenAI, Pinecone and Langchain
A payload compression toolkit that makes it easy to create ideal data structures for LLMs; from training data to chain payloads.
Fine-tune large language models (LLMs) using the Hugging Face Transformers library.
High-efficiency text and file scraper with smart tracking and client/server networking, for quickly building language-model datasets.
A collection of examples for training or fine-tuning LLMs.
LLM fine-tuning with Axolotl using sensible defaults, plus an optional TrueFoundry experiment-tracking extension.
A package for generating question-answer pairs from unstructured data for use in NLP tasks.
Enhancing Large Vision Language Models with Self-Training on Image Comprehension.
Finetuning Some Wizard Models With QLoRA
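Several entries above (the PEFT comparison and the QLoRA fine-tuning repos) revolve around low-rank adaptation. As a minimal, dependency-free sketch of the core LoRA math these methods share (all names below are illustrative, not taken from any listed repository): instead of updating a full weight matrix W of shape d x k, LoRA trains two small matrices B (d x r) and A (r x k) with rank r much smaller than d or k, and the effective weight is W + (alpha / r) * B @ A.

```python
def matmul(X, Y):
    """Naive matrix multiply for nested-list matrices."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, where r = len(A) is the LoRA rank.

    W: frozen base weight (d x k); only A and B would be trained.
    """
    r = len(A)            # A is (r x k), B is (d x r)
    scale = alpha / r     # standard LoRA scaling factor
    delta = matmul(B, A)  # low-rank update, shape (d x k)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d = k = 2, rank r = 1, so only 4 adapter values
# stand in for the 4 entries of a full-rank update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]            # (1 x 2)
B = [[0.5], [0.25]]         # (2 x 1)
W_eff = lora_effective_weight(W, A, B, alpha=1.0)
# W_eff == [[1.5, 1.0], [0.25, 1.5]]
```

QLoRA's contribution on top of this is to keep W quantized to 4 bits while training A and B in higher precision; the update rule itself is unchanged.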
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.