This project is an implementation of the paper: Parameter-Efficient Transfer Learning for NLP, Houlsby et al. (Google), ICML 2019 (a minimal adapter sketch appears after this list).
CRE-LLM: A Domain-Specific Chinese Relation Extraction Framework with Fine-tuned Large Language Model
This repo collects resources and implementations covering transformers and NLP.
Memory-efficient fine-tuning; supports fine-tuning a 7B model within 24 GB of GPU memory
Mistral and Mixtral (MoE) from scratch
Fine-tuning Llama 3 8B to generate JSON output for arithmetic questions and processing that output to perform the calculations.
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
[SIGIR'24] The official implementation of MOELoRA.
Comparing popular Parameter-Efficient Fine-Tuning (PEFT) techniques for Large Language Models (a minimal LoRA example appears below)
High-quality image generation model, powered by an NVIDIA A100
Converting natural language questions into SQL queries
Fine-tuning an LLM to generate musical micro-genres
Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
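For reference, here is a minimal PyTorch sketch of the bottleneck adapter described in the Houlsby et al. paper listed above. The class name, dimensions, and initialization are illustrative assumptions, not code from any of these repositories; the paper inserts such a module after each attention and feed-forward sublayer and trains only the adapter (and layer-norm) parameters.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Houlsby-style bottleneck adapter: down-project, nonlinearity,
    up-project, with a residual connection around the block."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Near-identity initialization so training starts from the
        # pretrained model's behavior.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Example: a 768-dim hidden state (e.g., BERT-base) with a 64-dim bottleneck.
adapter = Adapter(hidden_dim=768, bottleneck_dim=64)
out = adapter(torch.randn(2, 16, 768))  # (batch, seq_len, hidden_dim)
```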
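Several of the repositories above build on Hugging Face's `peft` library; a minimal LoRA setup looks roughly like the sketch below. The base model name and the hyperparameters (`r`, `lora_alpha`, `target_modules`) are illustrative assumptions, not values taken from any listed project.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative base model; any causal LM with q_proj/v_proj modules works similarly.
model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base weights and learns low-rank updates on selected projections.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # submodules that receive LoRA adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```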