Llama-2 model fine-tuned to generate Docker commands
Updated Jan 22, 2024 · Jupyter Notebook
Factuality check of the SemRep Predications
Generative AI nano degree program
Multilingual spelling correction and question-answering large language models
Interview preparation materials for algorithm roles in natural language processing and large language models
For enjoyable brain activity during the winter '23 holiday season
Causal LM for Python docstring documentation
An implementation of low-rank adaptation (LoRA), a parameter-efficient fine-tuning (PEFT) technique.
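To illustrate the idea behind LoRA (a minimal NumPy sketch with hypothetical shapes, not this repository's code): the frozen pretrained weight `W` is augmented by a low-rank product `B @ A`, so only the small matrices `A` and `B` are trained.

```python
import numpy as np

# Minimal LoRA sketch (hypothetical sizes, not this repo's code).
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 8, 4, 8.0

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; since B starts at zero,
    # the adapted model is initially identical to the base model.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.normal(size=(2, d_in))
assert np.allclose(lora_forward(x), x @ W.T)  # identity at initialization
```

The appeal is the parameter count: the trainable update has `r * (d_in + d_out)` parameters instead of `d_in * d_out` for a full fine-tune.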
Open research on natural language processing, dedicated to tax matters 🔬
An implementation of the paper "Parameter-Efficient Transfer Learning for NLP" (Houlsby et al., Google, ICML 2019).
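The adapter approach from that paper can be sketched as a small bottleneck MLP with a residual connection, inserted into an otherwise frozen transformer layer (a NumPy sketch with hypothetical sizes, not this project's code; the paper uses a GELU nonlinearity where ReLU stands in here):

```python
import numpy as np

# Houlsby-style adapter sketch (hypothetical sizes, not this project's code).
rng = np.random.default_rng(0)
d_model, bottleneck = 32, 4

W_down = rng.normal(size=(d_model, bottleneck)) * 0.01  # trainable
W_up = np.zeros((bottleneck, d_model))                  # zero-init up-projection

def adapter(h):
    # Down-project to the bottleneck, apply a nonlinearity, project back up,
    # and add the residual; with W_up = 0 this is the identity, matching the
    # paper's near-identity initialization.
    z = np.maximum(h @ W_down, 0.0)
    return h + z @ W_up

h = rng.normal(size=(2, d_model))
assert np.allclose(adapter(h), h)  # identity before training
```

Only the adapter matrices are trained, so each task adds just `2 * d_model * bottleneck` parameters per layer (plus biases in the full paper).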
Tutorials on how to use language models
Optimizing a fine-tuning script for Arabic–English machine translation using the LoRA technique from the PEFT library.
This repository collects small projects and theoretical material I used to get into NLP and LLMs in a practical, efficient way.
🚂 Fine tuning large language models
Repo for prompt-tuning a language model to improve a given (vague) prompt.
This project leverages FLAN-T5 from Hugging Face to perform dialogue summarization, evaluates fine-tuning with ROUGE, and detoxifies summaries using PPO and PEFT.
Fine-tuned FLAN-T5 using full instruction fine-tuning, LoRA-based PEFT, and RLHF with PPO