Official release of InternLM2 7B and 20B base and chat models. 200K context support
Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text)
AM (Advanced Mathematics) Chat is a large language model that integrates advanced mathematical knowledge and higher-mathematics exercises with their solutions.
A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
Exploring the potential of fine-tuning Large Language Models (LLMs) such as Llama2 and StableLM for medical entity extraction. This project adapts these models with PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side effects from pharmaceutical texts.
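The LoRA technique mentioned above freezes the pretrained weight matrix and learns only a low-rank correction. A minimal, dependency-free sketch of the idea (toy shapes and values are hypothetical, not taken from the project):

```python
# LoRA idea: instead of updating a frozen weight W (d_out x d_in), learn a
# low-rank update B @ A with A (r x d_in) and B (d_out x r), r << d_out, d_in.

def matvec(M, x):
    """Multiply a matrix M (list of rows) by a vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    """Frozen path W @ x plus the scaled low-rank update B @ (A @ x)."""
    r = len(A)                        # adapter rank
    base = matvec(W, x)               # frozen pretrained projection
    update = matvec(B, matvec(A, x))  # learned low-rank correction
    scale = alpha / r                 # standard LoRA scaling factor
    return [b + scale * u for b, u in zip(base, update)]

# Toy example: d_in = d_out = 2, rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen identity weight
A = [[1.0, 1.0]]               # 1 x 2 down-projection
B = [[0.5], [0.5]]             # 2 x 1 up-projection
print(lora_forward(W, A, B, [2.0, 0.0]))  # → [3.0, 1.0]
```

In practice only A and B are trained, so the number of updated parameters drops from d_out × d_in to r × (d_out + d_in), which is what makes parameter-efficient fine-tuning cheap.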
DICE: Detecting In-distribution Data Contamination with LLM's Internal State
MLX Institute | Fine-tuning Llama-2 7B on The Onion to generate new satirical articles given a headline
Develop a Romanian legal-domain Large Language Model (LLM) by fine-tuning a pre-trained model on legal texts. The fine-tuned model is available on Hugging Face.
This repo contains everything about transformers and NLP.
Fine-tune ChatGPT with few-shot learning for personalized resume bullet points.
This repository implements a self-updating RAG (Retrieval-Augmented Generation) model. It grounds responses in Wikipedia for factual accuracy and can fine-tune itself when the needed information is unavailable, allowing the model to continually learn and adapt.
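The retrieve-then-generate loop such a system relies on can be sketched in a few lines. This is a hypothetical illustration with a toy in-memory corpus and keyword-overlap scoring; a real system would retrieve from Wikipedia with a dense retriever and call an actual generator:

```python
# Minimal retrieve-then-generate sketch (corpus and scoring are toy stand-ins).

CORPUS = {
    "llama": "Llama is a family of large language models released by Meta.",
    "lora": "LoRA fine-tunes models by learning low-rank weight updates.",
}

def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(query, corpus):
    """Ground the (stubbed) generation step in the retrieved context."""
    context = retrieve(query, corpus)
    if not context:
        # In a self-updating system, this branch would trigger fine-tuning
        # or a fresh crawl rather than returning a fallback message.
        return "No supporting context found."
    return f"Context: {context[0]}"

print(answer("lora low-rank weight updates", CORPUS))
```

The "self-updating" part corresponds to the empty-context branch: when retrieval fails, the model acquires new data instead of answering unsupported.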