Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
LoRA & DreamBooth training scripts & GUI using kohya-ss's trainer, for diffusion models.
Efficient adaptive layer selection in frozen transformer models for solving downstream tasks.
The Big Model of Mental Health: a mental-health LLM fine-tuned from InternLM2, Qwen, ChatGLM, Baichuan, DeepSeek, Mixtral, and LLama3.
Scripts for use with LongCLIP, including fine-tuning Long-CLIP
Fine-tuning code for CLIP models
The official repo for "LLoCo: Learning Long Contexts Offline"
This is an official repo for fine-tuning SAM to customized medical images.
A full pipeline to fine-tune the Vicuna LLM with LoRA and RLHF on consumer hardware: an implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the Vicuna architecture. Basically ChatGPT, but with Vicuna.
Transfer Learning Library for Domain Adaptation, Task Adaptation, and Domain Generalization
A very cute, flirty little-sister chatbot, a gift for everyone who needs one~
Baseline achieving 0.8 accuracy on the private test set in the ZaloAI Challenge 2023 Elementary Math Solving
Create synthetic datasets for training and testing Large Language Models (LLMs) in a Question-Answering (QA) context.
This is a cog implementation of the fine-tuner for Meta's MusicGen
Tune LLMs in a few lines of code.
SDXL 1.0 DreamBooth Finetune with Diffusers
Simple Python WebUI for fine-tuning ChatGPT (gpt-3.5-turbo).
Comprehensible scripts to instruction-tune a LLaMA model
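Several of the repositories above (LyCORIS, the kohya-ss trainers, the Vicuna and LLaMA pipelines) fine-tune with LoRA, which freezes the pretrained weight matrix and learns only a low-rank additive update. A minimal NumPy sketch of that idea, not taken from any of these repos:

```python
import numpy as np

# LoRA sketch: a frozen weight matrix W gets a trainable low-rank
# update B @ A, so only r * (d_in + d_out) parameters are trained
# instead of d_in * d_out. Illustrative only; real trainers apply
# this inside attention/MLP layers of the base model.
rng = np.random.default_rng(0)

d_out, d_in, r = 8, 16, 2              # rank r << min(d_in, d_out)
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def lora_forward(x, scale=1.0):
    """Frozen path plus scaled low-rank path."""
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted model starts out
# exactly equal to the frozen base model.
assert np.allclose(lora_forward(x), W @ x)

lora_params = A.size + B.size  # 32 + 16 = 48 trainable parameters
full_params = W.size           # 128 parameters in full fine-tuning
```

Zero-initializing B is the standard LoRA trick: training begins from the base model's behavior, and only the small A/B matrices receive gradients.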