Adaptation-Tuning-PEFT

[W&B Report](https://api.wandb.ai/links/yuchengml/1ev46x8s)

Table of Contents
  1. About The Project
  2. To-do List
  3. License
  4. Experiments

About The Project

This project compares LLM fine-tuning methods from the Hugging Face PEFT framework on specified downstream tasks. The goal is to provide experimental data that can serve as a reference for other development work.
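The comparison pattern can be sketched as follows; the base checkpoint (bert-base-uncased), label count, and LoRA hyperparameters here are illustrative assumptions, not this repository's actual settings:

```python
# Sketch of the comparison pattern: one base model, interchangeable PEFT configs.
# Checkpoint, num_labels, and hyperparameters are illustrative assumptions.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Swapping `peft_config` for another method's config leaves the rest of the pipeline unchanged, which is what keeps the per-method comparison fair.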

Built With

Hugging Face PEFT

W&B

To-do List

Sequence Classification

  • Prepare datasets
  • Pre-process data
    • Filtering
  • Create models
    • Sequence classification model
  • Implement fine-tuning methods (see the config sketch after this list)
    • P-Tuning
    • Prefix Tuning
    • LoRA
  • Experiments on W&B: https://api.wandb.ai/links/yuchengml/1ev46x8s
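
The three methods above map onto PEFT config classes roughly as below; the hyperparameter values are placeholders rather than the tuned settings used in these experiments:

```python
# Sketch: the three fine-tuning methods from the list above as PEFT configs.
# Hyperparameter values are placeholders, not this repository's settings.
from peft import (
    LoraConfig,
    PrefixTuningConfig,
    PromptEncoderConfig,
    TaskType,
)

peft_configs = {
    # P-Tuning: a small prompt encoder learns continuous prompt embeddings.
    "p_tuning": PromptEncoderConfig(
        task_type=TaskType.SEQ_CLS,
        num_virtual_tokens=20,
        encoder_hidden_size=128,
    ),
    # Prefix Tuning: trainable prefix vectors are prepended to each layer's
    # keys and values.
    "prefix_tuning": PrefixTuningConfig(
        task_type=TaskType.SEQ_CLS,
        num_virtual_tokens=20,
    ),
    # LoRA: low-rank update matrices are injected into the attention weights.
    "lora": LoraConfig(
        task_type=TaskType.SEQ_CLS,
        r=8,
        lora_alpha=16,
        lora_dropout=0.1,
    ),
}
```

Each config can be handed to `get_peft_model` with the same base model, so runs differ only in the adaptation method.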

License

Distributed under the MIT License. See LICENSE for more information.

Experiments

Experiment runs and comparison results for the methods above are collected in the W&B report: https://api.wandb.ai/links/yuchengml/1ev46x8s
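
A minimal sketch of streaming each run's metrics to W&B through the Hugging Face Trainer; `train_ds`, `eval_ds`, the output directory, and the run name are placeholders, not this project's actual setup:

```python
# Sketch: reporting a fine-tuning run to W&B via the Hugging Face Trainer.
# train_ds / eval_ds are assumed to be tokenized datasets; the output dir
# and run name are placeholders.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model = get_peft_model(base, LoraConfig(task_type=TaskType.SEQ_CLS))

args = TrainingArguments(
    output_dir="out/lora",         # placeholder path
    report_to="wandb",             # log metrics to Weights & Biases
    run_name="seq-cls-lora",       # one named run per adaptation method
    num_train_epochs=3,
)
trainer = Trainer(
    model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds
)
trainer.train()
```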
