lit-llama: finetuning with LoRA #70

Open
ziwang-com opened this issue Jun 5, 2023 · 0 comments

https://github.com/Lightning-AI/lit-llama/blob/main/howto/finetune_lora.md
Finetuning with LoRA
Low-rank adaptation (LoRA) is a technique that approximates the updates to the linear layers in an LLM with a low-rank matrix factorization. This drastically reduces the number of trainable parameters and speeds up training, with little impact on the model's final performance. We demonstrate this method by instruction-finetuning LLaMA 7B on the Alpaca dataset on a single RTX 3090 (24 GB) GPU.
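
For intuition, here is a minimal sketch of the idea (an illustration only, not lit-llama's actual implementation): the pretrained weight W stays frozen, and the update ΔW is parameterized as a product of two small matrices B·A of rank r.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.linear.weight.requires_grad = False  # pretrained weight stays frozen
        # Low-rank factors: delta_W = B @ A has rank <= r, so only
        # r * (in_features + out_features) parameters are trained.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: delta_W starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update
        return self.linear(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

With r = 8 on a 4096×4096 projection, for example, the trainable parameters drop from roughly 16.8M to about 65K per layer, which is why the finetuning fits on a single 24 GB card.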

Preparation
The steps here only need to be done once:

Install the dependencies by following the instructions in the README.

Download and convert the weights and save them in the ./checkpoints folder, as described here.

Download the data and generate the instruction-tuning dataset (the record format it consumes is sketched after this step):

python scripts/prepare_alpaca.py
See also: Finetuning on an unstructured dataset
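
For reference, each Alpaca record is an instruction/input/output triple that the script converts into tokenized prompt/response pairs. The snippet below sketches the standard Alpaca prompt format; it is an illustration, not a verbatim excerpt from scripts/prepare_alpaca.py:

```python
# One record from the Alpaca dataset (standard format).
example = {
    "instruction": "Name three primary colors.",
    "input": "",
    "output": "Red, blue, and yellow.",
}

# The standard Alpaca prompt template (no-input variant) wraps each record like this:
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = PROMPT_NO_INPUT.format(instruction=example["instruction"])
full_text = prompt + example["output"]
# The prepare script tokenizes prompt and full_text and saves train/test splits,
# so the loss can later be restricted to the response tokens during finetuning.
```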

Running the finetuning
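
The quoted howto breaks off here. Per the linked finetune_lora.md, this step invokes the LoRA finetuning script; assuming the repository layout the howto refers to, the invocation is along the lines of:

python finetune/lora.py

This trains only the LoRA weights on the prepared Alpaca data while the base LLaMA weights remain frozen.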
