
[BUG] "Could not find a config file in xx" appears during LoRA training #385

Open · 2 tasks done
BigworldNebula opened this issue May 17, 2024 · 3 comments

@BigworldNebula

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • I have searched FAQ

Current Behavior

After running the LoRA script, the following appears in every training epoch:
site-packages/peft/utils/save_and_load.py:195: UserWarning: Could not find a config file in /home/xx/huggingface/Qwen-VL - will assume that the vocabulary was not modified.
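
For anyone hitting the same message, a quick way to check whether the directory named in the warning actually contains a readable config.json (a hypothetical sanity check, not part of the training script; the path is the one from the warning and will differ on your machine):

```python
# Hypothetical sanity check: is config.json present and readable in the
# model directory mentioned by the peft warning?
import json
import os

model_dir = "/home/xx/huggingface/Qwen-VL"  # path taken from the warning
config_path = os.path.join(model_dir, "config.json")

if os.path.isfile(config_path):
    with open(config_path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
    print("config.json found, vocab_size =", cfg.get("vocab_size"))
else:
    print("config.json is missing from", model_dir)
```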

Expected Behavior

No response

Steps To Reproduce

When inspecting the config files downloaded from HF, I did not find any config entry related to the vocabulary.

Environment

- OS: Ubuntu 18.04
- Python: 3.8
- Transformers: 4.32.0
- PyTorch: 1.13.1+cu117
- peft: 0.11.0
- CUDA: 11.7

Anything else?

No response

@1180300419

I ran into this problem too. It happens when using a model downloaded manually from modelscope, but it does not occur when letting the code download the model by itself.
(screenshot attached)
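
For context, a minimal sketch of the two setups this comment contrasts (the repo id Qwen/Qwen-VL is an assumption for illustration; the local path is the one from the warning). As far as I can tell, when the model is loaded from a local directory, that path is what peft later tries to look up when saving the adapter, and since it is not a Hub repo id the lookup fails and the warning is printed; loading by repo id lets peft find config.json remotely.

```python
# Illustration only: two ways of pointing the fine-tuning code at Qwen-VL.
from transformers import AutoModelForCausalLM

# (a) manually downloaded copy (e.g. from modelscope): the local path gets
#     recorded as the base model name, which cannot be resolved on the
#     Hugging Face Hub when the LoRA adapter is saved -> warning.
model = AutoModelForCausalLM.from_pretrained(
    "/home/xx/huggingface/Qwen-VL", trust_remote_code=True
)

# (b) letting the code download the model: a Hub repo id is recorded instead,
#     so the config can be found remotely and no warning appears.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL", trust_remote_code=True
)
```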

@zhangye0402

Does this warning ("Could not find a config file in /home/xx/huggingface/Qwen-VL - will assume that the vocabulary was not modified.") affect the results? After all, it is only a warning.
Also, is the vocabulary actually "not modified", as the warning assumes?
Thanks! @1180300419 @BigworldNebula
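
Not an authoritative answer, but the warning only says that peft could not verify the base config remotely, not that a mismatch was found. If your fine-tuning code never added tokens or called resize_token_embeddings, the vocabulary really is unmodified and the assumption is safe. A rough sketch for eyeballing the relevant numbers (the path is the one from the warning; note the config's vocab_size may be padded beyond the tokenizer length, so a small constant gap on its own is not a problem):

```python
# Rough check (illustration only): print the sizes relevant to the warning.
from transformers import AutoConfig, AutoTokenizer

model_dir = "/home/xx/huggingface/Qwen-VL"  # path from the warning

config = AutoConfig.from_pretrained(model_dir, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)

print("config vocab_size:", config.vocab_size)
print("tokenizer length :", len(tokenizer))
# The vocabulary counts as "modified" only if extra tokens were added during
# fine-tuning (e.g. tokenizer.add_tokens + model.resize_token_embeddings);
# padding in the config's vocab_size alone does not mean it was changed.
```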

@Bambooslp

Has this been resolved? I ran into the same problem, and fine-tuning stopped. I am using the finetune_qlora_single_gpu.sh script.
