[BUG] .CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpecd6su1w/main.c' #1199
Comments
For newer versions of FlashAttention v2, the additional rotary positional-embedding ops depend on Triton. As a temporary workaround, I recommend uninstalling Triton; this will cause the code to fall back to the non-FlashAttention-v2 implementation. To troubleshoot the issue further, the Triton version and related logs will be needed.
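For reference, a minimal sketch (my own addition, not part of the Qwen repo) of how the requested version information could be collected in one go; the module names used below are the usual import names and are an assumption about this particular environment:

```python
# Hypothetical diagnostic snippet: report the versions relevant to this issue.
# Module names (triton, flash_attn, ...) are the usual import names and are an
# assumption about this environment.
import importlib
import subprocess

for module_name in ("torch", "triton", "flash_attn", "transformers", "peft"):
    try:
        module = importlib.import_module(module_name)
        print(module_name, getattr(module, "__version__", "unknown"))
    except ImportError:
        print(module_name, "not installed")

# The failing command invokes the system compiler, so its version matters too.
try:
    gcc_version = subprocess.run(
        ["gcc", "--version"], capture_output=True, text=True, check=True
    ).stdout.splitlines()[0]
except (OSError, subprocess.CalledProcessError) as exc:
    gcc_version = f"gcc not usable: {exc}"
print(gcc_version)

import torch
print("CUDA available:", torch.cuda.is_available(), "| CUDA runtime:", torch.version.cuda)
```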
Hi, thanks for your prompt reply. I think I have figured out the problem: I am fine-tuning on a V100, and flash-attention does not currently support the V100. After uninstalling flash-attention, I could run finetune.py as normal. I have also noticed that the V100 does not support training with BF16. Do you have a benchmark for FP16? (I only see comparisons between BF16, INT8 and INT4.) I am curious how much performance would degrade if I fine-tuned all parameters of the baseline Qwen-7B model in FP16, or whether in this case it would be preferable to fine-tune with LoRA only. (My dataset is on the order of 100K single-turn conversations.) Thanks for your help in advance!
BF16 and FP16 should have similar performance (in terms of speed) on devices where both are supported. As for accuracy, BF16 can make training more stable for larger models, but if both can train the model successfully, the resulting models may not differ significantly.
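As an illustration only, here is a sketch of how an FP16 + LoRA setup on a V100 typically looks. The option names below are the standard HuggingFace Transformers / peft ones, not necessarily the exact flags exposed by finetune_lora_ds.py, and the target module names are an assumption about Qwen-7B:

```python
# A sketch, assuming the finetune script builds on HuggingFace Transformers + peft.
from transformers import TrainingArguments
from peft import LoraConfig, get_peft_model  # hypothetical usage for Qwen-7B

training_args = TrainingArguments(
    output_dir="output_qwen_lora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    learning_rate=3e-4,
    bf16=False,   # V100 has no native BF16 support
    fp16=True,    # use FP16 mixed precision instead
)

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn", "c_proj", "w1", "w2"],  # assumed Qwen-7B module names
    task_type="CAUSAL_LM",
)

# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
# model = get_peft_model(model, lora_config)  # trains only the LoRA adapters
```

With LoRA only the adapter weights receive gradients, which keeps memory well within a V100's budget for a ~100K-conversation dataset, at the cost of somewhat less capacity than full-parameter fine-tuning.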
Is there an existing issue / discussion for this?
Is there an existing answer for this in FAQ?
Current Behavior
When I train the model with finetune_lora_ds.py, the following error is raised:
CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpecd6su1w/main.c', '-O3', '-I/usr/local/lib/python3.8/dist-packages/triton/common/../third_party/cuda/include', '-I/usr/include/python3.8', '-I/tmp/tmpecd6su1w', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpecd6su1w/rotary_kernel.cpython-38-x86_64-linux-gnu.so', '-L/lib/x86_64-linux-gnu', '-L/lib/i386-linux-gnu', '-L/lib/i386-linux-gnu']' returned non-zero exit status 1.
Expected Behavior
The fine-tuning runs normally.
Steps To Reproduce
`bash finetune/finetune_lora_ds.py`
Environment
Anything else?
I suspect the flash-attn version is incompatible with the CUDA version, because the logs suggest the error is raised from rotary.py inside flash_attn.
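A small sketch (my own addition, not from the repo) of how one might surface the cause that the CalledProcessError hides: Triton JIT-compiles a small C file with gcc, so checking that the Python development headers and libcuda are reachable often points to the real problem. The paths and remedies mentioned below are assumptions about a typical Linux setup:

```python
# Hypothetical diagnostic for the triton/gcc build failure: the CalledProcessError
# swallows gcc's stderr, so check the two inputs that command needs most often,
# the Python development headers (-I/usr/include/python3.8) and libcuda (-lcuda).
import ctypes.util
import os
import sysconfig

include_dir = sysconfig.get_paths()["include"]
python_h = os.path.join(include_dir, "Python.h")
print("Python.h present:", os.path.exists(python_h), "->", python_h)
# If missing, installing the distro's python3-dev / python3-devel package is the usual fix.

libcuda = ctypes.util.find_library("cuda")
print("libcuda found on the linker search path:", libcuda)
# If None, gcc's -lcuda step fails; the NVIDIA driver library directory may need to be
# added to the linker search path (an assumption about this particular environment).
```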