
[BUG] .CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpecd6su1w/main.c' #1199

Closed
ff1Zzd opened this issue Apr 9, 2024 · 3 comments


ff1Zzd commented Apr 9, 2024

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • I have searched FAQ

Current Behavior

When I train the model with finetune_lora_ds.py, it raises the following error:
CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpecd6su1w/main.c', '-O3', '-I/usr/local/lib/python3.8/dist-packages/triton/common/../third_party/cuda/include', '-I/usr/include/python3.8', '-I/tmp/tmpecd6su1w', '-shared', '-fPIC', '-lcuda', '-o', '/tmp/tmpecd6su1w/rotary_kernel.cpython-38-x86_64-linux-gnu.so', '-L/lib/x86_64-linux-gnu', '-L/lib/i386-linux-gnu', '-L/lib/i386-linux-gnu']' returned non-zero exit status 1.

Expected Behavior

The script runs normally.

Steps To Reproduce

bash finetune/finetune_lora_ds.py

Environment

- OS: Ubuntu 20.04
- Python: 3.8
- Transformers: 4.37.2
- PyTorch: 2.2.2+cu121
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1
- flash-attn: 2.5.7

Anything else?

I suspect the flash-attn version may be incompatible with CUDA, because the log suggests the error is raised from rotary.py inside flash-attn.
[screenshot of the error log]

jklj077 (Contributor) commented Apr 10, 2024

For the newer versions of Flash Attention v2, the additional rotary pos ops are dependent on the triton library. However, it appears there's an issue with triton compiling the CUDA kernel. Unfortunately, the error messages from this compilation process are not included in the currently provided logs; they should ideally be located above the Python error messages.

As a temporary workaround, I recommend uninstalling Triton. This will cause it to fall back to the implementation that does not use Flash Attention v2.

To troubleshoot the issue, the version of triton and related logs will be needed.
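For collecting that information, a minimal diagnostic sketch (assuming torch is importable in the training environment; triton may or may not be installed) could look like this:

```python
# Gather the version info requested above; adjust to your own environment.
import torch

print("PyTorch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

try:
    import triton
    print("triton:", triton.__version__)  # include this version in the report
except ImportError:
    print("triton not installed -> rotary ops fall back to the non-Triton path")
```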

ff1Zzd (Author) commented Apr 10, 2024

Hi, thanks for your prompt reply. I think I have figured out the problem: I am using a V100 for finetuning, and flash-attention does not currently support the V100. After uninstalling it, I could run the finetuning script as normal.
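For anyone hitting the same error, one way to confirm the hardware limitation is to check the GPU's compute capability: the V100 is sm_70, while FlashAttention-2 targets Ampere (sm_80) and newer. A quick check, assuming PyTorch with CUDA is installed:

```python
# Check whether the GPU meets FlashAttention-2's compute-capability requirement.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: sm_{major}{minor}")
if (major, minor) < (8, 0):
    # e.g. V100 is sm_70; FlashAttention-2 requires Ampere (sm_80) or newer.
    print("FlashAttention-2 is not supported on this GPU; use the fallback attention path.")
```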

I have also noticed that the V100 does not support training with BF16. Do you have a benchmark for FP16? (I only see the comparison between BF16, INT8 and INT4.) I am curious how much performance would be degraded if I finetune the baseline Qwen-7B model with full parameters using FP16. Or, in this case, would it be preferable to fine-tune with LoRA only? (My dataset is on the order of 100K individual conversations.)

Thanks for your help in advance!!

jklj077 closed this as completed Apr 12, 2024
jklj077 (Contributor) commented Apr 12, 2024

bf16 and fp16 should have similar performance (as in speed) on devices where both are supported. If accuracy is a concern, bf16 can enable more stable training for larger models, but if both can train the model successfully, the resulting models may not differ significantly.
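As an illustration only (the bf16/fp16 flag names come from Hugging Face transformers' TrainingArguments; the output path and batch size below are placeholders, not the values used by the Qwen finetuning scripts), one way to select fp16 on hardware without bf16 support is:

```python
# Hypothetical sketch: prefer bf16 where supported, otherwise fall back to fp16 (e.g. on V100).
import torch
from transformers import TrainingArguments

use_bf16 = torch.cuda.is_bf16_supported()  # False on V100

training_args = TrainingArguments(
    output_dir="output_qwen",           # placeholder path
    per_device_train_batch_size=1,      # placeholder value
    bf16=use_bf16,                      # use bf16 when the hardware supports it
    fp16=not use_bf16,                  # otherwise train in fp16
)
```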
