Problem still in environment #71
What's your current error message?
No obvious error message, just the following, so I asked GPT:

AssertionError Traceback (most recent call last)
File ~/.conda/envs/dnat/lib/python3.8/site-packages/torch/nn/modules/module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs)
File ~/.cache/huggingface/modules/transformers_modules/DNABERT-2-117M/flash_attn_triton.py:781, in _flash_attn_forward(q, k, v, bias, causal, softmax_scale)

GPT told me: to resolve this issue, make sure that the tensors involved in the attention mechanism are on the same device. You can do this by explicitly moving the tensors to the GPU with the .to(device) method.
Please try "pip uninstall triton".
Yes, I created a new environment, and I did not pip install triton.
Triton is installed automatically as a dependency, so you need to remove it manually.
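Concretely, the manual removal can be done as below (a sketch only; it assumes triton was pulled in by pip into the active environment, as in this thread):

```shell
# Remove the triton package so DNABERT-2's flash_attn_triton code path is skipped
python3 -m pip uninstall -y triton

# Verify it is gone: importing triton should now fail
python3 -c "import triton" 2>/dev/null && echo "triton still installed" || echo "triton removed"
```

After this, reloading the model should fall back to the non-triton attention implementation instead of hitting the assertion in flash_attn_triton.py.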
Yes, you are right! Thank you so much!
I have created a new virtual environment and followed your README:

create and activate the virtual python environment:
conda create -n dna python=3.8
conda activate dna

install required packages:
python3 -m pip install -r requirements.txt

but it still cannot run.
ChatGPT suggests this problem may be related to CUDA. Should I:
1. pip uninstall torch
2. conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch (my CUDA is 11.4)
Can you tell me whether these two steps are correct?
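For reference, those two steps combined would look like this (a sketch only, not a confirmed fix for this issue; note that a PyTorch build against the CUDA 11.3 runtime generally works under an 11.4 driver, since NVIDIA drivers are backward-compatible with older CUDA runtimes):

```shell
# 1. Remove the pip-installed torch first, so conda's build takes its place
python3 -m pip uninstall -y torch

# 2. Install the CUDA-11.3 build of PyTorch 1.12.1 from the official pytorch channel
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

# Afterwards, check that torch imports and sees the GPU
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```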