No Module Named 'torch' #246
Comments
Same issue for me. |
Workaround: install the previous version: pip install flash_attn==1.0.5 |
I am seeing the same problem on every |
This might work in some scenarios but not all. |
Can you try? Getting the dependencies right for all setups is hard. We had |
Getting the same issue. I also tried |
same problem here. |
I don't know a right solution that works for all setups, happy to hear suggestions. We recommend the PyTorch container from NVIDIA, which has all the required tools to install FlashAttention. |
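For reference, a minimal sketch of that container route; the NGC image tag here is only an example, so pick a current one from the catalog:
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.05-py3
# inside the container, torch, CUDA, and the build tools are already present
pip install flash-attn --no-build-isolation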
I believe this is an incompatibility issue with the CUDA 12.1 version of torch. Using the following torch version solves my problem.
|
@smeyerhot I used the exact version, but it doesn't work. See the screenshot. |
Sorry! This didn't fix things... apologies on the false hope. |
@smeyerhot No problem. Thanks a lot anyway! |
same problem |
I had the same issue with pip. Workaround was to compile from source, worked like a charm (a build sketch follows after this session): In [1]: import flash_attn
In [2]: import torch
In [3]: torch.__version__
Out[3]: '2.0.1+cu117'
In [4]: flash_attn.__version__
Out[4]: '1.0.6' |
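A sketch of that compile-from-source route; it assumes torch, packaging, and ninja are already installed, and the build can take a long time:
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention
# builds the CUDA extensions against the torch already in the environment
python setup.py install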
I also had the same issue, but my system needs CUDA 12.1 (2x NVIDIA L4), so using torch with cu117 is not an option. This is also my workaround and it works like a charm. My system runs Fedora Server. |
I compiled it myself using a docker container and I still get this when executing |
try |
@xwyzsn Unfortunately this only worked on my Windows system, not Linux. But I feel we're making progress. |
We had torch as a dependency in 1.0.5, but for some users it would download a new version of torch instead of using the existing one. |
Hi, actually I am using Linux. It also worked well. I assume you may have missed some other package needed to build this on your Linux system.
|
Same problem for me; I solved it by checking that my device and torch CUDA versions match. |
This fixed the torch problem, but now I get another error. Might be related to something else, though.
|
@xwyzsn ninja was removed, then torch was removed, then ninja was re-added. Next logical step is to re-add torch, right??? 😄 |
Same issue in Kubuntu 20 with torch 2.0.1, CUDA 11.8, Python 3.9 / 3.10, and flash-attn versions 0.2.8 / 1.0.4 / 1.0.5 / 1.0.6 / 1.0.7, with and without the --no-build-isolation flag. |
Thanks to the previous answers, I was able to install it successfully. Here is my experience:
|
I had the same issue. Were you able to solve it? |
@Martion-z looks the same as #172 and #225. Some versions of CUDA don't like gcc 11. Downgrading to gcc 10 might work. |
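A sketch of that gcc downgrade inside a conda environment, reusing the conda-forge compiler packages that appear in the recipe further down; exact versions may need adjusting per setup:
conda install -c conda-forge gcc=10 gxx=10
# confirm the active compiler before retrying the flash-attn build
gcc --version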
This worked for me. |
Thanks! This solves the above error, but there's still a new one: The detected CUDA version (12.1) mismatches the version that was used to compile. |
Problem solved. I installed the nightly version of PyTorch and then installed flash-attn with the --no-build-isolation option.
Note the wheel building process takes a long time; don't kill it, just wait. |
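In sketch form; the nightly index URL targets CUDA 12.1 here and should be matched to your local CUDA version:
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
# torch must already be importable before this step
pip install flash-attn --no-build-isolation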
There are a lot of things that can go wrong when installing this package. Some remarks:
Create a new environment if you don't already have one.
conda create -n flash_attn python=3.10.11
conda activate flash_attn
These are the required packages with their required versions.
conda install -c conda-forge gcc=11.3
conda install -c conda-forge gxx=11.3
conda install cuda -c nvidia/label/cuda-11.8.0
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install packaging
pip install flash_attn --no-build-isolation |
Thanks for your kind and detailed recipe! But I still hit this problem...
|
@jinghan23 See point two, you have several versions of CUDA installed and the installation is failing because of that. |
The solution I found working:
I haven't tried it on other versions of flash-attn, but I think it should work too. |
The workaround that worked for me was to downgrade the CUDA runtime version. My driver version is still 12.2 but the runtime version is now 11.7. It is also much faster than installing with no build isolation.
|
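One hedged way to do that downgrade: a 12.x driver can run the older 11.7 runtime (drivers are backward compatible), so installing a cu117 build of torch is enough; the version pin is just an example:
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu117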
flash-attn is a very problematic library |
You MUST install flash-attn after torch has been installed.
Solution: Install PyTorch first, then install FA2. Example: see the wrong vs. correct sketch below.
(source) |
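In sketch form, assuming a fresh environment:
# WRONG: pip builds flash-attn before torch exists in the environment
pip install flash-attn
# CORRECT: install torch first, then build flash-attn against it
pip install torch
pip install flash-attn --no-build-isolation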
I've tried adding torch also as a build and not just a runtime dependency in my pyproject.toml. It doubled the install time, but actually ended up not working. My workaround now is to have it as an optional dependency:
[project.optional-dependencies]
# Flash attention cannot be installed alongside normal dependencies,
# since it requires torch during build time. Install with
#   pip install '.[flash-attn]'
# after installing everything else first.
flash-attn = [
    "flash-attn>=2.5.7"
]
And then do two pip installs: one without the flash-attn extra and one with it (see the sketch below).
It's also worth noting that flash-attn is extremely picky when it comes to the pip and wheel versions. With the following build requirements:
[build-system]
requires = ["setuptools>=69.0.0", "wheel>=0.43.0"]
and a |
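A sketch of the two-step install described above, run from the project root:
# first pass installs torch along with the normal dependencies
pip install .
# second pass builds the flash-attn extra against the torch that is now present
pip install '.[flash-attn]' --no-build-isolation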
After I do this, I get this error:
|
You are right that omitting from your requirements such a slow-to-install (compilation-requiring) and popular dependency like
The problem here is that your installer tries to |
When I run pip install flash-attn, it says that. But obviously, it is wrong. See screenshot.
[screenshot]