No module named 'fast_transformers.causal_product.causal_product_cpu' (solved: needed to add CUDA to the PATH) #12
Comments
It doesn't seem that the CUDA version could be the issue, since it complains it cannot find the CPU module. Is this error caused by simply running the following example from the docs?

```python
import torch
from fast_transformers.builders import TransformerEncoderBuilder

# Build a transformer encoder
bert = TransformerEncoderBuilder.from_kwargs(
    n_layers=12,
    n_heads=12,
    query_dimensions=64,
    value_dimensions=64,
    feed_forward_dimensions=3072,
    attention_type="full",  # change this to use another
                            # attention implementation
    activation="gelu"
).get()

y = bert(torch.rand(
    10,     # batch_size
    512,    # sequence length
    64*12   # features
))
```

Also, is there a copy-paste issue in the error message? It seems that the last trace line overlaps with the exception message. Finally, could you perhaps fetch the latest from the repository, just build it, and report the files created? For instance using the following code.
Cheers,
Thanks for the reply. I launched a fresh GC instance and installed this library via your method above. I should have mentioned that this issue only happens when I use cluster-based attention; linear attention works with no problem (and I haven't tried the others). The example from the docs runs fine only when not running on CUDA. That is, this code throws an error:
Here is the error:
And I double checked the hashing folder, and it does indeed contain all files:
There were also warnings while building the library (via `python setup.py build_ext --inplace`):
That is just one of the install warnings; they all looked like this, though.
Ok, that is quite a different error than before. I can see that the CUDA extension is not built. This means that the machine which compiled the package lacks a CUDA compiler or a CUDA-capable PyTorch. If you look at the setup.py code, it checks whether a CUDA compiler is available and only then compiles the CUDA extensions. Your error has nothing to do with the clustered attention; it would fail similarly with causal-linear or any other implementation that requires custom kernels.
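A minimal sketch of the kind of check being described, using only the standard library (the actual setup.py logic may differ in its details):

```python
import os
import shutil

def cuda_compiler_available():
    """Heuristic: CUDA extensions can only be compiled if nvcc is
    reachable, either on PATH or under a CUDA_HOME/CUDA_PATH install."""
    if shutil.which("nvcc") is not None:
        return True
    for env in ("CUDA_HOME", "CUDA_PATH"):
        home = os.environ.get(env)
        if home and os.path.exists(os.path.join(home, "bin", "nvcc")):
            return True
    return False

print("CUDA compiler detected:", cuda_compiler_available())
```

When this returns False, a build silently skips the CUDA extensions, which later surfaces as a missing-module error at import time.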
I am having trouble copying the full output upon building, here is the start:
Don't worry about the warnings; they are simply deprecation warnings for the CPU code (they shouldn't be there, but they are not causing any problem). Could you run
Yes, I am missing the CUDA toolkit. Which is odd. Sorry for the trouble. I'll try to figure this out... Thanks again :)
Final update: the issue is solved after adding CUDA to the PATH. I had installed CUDA fine but never added it to the PATH. I had been running PyTorch and HuggingFace's transformers without issue, so I assumed my CUDA setup was not the problem. But this install requires CUDA to be on the PATH.
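For anyone hitting the same wall, a sketch of the fix, assuming the common /usr/local/cuda install prefix (adjust for your machine):

```shell
# Hypothetical prefix; replace /usr/local/cuda with your actual install dir.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Verify the compiler is now discoverable (prints a path if it worked):
command -v nvcc || echo "nvcc still not found"
```

Adding these lines to `~/.bashrc` makes the change persistent across shells.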
No trouble at all 😄. I am glad that it was easy to solve. Perhaps in the future we could also provide binaries; however, it is a lot more trouble to provide binaries for many architectures than to build the binaries on the host machine.
Hi there,
I am having some trouble using this library. I cloned this repo (July 19th) and ran the setup file; the setup ran, but now I am getting this error (the same error occurs with pip install):
When I comment out importing this file (above), I get an import error on the hashing files instead, so I think the issue is these CUDA files. I am using Ubuntu 18.04 and PyTorch 1.5.1 with CUDA 10.2. However, using the exact same setup procedure on Google Colab, I have no issues; Colab uses PyTorch 1.5.1 but CUDA 10.1.
Could the CUDA version difference be the issue?
Thanks :)
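A quick, hedged way to inspect which CUDA version PyTorch itself was built against (returns None when torch is absent or was built without CUDA support):

```python
def torch_cuda_version():
    """Return the CUDA version string torch was compiled with, or None."""
    try:
        import torch
    except ImportError:
        return None
    return getattr(torch.version, "cuda", None)

print("torch built against CUDA:", torch_cuda_version())
```

Comparing this value across the two machines (local vs. Colab) shows which CUDA toolkit each PyTorch wheel expects.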