Compiler interprets fmax in lltm_cuda_kernel.cu __device__ function as std::fmax #14
Comments
Hmm, that's interesting that I didn't notice this. Could you do me a favor and see if it goes away if you change …
I tried changing line 54 in the original code from … I've attached the printout in a text file to avoid clutter. `torch.cuda.is_available()` returns True in Python.
Ah, I see. I was using a different Python environment from the one I normally use, so when I actually run …
I'll let you know if this problem is fixed with PyTorch installed from source.
Sounds good, let me know.
Yep, that did the trick.
Hi, I ran into exactly the same issue when trying to compile it. I've checked that my PyTorch version is up to date (0.4.1) and my CUDA version is 9.1.
I cloned the repository, and the CPU version compiles, but I get the following error when running `python setup.py install` in the cuda folder. I'm using PyTorch 0.4.0 installed via conda a few weeks ago, Python 3.5, CUDA 9.0, cuDNN 7.1.4, and GCC 6.4.0.