
CUDA compute capability or CUDA version requirement? #6

Closed
tsoernes opened this issue Feb 20, 2017 · 3 comments

Comments

@tsoernes

When running qlua fluid_net_train.lua -gpu 1 -dataset output_current_model_sphere -modelFilename myModel, I get:

Try 'sleep --help' for more information.
sleep: invalid time interval ‘0,001’
Try 'sleep --help' for more information.
sleep: invalid time interval ‘0,001’========================>.]  319/320 
Try 'sleep --help' for more information.
sleep: invalid time interval ‘0,001’
Try 'sleep --help' for more information.
 [===========================================================>]  320/320 
sleep: invalid time interval ‘0,001’
Try 'sleep --help' for more information.
sleep: invalid time interval ‘0,001’
Try 'sleep --help' for more information.
==> Loaded 20480 samples
==> Creating model...
Number of input channels: 3
Model type: default
Bank 1:
Adding convolution: cudnn.SpatialConvolution(3 -> 16, 3x3, 1,1, 1,1)
Adding non-linearity: nn.ReLU (inplace true)
Bank 1:
Adding convolution: cudnn.SpatialConvolution(16 -> 16, 3x3, 1,1, 1,1)
Adding non-linearity: nn.ReLU (inplace true)
Bank 1:
Adding convolution: cudnn.SpatialConvolution(16 -> 16, 3x3, 1,1, 1,1)
Adding non-linearity: nn.ReLU (inplace true)
Bank 1:
Adding convolution: cudnn.SpatialConvolution(16 -> 16, 3x3, 1,1, 1,1)
Adding non-linearity: nn.ReLU (inplace true)
Adding convolution: cudnn.SpatialConvolution(16 -> 1, 1x1)
==> defining loss function
    using criterion nn.FluidCriterion: pLambda=0,00, uLambda=0,00, divLambda=1,00, borderWeight=1,0, borderWidth=3
==> Extracting model parameters
==> Defining Optimizer
    Using ADAM...
==> Profiling FPROP for 10 seconds with grid res 128
THCudaCheck FAIL file=/home/torstein/progs/FluidNet/torch/tfluids/generic/tfluids.cu line=119 error=8 : invalid device function
qlua: /home/torstein/torch/install/share/lua/5.1/tfluids/init.lua:516: cuda runtime error (8) : invalid device function at /home/torstein/progs/FluidNet/torch/tfluids/generic/tfluids.cu:119
stack traceback:
	[C]: at 0x7fdd9f648f50
	[C]: in function 'emptyDomain'
	/home/torstein/torch/install/share/lua/5.1/tfluids/init.lua:516: in function 'emptyDomain'
	fluid_net_train.lua:145: in main chunk

I'm using an Nvidia GTX 770 with 367.57 drivers and CUDA 7.5.17. Here's an overview of CUDA functions and their required compute capabilities. The GPU in question has compute capability 3.0.
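For reference, here's a minimal standalone sketch (my own, not part of FluidNet) for double-checking what compute capability the CUDA runtime reports for each card:

// query_cc.cu -- build with: nvcc query_cc.cu -o query_cc
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  int count = 0;
  if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
    printf("no CUDA devices found\n");
    return 1;
  }
  for (int i = 0; i < count; ++i) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    // A GTX 770 should report 3.0 here.
    printf("device %d: %s, compute capability %d.%d\n",
           i, prop.name, prop.major, prop.minor);
  }
  return 0;
}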

Here's the output from running './test.sh' in torch:
torch test.txt

@jonathantompson
Collaborator

Sorry you're running into this.

Unfortunately, I haven't seen this before, but it's very likely a compute compatibility problem. Try changing line 21 of FluidNet/torch/tfluids/CMakeLists.txt from:

LIST(APPEND CUDA_NVCC_FLAGS "-arch=sm_35;--use_fast_math; -D_FORCE_INLINES")

to

LIST(APPEND CUDA_NVCC_FLAGS "-arch=sm_30;--use_fast_math; -D_FORCE_INLINES")

I don't recall using any SM 3.5-specific features, so tfluids should compile and run with SM 3.0 as well. Let me know how that goes.
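
If it helps narrow things down, here's a rough standalone sketch (just an illustration, not from the repo) of the same failure mode: compiled with -arch=sm_35 and run on a compute 3.0 card, the launch should fail with the same "invalid device function" error that tfluids hits at tfluids.cu:119.

// repro.cu -- build with: nvcc -arch=sm_35 repro.cu -o repro
#include <cstdio>
#include <cuda_runtime.h>

__global__ void noop() {}

int main() {
  // Launch a kernel for which no compatible binary exists on an sm_30 card.
  noop<<<1, 1>>>();
  cudaError_t err = cudaGetLastError();  // launch failures surface here
  if (err != cudaSuccess) {
    printf("kernel launch failed: %s\n", cudaGetErrorString(err));
    return 1;
  }
  cudaDeviceSynchronize();
  printf("kernel ran fine\n");
  return 0;
}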

@tsoernes
Author

tsoernes commented Feb 23, 2017

After I upgraded to CUDA V8.0.61 with driver 375.26, recompiled torch (running clean.sh in the torch dir first), deleted everything in FluidNet/torch/tfluids/build, and then recompiled tfluids with sm_30, it worked! I doubt anything but the last two steps was necessary, but in any case the issue is solved.

@jonathantompson
Collaborator

Oh that's great! Thanks for the update.
