
Floating point exception (core dumped) ? #829

Open

brilliant-yan opened this issue Apr 21, 2023 · 4 comments

@brilliant-yan

What are the problems? (screenshots or detailed error messages)

Floating point exception (core dumped)
(Debug: cta_num_limit_by_smem == 0 in ppl.nn/deps/ppl.kernel.cuda/src/nn/conv/conv_jit.cc on 3090?)
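As background on the message itself: on Linux/x86-64, integer division by zero raises SIGFPE, which the shell reports as "Floating point exception (core dumped)". A minimal standalone sketch that reproduces the same message (the variable names only mimic those mentioned above; this is not ppl.nn code):

```cpp
// Minimal sketch: an *integer* division by zero raises SIGFPE on Linux/x86-64,
// which the shell prints as "Floating point exception (core dumped)".
// The names below only mimic the quantities in conv_jit.cc; this is not the
// actual ppl.nn code path.
#include <cstdio>

int main() {
    volatile int smem_per_cta = 0;                    // stand-in zero divisor
    int max_smem_per_sm = 100 * 1024;                 // arbitrary example value
    int cta_num_limit_by_smem = max_smem_per_sm / smem_per_cta;  // SIGFPE here
    std::printf("%d\n", cta_num_limit_by_smem);
    return 0;
}
```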

What are the types of GPU/CPU you are using?

3090

What's the operating system ppl.nn runs on?

ubuntu20.04

What's the compiler and its version?

nvcc 11.2

Which version(commit id or tag) of ppl.nn is used?

current master

What are the commands used to build ppl.nn?

./build.sh -DPPLNN_USE_X86_64=ON -DPPLNN_USE_CUDA=ON -DPPLNN_ENABLE_PYTHON_API=ON

What are the execution commands?

pplnn-build/tools/pplnn --onnx-model ./tests/testdata/mnasnet0_5.onnx --use-cuda --warmup-iterations 500 --enable-profiling

minimal code snippets for reproducing these problems (if necessary)

models and inputs for reproducing these problems (send them to openppl.ai@hotmail.com if necessary)

@brilliant-yan
Author

If I turn this option off (-DPPLNN_ENABLE_CUDA_JIT=OFF), the error above goes away. But with this option disabled, doesn't performance also drop? I ran the benchmarks, and every model performs worse than with TensorRT.

@qiumeng6

I ran into a similar problem as well and haven't figured out how to solve it yet.

@jianfei-wangg
Contributor

> I ran into a similar problem as well and haven't figured out how to solve it yet.

Hello, I have no RTX 3090 to test on. I tested mnasnet0_5.onnx on a T4 and an A100 with cuda=11.2 and JIT=ON, and both work fine. I suggest substituting conv_jit.cc:Line599~600 with:

int cta_num_limit_by_regs = (regs_per_cta == 0) ? cta_num_limit_by_thds : max_regs_per_sm / regs_per_cta;
int cta_num_limit_by_smem = (smem_per_cta == 0) ? cta_num_limit_by_thds : max_smem_per_sm / smem_per_cta;

and then test again. Please report the result on the RTX 3090, thanks.
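For readers without the source tree at hand, here is a self-contained sketch of what the patch amounts to. Only the two guarded divisions come from the lines above; the surrounding function name, parameters, and the thread-count line are assumptions, not the actual conv_jit.cc code:

```cpp
#include <algorithm>

// Sketch of an occupancy-limit computation with the suggested guards.
// Only the two guarded divisions mirror the patch above; everything else
// (function name, parameters) is illustrative.
int EstimateCtaNumPerSm(int max_thds_per_sm, int thds_per_cta,
                        int max_regs_per_sm, int regs_per_cta,
                        int max_smem_per_sm, int smem_per_cta) {
    int cta_num_limit_by_thds = max_thds_per_sm / thds_per_cta;
    // Fall back to the thread-count limit when a divisor is zero, instead of
    // dividing by zero and raising SIGFPE.
    int cta_num_limit_by_regs = (regs_per_cta == 0) ? cta_num_limit_by_thds
                                                    : max_regs_per_sm / regs_per_cta;
    int cta_num_limit_by_smem = (smem_per_cta == 0) ? cta_num_limit_by_thds
                                                    : max_smem_per_sm / smem_per_cta;
    // The achievable CTA count per SM is bounded by the tightest resource.
    return std::min({cta_num_limit_by_thds, cta_num_limit_by_regs,
                     cta_num_limit_by_smem});
}
```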

@brilliant-yan
Author


Hi,
I tried your suggestion and modified the code, but the problem still persists on the 3090. In addition, multiple tests on the T4 and the Jetson series showed no problem.
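One way to narrow this down further on the 3090 could be to confirm that the per-SM limits the CUDA runtime reports for sm_86 are nonzero and match whatever conv_jit.cc assumes. Below is a standalone diagnostic sketch (not part of ppl.nn; the file name and output format are arbitrary):

```cpp
// Standalone diagnostic sketch (not ppl.nn code): print the per-SM limits the
// CUDA runtime reports for the current device, to check that the quantities a
// JIT occupancy calculation would divide by are nonzero on the RTX 3090.
// Build with: nvcc -o sm_limits sm_limits.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop{};
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    std::printf("device             : %s (sm_%d%d)\n", prop.name, prop.major, prop.minor);
    std::printf("smem per SM        : %zu bytes\n", prop.sharedMemPerMultiprocessor);
    std::printf("regs per SM        : %d\n", prop.regsPerMultiprocessor);
    std::printf("max threads per SM : %d\n", prop.maxThreadsPerMultiProcessor);
    return 0;
}
```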
