
GTX 1660 SUPER not detected #65

Open
YangSangWan opened this issue Nov 1, 2022 · 1 comment

Comments


YangSangWan commented Nov 1, 2022

Hello! Thank you for your repository.

I tried the image detector on a GeForce RTX 3060 with CUDA 11.1, and the results are good.

However...

On a GeForce GTX 1660 SUPER with CUDA 11.1 or 11.8, there are no detections and no errors, yet with PyTorch (cu113) the results are good.

So I debugged the source code.

On the GeForce RTX 3060:

auto det = torch::masked_select(detections[batch_i], conf_mask[batch_i]).view({-1, num_classes + item_attr_size});
qDebug() << "det.sizes().size() == " << det.sizes().size();
qDebug() << "det.size(0) == " << det.size(0);
qDebug() << "det.size(1) == " << det.size(1);

det.sizes().size() is 2
det.size(0) is 157
det.size(1) is 20

But on the GeForce GTX 1660 SUPER:

auto det = torch::masked_select(detections[batch_i], conf_mask[batch_i]).view({-1, num_classes + item_attr_size});
qDebug() << "det.sizes().size() == " << det.sizes().size();
qDebug() << "det.size(0) == " << det.size(0);
qDebug() << "det.size(1) == " << det.size(1);

det.sizes().size() is 2
det.size(0) is 0
det.size(1) is 20

Why is det.size(0) 0 on the GTX 1660 SUPER?


xmcchv commented Mar 27, 2023

I also hit this.
In PyTorch, I added "torch.backends.cudnn.enabled = False" to fix it, but I don't know how to do the same in libtorch.
What should I set in CMakeLists or in code?
My environment: 1660 SUPER, torch 1.12.0+cu116, CUDA 11.6, cuDNN 8.6.
