
Not detecting objects on image #1215

Open
yuis-ice opened this issue Dec 6, 2022 · 14 comments
@yuis-ice

yuis-ice commented Dec 6, 2022

It seems to be working, but the resulting image has no object-detection boxes drawn on it, although it should.

I followed the README on how to set up YOLOv7 on Docker; here are the full commands, with which you should be able to reproduce my problem.

git clone https://github.com/WongKinYiu/yolov7
cd yolov7

nvidia-docker run --name yolov7 -it --rm -v "$CWD":/yolov7 --shm-size=64g nvcr.io/nvidia/pytorch:21.08-py3

# on the container
cd /yolov7
python -m pip install virtualenv
python -m virtualenv venv3
. venv3/bin/activate
pip install -r requirements.txt
apt update
apt install -y zip htop screen libgl1-mesa-glx
pip install seaborn thop
python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg

And this is the console output of the last command:

# python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg 
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', no_trace=False, nosave=False, project='runs/detect', save_conf=False, save_txt=False, source='inference/images/horses.jpg', update=False, view_img=False, weights=['yolov7.pt'])
YOLOR 🚀 v0.1-115-g072f76c torch 1.13.0+cu117 CUDA:0 (NVIDIA GeForce GTX 1650, 3903.875MB)

Fusing layers... 
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
Model Summary: 306 layers, 36905341 parameters, 6652669 gradients
 Convert model to Traced-model... 
 traced_script_module saved! 
 model is traced! 

/yolov7/venv3/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Done. (150.9ms) Inference, (0.3ms) NMS
 The image with the result is saved in: runs/detect/exp6/horses.jpg
Done. (0.616s)

Now I should be able to see the detections on the generated image runs/detect/exp6/horses.jpg, derived from the original image inference/images/horses.jpg, right? But the two images look identical; there is no difference. What's wrong with the setup?

Nvidia driver:

$ nvidia-smi 
Tue Dec  6 09:47:03 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| 45%   27C    P8    N/A /  75W |     13MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1152      G   /usr/lib/xorg/Xorg                  9MiB |
|    0   N/A  N/A      1256      G   /usr/bin/gnome-shell                2MiB |
+-----------------------------------------------------------------------------+

Ubuntu version:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal
@dsbyprateekg

I have also faced the same issue on my Windows 11 system.

@4lparslan

Same on Ubuntu 22.04

@afcreative

Same issue on Windows 11 with CUDA 11.7; it doesn't detect anything.

@altatec-sources

The same problem here. The Docker installation works fine on CPU, but on GPU it does not.
However, if I change the command to --conf 0.13 --img-size 128, then all the horses are found:
python detect.py --weights yolov7.pt --conf 0.13 --img-size 128 --source inference/images/horses.jpg
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.13, device='', exist_ok=False, img_size=128, iou_thres=0.45, name='exp', no_trace=False, nosave=False, project='runs/detect', save_conf=False, save_txt=False, source='inference/images/horses.jpg', update=False, view_img=False, weights=['yolov7.pt'])
YOLOR 🚀 v0.1-116-g8c0bf3f torch 1.10.0a0+3fd9dcf CUDA:0 (Quadro T2000 with Max-Q Design, 4095.6875MB)

Fusing layers...
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
Model Summary: 306 layers, 36905341 parameters, 6652669 gradients, 104.5 GFLOPS
Convert model to Traced-model...
traced_script_module saved!
model is traced!

5 horses, Done. (39.0ms) Inference, (2.4ms) NMS
The image with the result is saved in: runs/detect/exp25/horses.jpg
Done. (0.201s)

@cesc47

cesc47 commented Dec 12, 2022

I have the same issue on Ubuntu 20.04 with CUDA 11.6 and PyTorch 1.13.

@afcreative

I have a solution: just modify detect.py, find where the variable half is determined, and set half = False in every condition.

@yuis-ice
Author

@afcreative Is the problem specific to the detect.py file? Kindly share your thoughts if you have any. Anyway, thanks for the solution!

@afcreative

afcreative commented Dec 13, 2022

Maybe this is a temporary workaround from me while waiting for the developers: edit detect.py, comment out the condition line containing device.type != 'cpu' along with the body of that condition, and set the variable half = False.

I don't know for sure what the impact on the detection process is; what is certain is that I managed to run detection on my GPU (GTX 1660 Super).

Here is one of the related PyTorch issues (CMIIW):
pytorch/pytorch#58123
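The workaround described above can be sketched in plain Python. This is an illustration of the assumed layout of detect.py, not the file itself; the exact lines and their positions vary between commits:

```python
# Minimal sketch of the relevant logic in yolov7's detect.py (assumed
# layout; treat this as an illustration, not the actual file).

device_type = 'cuda'  # what select_device() would report when a GPU is used

# Original behaviour: enable FP16 whenever running on CUDA.
half = device_type != 'cpu'

# Workaround from this thread: force full FP32 precision regardless of
# device, since FP16 inference returns NaNs on some cards (GTX 16xx with
# CUDA 11.x), so NMS keeps no boxes and the saved image looks unchanged.
half = False

# Downstream, detect.py roughly does:
#   if half:
#       model.half()                           # now skipped
#   img = img.half() if half else img.float()  # tensors stay FP32
print(half)  # False
```

With the flag forced to False, the model and input tensors remain FP32 end to end, at the cost of somewhat higher memory use and slower inference.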

@yuis-ice
Author

Is this a problem specific to a YOLO version? I'm new to the YOLO project and ran into this issue.
I'm not sure how to downgrade the version, but is it possible to bypass the issue by downgrading? Kindly share your information if anyone knows.

@lego-yaw

lego-yaw commented Jan 7, 2023

Please can you post your modified script here? I am finding it difficult to apply your suggestion.

@NhutTien0905

I have the same problem on Windows 10 with CUDA 11.2 and cuDNN 8.1.

@Petopp

Petopp commented Jan 21, 2023

Hello, when loading a model this way:

model = torch.hub.load("/home/petop/.cache/torch/hub/WongKinYiu_yolov7_main", 'custom', Model, source="local")

I have the same problem. On my Windows 10 PC everything works fine, but on Ubuntu 22.04 with a 1660 Super I have the problem.

@t-weilin

t-weilin commented Mar 22, 2023

Quick fix
In detect.py, change the half flag so that it is always False.
(The original comment included a screenshot of the modified lines, not reproduced here.)
My setup:
GTX 1650 Super
CUDA 11.2
Torch 1.9

My understanding is that it is a bug, as @afcreative mentioned above: pytorch/pytorch#58123.
It produces "Half precision inference returns NaNs for a number of models"; hence, no inference results.
It might be a bug for the NVIDIA GTX 16xx series of graphics cards combined with CUDA 11.x.
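One way half-precision inference can end up producing NaNs is numeric overflow: FP16's largest finite value is 65504, so activations that are unremarkable in FP32 overflow to inf, and inf arithmetic then yields NaN. The snippet below illustrates only this failure mode in pure Python (the struct 'e' format is IEEE 754 half precision); it is not YOLOv7 or PyTorch code:

```python
import math
import struct

# IEEE 754 half precision (FP16) tops out at 65504.
FP16_MAX = 65504.0
struct.pack('<e', FP16_MAX)       # representable: packs fine

try:
    struct.pack('<e', 70000.0)    # a magnitude that is harmless in FP32...
    overflowed = False
except OverflowError:
    overflowed = True             # ...but does not fit in FP16

# Inside a running network the overflow does not raise: the tensor value
# saturates to inf, and operations like inf - inf or 0 * inf produce NaN.
# NaN confidence scores never pass the --conf threshold, so no boxes are
# drawn and the "result" image is identical to the input.
inf = float('inf')
print(overflowed)             # True
print(math.isnan(inf - inf))  # True
```

This is consistent with forcing half = False making detections reappear: in FP32 the same intermediate values stay finite.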

@acankarakus

acankarakus commented May 7, 2023

I have solution, just modify detect.py, find determine variable half, make half = False, change it on all condition

Thanks for the comment, @afcreative. You saved me a lot of time!
