
AssertionError: non-normalized or out of bounds coordinate labels: #658

Closed
Samjith888 opened this issue Nov 25, 2019 · 11 comments
Labels
bug Something isn't working

Comments

@Samjith888

Got the following error while training on custom data with 7 classes (see https://github.com/ultralytics/yolov3/issues/569).

$ python train.py --data data/coco.data
Namespace(accumulate=2, adam=False, arc='default', batch_size=32, bucket='', cache_images=False, cfg='cfg/yolov3-spp.cfg', data='data/coco.data', device='', epochs=273, evolve=False, img_size=416, img_weights=False, multi_scale=False, name='', nosave=False, notest=False, prebias=False, rect=False, resume=False, transfer=False, var=None, weights='weights/ultralytics49.pt')
Using CUDA device0 _CudaDeviceProperties(name='GeForce GTX 1070', total_memory=8116MB)
Reading labels:   0%|          | 0/1562 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 444, in <module>
    train()  # train normally
  File "train.py", line 193, in train
    cache_images=False if opt.prebias else opt.cache_images)
  File "/home/samjth/Desktop/yolov3/utils/datasets.py", line 335, in __init__
    assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels: %s' % file
AssertionError: non-normalized or out of bounds coordinate labels: /home/samjth/Desktop/yolov3/data/coco/labels/train2014/073m_WW_5049.txt

Samjith888 added the bug label Nov 25, 2019
@FranciscoReveriano
Contributor

Your labels are not in the correct COCO format.

@Samjith888
Author

Your labels are not in the correct coco-format.

Do you mean the COCO directory structure or the label.txt file?

label.txt is in the following format:

Class_id Xmin Ymin width height

@glenn-jocher
Member

glenn-jocher commented Nov 25, 2019

@Samjith888 see https://docs.ultralytics.com/yolov5/tutorials/train_custom_data

Each row is class x_center y_center width height format, in normalized coordinates (0-1).

@Samjith888
Author

Samjith888 commented Nov 26, 2019

@Samjith888 see https://docs.ultralytics.com/yolov5/tutorials/train_custom_data

Each row is class x_center y_center width height format, in normalized coordinates (0-1).

My label.txt looks like the following:
class x_center y_center width height
2 2205.499392 0.0 576.5570559999996 2160.0

And the image resolution is 4096 × 2160.

By normalizing the x_center, y_center, width and height values we can convert them into the 0-1 range. But during training, how will the model locate our ROI in the 4096 × 2160 image using only the normalized values (between 0 and 1)?
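Normalized coordinates are just fractions of the image size, so they can be mapped back to pixels for any resolution. A minimal sketch of that round trip, assuming a 4096 × 2160 image (the helper name is illustrative, not from the repository):

```python
# Map one normalized YOLO label row back to pixel coordinates.
# Illustrative helper, not part of the ultralytics/yolov3 code base.
def denormalize(x_c, y_c, w, h, img_w=4096, img_h=2160):
    """Normalized (0-1) xywh -> pixel xywh for the given image size."""
    return x_c * img_w, y_c * img_h, w * img_w, h * img_h

print(denormalize(0.5, 0.5, 0.25, 0.25))  # box centred in the image, 1024 x 540 px in size
```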

@glenn-jocher
Member

@Samjith888 we've updated the Custom Training Example in the wiki with directions for normalizing. All labels must be normalized otherwise training will not work.

[Screenshot: updated wiki section on normalizing labels]

@Samjith888
Author

@Samjith888 we've updated the Custom Training Example in the wiki with directions for normalizing. All labels must be normalized otherwise training will not work.


Values before normalizing
x_center y_center width height
2029.5434240000002 1397.85804 2257.0106880000003 1524.2839199999999

After Normalization
0.9396034370370371 0.341273935546875 1.0449123555555557 0.37213962890624996

I followed the instructions in the wiki for normalizing the values, and the image resolution is 4096 × 2160 (w × h). But the width value is still greater than one (1.0449123555555557) even after normalization.

@glenn-jocher
Member

@Samjith888 well maybe your image width and height are switched, or your original coordinates are xyxy rather than xywh.
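
In case it helps, here is a minimal sketch of the conversion glenn-jocher describes for pixel xyxy boxes (the function name and example values are only illustrative): x values are divided by the image width and y values by the image height.

```python
def xyxy_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel xyxy box to a normalized YOLO 'x_center y_center width height' row."""
    x_c = (xmin + xmax) / 2 / img_w   # x values divided by image width
    y_c = (ymin + ymax) / 2 / img_h   # y values divided by image height
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_c, y_c, w, h

# All four outputs fall in 0-1 as long as the box lies inside a 4096 x 2160 image.
print(xyxy_to_yolo(100, 200, 1100, 1200, 4096, 2160))
```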

@Samjith888
Author

@Samjith888 well maybe your image width and height are switched, or your original coordinates are xyxy rather than xywh.

I'm getting the following error after resolving the one above:

Downloading https://drive.google.com/uc?export=download&id=1uTlyDWlnaqXcsKOktP5aH_zRDbfcDp-y as yolov3.weights... Done (147.5s)
Traceback (most recent call last):
  File "train.py", line 444, in <module>
    train()  # train normally
  File "train.py", line 131, in train
    cutoff = load_darknet_weights(model, weights)
  File "/home/samjth/Desktop/yolov3/models.py", line 352, in load_darknet_weights
    conv_w = torch.from_numpy(weights[ptr:ptr + num_w]).view_as(conv_layer.weight)
RuntimeError: shape '[1024, 512, 3, 3]' is invalid for input of size 1891940

I have 7 classes, hence filters=36. I have changed the classes and filters values in the 3 places in the yolov3.cfg file.
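
For reference, 36 is consistent with the usual Darknet formula for the filters of the convolutional layer directly before each [yolo] layer (3 anchors per scale, each predicting x, y, w, h, objectness plus one score per class); a quick sanity check:

```python
# filters for the [convolutional] layer preceding each [yolo] layer in the cfg
classes = 7
filters = (classes + 5) * 3
print(filters)  # 36
```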

@glenn-jocher
Member

@Samjith888 you may want to try loading the PyTorch weights instead (--weights yolov3.pt), or no pretrained weights at all (--weights ''), since it seems that your cfg is not compatible with yolov3.weights.
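
For example, reusing the --data path from the command at the top of this issue:

```
python train.py --data data/coco.data --weights yolov3.pt
python train.py --data data/coco.data --weights ''
```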

@Damon2019

@Samjith888 Hi, have you succeeded?

@HassanBinHaroon

@Damon2019 @Samjith888

This problem arises when a bounding box coordinate is greater than the corresponding image dimension.

For instance: my image dimensions are 640 (width) x 512 (height) and, for some reason, my bounding box coordinates are xmin = 600, ymin = 450, xmax = 660, ymax = 510.

In the above example, xmax > 640, so after normalizing the result wouldn't be between 0 and 1; it would be greater than 1. This kind of situation is the cause of this error.
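
A quick way to find the offending files before training is to re-run the same check that raises the AssertionError; a minimal sketch, assuming the label directory from the traceback above (adjust the path to your dataset):

```python
import glob
import numpy as np

# Flag label files with non-normalized or out-of-bounds coordinates,
# mirroring the assertion in utils/datasets.py.
for path in glob.glob("data/coco/labels/train2014/*.txt"):
    labels = np.loadtxt(path, ndmin=2)
    if labels.size == 0:
        continue
    coords = labels[:, 1:]  # x_center, y_center, width, height (should all be 0-1)
    if (coords < 0).any() or (coords > 1).any():
        print("bad label file:", path)
```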
