build_targets function #8

Closed
okanlv opened this issue Sep 8, 2018 · 7 comments

okanlv commented Sep 8, 2018

Can you explain why you have used the following constants? I have inspected a few different yolov3 implementations, but none had a similar operation.

u = gi.float() * 0.4361538773074043 + gj.float() * 0.28012496588736746 + a.float() * 0.6627147212460307

glenn-jocher commented Sep 9, 2018

Ah yes. I did not comment this area sufficiently; I will do a commit to better explain it. This section assigns the closest grid point gi, gj, and anchor a to each target, to ensure that each grid-anchor point is assigned to only one target.

My first technique to do this was to select unique rows from the n x 3 [gi, gj, a] matrix using:

u = torch.cat((gi, gj, a), 0).view(3, -1).numpy()
_, first_unique_slow = np.unique(u[:, iou_order], axis=1, return_index=True)  # first unique indices

but during profiling I found np.unique was slow at selecting unique rows, so I merged the 3 columns into a single column by taking the dot product with a random 1 x 3 vector (the 3 random constants you see here). This produced much faster np.unique results.

u = gi.float() * 0.4361538773074043 + gj.float() * 0.28012496588736746 + a.float() * 0.6627147212460307
_, first_unique = np.unique(u[iou_order], return_index=True)  # first unique indices

I tested the two methods for accuracy using the line below, and found differences to be extremely rare... but wait, these are both sorted, and the output used later on is not sorted. OK, I will look into this some more to see the effect of the sorting.

print(((np.sort(first_unique_slow) - np.sort(first_unique)) ** 2).sum())
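
For readers following along, here is a minimal, self-contained sketch (not the repository code) of the two approaches described above; gi, gj, a, and iou_order are random stand-ins for the grid indices, anchor indices, and best-IoU-first ordering used in build_targets.

```python
import numpy as np
import torch

n = 8
gi = torch.randint(0, 13, (n,))        # grid x index per target (stand-in data)
gj = torch.randint(0, 13, (n,))        # grid y index per target
a = torch.randint(0, 3, (n,))          # anchor index per target
iou_order = torch.randperm(n).numpy()  # stand-in for a best-IoU-first ordering

# Method 1: unique columns of the 3 x n [gi, gj, a] matrix (np.unique over an axis is slower).
u_rows = torch.cat((gi, gj, a), 0).view(3, -1).numpy()
_, first_unique_slow = np.unique(u_rows[:, iou_order], axis=1, return_index=True)

# Method 2: collapse each (gi, gj, a) triplet into one float via a random projection,
# so np.unique only has to deduplicate a 1-D array (much faster).
u_hash = (gi.float() * 0.4361538773074043
          + gj.float() * 0.28012496588736746
          + a.float() * 0.6627147212460307).numpy()
_, first_unique = np.unique(u_hash[iou_order], return_index=True)

# With overwhelming probability the random projection separates distinct triplets,
# so both methods select the same set of positions (possibly in a different order).
print(sorted(first_unique_slow), sorted(first_unique))
```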

glenn-jocher commented Sep 9, 2018

Yes, I've confirmed there is a bug here. Thank you very much for spotting this! So numpy returns sorted unique indices by default, whereas I assumed they retained the original order. There is an open numpy issue on the topic:
numpy/numpy#8621

I've corrected this by reverting to the unique row method (first example above), which does appear to retain the original order. Commit is 6116acb.
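
As a side note for readers, here is a minimal example (unrelated to the repository code) of the numpy behaviour described above: return_index gives first-occurrence indices, but they follow the sorted unique values rather than the original encounter order.

```python
import numpy as np

x = np.array([30, 10, 30, 20, 10])
vals, idx = np.unique(x, return_index=True)
print(vals)             # [10 20 30]  -> unique values, sorted
print(idx)              # [1 3 0]     -> first-occurrence indices, ordered by the sorted values
print(np.sort(idx))     # [0 1 3]     -> sorting the indices recovers original encounter order
print(x[np.sort(idx)])  # [30 10 20]  -> unique values in first-appearance order
```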

@glenn-jocher glenn-jocher added the bug Something isn't working label Sep 9, 2018
@glenn-jocher glenn-jocher self-assigned this Sep 9, 2018
okanlv commented Sep 10, 2018

Thank you.

@Bilal-Yousaf

@glenn-jocher Could you please explain how this will affect the results? It seems like the results will be the same whichever method we use.

glenn-jocher commented Aug 20, 2020

@Bilal-Yousaf Ultralytics has open-sourced YOLOv5 at https://github.com/ultralytics/yolov5, featuring faster, lighter and more accurate object detection. YOLOv5 is recommended for all new projects.



** GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS. EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
  • August 13, 2020: v3.0 release: nn.Hardswish() activations, data autodownload, native AMP.
  • July 23, 2020: v2.0 release: improved model definition, training and mAP.
  • June 22, 2020: PANet updates: new heads, reduced parameters, improved speed and mAP 364fcfd.
  • June 19, 2020: FP16 as new default for smaller checkpoints and faster inference d4c6674.
  • June 9, 2020: CSP updates: improved speed, size, and accuracy (credit to @WongKinYiu for CSP).
  • May 27, 2020: Public release. YOLOv5 models are SOTA among all known YOLO implementations.
  • April 1, 2020: Start development of future compound-scaled YOLOv3/YOLOv4-based PyTorch models.

Pretrained Checkpoints

| Model | AP (val) | AP (test) | AP50 | Speed GPU | FPS GPU | params | FLOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv5s | 37.0 | 37.0 | 56.2 | 2.4ms | 416 | 7.5M | 13.2B |
| YOLOv5m | 44.3 | 44.3 | 63.2 | 3.4ms | 294 | 21.8M | 39.4B |
| YOLOv5l | 47.7 | 47.7 | 66.5 | 4.4ms | 227 | 47.8M | 88.1B |
| YOLOv5x | 49.2 | 49.2 | 67.7 | 6.9ms | 145 | 89.0M | 166.4B |
| YOLOv5x + TTA | 50.8 | 50.8 | 68.9 | 25.5ms | 39 | 89.0M | 354.3B |
| YOLOv3-SPP | 45.6 | 45.5 | 65.2 | 4.5ms | 222 | 63.0M | 118.0B |

** APtest denotes COCO test-dev2017 server results; all other AP results in the table denote val2017 accuracy.
** All AP numbers are for single-model single-scale without ensemble or test-time augmentation. Reproduce by `python test.py --data coco.yaml --img 640 --conf 0.001`
** SpeedGPU measures end-to-end time per image averaged over 5000 COCO val2017 images using a GCP n1-standard-16 instance with one V100 GPU, and includes image preprocessing, PyTorch FP16 image inference at `--batch-size 32 --img-size 640`, postprocessing and NMS. Average NMS time included in this chart is 1-2 ms/img. Reproduce by `python test.py --data coco.yaml --img 640 --conf 0.1`
** All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
** Test Time Augmentation (TTA) runs at 3 image sizes. Reproduce by `python test.py --data coco.yaml --img 832 --augment`

For more information and to get started with YOLOv5 please visit https://github.com/ultralytics/yolov5. Thank you!

@Bilal-Yousaf

@glenn-jocher Could you please explain how this will affect the results? It seems like the results will be the same whichever method we use.
Thank you for sharing YOLOv5.
Could you please point out how np.unique returning sorted output will affect the results? I think this is handled properly?

@glenn-jocher

@Bilal-Yousaf the use of np.unique with sorted output would not affect the results in this specific scenario, as it is handled properly in the existing code. Both methods would yield the same results in this implementation. Thank you for bringing this up, and feel free to reach out if you have any more questions or concerns!
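
To illustrate the point, here is a minimal sketch with made-up numbers (not the repository code): when the selected indices are only used to pick out a subset of targets, and the downstream computation does not depend on row order (for example a summed loss), sorted and unsorted index arrays give identical results.

```python
import numpy as np

per_target_loss = np.array([10., 20., 30., 40., 50., 60.])  # hypothetical per-target loss terms
idx_sorted = np.array([0, 2, 5])     # the kind of ordering np.unique returns (sorted)
idx_encounter = np.array([5, 0, 2])  # the same set in original encounter order

# Order-independent use, such as summing the selected losses, is unaffected by the ordering.
print(per_target_loss[idx_sorted].sum())     # 100.0
print(per_target_loss[idx_encounter].sum())  # 100.0
```

A positional pairing with a differently ordered array would still be sensitive to the index order, which is the situation the earlier commit (6116acb) guarded against.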
