
YOLOv5 License Issues with Kaggle Wheat Competition: GPL vs MIT #317

Closed
chabir opened this issue Jul 7, 2020 · 22 comments
Labels: question (Further information is requested), Stale (stale and scheduled for closing soon)

Comments


chabir commented Jul 7, 2020

Would you be open to changing the license to the MIT license, so that YOLOv5 can be used on Kaggle and elsewhere?


github-actions bot commented Jul 7, 2020

Hello @chabir, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook Open In Colab, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher

@chabir can you elaborate on the issue?

We were considering switching to the European Union Public Licence (EUPL) 1.2 for its multi-language translations and GPL compatibility, but we are not currently considering MIT.


chabir commented Jul 7, 2020

@glenn-jocher: thanks for your quick feedback.
To be fully clear, your latest version of YOLO performs better than any other model (Faster R-CNN, EfficientDet, etc.) in the wheat competition on Kaggle, but the organizers are likely to disregard any solution involving a GPL license, such as using YOLOv5. This is a real shame given the performance, how fast it is to train and run inference, and how easy it is to use and adapt to custom cases.

@glenn-jocher

@chabir ok I see! Do you have a link to the kaggle license guidelines we could look at?


TaoXieSZ commented Jul 7, 2020

@glenn-jocher Maybe this could help.

https://opensource.org/licenses/MIT


chabir commented Jul 8, 2020

@glenn-jocher: For this particular competition, Global Wheat Detection (rules change from one competition to another), the rules are here. The sponsor of this competition is a Canadian university.


sokazaki commented Jul 8, 2020

Hi glenn,

Thank you for the reply.
Kaggle's and the competition host's decisions are as follows:
https://www.kaggle.com/c/global-wheat-detection/discussion/163433
https://www.kaggle.com/c/global-wheat-detection/discussion/164845

To be concise, MIT/Apache/BSD-licensed libraries are allowed, but GPL-based libraries won't be permitted in final submissions on Kaggle...

@glenn-jocher glenn-jocher changed the title licence YOLOv5 License Issues with Kaggle Wheat Competition: GPL vs MIT Jul 8, 2020

glenn-jocher commented Jul 8, 2020

@sokazaki thanks for providing the links. I had no idea there was such an active discussion on the topic! I'm excited people are finding YOLOv5 useful, and especially excited to see it performing so well in the Kaggle Global Wheat Detection competition. I've posted a response.

I understand this may be unfortunate for many competition participants, but ultimately we believe a greater good rests in ensuring the continuation of open-source works, so that all people may benefit from advancements, not just single commercial entities. Please note it's also possible I don't have a complete grasp of all the licensing aspects, as I'm not a legal expert, so if new information comes to light in the future we may update our position as appropriate. All feedback is appreciated!

https://www.kaggle.com/c/global-wheat-detection/discussion/163433#920645

All,

This Global Wheat Detection competition, and this thread in particular were recently brought to my attention here #317.

I'm very happy that the community is finding YOLOv5 useful!! I personally have not participated in any Kaggle competitions, but I did participate in a National Geospatial-Intelligence Agency (NGA) competition that got me interested in YOLO and propelled me down my ML path a couple summers ago. I created a repo at the time to document my efforts:
https://github.com/ultralytics/xview-yolov3

It's a nice irony now to see the situation reversed, with other young people learning and using YOLOv5 to start their own research and competition efforts. I'm very humbled and hopeful that smart minds might use what we have developed as the basis for further advancements of their own in the future :)

Regarding the licensing issues, both of our YOLO repositories, https://github.com/ultralytics/yolov3 and https://github.com/ultralytics/yolov5, are licensed under GPL 3.0. The intention of the license is to allow full use for any purpose (academic, research, or commercial) of the original or modified code, provided that derivative works are licensed under the same terms. The greater good we have in mind here is the capability of all people to benefit from advancements made with future open-sourced works that begin from or leverage YOLOv5 as a starting point.

Ultralytics was founded to work on particle physics applications, we sleep with one eye open towards expanding knowledge and advancing science. We are always open to new ideas, but we believe we have the right license in place for our work. Isaac Newton once said "If I have seen further it is by standing on the shoulders of Giants." I believe that if we allow future works to hide from the world as proprietary code, which other licenses provide provisions for, we are reducing the possible giants that future Isaac Newtons may stand on, and diminishing all of our own futures slightly so.


TaoXieSZ commented Jul 8, 2020

@glenn-jocher Thumbs up for your reply! Actually, I am not so concerned about the final score. The learning experience is the best thing I got from the competition.


glenn-jocher commented Jul 8, 2020

@ChristopherSTAN actually I'm really interested in the pseudo labeling part of this: https://www.kaggle.com/nvnnghia/yolov5-pseudo-labeling

I think a more complete training pipeline needs to somehow incorporate/anticipate human labelling errors, and treat the dataset more flexibly by adding/removing/adjusting labels as appropriate as training progresses.

EDIT: After reading the pseudo labeling notebook I realize I had misunderstood. I thought the notebook was relabeling the training data, but it is labelling the unlabelled test data (using a pretrained model) and aggregating it to the train set, and finetuning the pretrained model on the larger mixed dataset to improve the score from 0.75 to 0.7644. Very inventive strategy!
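The loop described in that notebook can be sketched roughly as follows. This is a hypothetical outline of the general pseudo-labeling strategy, not code from the notebook or from YOLOv5: `model.predict`, `model.finetune`, the dataset containers, and the 0.6 confidence threshold are all placeholder names and illustrative values.

```python
CONF_THRES = 0.6  # illustrative: keep only confident pseudo-labels


def filter_pseudo_labels(predictions, conf_thres=CONF_THRES):
    """Keep predicted boxes whose confidence exceeds the threshold.

    Each prediction is a (x, y, w, h, confidence, class) tuple."""
    return [p for p in predictions if p[4] >= conf_thres]


def pseudo_label_round(model, train_set, test_images):
    """One round of pseudo-labeling: predict on the unlabeled test images,
    keep confident detections as labels, merge them with the train set,
    and finetune the pretrained model on the larger mixed dataset."""
    pseudo_set = []
    for img in test_images:
        preds = model.predict(img)  # hypothetical inference call
        labels = filter_pseudo_labels(preds)
        if labels:
            pseudo_set.append((img, labels))
    mixed_set = list(train_set) + pseudo_set
    model.finetune(mixed_set)  # hypothetical finetuning call
    return model
```

The confidence filter is the important design choice: low-confidence pseudo-labels inject label noise into the mixed dataset, which is exactly the human-labeling-error problem mentioned above.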


TaoXieSZ commented Jul 8, 2020

@glenn-jocher Yes! I used this trick too, and reached 4th place earlier (24th now). I also applied a more interesting data augmentation: https://www.kaggle.com/khoongweihao/insect-augmentation-with-efficientdet-d6/comments

Though it is specific to this dataset, it is really interesting.
Because of limited computing resources, I focus on the data side, like augmentations. I also tried your Cutout and flipud; they seem to give a small improvement.
If you have more ideas, I will try them.

Thanks for your help again.


glenn-jocher commented Jul 8, 2020

@ChristopherSTAN insects are a very interesting idea! I thought that augmentations only helped if similar effects are at least somewhat observable in the val set. So perhaps there are insects naturally in the images, or perhaps the bees are acting more as small cutouts.

WBF sounds a bit like our Merge NMS. By coincidence I just recently updated YOLOv5 to allow Model Ensembles and Test Time Augmentation (TTA). These two, combined with Merge NMS, would probably be of interest to any competition participants. The tutorials are:

For Merge NMS, it is turned on in the code here by passing merge=True to non_max_suppression(). It merges boxes from a single image based on their class, IoU and confidence levels rather than removing them entirely as in NMS:

yolov5/utils/utils.py, lines 543-544 in 1b9e28e:

def non_max_suppression(prediction, conf_thres=0.1, iou_thres=0.6, merge=False, classes=None, agnostic=False):
    """Performs Non-Maximum Suppression (NMS) on inference results


TaoXieSZ commented Jul 8, 2020

@glenn-jocher Excited to see that!

@glenn-jocher glenn-jocher self-assigned this Jul 8, 2020
@glenn-jocher glenn-jocher added the question Further information is requested label Jul 8, 2020

sokazaki commented Jul 9, 2020

@glenn-jocher thank you for replying and considering this topic! OK, I see. I respect your position, and I think the same way. However, GPL-based code is too risky, not only on Kaggle, so realistically I won't be able to use your codebases anywhere... but I love your concise and smart coding style, and I learned some tips, like using Apex. Thanks!


sokazaki commented Jul 9, 2020

In addition, pseudo-labeling is a promising approach for semi-supervised / unsupervised / transfer learning.
If you are interested in this topic, these papers using pseudo-labeling approaches may be useful.
(They are not about object detection, but they are insightful.)

[Unsupervised / Self-Supervised] Deep Clustering for Unsupervised Learning of Visual Features, https://arxiv.org/abs/1807.05520
[Transfer Learning / Domain Adaptation] Unsupervised Domain Adaptive Re-Identification: Theory and Practice, https://arxiv.org/abs/1807.11334
[Semi-Supervised Domain Adaptation] Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification, https://arxiv.org/abs/1811.10144


TaoXieSZ commented Jul 9, 2020

@glenn-jocher Happy to see my model get a higher score after using CIOU and bee-augmentations!


TaoXieSZ commented Jul 9, 2020

@sokazaki Thanks for sharing! I sometimes considered pseudo-labeling a kind of "cheat" in competitions. Your references resolved my doubts!

@glenn-jocher

@ChristopherSTAN do you think it would help if we switched the default box loss from GIoU to CIoU? We have this implemented already; we simply have not switched the default because we did not observe performance increases on COCO. It's possible other datasets stand to gain more from the switch than COCO, however, as their anchors may not be as closely aligned to the data as the default anchors are to COCO, even with AutoAnchor now switched on by default.

yolov5/utils/utils.py, lines 295-296 in e16e9e4:

def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False):
    # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
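For readers comparing the variants, here is a scalar re-derivation of IoU and the CIoU penalty (center-distance term plus aspect-ratio term). It is a simplified sketch of the published formula, not the batched tensor code in `bbox_iou()`; the function name and box format are illustrative.

```python
import math


def iou_ciou(box1, box2, ciou=False, eps=1e-9):
    """Scalar IoU (or CIoU when ciou=True) for boxes in (x1, y1, x2, y2) format.

    CIoU = IoU - rho^2 / c^2 - alpha * v, where rho is the distance between
    box centers, c is the diagonal of the smallest enclosing box, and v
    measures aspect-ratio inconsistency."""
    # intersection area
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union
    if not ciou:
        return iou
    # squared distance between box centers
    rho2 = ((box1[0] + box1[2] - box2[0] - box2[2]) ** 2
            + (box1[1] + box1[3] - box2[1] - box2[3]) ** 2) / 4
    # squared diagonal of the smallest enclosing box
    cw = max(box1[2], box2[2]) - min(box1[0], box2[0])
    ch = max(box1[3], box2[3]) - min(box1[1], box2[1])
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan(w2 / h2) - math.atan(w1 / h1)) ** 2
    alpha = v / (1 - iou + v + eps)
    return iou - rho2 / c2 - alpha * v
```

The extra penalty terms give a gradient even when boxes barely overlap, which is one common explanation for the faster convergence reported above on some datasets.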

@TaoXieSZ

@glenn-jocher I found CIoU helps the loss converge more quickly, but only in the wheat competition. I don't remember the converging epoch on the VOC dataset. We can test it together.

As you said to me before, GIoU does not improve much. But I found that per-epoch training times don't differ between the three kinds of IoU loss. Again, this is from my observations on Wheat.

@glenn-jocher

@ChristopherSTAN yes, I also observed on COCO that G/D/CIoU all have nearly identical epoch training times. I suppose in that respect there is really no downside to making it the default. If it doesn't harm COCO, and it helps custom datasets converge faster... but wait, final mAP is more important than convergence speed. Did you find it also helped final wheat mAP compared to GIoU?

@TaoXieSZ

@glenn-jocher Not always... But my best model is trained with CIoU, and focal loss helps improve it. I have to read more papers to explain this...

This was referenced Jul 16, 2020
@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the Stale Stale and schedule for closing soon label Aug 10, 2020
@JJrodny JJrodny mentioned this issue Nov 23, 2021