Open solution to the Google AI Object Detection Challenge 🍁

Google AI Open Images - Object Detection Track: Open Solution


This is an open solution to the Google AI Open Images - Object Detection Track 😃

More competitions 🎇

Check our collection of public projects 🎁, where you can find multiple Kaggle competitions with code, experiments and outputs.

Our goals

We are building an entirely open solution to this competition. Specifically:

  1. Learning from the process - posting updates about new ideas, code and experiments is the best way to learn data science. Our activity is especially useful for people who want to enter the competition but lack the appropriate experience.
  2. Encourage more Kagglers to start working on this competition.
  3. Deliver an open source solution with no strings attached. Code is available in our GitHub repository 💻. This solution should establish a solid benchmark, as well as provide a good base for your custom ideas and experiments. We care about clean code 😃
  4. We are opening our experiments as well: everybody can have a live preview of our experiments, parameters, code, etc. Check Google-AI-Object-Detection-Challenge 📈 and the images below:
[screenshots: UNet training monitor 📊 | predicted bounding boxes 📊]
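Predicted boxes like those shown above are typically scored against ground truth via intersection over union (IoU); the competition metric is mAP at IoU >= 0.5. As a flavor of what that involves, here is a small, self-contained IoU helper (an illustrative sketch, not code from this repository):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; max(0, ...) handles non-overlapping boxes.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted box counts as a true positive when its IoU with a ground-truth box of the same class clears the threshold.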


In this open source solution you will find references to Neptune. It is a free platform for community users, which we use daily to keep track of our experiments. Please note that using Neptune is not necessary to proceed with this solution. You may run it as a plain Python script 🐍.

How to start?

Learn about our solutions

  1. Check the Kaggle forum and participate in the discussions.
  2. Check our Wiki pages 🐬, where we describe our work. Below are links to specific solutions:
link to code | link to description
solution-1   | palm-tree 🌴

Dataset for this competition

This competition is special because it uses the Open Images Dataset V4, which is quite large: >1.8M images and >0.5TB 😲 To make it more approachable, we are hosting the entire dataset in Neptune's public directory 😎. You can use this dataset in Neptune with no additional setup 👍.
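For a rough sense of scale, the figures above imply an average of only a few hundred kilobytes per image. A quick back-of-the-envelope check, using the approximate numbers from the text (and treating TB as binary for simplicity):

```python
# Back-of-the-envelope: average size per image in Open Images V4,
# using the rough figures quoted above (>1.8M images, >0.5 TB).
num_images = 1_800_000
total_bytes = 0.5 * 1024**4  # 0.5 TB, interpreted as binary (TiB)

avg_bytes = total_bytes / num_images
avg_kib = avg_bytes / 1024
print(f"~{avg_kib:.0f} KiB per image on average")
```

So while the dataset as a whole is unwieldy, individual images are ordinary JPEG-sized, which is why hosting it centrally makes it practical to work with.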

Start experimenting with ready-to-use code

You can jump-start your participation in the competition by using our starter pack. The installation instructions below will guide you through the setup.


Fast Track

  1. Clone the repository and install requirements (check requirements.txt):
pip3 install -r requirements.txt
  2. Register on Neptune (if you wish to use it) and create your project, for example Google-AI-Object-Detection-Challenge.
  3. Train RetinaNet:


Cloud (via Neptune):

neptune send --worker m-4p100 \
--environment pytorch-0.3.1-gpu-py3 \
--config configs/neptune.yaml \
train --pipeline_name retinanet

Local (via Neptune):

neptune run train --pipeline_name retinanet

Plain Python:

python -- train --pipeline_name retinanet
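All three invocations above ultimately reach the same Python entry point, which dispatches on a command name and a --pipeline_name flag. As a purely hypothetical sketch of such a dispatcher (the function and pipeline registry here are illustrative, not this repository's actual API), built on the standard library's argparse:

```python
import argparse

# Hypothetical pipeline registry; the real solution wires up actual pipelines.
PIPELINES = {"retinanet": "RetinaNet pipeline placeholder"}

def build_parser():
    """CLI with train / evaluate_predict sub-commands, mirroring the usage above."""
    parser = argparse.ArgumentParser(description="Object detection pipelines")
    sub = parser.add_subparsers(dest="command", required=True)
    for name in ("train", "evaluate_predict"):
        cmd = sub.add_parser(name)
        cmd.add_argument("--pipeline_name", required=True, choices=PIPELINES)
        cmd.add_argument("--chunk_size", type=int, default=None)
    return parser

def main(argv):
    args = build_parser().parse_args(argv)
    # In a real solution this would construct and run the named pipeline.
    return f"{args.command}:{args.pipeline_name}"
```

For example, `main(["train", "--pipeline_name", "retinanet"])` dispatches to the train command with the RetinaNet pipeline.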
  4. Evaluate/predict RetinaNet:

Note: in case of memory trouble, go to neptune.yaml and change batch_size_inference: 1.

🐹 With the cloud environment you need to change the experiment directory to the one that you have just trained. Let's assume that your experiment id was GAI-14. You should go to neptune.yaml and change:

  experiment_dir:  /output/experiment
  clone_experiment_dir_from:  /input/GAI-14/output/experiment
Cloud (via Neptune):

neptune send --worker m-4p100 \
--environment pytorch-0.3.1-gpu-py3 \
--config configs/neptune.yaml \
--input /GAI-14 \
evaluate_predict --pipeline_name retinanet --chunk_size 100

Local (via Neptune):

neptune run evaluate_predict --pipeline_name retinanet --chunk_size 100

Plain Python:

python -- evaluate_predict --pipeline_name retinanet --chunk_size 100
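The --chunk_size flag above bounds memory during prediction: instead of pushing the whole test set through the model at once, images are processed in fixed-size chunks and the results concatenated. A minimal, generic sketch of that pattern (not this repository's actual code):

```python
def chunks(items, chunk_size):
    """Yield successive chunk_size-sized slices of items."""
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]

def predict_in_chunks(model_fn, images, chunk_size=100):
    """Run model_fn over images chunk by chunk, keeping peak memory bounded."""
    predictions = []
    for chunk in chunks(images, chunk_size):
        predictions.extend(model_fn(chunk))  # only one chunk in memory at a time
    return predictions

# Example with a dummy "model" that returns one box per image:
fake_images = list(range(250))
boxes = predict_in_chunks(
    lambda batch: [(i, i, i + 10, i + 10) for i in batch], fake_images
)
```

Smaller chunks trade a little throughput for a lower memory ceiling, which is why reducing chunk_size (or batch_size_inference, as noted above) helps when inference runs out of memory.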

Get involved

You are welcome to contribute your code and ideas to this open solution. To get started:

  1. Check the competition project on GitHub to see what we are working on right now.
  2. Express your interest in a particular task by writing a comment in this task, or by creating a new one with your fresh idea.
  3. We will get back to you quickly in order to start working together.
  4. Check CONTRIBUTING for some more information.

User support

There are several ways to seek help:

  1. Kaggle discussion is our primary way of communication.
  2. Read the project's Wiki, where we publish descriptions of the code, pipelines and supporting tools such as Neptune.
  3. Submit an issue directly in this repo.