This repository has been archived by the owner on Mar 12, 2024. It is now read-only.

Custom Object Detection Training #40

Closed
nisarggandhewar opened this issue Jun 3, 2020 · 2 comments
Labels
duplicate This issue or pull request already exists

Comments

@nisarggandhewar

❓ How to do something using DETR

Describe what you want to do, including:

  1. what inputs you will provide, if any:
  2. what outputs you are expecting:

NOTE:

  1. Only general answers are provided.
    If you want to ask about "why X did not work", please use the
    Unexpected behaviors issue template.

  2. About how to implement new models / new dataloader / new training logic, etc., check documentation first.

  3. We do not answer general machine learning / computer vision questions that are not specific to DETR, such as how a model works, how to improve your training/make it converge, or what algorithm/methods can be used to achieve X.

How do I train a new model for custom object detection in Google Colab?

@fmassa
Contributor

fmassa commented Jun 3, 2020

Hi,

We provide training scripts for use from the command line; see https://github.com/facebookresearch/detr#training
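For reference, the training invocation from that README section looked roughly like the following at the time (the GPU count and dataset path are placeholders, and the dataset must be laid out in COCO format):

```shell
# Multi-GPU training from scratch; --coco_path points at a directory
# containing annotations/ plus train2017/ and val2017/ image folders.
python -m torch.distributed.launch --nproc_per_node=8 --use_env \
    main.py --coco_path /path/to/coco

# Single-GPU variant (e.g. for a Colab runtime):
python main.py --coco_path /path/to/coco
```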

For more information on how to train on your own dataset, check #9 and #28

As such, I'm closing this issue as a duplicate of #9

@fmassa fmassa closed this as completed Jun 3, 2020
@fmassa fmassa added the duplicate This issue or pull request already exists label Jun 3, 2020
@Dicko87

Dicko87 commented Jan 19, 2021

Hi there,
I have been using DETR on my own dataset and it works very well. I get a good mAP and Recall on the validation set. My question is, how to I run cocoEval to give me the same or similar results to what it got during model training. For example the model achieved and mAP of 0.89 on the validation set. I then decided to see if I could produce the same results again. I ran the model in eval mode on the dataset and set a confidence threshold > 0.8 and saved the results in a json file. I then used cocoeval and gave the validation set json and my new resFile as inputs and the evaluation results gave me an mAP of 0.6, which isn’t right. How do I go about getting the same or similar results as to what the model achieved originally and consequently how do I adjust the confidence threshold and get the precision-recall curves for these different thresholds. I should have said that I did try the cocoeval with all predictions (no filter on the confidences) and my mAP result was still a lot lower than what the model showed me. I guess my real question is, what code / steps should I take so that I can get the same results as what the model gave on the validation set? How to I replicate these figures ? What do I need to do to achieve this?
So far I ran the model in eval mode, ran predictions on my dataset and saved them into a json file. I then ran the code as shown in the attached image, but the results I got were no where near the same as what the model gave.
thank you.
[attachment: screenshot of the cocoeval code]
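A likely cause of the gap (no maintainer reply was posted here): COCO mAP integrates precision over the full range of detection scores, so filtering predictions by a confidence threshold before handing them to cocoeval discards recall and depresses AP. The evaluator expects every detection with its score and does the score sweeping internally. A self-contained sketch illustrating the effect, using toy scores and match flags (all values hypothetical) and a COCO-style max-interpolated AP for a single class at a single IoU threshold:

```python
def average_precision(detections, num_gt):
    """COCO-style interpolated AP for one class at one IoU threshold.

    detections: list of (score, is_true_positive) pairs.
    num_gt: number of ground-truth boxes for this class.
    """
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    precisions, recalls = [], []
    for _, correct in detections:
        tp += correct
        fp += not correct
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # Make the precision envelope monotonically non-increasing.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Integrate precision over recall.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Toy detections: (confidence, matched a ground-truth box?); 4 GT boxes.
dets = [(0.95, True), (0.90, True), (0.85, False), (0.70, True), (0.60, True)]

ap_all = average_precision(dets, num_gt=4)                      # -> 0.90
ap_cut = average_precision([d for d in dets if d[0] > 0.8], 4)  # -> 0.50
print(f"AP with all detections:       {ap_all:.2f}")
print(f"AP after 0.8 score threshold: {ap_cut:.2f}")
```

So, to reproduce the training-time numbers, the results JSON should contain every prediction the model emits (for DETR, typically all 100 queries per image) together with its score, letting COCOeval do the thresholding internally. If I recall correctly, after `evaluate()` and `accumulate()`, pycocotools exposes the per-threshold precision-recall data in `cocoEval.eval['precision']`, which is what the precision-recall curves can be read from.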
