
Are the unreasonable images used for training? #1

Closed
Hwang64 opened this issue Oct 8, 2018 · 4 comments

Comments

Hwang64 commented Oct 8, 2018

As Fig. 7 shows, there are many unreasonable synthesized images. Are these images used for training Faster-RCNN or BlitzNet? Do they hurt detection performance?

dvornikita (Owner) commented

We didn't evaluate the contribution of "bad" images to the training process, since they are generated on the fly. My intuition is that if the context is wrong, they may hurt performance to some extent. When an image looks unrealistic but the context is correct, it still helps the final performance, as the results suggest. We used the augmentation to train both Faster-RCNN and BlitzNet. In the latter case it was found to be more helpful, most likely due to the larger training batches.


Hwang64 commented Oct 8, 2018

Thank you for your reply. You say that "We didn't evaluate the contribution of 'bad' images to the training process, since they are generated on the fly." Does that mean these "bad" images were filtered out manually before training the detector, so they are not used to train the detectors?

dvornikita (Owner) commented

Sorry for the ambiguity. What I mean is that we train with all generated images, and since they are generated on the fly, we don't know which ones are "good" and which ones are "bad". Hence, we can't separate the bad from the good ones and measure the impact of either group on training. However, as long as the final performance improves, we assume that either we don't have too many bad images, or they don't hurt the training that much.
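
For readers landing here later, a minimal sketch of what "generated on the fly" means in practice. This is not the repository's actual code; `sample_placement`, `paste_instance`, and `detector.train_step` are hypothetical names. The point is that each composite is created inside the input pipeline, consumed by a single training step, and discarded, so there is no stage at which "good" and "bad" composites could be inspected or filtered:

```python
import random

def sample_placement(image, instance):
    # Hypothetical stand-in for the context model: proposes a location
    # and scale for pasting `instance` into `image`. In the actual
    # method this is driven by a learned context network.
    return random.random(), random.random(), random.uniform(0.5, 1.5)

def paste_instance(image, instance, placement):
    # Hypothetical blending step; the real pipeline would blend the
    # segmented instance into the scene and update the ground truth.
    return {"image": image, "instance": instance, "placement": placement}

def training_batches(dataset, instances, batch_size):
    """Yield augmented batches on the fly.

    Composites exist only for the duration of one training step;
    nothing is written to disk, so good/bad ones can't be separated.
    """
    while True:
        batch = []
        for _ in range(batch_size):
            image = random.choice(dataset)
            instance = random.choice(instances)
            placement = sample_placement(image, instance)
            batch.append(paste_instance(image, instance, placement))
        yield batch

# Usage sketch: the detector (Faster-RCNN or BlitzNet) consumes the
# stream directly.
# for batch in training_batches(dataset, instances, batch_size=32):
#     detector.train_step(batch)
```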


Hwang64 commented Oct 9, 2018

OK, thank you for your reply. It's clear to me now.

Hwang64 closed this as completed Oct 9, 2018