_make_data_gen() and start_enqueueing_threads() #98
Comments
Your assumption that the second statement @L173 will never be reached is wrong. The generator created by `_make_data_gen()` runs none of its body when it is created; each call to `next()` resumes execution until the next `yield`, so the statement after the first `yield` is reached on the following call.
Ah, yes. A quick review of yield statements and your response showed me that my starting assumption was wrong. Thank you!
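The generator behaviour discussed above can be shown with a minimal, self-contained example (unrelated to the repository's code): code after the first `yield` is not dead, it runs on the next `next()` call.

```python
def gen_example():
    # Nothing here runs when the generator object is created.
    print("setup runs on the first next()")
    yield "first"
    # This line IS reached: execution resumes here on the second next().
    yield "second"

g = gen_example()          # creates the generator; body has not run yet
assert next(g) == "first"  # runs setup, stops at the first yield
assert next(g) == "second" # resumes after the first yield
```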
Does this mean that the dataset is effectively doubled during training? In particular, that there are two versions of the input image in the training data: the original and its flipped copy?
Yes, data augmentation like random flipping is used to increase the amount of effectively available training data. Other forms of data augmentation applied are random aspect-ratio augmentation (random resize, random crop) and color augmentation (random brightness, contrast, saturation, and hue).
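For readers unfamiliar with these transforms, here is a minimal NumPy sketch of flip-plus-brightness augmentation. This is illustrative only and is not the repository's actual pipeline (KittiSeg implements augmentation with TensorFlow ops):

```python
import numpy as np

def augment(image, rng):
    """Toy augmentation: random horizontal flip plus a random
    brightness shift, for a float image in [0, 1]."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]       # flip left-right
    delta = rng.uniform(-0.2, 0.2)      # brightness shift
    return np.clip(image + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
img = np.full((4, 4, 3), 0.5)
out = augment(img, rng)
assert out.shape == img.shape           # shape is preserved
```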
Hi Marvin, related to random resize or random crop: does that mean the training set will have images of different sizes when feeding the network input?
If you use batch_sizes > 1, all inputs need the same size. That can be achieved by setting
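The constraint above is just that a batch must be stackable into one tensor. A minimal NumPy sketch (a hypothetical `resize_nearest` helper, not code from the repository) shows variably sized images resized to one shape so they can be batched:

```python
import numpy as np

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize so every example shares one shape
    (hypothetical helper; the project itself uses TensorFlow ops)."""
    in_h, in_w = image.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return image[rows][:, cols]

# Three images with different heights/widths:
batch = [np.ones((h, h + 2, 3)) for h in (10, 20, 30)]
# After resizing, they stack into a single (N, H, W, C) tensor:
fixed = np.stack([resize_nearest(im, 16, 16) for im in batch])
assert fixed.shape == (3, 16, 16, 3)
```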
I'm not sure I understand what is happening in `inputs/kitti_seg_input.py` here: https://github.com/MarvinTeichmann/KittiSeg/blob/master/inputs/kitti_seg_input.py#L169-L173. So, we are yielding 2 generators to the variable `gen` in L359. But then why do we do the `gen.next()` in the following line, without making use of the returned value? What is the purpose of this portion of the code in `start_enqueueing_threads()`?
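One common reason for a `gen.next()` call whose return value is discarded is to "prime" the generator: since none of a generator's body runs at creation, a first `next()` forces any one-time setup before the first `yield` to execute up front (e.g. before worker threads start). Whether that is the intent here is an assumption; a minimal sketch (note that Python 2's `gen.next()` is spelled `next(gen)` in Python 3):

```python
def data_gen():
    # Hypothetical one-time setup (building file lists, shuffling, ...)
    # that should run before any consumer asks for data.
    setup_done = True
    while True:
        yield setup_done

gen = data_gen()   # creating the generator runs nothing
next(gen)          # priming call: executes setup up to the first yield,
                   # return value intentionally discarded
```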