
Are you validating in cropped image? #21

Closed
acgtyrant opened this issue Jan 9, 2018 · 1 comment
acgtyrant commented Jan 9, 2018

The val_args is copied from train_args, but it does not change crop.

However, data_grep/get_cityscapes_list.py offers an is_crop option, and I think val_bigger_patch.lst should not be a cropped version. So I set is_crop to False to produce val_bigger_patch.lst, and I tried to disable cropping in train/solver.py as below:

val_args = train_args.copy()
val_args['data_shape'] = [(self.batch_size, 3, 1024, 2048)]
val_args['label_shape'] = [
    (self.batch_size, 1024 * 2048 / self.cell_width ** 2)]
val_args['scale_factors'] = [1]
val_args['use_random_crop'] = False
val_args['use_mirror'] = False
val_args['crop'] = False

But module.fit fails: it complains that train_data and val_data are not consistent, because their data and label shapes differ and the module is already bound to train_data's shapes as below:

module.bind(
    data_shapes=[(self.data_name[0], self.data_shape[0])],
    label_shapes=[(self.label_name[0], self.label_shape[0])])
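
As a rough sketch of one possible workaround (not this repo's code; it assumes the standard MXNet Module API and that the trained symbol, a full-resolution val_data iterator, and cell_width are available), a separate evaluation-only module could be bound to the full-image shape and given the trained parameters:

import mxnet as mx

# Hypothetical workaround: bind a second, evaluation-only module to the
# full 1024x2048 shape and copy in the parameters learned by train_module.
# `sym`, `train_module`, `val_data`, and `cell_width` are assumptions here.
val_module = mx.mod.Module(symbol=sym, context=mx.gpu(0))
val_module.bind(
    data_shapes=[('data', (1, 3, 1024, 2048))],
    label_shapes=[('softmax_label', (1, 1024 * 2048 // cell_width ** 2))],
    for_training=False)
arg_params, aux_params = train_module.get_params()
val_module.set_params(arg_params, aux_params)
# Forward-only scoring, so the training module's bound shapes are untouched.
print(val_module.score(val_data, mx.metric.Accuracy()))

Whether a full-resolution forward pass fits in GPU memory is a separate question, which is essentially the constraint described in the reply below.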

If you actually use a cropped val_bigger_patch.lst, then I will try to validate on the full image by myself; otherwise the program may be buggy in validation, and that is not enough to fix #16.

@GrassSunFlower
Contributor

Currently MXNet uses a static graph to build the network, and GPU memory is pre-allocated. That's why it does not support a mismatch between the training shape and the validation shape, and thus we use cropped validation. By the way, this validation only provides numbers for you to monitor during training, not a real validation benchmark. If you want real benchmark results, please use the test script to generate predictions and use https://github.com/mcordts/cityscapesScripts to do the evaluation.
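
For reference, a minimal sketch of that evaluation flow (paths are placeholders, and the environment variable names and script location follow the cityscapesScripts README, so treat them as assumptions):

import os
import subprocess

# Point cityscapesScripts at the ground truth and at the predictions that
# the test script wrote out; both paths below are placeholders.
os.environ['CITYSCAPES_DATASET'] = '/path/to/cityscapes'
os.environ['CITYSCAPES_RESULTS'] = '/path/to/test_script_outputs'
subprocess.check_call([
    'python',
    'cityscapesScripts/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py',
])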
