
Change interpolation in apply_transform() from nearest to bilinear. #8849

Merged: fchollet merged 1 commit into keras-team:master on Feb 12, 2018

Conversation

@ozabluda (Contributor) commented Dec 20, 2017

Before #2446, interpolation was inconsistent: sometimes nearest, sometimes bicubic. AFAIK, in other libraries and tools the default interpolation is never 'nearest'; sometimes it's bicubic, sometimes bilinear. For Keras, bilinear is better because it's much faster, and it is something a single (de)convolutional layer can learn (i.e., it is basically a single hardcoded, non-trainable (de)convolutional layer), which is not true of nearest, even with pooling.

Also see load_img() #8435
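A minimal sketch of the claim above that bilinear interpolation is "a single hardcoded non-trainable (de)convolutional layer": 2x bilinear upsampling can be written as a transposed convolution with a fixed kernel. This is a numpy/scipy illustration, not Keras code; `bilinear_upsample_2x` is a hypothetical helper.

```python
import numpy as np
from scipy.ndimage import convolve

# Fixed 2x bilinear kernel: outer product of [0.5, 1.0, 0.5] with itself.
kernel = np.outer([0.5, 1.0, 0.5], [0.5, 1.0, 0.5])

def bilinear_upsample_2x(img):
    """2x bilinear upsampling as a transposed convolution with a
    fixed, non-trainable kernel."""
    h, w = img.shape
    # Zero-stuff the input (the 'stride' part of a transposed conv)...
    dilated = np.zeros((2 * h - 1, 2 * w - 1))
    dilated[::2, ::2] = img
    # ...then convolve with the fixed kernel to fill in the gaps.
    return convolve(dilated, kernel, mode='constant', cval=0.0)

img = np.arange(9, dtype=float).reshape(3, 3)
print(bilinear_upsample_2x(img))
# Original pixels pass through untouched; the inserted pixels are
# exact bilinear averages of their neighbours.
```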

@ozabluda ozabluda changed the title Change default interpolation in apply_transform() from nearest to bilinear. Change interpolation in apply_transform() from nearest to bilinear. Dec 20, 2017
@fchollet (Member)

This is also a potentially breaking change.

@ozabluda (Contributor, Author)

Yes, but it doesn't affect any pretrained models. The current default is highly unexpected, since we have fractional shifts, rotations, zooms and whatnot, and it probably makes things worse the vast majority of the time. I actually spent some time looking at 'nearest' interpolation when I was dealing with binarized images, which had to stay binarized after data augmentation. It sucks (the augmented data doesn't correspond to the original). For that reason, I am not even proposing adding an order parameter, since that use case is so exotic. A slightly less exotic case is order=3 for, say, a superresolution autoencoder, but AFAIK even that has moved to order=1, probably for the reasons in my description above.

But we could introduce an order parameter defaulting to None and print a warning that the default changed, though I don't think anybody would care (it would be useful mostly for debugging). Then switch the default to order=1 when we have an API-breaking release. Or we could keep None so that we sometimes do default to order=0 when, say, we have integer shifts only (something like random_crop(), for which I am still designing a good API, and which, maybe, would work just fine with order=1).
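For context, the order values being discussed are scipy.ndimage spline orders (0 = nearest, 1 = bilinear, 3 = bicubic), the library Keras's image transforms call into. A small sketch of why order=0 misbehaves under fractional shifts, assuming scipy is available:

```python
import numpy as np
from scipy.ndimage import shift

row = np.array([0., 0., 1., 1., 0., 0.])

# order=0 (nearest): a 0.4-pixel shift rounds away to nothing,
# so the "augmented" sample is identical to the original.
print(shift(row, 0.4, order=0))   # [0. 0. 1. 1. 0. 0.]

# order=1 (bilinear): the fractional shift is actually applied,
# blending each pixel with its neighbour.
print(shift(row, 0.4, order=1))   # [0. 0. 0.6 1. 0.4 0.]

# order=3 (bicubic): smoother still, but can over/undershoot near edges.
print(shift(row, 0.4, order=3))
```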

@Dref360 (Contributor) commented Jan 16, 2018

This would break existing code for segmentation DA.

@ozabluda (Contributor, Author)

what is "segmentation DA"?

@Dref360 (Contributor) commented Jan 16, 2018

Sorry, I meant data augmentation on segmentation maps.
If you do bilinear interpolation, you corrupt the data. For example, a pixel with class "4" could become "4.5" if its neighbors are not all of the same class.
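A minimal demonstration of that failure mode, using scipy.ndimage directly (a sketch; the class values are illustrative):

```python
import numpy as np
from scipy.ndimage import rotate

# A tiny segmentation map: class 0 background, class 4 object.
labels = np.zeros((8, 8))
labels[2:6, 2:6] = 4

nearest = rotate(labels, angle=10, order=0, reshape=False)
bilinear = rotate(labels, angle=10, order=1, reshape=False)

print(np.unique(nearest))   # [0. 4.]  -- still valid class ids
print(np.unique(bilinear))  # also fractional values between 0 and 4
# Bilinear blends class ids at object boundaries, producing "classes"
# that exist in no label set; segmentation masks need order=0.
```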

@ozabluda (Contributor, Author) commented Jan 16, 2018

I thought segmentation DA currently doesn't really work anyway: search for "segmentation" in #3338, and also see #6538. Also, 2D softmax was merged only a couple of weeks ago. But it is a valid use case for nearest, thank you.

BTW, for binary labels and binary crossentropy, it may actually make things better: automatic soft labels.

@ahundt (Contributor) commented Jan 31, 2018

> I thought segmentation DA currently doesn't really work anyway

That's what I found the last time I tried things out.

@fchollet (Member) left a comment

Ok, let's go with this change. Hopefully it won't break any users...

@fchollet fchollet merged commit 9b08f33 into keras-team:master Feb 12, 2018
@ozabluda ozabluda deleted the patch-21 branch February 13, 2018 01:31
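For reference, the merged change boils down to the spline order passed to scipy. The sketch below paraphrases keras/preprocessing/image.py around the time of this PR and is not the verbatim diff (see commit 9b08f33 for that):

```python
import numpy as np
import scipy.ndimage as ndi

def apply_transform(x, transform_matrix, channel_axis=0,
                    fill_mode='nearest', cval=0.):
    # Apply an affine transform to each channel of image tensor x.
    x = np.rollaxis(x, channel_axis, 0)
    final_affine_matrix = transform_matrix[:2, :2]
    final_offset = transform_matrix[:2, 2]
    channel_images = [ndi.interpolation.affine_transform(
        x_channel, final_affine_matrix, final_offset,
        order=1,  # this PR: was order=0 (nearest), now bilinear
        mode=fill_mode, cval=cval) for x_channel in x]
    x = np.stack(channel_images, axis=0)
    x = np.rollaxis(x, 0, channel_axis + 1)
    return x
```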
@ozabluda (Contributor, Author)

I knew it! Use random linear interpolation for data & labels to improve stuff:

mixup: Beyond Empirical Risk Minimization (2017)
https://arxiv.org/abs/1710.09412v1
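For reference, the idea from that paper in one batch-level sketch (numpy; `mixup_batch` is an illustrative helper, not part of Keras):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2):
    """Random linear interpolation of both inputs and (one-hot) labels,
    per the mixup paper (arXiv:1710.09412)."""
    lam = np.random.beta(alpha, alpha)    # mixing coefficient in [0, 1]
    idx = np.random.permutation(len(x))   # pair each sample with another
    x_mixed = lam * x + (1.0 - lam) * x[idx]
    y_mixed = lam * y + (1.0 - lam) * y[idx]
    return x_mixed, y_mixed
```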

ahundt added a commit to ahundt/keras that referenced this pull request Feb 16, 2018
* 'master' of github.com:fchollet/keras: (57 commits)
  Minor README edit
  Speed up Travis tests (keras-team#9386)
  fix typo (keras-team#9391)
  Fix style issue in docstring
  Prepare 2.1.4 release.
  Fix activity regularizer + model composition test
  Corrected copyright years (keras-team#9375)
  Change default interpolation from nearest to bilinear. (keras-team#8849)
  a capsule cnn on cifar-10 (keras-team#9193)
  Enable us to use sklearn to do cv for functional api (keras-team#9320)
  Add support for stateful metrics. (keras-team#9253)
  The type of list keys was float (keras-team#9324)
  Fix mnist sklearn wrapper example (keras-team#9317)
  keras-team#9287 Fix most of the file-handle resource leaks. (keras-team#9309)
  Pass current learning rate to schedule() in LearningRateScheduler (keras-team#8865)
  Simplify with from six.moves import input (keras-team#9216)
  fixed RemoteMonitor: Json to handle np.float32 and np.int32 types (keras-team#9261)
  Update tweet length from 140 to 280 in docs
  Add `depthconv_conv2d` tests (keras-team#9225)
  Remove `force` option in progbar
  ...