

Feature skip data iteration when caching scoring #557

Merged 8 commits into master on Dec 16, 2019



BenjaminBossan commented Nov 11, 2019

This is the proposed fix for #552.

As discussed there, my proposal is to cache net.forward_iter, so that we can skip the iteration over the data.

It would be very helpful if someone could think of any unintended side-effects this change could have. E.g., in theory, some code could rely on iterating over the data even in case of caching, but I can hardly imagine this happening in reality.

Also, I cannot test on GPU at the moment. I don't see how this would affect the outcome, but it would still be nice if someone could verify that it works.

I deprecated cache_net_infer because it was not "private". However, I made the new _cache_net_forward_iter private.

BenjaminBossan added 2 commits Nov 11, 2019
* indentation level
* disable some pylint messages
* unused fixtures
Before, net.infer was cached when using a scoring callback with
use_caching=True. This way, the time to make an inference step was
saved. However, there was still an iteration step over the data for
each scoring callback. If iteration is slow, this could incur a
significant overhead.

Now net.forward_iter is cached instead. This way, the iteration over
the data is skipped and the iteration overhead should be gone.



marrrcin commented Nov 14, 2019

Can I test this from some nightly build, or do I have to build the package on my own?



Collaborator Author

BenjaminBossan commented Nov 14, 2019

@marrrcin You would need to install from source, but it's not difficult:
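The code snippet for this comment did not survive extraction; installing skorch from a source checkout typically looks roughly like this (assuming a standard pip setup, using the skorch repository URL):

```shell
# clone the repository and install it in editable mode
git clone https://github.com/skorch-dev/skorch.git
cd skorch
pip install -e .
```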


ottonemo left a comment

In general I think this approach works. We could debate whether this is a problem we need to solve or where PyTorch lacks infrastructure (i.e., caching datasets) but ultimately I think it doesn't hurt to fix this.
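For illustration, the "caching datasets" infrastructure alluded to here could be a duck-typed wrapper that memoizes items of a slow underlying dataset, so repeated passes (e.g. by several scoring callbacks) become cheap. This is a hypothetical sketch, not part of skorch or PyTorch:

```python
class CachingDataset:
    # Hypothetical wrapper: memoize items of a slow underlying dataset
    # so that subsequent passes over the data avoid the expensive work.
    def __init__(self, dataset):
        self.dataset = dataset
        self._cache = {}

    def __getitem__(self, i):
        if i not in self._cache:
            self._cache[i] = self.dataset[i]
        return self._cache[i]

    def __len__(self):
        return len(self.dataset)
```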

def _forward_output(self, yp, device):
    if isinstance(yp, tuple):
        return tuple(n.to(device) for n in yp)



ottonemo Nov 15, 2019


This is structurally very similar to skorch.utils.to_tensor. Maybe we should introduce a skorch.utils.to_device instead? This might also become handy if we support multiple GPUs in the future.
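A duck-typed sketch of such a `to_device` helper (an illustration of the suggestion, not skorch's actual implementation) could look like:

```python
def to_device(X, device):
    # Illustrative helper: recursively move a tensor-like object, or a
    # tuple of them, to the given device via its .to() method.
    if isinstance(X, tuple):
        return tuple(to_device(x, device) for x in X)
    return X.to(device)
```

Handling the tuple case recursively keeps the callers (such as `_forward_output`) free of type checks.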



BenjaminBossan Nov 17, 2019

Author Collaborator

Good point, I moved this to skorch.utils.to_device.

BenjaminBossan added 4 commits Nov 17, 2019
Similar to the comment in cache_net_infer
... instead of having it as a method on NeuralNet.

Add tests


Collaborator Author

BenjaminBossan commented Nov 17, 2019

@ottonemo I addressed your comment, please review again.

Before merging this, should we consider making a new release? I wouldn't mind this new feature being only on master for some time, in case it creates trouble down the line.

BenjaminBossan and others added 2 commits Dec 5, 2019
@ottonemo ottonemo merged commit 09be626 into master Dec 16, 2019
4 checks passed
Travis CI - Branch Build Passed
Travis CI - Pull Request Build Passed
continuous-integration/travis-ci/pr The Travis CI build passed
continuous-integration/travis-ci/push The Travis CI build passed
3 participants