
[Breaking] Pytorch 0.4 release compatibility fixes #299

Merged · 4 commits · Apr 27, 2018

Conversation

@mttk (Contributor) commented Apr 25, 2018

#295

  • Fixes volatile for PyTorch 0.4 / 0.3 (with a version check; see the sketch below)
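
For reference, a minimal sketch of this kind of version-checked volatile handling (the names `TORCH_04` and `make_batch` are illustrative assumptions, not torchtext's actual identifiers):

```python
# Sketch only: dispatch on the installed PyTorch version so the same code
# runs on 0.3 (volatile) and 0.4 (torch.no_grad()).
from distutils.version import LooseVersion

import torch
from torch.autograd import Variable

TORCH_04 = LooseVersion(torch.__version__) >= LooseVersion('0.4')

def make_batch(data, train=True):
    if TORCH_04:
        if train:
            return torch.tensor(data)
        # `volatile` is gone in 0.4; inference is expressed with no_grad().
        with torch.no_grad():
            return torch.tensor(data)
    # On 0.3, evaluation-time tensors are marked volatile to skip autograd.
    return Variable(torch.Tensor(data), volatile=not train)
```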

@ssnl commented Apr 25, 2018

Please fix the docstrings too.

@ssnl commented Apr 25, 2018

It would be better if we could also raise an error/warning when the flag is set, so users know they need to update this.
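
A minimal sketch of the suggested warning, assuming a version check like the one above (`check_deprecated_flags` is a hypothetical helper, not the merged code):

```python
import warnings
from distutils.version import LooseVersion

import torch

TORCH_04 = LooseVersion(torch.__version__) >= LooseVersion('0.4')

def check_deprecated_flags(volatile=False):
    # Warn when a flag that has no effect on 0.4 is still being passed.
    if TORCH_04 and volatile:
        warnings.warn("'volatile' was removed in PyTorch 0.4; wrap evaluation "
                      "code in torch.no_grad() instead.", UserWarning)
```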

@mttk (Contributor, Author) commented Apr 25, 2018

I agree with the warning, but the flag isn't used only for volatile (it also sets some variables in Iterator), and it might not be set by the user (e.g. via .splits()). I don't see a clean way to tell whether the flag was set by the user without introducing yet another flag, but I'll figure out a way to make it work.
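
For what it's worth, one standard way to distinguish "the user passed the flag" from "the default was used" is a module-level sentinel default. A minimal sketch (illustrative only, not the approach this PR ended up taking):

```python
_UNSET = object()  # sentinel: no caller will ever pass this by accident

class Iterator:
    def __init__(self, dataset, batch_size, train=_UNSET):
        if train is not _UNSET:
            # Reached only when the caller set `train` explicitly, so a
            # deprecation warning here would not fire on .splits() defaults.
            print("train was set explicitly:", train)
        self.train = True if train is _UNSET else train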

I updated the docstring, made the version check a bit nicer, and added it to Iterator.

@mttk closed this Apr 25, 2018
@mttk reopened this Apr 25, 2018
@mttk (Contributor, Author) commented Apr 25, 2018

I also added weak support for torch.device, as in #294. It's not ideal, as I'm just unpacking the device index, but it will take some time to find all the spots where the code breaks if the variable can be either an int or a torch.device instance.
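
A sketch of that "unpack the device index" compromise, assuming the pre-0.4 torchtext convention of an int index with -1 meaning CPU (`_device_index` is a hypothetical helper):

```python
import torch

def _device_index(device):
    # Normalize to the legacy int convention expected downstream.
    if isinstance(device, torch.device):
        return device.index if device.type == 'cuda' else -1
    return device  # already an int (or None)

print(_device_index(torch.device('cuda:0')))  # 0
print(_device_index(torch.device('cpu')))     # -1
```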

@mttk changed the title Fix volatile → Fix volatile, add support for torch.device Apr 25, 2018
@mttk changed the title Fix volatile, add support for torch.device → Pytorch 0.4 release compatibility fixes Apr 25, 2018
@zshihang (Contributor) commented:

How about getting rid of all the device arguments in torchtext? The user could change the device with a one-liner:

```python
input, target = input.to(device), target.to(device)
```

@jekbradbury (Contributor) commented:

I'm open to either of these solutions, and I'm sorry I haven't been on top of 0.4 compatibility. Let's get something merged soon so we have a working release.

@mttk (Contributor, Author) commented Apr 25, 2018

We could shift the responsibility onto the user (I'd prefer this, as it would make for cleaner code), but that would break backwards compatibility.

@jekbradbury (Contributor) commented:

I'm OK with 0.4 support being a breaking release

@mttk (Contributor, Author) commented Apr 25, 2018

I went on a deleting spree:

  • I removed every place where train was passed as an argument, except (1) the constructor of Iterator, where it is used to set the repeat/shuffle/sort flags, and (2) Batch.fromvars, where it was stored during serialization in previous versions; there it is a no-op, kept only so that previously stored models can still be restored.
  • I kept the device argument to be consistent with PyTorch's DataLoader (which sets the device internally). However, the argument now has to be either a string or a torch.device, and it is passed to the tensor constructor. If the argument is an integer, a warning is logged (sketched below).
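
A sketch of the behavior described above: strings and torch.device values pass straight through to the tensor constructor, while a legacy int triggers a warning (`normalize_device` and the message text are assumptions, not the merged code):

```python
import logging

import torch

logger = logging.getLogger(__name__)

def normalize_device(device):
    if isinstance(device, int):
        # Legacy convention: -1 meant CPU, >= 0 meant a CUDA device index.
        logger.warning("The `device` argument should be a string or a "
                       "torch.device, not an int.")
        return torch.device('cpu') if device < 0 else torch.device('cuda', device)
    return device

batch = torch.tensor([1, 2, 3], device=normalize_device('cpu'))
```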

Still waiting to see if this passes tests as my server is acting up.

@mttk changed the title Pytorch 0.4 release compatibility fixes → [Breaking] Pytorch 0.4 release compatibility fixes Apr 25, 2018
@mttk (Contributor, Author) commented Apr 26, 2018

@jekbradbury the current build passes everything. When you get some time, please do a sanity check on the changes, but I don't see any obvious issues.

@jekbradbury merged commit 2ce827e into pytorch:master Apr 27, 2018