add type annotations to torch.nn.quantized.modules.conv #49702
Closed
Changes from all commits (5 commits):

- 1ec27ba (guilhermeleobas): add type annotations to torch.nn.quantized.modules.conv
- a93a1d6 (guilhermeleobas): add TypeVar("MOD")
- 62b6c25 (guilhermeleobas): revert change
- 288fb06 (guilhermeleobas): change __init__ -> _init
- 2f41a83 (ezyang): Merge remote-tracking branch 'origin/master' into quantized-conv
So the issue is that the subclass __init__ methods have two fewer parameters (no transposed, output_padding) than _ConvNd. The current fix works, but technically the way the class hierarchy is organized is not correct; the better way to do it would be to move the initialization logic out of _ConvNd.__init__ into a separate _init method, and then subclasses should call super(Conv1d, self)._init(...). I think that would be a good change, but it is only possible if there are no subclasses of _ConvNd floating around (it's private, but you never know). Let's see what other reviewers think.
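For concreteness, the proposed hierarchy could be sketched roughly like this. This is a hypothetical, simplified sketch based on the discussion, not the actual PyTorch source; the parameter lists are abbreviated:

```python
from typing import Tuple

class _ConvNd:
    # No __init__ here; the shared setup lives in _init, which takes the
    # full parameter list, including transposed and output_padding.
    def _init(self, in_channels: int, out_channels: int,
              kernel_size: Tuple[int, ...], stride: Tuple[int, ...],
              padding: Tuple[int, ...], dilation: Tuple[int, ...],
              transposed: bool, output_padding: Tuple[int, ...]) -> None:
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        self.dilation = dilation
        self.transposed = transposed
        self.output_padding = output_padding

class Conv1d(_ConvNd):
    # The subclass __init__ has the narrower, user-facing signature;
    # transposed and output_padding are implied by the dimensionality
    # and hardcoded in the _init call.
    def __init__(self, in_channels: int, out_channels: int,
                 kernel_size: int, stride: int = 1,
                 padding: int = 0, dilation: int = 1) -> None:
        super()._init(in_channels, out_channels,
                      (kernel_size,), (stride,), (padding,), (dilation,),
                      False, (0,))

conv = Conv1d(4, 8, 3)
```

Because the base class no longer defines a wide __init__ that the subclasses narrow, this sidesteps the signature mismatch discussed above.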
Ok, I'll wait for the other reviewers before changing this part of the code.
Soooo it is a little tricky. torch.nn._ConvNd is sadly de facto public API; see https://github.com/pytorch/fairseq/blob/master/fairseq/modules/quantization/scalar/modules/qconv.py#L14 for an example. But this is the quantized ConvNd, and that code is relatively new, so there ought to be fewer uses. cc @jerryzh168 for more thoughts.

I'm trying to decide if I agree that the alternate version is better.
I guess the high-level question I have is why the subclass __init__ methods have fewer parameters in the first place. This seems like a blatant oversight to me.
Because those two parameters are implicitly defined by the dimensionality of the subclass: e.g. Conv1d hardcodes transpose, output_padding to False, (0,). It would not make sense for the user to define those when initializing Conv1d. transpose is always False, so that parameter really doesn't make much sense; probably the code was just copied from the non-quantized version or something like that.
OK fair enough. The suggested alternative seems like a reasonable way to shut up the type checker then.
I've updated the code to use _init.
I see, are we going to have similar changes in https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/conv.py as well?