
add type annotations to torch.nn.quantized.modules.conv #49702

Closed
wants to merge 5 commits into from

Conversation

@guilhermeleobas (Collaborator) commented Dec 21, 2020

closes gh-49700

No mypy issues were found in the first three entries deleted from mypy.ini:

[mypy-torch.nn.qat.modules.activations]
ignore_errors = True

[mypy-torch.nn.qat.modules.conv]
ignore_errors = True

[mypy-torch.nn.quantized.dynamic.modules.linear]
ignore_errors = True
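
For reference, the check can be repeated locally with something like the following (a sketch, assuming mypy is installed and the command is run from the repository root, where mypy.ini lives; the paths mirror the entries removed above):

mypy --config-file mypy.ini torch/nn/qat/modules/activations.py torch/nn/qat/modules/conv.py torch/nn/quantized/dynamic/modules/linear.py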

@guilhermeleobas added the module: typing label (Related to mypy type annotations) Dec 21, 2020
@guilhermeleobas self-assigned this Dec 21, 2020
@facebook-github-bot (Contributor) commented Dec 21, 2020

💊 CI failures summary and remediations

As of commit 2f41a83 (more details on the Dr. CI page):


  • 1/2 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)
  • 1/2 broken upstream at merge base 321b988 from Jan 06 until Jan 07

1 job timed out:

  • pytorch_linux_bionic_py3_8_gcc9_coverage_test1

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.



codecov bot commented Dec 21, 2020

Codecov Report

Merging #49702 (a93a1d6) into master (963f762) will increase coverage by 5.54%.
The diff coverage is 76.92%.

@@            Coverage Diff             @@
##           master   #49702      +/-   ##
==========================================
+ Coverage   75.14%   80.69%   +5.54%     
==========================================
  Files        1891     1895       +4     
  Lines      205325   205464     +139     
==========================================
+ Hits       154291   165795   +11504     
+ Misses      51034    39669   -11365     

@rgommers (Collaborator) left a comment:

Thanks @guilhermeleobas. This looks good. I made a couple of comments on how the ignores could be avoided, but don't apply them (yet) because I'm not sure they're desirable. Let's see what @ezyang and @malfet think.

The PR could also be merged as is, that'd be perfectly fine in practice.

torch/nn/quantized/modules/conv.py (review thread, outdated)
```diff
@@ -155,7 +164,8 @@ def get_qconv(cls, mod, activation_post_process, weight_post_process=None):
         assert weight_post_process.dtype == torch.qint8, \
             'Weight observer must have a dtype of qint8'
         qweight = _quantize_weight(mod.weight.float(), weight_post_process)
-        qconv = cls(mod.in_channels, mod.out_channels, mod.kernel_size,
+        # the __init__ call used is the one from derived classes and not the one from _ConvNd
+        qconv = cls(mod.in_channels, mod.out_channels, mod.kernel_size,  # type: ignore[call-arg]
```
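
As an editor's aside, a generic sketch (made-up classes, not PyTorch code) of the mismatch the ignore is papering over: the factory is defined on the base class, so mypy checks `cls(...)` against the base-class `__init__`, even though at runtime `cls` is always a subclass with a narrower signature.

```python
class Base:
    def __init__(self, channels: int, transposed: bool, output_padding: int) -> None:
        self.channels = channels
        self.transposed = transposed
        self.output_padding = output_padding

    @classmethod
    def from_channels(cls, channels: int) -> "Base":
        # At runtime cls is always a subclass whose __init__ takes only
        # `channels`, but mypy checks this call against Base.__init__ and
        # reports: Missing positional arguments ...  [call-arg]
        return cls(channels)


class Sub(Base):
    def __init__(self, channels: int) -> None:
        # the subclass pins the parameters that depend on its "dimensionality"
        super().__init__(channels, transposed=False, output_padding=0)
```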
Collaborator:

So the issue is that the subclass __init__ methods have two fewer parameters (no transposed, output_padding) than _ConvNd. The current fix works, but technically the current way the class hierarchy is organized is not correct and the better way to do it would be:

    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True,
                 padding_mode='zeros'):
        # All subclasses have this signature
        raise NotImplementedError

    def _init(self, in_channels, out_channels, kernel_size, stride,
              padding, dilation,
              transposed, output_padding,
              groups, bias,
              padding_mode='zeros'):
        super(_ConvNd, self).__init__()

and then subclasses should call super(Conv1d, self)._init(...).
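
A minimal sketch of what that subclass side might look like (an editor's illustration of the proposal above, not the actual PR code; it builds on the `_init` signature sketched earlier and omits the quantization-specific parameters and packing logic):

```python
# Hypothetical Conv1d.__init__ under the proposed _ConvNd._init refactor
class Conv1d(_ConvNd):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True,
                 padding_mode='zeros'):
        # transposed / output_padding are determined by the dimensionality,
        # so the subclass fills them in instead of exposing them to callers
        super(Conv1d, self)._init(
            in_channels, out_channels, kernel_size, stride, padding,
            dilation, False, (0,), groups, bias, padding_mode)
```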

I think that would be a good change, but it is only possible if there are no subclasses of _ConvNd floating around (it's private, but you never know).

Let's see what other reviewers think.

Collaborator (Author):

Ok, I'll wait for the other reviewers before changing this part of the code.

Contributor:

Soooo it is a little tricky. torch.nn._ConvNd is sadly de facto public API; see https://github.com/pytorch/fairseq/blob/master/fairseq/modules/quantization/scalar/modules/qconv.py#L14 for an example. But this is the quantized ConvNd, and this code is relatively new, so there ought to be fewer uses. cc @jerryzh168 for more thoughts.

I'm trying to decide if I agree that the alternate version is better.

Contributor:

I guess the high-level question I have is why the subclass __init__ methods have fewer parameters. This seems like a blatant oversight to me.

Collaborator:

> I guess the high-level question I have is why the subclass __init__ methods have fewer parameters

Because those two parameters are implicitly defined by the dimensionality of the subclass - e.g. Conv1d hardcodes transposed, output_padding to False, (0,). It would not make sense for the user to define those when initializing Conv1d.

transposed is always False, so that really doesn't make much sense - probably code just copied from the non-quantized version or something like that.
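
As a concrete illustration of that point (an editor's hypothetical usage line, assuming the constructor signature shown earlier), the user-facing call never mentions those two parameters:

```python
import torch

# The caller only supplies the dimension-agnostic arguments;
# transposed / output_padding are fixed internally by Conv1d.
m = torch.nn.quantized.Conv1d(16, 33, kernel_size=3, stride=2)
```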

Contributor:

OK fair enough. The suggested alternative seems like a reasonable way to shut up the type checker then.

Collaborator (Author):

I've updated the code to use _init.

Contributor:

I see, are we going to have similar changes in https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/conv.py as well?

@ngimel added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Dec 22, 2020
@facebook-github-bot (Contributor) left a comment:

@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) left a comment:

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ezyang (Contributor) commented Jan 6, 2021

@walterddr I accidentally commandeered the diff, feel free to commandeer it back

@walterddr (Contributor):
No problem at all @ezyang, please go ahead and merge it. :-)

@facebook-github-bot (Contributor) left a comment:

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):
@ezyang merged this pull request in 55919a4.

hwangdeyu pushed a commit to hwangdeyu/pytorch that referenced this pull request Jan 14, 2021
Summary:
closes pytorchgh-49700

No mypy issues were found in the first three entries deleted from `mypy.ini`:
```
[mypy-torch.nn.qat.modules.activations]
ignore_errors = True

[mypy-torch.nn.qat.modules.conv]
ignore_errors = True

[mypy-torch.nn.quantized.dynamic.modules.linear]
ignore_errors = True
```

Pull Request resolved: pytorch#49702

Reviewed By: walterddr, zou3519

Differential Revision: D25767119

Pulled By: ezyang

fbshipit-source-id: cb83e53549a299538e1b154cf8b79e3280f7392a
Labels
cla signed · Merged · module: typing (Related to mypy type annotations) · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Development
Successfully merging this pull request may close these issues:
Enable torch.nn.quantized.modules.conv typechecks during CI

8 participants