unbreak mypy torch/quantization
Summary:

Somehow `mypy torch/quantization` got broken in the past couple of days
(see https://gist.github.com/vkuzo/07af454246f0a68e6fa8929beeec7e0d).
I didn't see any relevant PRs other than #47725, which doesn't seem
related. The error doesn't look like a real one, as the arguments to
`_cudnn_rnn_flatten_weight` appear correct. For now, we ignore the
failure with `# type: ignore` so that we have a clean `mypy` run on
`torch/quantization`.
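
For reference, a bare `# type: ignore` suppresses every mypy error reported
on that line. Below is a minimal, hypothetical sketch (not part of this
commit; `frobnicate` is made up for illustration) of how a suppression can
instead be scoped to a single error code, which keeps unrelated errors on
the same line visible:

```python
# Hypothetical, standalone example; `frobnicate` is made up and is not part
# of torch. The commit above uses a bare `# type: ignore`.

def frobnicate(x: int) -> int:
    return x * 2

# mypy reports the call below as an [arg-type] error. Scoping the ignore to
# that code silences only this error; other errors on the line still surface.
result = frobnicate("not an int")  # type: ignore[arg-type]
```

Running `mypy --show-error-codes` prints the error code (e.g. `[arg-type]`)
to use in such a scoped ignore.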

Test Plan:

```
mypy torch/quantization
```

Reviewers:

Subscribers:

Tasks:

Tags:

[ghstack-poisoned]
vkuzo committed Dec 17, 2020
1 parent c20b916 commit 2d622fe
Showing 1 changed file with 2 additions and 2 deletions.
torch/nn/modules/rnn.py
@@ -172,8 +172,8 @@ def flatten_parameters(self) -> None:
                     torch._cudnn_rnn_flatten_weight(
                         self._flat_weights, num_weights,
                         self.input_size, rnn.get_cudnn_mode(self.mode),
-                        self.hidden_size, self.proj_size, self.num_layers,
-                        self.batch_first, bool(self.bidirectional))
+                        self.hidden_size, self.proj_size, self.num_layers,  # type: ignore
+                        self.batch_first, bool(self.bidirectional))  # type: ignore
 
     def _apply(self, fn):
         ret = super(RNNBase, self)._apply(fn)
