add type annotations to torch.nn.modules.fold #49479

Closed · wants to merge 2 commits · showing changes from 1 commit
mypy.ini — 3 changes: 0 additions & 3 deletions
@@ -71,9 +71,6 @@ ignore_errors = True
 [mypy-torch.nn.modules.conv]
 ignore_errors = True
 
-[mypy-torch.nn.modules.fold]
-ignore_errors = True
-
 [mypy-torch.nn.modules.module]
 ignore_errors = True
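Deleting the `[mypy-torch.nn.modules.fold]` section means mypy stops suppressing errors in that module and type-checks it like any other. A minimal sketch of how such per-module overrides work in a mypy config (the module name below is illustrative, not from this PR):

```ini
; mypy.ini — global settings first, then per-module overrides
[mypy]
; global options go here

; All errors in this one module are silenced; deleting this
; section re-enables type checking for it.
[mypy-some.legacy.module]
ignore_errors = True
```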
torch/nn/functional.pyi.in — 10 changes: 5 additions & 5 deletions
@@ -1,7 +1,7 @@
 from torch import Tensor
 from torch.types import _size
 from typing import Any, Optional, Tuple, Dict, List, Callable, Sequence, Union
-from .common_types import _ratio_any_t, _size_1_t, _size_2_t, _size_3_t
+from .common_types import _ratio_any_t, _size_any_t, _size_1_t, _size_2_t, _size_3_t
 
 # 'TypedDict' is a new accepted type that represents a dictionary with a fixed set of allowed keys.
 # It is standards-track but not in `typing` yet. We leave this hear to be uncommented once the feature
@@ -335,12 +335,12 @@ def normalize(input: Tensor, p: float = ..., dim: int = ..., eps: float = ...,
 def assert_int_or_pair(arg: Any, arg_name: Any, message: Any) -> None: ...
 
 
-def unfold(input: Tensor, kernel_size: _size, dilation: _size = ..., padding: _size = ...,
-           stride: _size = ...) -> Tensor: ...
+def unfold(input: Tensor, kernel_size: _size_any_t, dilation: _size_any_t = ..., padding: _size_any_t = ...,
+           stride: _size_any_t = ...) -> Tensor: ...
 
 
-def fold(input: Tensor, output_size: _size, kernel_size: _size, dilation: _size = ..., padding: _size = ...,
-         stride: _size = ...) -> Tensor: ...
+def fold(input: Tensor, output_size: _size_any_t, kernel_size: _size_any_t, dilation: _size_any_t = ..., padding: _size_any_t = ...,
+         stride: _size_any_t = ...) -> Tensor: ...


def multi_head_attention_forward(query: Tensor,