Clarify the functionality of Conv2dSubsampling1 in the Python doc
tjysdsg committed Jan 29, 2023
1 parent 4734d75 commit c1dc65f
Showing 1 changed file with 6 additions and 4 deletions.
espnet/nets/pytorch_backend/transformer/subsampling.py
@@ -103,7 +103,7 @@ def __getitem__(self, key):
 
 
 class Conv2dSubsampling1(torch.nn.Module):
-    """Convolutional 2D subsampling (to the same length).
+    """Similar to Conv2dSubsampling module, but without any subsampling performed.
 
     Args:
         idim (int): Input dimension.
@@ -128,15 +128,17 @@ def __init__(self, idim, odim, dropout_rate, pos_enc=None):
         )
 
     def forward(self, x, x_mask):
-        """Subsample x with a ratio of 1.
+        """Pass x through 2 Conv2d layers without subsampling.
 
         Args:
             x (torch.Tensor): Input tensor (#batch, time, idim).
            x_mask (torch.Tensor): Input mask (#batch, 1, time).
 
         Returns:
-            torch.Tensor: Subsampled tensor (#batch, time, odim).
-            torch.Tensor: Subsampled mask (#batch, 1, time).
+            torch.Tensor: Subsampled tensor (#batch, time', odim).
+                where time' = time - 4.
+            torch.Tensor: Subsampled mask (#batch, 1, time').
+                where time' = time - 4.
 
         """
         x = x.unsqueeze(1)  # (b, c, t, f)
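Note on the new docstring: Conv2dSubsampling1 performs no stride-based subsampling, yet the returned time length is time - 4. That is consistent with two unpadded, stride-1 Conv2d layers with kernel size 3, each trimming 2 frames from the time axis. Below is a minimal standalone sketch of that shape arithmetic (the layer hyperparameters are assumed from the docstring, this is not the espnet module itself, and the channel count of 8 is arbitrary):

import torch

# Two unpadded Conv2d layers with kernel_size=3 and stride=1: each trims
# 2 steps from the time (and feature) axis, so the output length is time - 4
# even though no striding subsamples the sequence.
conv = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, kernel_size=3, stride=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 8, kernel_size=3, stride=1),
    torch.nn.ReLU(),
)

x = torch.randn(2, 100, 80)   # (#batch, time, idim)
y = conv(x.unsqueeze(1))      # add a channel axis: (b, c, t, f)
print(y.shape)                # torch.Size([2, 8, 96, 76]) -> time' = 100 - 4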
