Dev conv1d module #5280

Merged
merged 114 commits into master from dev_conv1d_module on Jul 2, 2021

Commits (114)
a2e86ec
Add partial unary and math functional apis.
hjchen2 Jun 16, 2021
f6493f5
Merge branch 'master' into dev_functional_unary_and_binary_ops
hjchen2 Jun 16, 2021
6987c58
Revert elementwise pow.
hjchen2 Jun 16, 2021
df8105d
Merge branch 'dev_functional_unary_and_binary_ops' of https://github.…
hjchen2 Jun 16, 2021
de6427d
auto format by CI
oneflow-ci-bot Jun 16, 2021
a80f13b
Merge branch 'master' into dev_functional_unary_and_binary_ops
hjchen2 Jun 17, 2021
3354fe3
Support add with large number of inputs.
hjchen2 Jun 18, 2021
8270058
Merge branch 'master' into dev_functional_unary_and_binary_ops
hjchen2 Jun 18, 2021
cffeeb3
Update oneflow/python/nn/modules/math_ops.py
hjchen2 Jun 18, 2021
840e1a6
Refine
hjchen2 Jun 18, 2021
3bce2e2
Merge branch 'master' into dev_functional_unary_and_binary_ops
hjchen2 Jun 18, 2021
30b7a04
Merge branch 'dev_functional_unary_and_binary_ops' of https://github.…
hjchen2 Jun 18, 2021
166d783
Merge branch 'master' into dev_functional_unary_and_binary_ops
Flowingsun007 Jun 18, 2021
dfba890
Merge branch 'master' into dev_functional_unary_and_binary_ops
hjchen2 Jun 20, 2021
ab998e7
Migrate binary and activation ops.
hjchen2 Jun 21, 2021
ae10084
Migrate array ops.
hjchen2 Jun 21, 2021
ab44b50
Add or refactor activation grad funcs.
hjchen2 Jun 21, 2021
5c1463a
Add or refactor activation grad funcs.
hjchen2 Jun 21, 2021
ab71f58
Merge branch 'master' of https://github.com/Oneflow-Inc/oneflow into …
hjchen2 Jun 21, 2021
7942832
Merge branch 'dev_binary_and_act_ops' into dev_array_ops
hjchen2 Jun 21, 2021
9a0bef8
Merge branch 'master' into dev_binary_and_act_ops
hjchen2 Jun 21, 2021
0ceb5ec
Revert unpack all
hjchen2 Jun 21, 2021
d60b7d2
Fix masked fill
hjchen2 Jun 21, 2021
c43e0f7
Refine
hjchen2 Jun 21, 2021
5e98676
Merge branch 'dev_binary_and_act_ops' of https://github.com/Oneflow-I…
hjchen2 Jun 21, 2021
a13a09f
Merge branch 'dev_binary_and_act_ops' into dev_array_ops
hjchen2 Jun 21, 2021
22c69a3
Add nn ops.
hjchen2 Jun 21, 2021
5453ff7
Refine
hjchen2 Jun 21, 2021
53b1820
Refine
hjchen2 Jun 21, 2021
253e452
Merge branch 'dev_array_ops' into dev_nn_ops
hjchen2 Jun 21, 2021
c2c21e5
Migrate conv op
hjchen2 Jun 21, 2021
41e4a93
Merge branch 'master' of https://github.com/Oneflow-Inc/oneflow into …
hjchen2 Jun 22, 2021
a0da4c8
Merge branch 'master' into dev_nn_ops
hjchen2 Jun 23, 2021
a004a48
Fix functional normalization.
hjchen2 Jun 23, 2021
317e282
auto format by CI
oneflow-ci-bot Jun 23, 2021
094094d
Merge branch 'dev_nn_ops' into dev_nn_functional_conv
hjchen2 Jun 23, 2021
aee0ffa
unfinished
MARD1NO Jun 23, 2021
d25c573
Merge branch 'master' into dev_nn_functional_conv
hjchen2 Jun 23, 2021
bc4aef7
Refine code style
hjchen2 Jun 23, 2021
5573cea
align Torch params
MARD1NO Jun 24, 2021
e6a954d
Merge remote-tracking branch 'origin/dev_nn_functional_conv' into dev…
MARD1NO Jun 24, 2021
86b8153
Merge branch 'master' into dev_conv1d_module
MARD1NO Jun 24, 2021
a435fec
Merge branch 'dev_conv1d_module' of https://github.com/Oneflow-Inc/on…
MARD1NO Jun 24, 2021
b8fc823
align Torch params
MARD1NO Jun 24, 2021
2f7eb07
develop unfinish
MARD1NO Jun 24, 2021
cc61549
add conv1d
MARD1NO Jun 24, 2021
18ae4dc
add conv1d docs rst
MARD1NO Jun 24, 2021
271f31c
add conv1d module and docs
MARD1NO Jun 24, 2021
e430e1b
fix bias add error
MARD1NO Jun 24, 2021
926283a
fix groups bug
MARD1NO Jun 24, 2021
51716bc
add test case
MARD1NO Jun 24, 2021
d50209a
Support optional parameter.
hjchen2 Jun 24, 2021
c532a21
Merge branch 'dev_nn_functional_conv' of https://github.com/Oneflow-I…
hjchen2 Jun 24, 2021
9dadf3b
Merge branch 'master' into dev_nn_functional_conv
hjchen2 Jun 24, 2021
786db22
Merge branch 'master' into dev_nn_functional_conv
hjchen2 Jun 25, 2021
f2c9e29
fix group bug
MARD1NO Jun 25, 2021
2389dbe
add new test case
MARD1NO Jun 25, 2021
6e0cebd
Merge branch 'dev_nn_functional_conv' of https://github.com/Oneflow-I…
MARD1NO Jun 25, 2021
129a665
Merge branch 'master' into dev_nn_functional_conv
oneflow-ci-bot Jun 25, 2021
e23eda7
Merge branch 'dev_nn_functional_conv' into dev_conv1d_module
MARD1NO Jun 25, 2021
a4aae72
add more test case
MARD1NO Jun 25, 2021
cc04676
Merge branch 'master' into dev_conv1d_module
MARD1NO Jun 28, 2021
6596101
small fix
MARD1NO Jun 28, 2021
4bb4395
Merge branch 'master' into dev_conv1d_module
MARD1NO Jun 28, 2021
9df9be3
add torch reference
MARD1NO Jun 28, 2021
021faa3
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 28, 2021
64a0ebc
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 28, 2021
c7a5a2c
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 28, 2021
06eb816
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 28, 2021
5f170ca
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 28, 2021
ceadf9a
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
c7f33bb
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
363117b
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
f3cbddd
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
fa6d8f8
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
1cebcf3
add conv base functor
MARD1NO Jun 29, 2021
8b7d2eb
remove useless print
MARD1NO Jun 29, 2021
bb5663f
Merge branch 'dev_conv1d_module' of https://github.com/Oneflow-Inc/on…
MARD1NO Jun 29, 2021
7751026
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
8e40164
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
0cc9be1
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
5d1bad2
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
2fc92c4
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
c068d70
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 29, 2021
0307b79
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 30, 2021
b55d6c8
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 30, 2021
6fa540e
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jun 30, 2021
7f8a346
Merge branch 'master' into dev_conv1d_module
MARD1NO Jun 30, 2021
b0f10ea
reorganize code structure
MARD1NO Jul 1, 2021
99c951d
Merge branch 'master' into dev_conv1d_module
hjchen2 Jul 1, 2021
662dfce
fix name and vector size
MARD1NO Jul 1, 2021
b1d1ec3
Merge branch 'dev_conv1d_module' of https://github.com/Oneflow-Inc/on…
MARD1NO Jul 1, 2021
cd28b4a
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
5ba895a
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
5f8b398
Merge branch 'master' into dev_conv1d_module
MARD1NO Jul 1, 2021
8e097d9
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
76c2923
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
1dec32f
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
dd71900
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
a2f35c0
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
e41e967
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
8af01d6
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
9c084d1
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
bf0c289
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 1, 2021
087638d
fix pushback to at
MARD1NO Jul 2, 2021
4987463
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 2, 2021
761b710
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 2, 2021
7a33c96
Merge branch 'master' into dev_conv1d_module
MARD1NO Jul 2, 2021
bd56a61
small fix for deconv docs
MARD1NO Jul 2, 2021
534d296
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 2, 2021
a8b2085
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 2, 2021
0139226
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 2, 2021
a7a4162
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 2, 2021
fcc9f6d
Merge branch 'master' into dev_conv1d_module
oneflow-ci-bot Jul 2, 2021
1 change: 1 addition & 0 deletions docs/source/experimental.rst
@@ -69,6 +69,7 @@ Experimental features
.. autofunction:: oneflow.experimental.nn.ParameterDict
.. autofunction:: oneflow.experimental.nn.ModuleList
.. autofunction:: oneflow.experimental.nn.ModuleDict
.. autofunction:: oneflow.experimental.nn.Conv1d
.. autofunction:: oneflow.experimental.nn.Conv2d
.. autofunction:: oneflow.experimental.nn.ConstantPad2d
.. autofunction:: oneflow.experimental.nn.ConvTranspose2d
8 changes: 7 additions & 1 deletion oneflow/core/functional/functional_api.yaml
@@ -266,12 +266,18 @@
signature: "Tensor BiasAdd(Tensor x, Tensor bias, *, Int32 axis=1)"
bind_python: True

- name: "conv1d"
signature:
"Tensor Conv1d(Tensor x, Tensor weight, *, Tensor bias=None, Int32List stride,
Int32List padding, Int32List dilation, Int32 groups=1)"
bind_python: True

- name: "conv2d"
signature:
"Tensor Conv2d(Tensor x, Tensor weight, *, Tensor bias=None, Int32List stride,
Int32List padding, Int32List dilation, Int32 groups=1)"
bind_python: True

- name: "conv_data_grad"
signature:
"Tensor ConvDataGrad(Tensor dy, Tensor weight, Tensor x, *, Int32 num_spatial_dims,
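For orientation, here is a rough sketch of how the new binding is reached from Python. It mirrors the call the Conv1d module makes later in this PR (flow.F.conv1d with arguments matching the YAML signature above); the eager-mode setup and exact namespace layout are assumptions for illustration, not something this diff pins down.

```python
import numpy as np
import oneflow as flow
import oneflow.experimental as flow_exp

flow_exp.enable_eager_execution()  # assumed: eager mode, as in the module docstring examples

# NCL input and a weight of shape (out_channels, in_channels // groups, kernel_size)
x = flow.Tensor(np.random.randn(4, 8, 32))
weight = flow.Tensor(np.random.randn(16, 8, 3))

# stride/padding/dilation are Int32List parameters: one value per spatial dimension.
out = flow.F.conv1d(
    x, weight, bias=None, stride=[1], padding=[0], dilation=[1], groups=1
)
print(out.shape)  # (4, 16, 30): L_out = 32 - (3 - 1) = 30 with stride 1 and no padding
```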
33 changes: 26 additions & 7 deletions oneflow/core/functional/impl/nn_functor.cpp
@@ -48,21 +48,22 @@ class BiasAddFunctor {
std::shared_ptr<OpExpr> op_;
};

class Conv2dFunctor {
class ConvBaseFunctor {
public:
Conv2dFunctor() {
conv_op_ =
CHECK_JUST(one::OpBuilder("conv2d").Input("in").Input("weight").Output("out").Build());
explicit ConvBaseFunctor(const int& num_spatial_dims) : num_spatial_dims_(num_spatial_dims) {
bias_op_ = CHECK_JUST(one::OpBuilder("bias_add").Input("a").Input("b").Output("out").Build());
}
virtual ~ConvBaseFunctor() = default;
Maybe<Tensor> operator()(const std::shared_ptr<one::Tensor>& x,
const std::shared_ptr<one::Tensor>& weight,
const Optional<one::Tensor>& bias, const std::vector<int32_t>& stride,
const std::vector<int32_t>& padding,
const std::vector<int32_t>& dilation, const int32_t& groups) const {
MutableAttrMap conv_attrs;
std::vector<int32_t> kernel_size_vec;
for (int i = 0; i < 2; i++) { kernel_size_vec.push_back((weight->shape())->At(i + 2)); }
std::vector<int32_t> kernel_size_vec(num_spatial_dims_);
for (int i = 0; i < num_spatial_dims_; i++) {
kernel_size_vec.at(i) = ((weight->shape())->At(i + 2));
}
JUST(conv_attrs.SetAttr<int32_t>("filters", (weight->shape())->At(0)));
JUST(conv_attrs.SetAttr<std::vector<int32_t>>("padding_before", padding));
JUST(conv_attrs.SetAttr<std::vector<int32_t>>("kernel_size", kernel_size_vec));
@@ -81,9 +81,26 @@ class Conv2dFunctor {
}
}

private:
protected:
std::shared_ptr<OpExpr> conv_op_;
std::shared_ptr<OpExpr> bias_op_;
int32_t num_spatial_dims_;
};

class Conv1dFunctor : public ConvBaseFunctor {
public:
Conv1dFunctor() : ConvBaseFunctor(/*num_spatial_dims_=*/1) {
conv_op_ =
CHECK_JUST(one::OpBuilder("conv1d").Input("in").Input("weight").Output("out").Build());
}
};

class Conv2dFunctor : public ConvBaseFunctor {
public:
Conv2dFunctor() : ConvBaseFunctor(/*num_spatial_dims_=*/2) {
conv_op_ =
CHECK_JUST(one::OpBuilder("conv2d").Input("in").Input("weight").Output("out").Build());
}
};

class MatMulBaseFunctor {
@@ -336,6 +354,7 @@ class PadFunctor {

ONEFLOW_FUNCTION_LIBRARY(m) {
m.add_functor<impl::BiasAddFunctor>("BiasAdd");
m.add_functor<impl::Conv1dFunctor>("Conv1d");
m.add_functor<impl::Conv2dFunctor>("Conv2d");
m.add_functor<impl::MatMulFunctor>("MatMul");
m.add_functor<impl::BatchMatMulFunctor>("BatchMatMul");
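The refactor above hoists the shared attribute setup into ConvBaseFunctor, so Conv1dFunctor and Conv2dFunctor differ only in the op they build ("conv1d" vs. "conv2d") and in num_spatial_dims. A rough Python transcription of the visible part of that shared setup (illustrative only, not code from this PR; the stride, dilation, and groups attributes are set in the collapsed portion of the hunk):

```python
def conv_attrs_from_weight(weight_shape, num_spatial_dims, padding):
    # Mirrors ConvBaseFunctor::operator(): "filters" comes from weight dim 0,
    # "kernel_size" from the trailing spatial dims (2, 3, ...), and the given
    # padding is forwarded as "padding_before".
    return {
        "filters": weight_shape[0],
        "kernel_size": [weight_shape[i + 2] for i in range(num_spatial_dims)],
        "padding_before": list(padding),
    }

# Conv1d case: weight of shape (out_channels=33, in_channels // groups=16, kernel_size=3)
print(conv_attrs_from_weight((33, 16, 3), num_spatial_dims=1, padding=[0]))
# {'filters': 33, 'kernel_size': [3], 'padding_before': [0]}
```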
191 changes: 188 additions & 3 deletions oneflow/python/nn/modules/conv.py
@@ -17,8 +17,8 @@
import oneflow as flow
from oneflow.python.oneflow_export import oneflow_export, experimental_api
from oneflow.python.nn.module import Module
from oneflow.python.nn.modules.utils import _pair
from oneflow.python.nn.common_types import _size_2_t
from oneflow.python.nn.modules.utils import _single, _pair
from oneflow.python.nn.common_types import _size_1_t, _size_2_t
from oneflow.python.nn import init


@@ -76,10 +76,195 @@ def split(cls, x, axis, split_num):
return result_list


@oneflow_export("nn.Conv1d")
@experimental_api
class Conv1d(Module):
r"""The interface is consistent with PyTorch.
The documentation is referenced from: https://pytorch.org/docs/master/generated/torch.nn.Conv1d.html#conv1d

Applies a 1D convolution over an input signal composed of several input
planes.

In the simplest case, the output value of the layer with input size
:math:`(N, C_{\text{in}}, L)` and output :math:`(N, C_{\text{out}}, L_{\text{out}})` can be
precisely described as:

.. math::
\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) +
\sum_{k = 0}^{C_{in} - 1} \text{weight}(C_{\text{out}_j}, k)
\star \text{input}(N_i, k)

where :math:`\star` is the valid `cross-correlation`_ operator,
:math:`N` is the batch size, :math:`C` denotes the number of channels, and
:math:`L` is the length of the signal sequence.

* :attr:`stride` controls the stride for the cross-correlation, a single
number or a one-element tuple.

* :attr:`padding` controls the amount of padding applied to the input. It
can be either a string {'valid', 'same'} or a tuple of ints giving the
amount of implicit padding applied on both sides.

* :attr:`dilation` controls the spacing between the kernel points; also
known as the à trous algorithm. It is harder to describe, but this `link`_
has a nice visualization of what :attr:`dilation` does.

Note:
``padding='valid'`` is the same as no padding. ``padding='same'`` pads
the input so the output has the same shape as the input. However, this mode
doesn't support any stride values other than 1.

Args:
in_channels (int): Number of channels in the input image
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int, tuple or str, optional): Padding added to both sides of
the input. Default: 0
padding_mode (string, optional): ``'zeros'``, ``'reflect'``,
``'replicate'`` or ``'circular'``. Default: ``'zeros'``
dilation (int or tuple, optional): Spacing between kernel
elements. Default: 1
groups (int, optional): Number of blocked connections from input
channels to output channels. Default: 1
bias (bool, optional): If ``True``, adds a learnable bias to the
output. Default: ``True``

Shape:
- Input: :math:`(N, C_{in}, L_{in})`
- Output: :math:`(N, C_{out}, L_{out})` where

.. math::
L_{out} = \left\lfloor\frac{L_{in} + 2 \times \text{padding} - \text{dilation}
\times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor

Attributes:
weight (Tensor): the learnable weights of the module of shape
:math:`(\text{out\_channels},
\frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size})`.
The values of these weights are sampled from
:math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
:math:`k = \frac{groups}{C_\text{in} * \text{kernel\_size}}`
bias (Tensor): the learnable bias of the module of shape
(out_channels). If :attr:`bias` is ``True``, then the values of these weights are
sampled from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
:math:`k = \frac{groups}{C_\text{in} * \text{kernel\_size}}`

For example:

.. code-block:: python

>>> import numpy as np
>>> import oneflow.experimental as flow
>>> import oneflow.experimental.nn as nn
>>> flow.enable_eager_execution()

>>> arr = np.random.randn(20, 16, 50)
>>> input = flow.Tensor(arr)
>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> output = m(input)

.. _cross-correlation:
https://en.wikipedia.org/wiki/Cross-correlation

.. _link:
https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md
"""

def __init__(
self,
in_channels: int,
out_channels: int,
kernel_size: _size_1_t,
stride: _size_1_t = 1,
padding: _size_1_t = 0,
dilation: _size_1_t = 1,
groups: int = 1,
bias: bool = True,
padding_mode: str = "zeros", # TODO: refine this type
):
super().__init__()

assert padding_mode == "zeros"
self.kernel_size = _single(kernel_size)
self.stride = _single(stride)
self.padding = _single(padding)
self.dilation = _single(dilation)
self.groups = groups
assert in_channels % groups == 0
assert out_channels % groups == 0
self.in_channels = in_channels
self.out_channels = out_channels
self.weight = flow.nn.Parameter(
flow.Tensor(out_channels, in_channels // groups, *self.kernel_size)
)
self.out_channel_groups = out_channels // groups
self.bias = None
if bias:
self.bias = flow.nn.Parameter(flow.Tensor(out_channels))
self.reset_parameters()

def reset_parameters(self) -> None:
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)

def forward(self, x):
if x.device.type == "cpu" and self.groups > 1:
in_channel_axis = 1
weight_channel_axis = 0
bias_channel_axis = 0
in_split_list = ConvUtil.split(
x, axis=in_channel_axis, split_num=self.groups
)
out_list = []
for i in range(len(in_split_list)):
out_list.append(
flow.F.conv1d(
in_split_list[i],
self.weight[
i
* self.out_channel_groups : (i + 1)
* self.out_channel_groups,
:,
:,
],
self.bias[
i
* self.out_channel_groups : (i + 1)
* self.out_channel_groups
]
if self.bias is not None
else None,
stride=self.stride,
padding=self.padding,
dilation=self.dilation,
groups=1,
)
)
res = flow.experimental.cat(out_list, dim=in_channel_axis)
else:
res = flow.F.conv1d(
x,
self.weight,
self.bias,
stride=self.stride,
padding=self.padding,
dilation=self.dilation,
groups=self.groups,
)
return res


@oneflow_export("nn.Conv2d")
@experimental_api
class Conv2d(Module):
r"""Applies a 2D convolution over an input signal composed of several input
r"""The interface is consistent with PyTorch.
The documentation is referenced from: https://pytorch.org/docs/master/generated/torch.nn.Conv2d.html#conv2d

Applies a 2D convolution over an input signal composed of several input
planes.

In the simplest case, the output value of the layer with input size
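As a sanity check on the docstring example above: with L_in = 50, kernel_size = 3, stride = 2, padding = 0, and dilation = 1, the shape formula gives L_out = floor((50 - 1*(3 - 1) - 1)/2 + 1) = 24, so nn.Conv1d(16, 33, 3, stride=2) maps a (20, 16, 50) input to (20, 33, 24). The sketch below also exercises the groups > 1 path, which on CPU splits the input and weight into per-group slices, runs each slice with groups=1, and concatenates the results along the channel axis (a usage sketch under the eager experimental API, not a test from this PR):

```python
import numpy as np
import oneflow.experimental as flow
import oneflow.experimental.nn as nn

flow.enable_eager_execution()

# Docstring example: (20, 16, 50) -> (20, 33, 24) with kernel_size=3, stride=2.
m = nn.Conv1d(16, 33, 3, stride=2)
out = m(flow.Tensor(np.random.randn(20, 16, 50)))
print(out.shape)  # (20, 33, 24)

# Grouped variant: in_channels and out_channels must both be divisible by groups.
mg = nn.Conv1d(16, 32, 3, groups=4)
out_g = mg(flow.Tensor(np.random.randn(20, 16, 50)))
print(out_g.shape)  # (20, 32, 48): each of the 4 groups convolves 4 input channels into 8 outputs
```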
4 changes: 2 additions & 2 deletions oneflow/python/nn/modules/deconv.py
@@ -114,13 +114,13 @@ class ConvTranspose2d(Module):
\times (\text{kernel_size}[1] - 1) + \text{output_padding}[1] + 1

Attributes:
weight (Tensor): the learnable weights of the module of shape
ConvTranspose2d.weight (Tensor): the learnable weights of the module of shape
:math:`(\text{in_channels}, \frac{\text{out_channels}}{\text{groups}},`
:math:`\text{kernel_size[0]}, \text{kernel_size[1]})`.
The values of these weights are sampled from
:math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
:math:`k = \frac{groups}{C_\text{out} * \prod_{i=0}^{1}\text{kernel_size}[i]}`
bias (Tensor): the learnable bias of the module of shape (out_channels)
ConvTranspose2d.bias (Tensor): the learnable bias of the module of shape (out_channels)
If :attr:`bias` is ``True``, then the values of these weights are
sampled from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
:math:`k = \frac{groups}{C_\text{out} * \prod_{i=0}^{1}\text{kernel_size}[i]}`
1 change: 0 additions & 1 deletion oneflow/python/test/modules/test_conv.py
@@ -1539,7 +1539,6 @@ def _test_conv2d_large_out_channel(test_case, device):
m.weight = flow.nn.Parameter(flow.Tensor(weight), requires_grad=True)
m = m.to(device)
output = m(input)
print(output)
np_out = np.array(
[
[