ADD: ConvTranspose1/3d #80
Conversation
…nto conv-transpose
# V = mat.shape[0]
# N, C_out, _ = module.output_shape
# _, C_in, _ = module.input0_shape

# mat = eingroup("v,n,c,l->vn,c,l", mat).repeat(1, C_in, 1)
# C_in_axis = 1
# # a,b represent the combined/repeated dimensions
# mat = eingroup("a,b,l->ab,l", mat).unsqueeze(C_in_axis)

# N_axis = 0
# input = eingroup("n,c,l->nc,l", module.input0).unsqueeze(N_axis)
# input = input.repeat(1, V, 1)

# grad_weight = conv1d(
#     input,
#     mat,
#     bias=None,
#     stride=module.dilation,
#     padding=module.padding,
#     dilation=module.stride,
#     groups=C_in * N * V,
# ).squeeze(0)

# K_L_axis = 1
# _, _, K_L = module.weight.shape
# grad_weight = grad_weight.narrow(K_L_axis, 0, K_L)

# eingroup_eq = "vnio,x->v,{}o,i,x".format("" if sum_batch else "n,")
# return eingroup(
#     eingroup_eq, grad_weight, dim={"v": V, "n": N, "i": C_in, "o": C_out}
# )
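Not part of the PR: the commented-out code above computes a vector-Jacobian product of a ConvTranspose1d output with respect to the weight. A minimal sketch of what such a quantity is, using plain autograd as the reference (all sizes here are made-up example values):

```python
import torch

torch.manual_seed(0)

# Hypothetical example sizes, not taken from the PR.
N, C_in, C_out, L, K = 3, 2, 4, 7, 3
module = torch.nn.ConvTranspose1d(C_in, C_out, K, bias=False)
x = torch.rand(N, C_in, L)

out = module(x)          # shape (N, C_out, L + K - 1) for stride=1, padding=0
v = torch.rand_like(out) # vector that the Jacobian is applied to

# Vector-Jacobian product w.r.t. the weight via autograd; a conv-based
# implementation like the dead code above would have to match this.
(grad_weight,) = torch.autograd.grad(out, module.weight, grad_outputs=v)
```

Note that for ConvTranspose1d the weight has shape `(C_in, C_out, K)`, with the input- and output-channel axes swapped relative to Conv1d, which is one reason a conv-based derivative needs the axis regrouping seen above.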
Please remove dead code
Yes
{
    "module_fn": torch.nn.Conv2d,
    "module_kwargs": {
        "in_channels": 2,
        "out_channels": 3,
        "kernel_size": 2,
        "bias": False,
        "padding": 1,
    },
    "input_kwargs": {"size": (3, 2, 7, 7)},
},
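For context, a sketch of how a test setting like the removed dict is consumed: the module is built from `module_fn` and `module_kwargs`, and a random input of the given size is passed through it. This is an assumed reading of the test harness, not code from the PR.

```python
import torch

# The removed test setting, reproduced verbatim.
setting = {
    "module_fn": torch.nn.Conv2d,
    "module_kwargs": {
        "in_channels": 2,
        "out_channels": 3,
        "kernel_size": 2,
        "bias": False,
        "padding": 1,
    },
    "input_kwargs": {"size": (3, 2, 7, 7)},
}

# Assumed consumption pattern: instantiate the module and a random input.
module = setting["module_fn"](**setting["module_kwargs"])
x = torch.rand(*setting["input_kwargs"]["size"])
out = module(x)
# With kernel_size=2 and padding=1: 7 - 2 + 2*1 + 1 = 8 per spatial axis.
```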
What's the reason for this test being removed?
Because in the example defined at the top, we already have an extensive Conv2d example that tests bias=False. Hence I thought it's redundant.
unfold on CUDA, and it works