Add upsample 3d, subsample 2d and 3d modules #1348
Conversation
OK, autopep8 didn't solve the linting issues. I'll revert the commit and fix the issues by hand.

Just wanted to comment that I have an open PR that changes the inputs to bilinear 2d upsampling (#1317). I think the changes here are mostly independent, but you've changed some stuff that overlaps. I think the major thing you did was change the size parameter to be a tuple in a slightly different place. cc @apaszke for guidance.

Also, this is cool work! I'm 👍 for sure! This is an informative PR for me to read, with .cu code related to what I've been doing, so thanks!

Thank you, Andrew! Sorry for not spotting your PR earlier.

@lantiga Heya Luca. I looked at our branches. I think the narrowest technical difference between them is primarily in the constructors of …. More generally, my PR assumes 2d upsampling.

@lantiga @apaszke I just pushed a change to #1317 wherein I think I managed to come up with a good way to reconcile these two PRs. I moved all casting of …

Thank you @andrewgiessel, looks good to me! In general …

@lantiga No problem! I will comment inline on the places in your PR that need adjusting to be congruent with my PR. Edit: I guess I was a little prescriptive here. The convention is contingent upon maintainer approval, etc., of course.
```diff
-        return _functions.thnn.UpsamplingNearest2d(size, scale_factor)(input)
+    if input.dim() == 4:
+        assert type(size) == int or len(size) == 2, '4D tensors expect size as int or Tuple[int, int]'
+        return _functions.thnn.UpsamplingNearest2d(_pair(size), scale_factor)(input)
+    elif input.dim() == 5:
+        assert type(size) == int or len(size) == 3, '5D tensors expect size as int or Tuple[int, int, int]'
+        return _functions.thnn.UpsamplingNearest3d(_triple(size), scale_factor)(input)
```
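For reference, a usage sketch of the int-or-tuple `size` convention these asserts enforce; the same convention applies to the bilinear and trilinear functions below. The sketch uses today's `F.interpolate`, which later absorbed these functions, so the call names are stand-ins rather than the PR's own:

```python
import torch
import torch.nn.functional as F

x4d = torch.randn(1, 3, 8, 8)     # 4d input -> 2d nearest upsampling
x5d = torch.randn(1, 3, 4, 8, 8)  # 5d input -> 3d nearest upsampling

F.interpolate(x4d, size=16, mode='nearest')           # int: broadcast to (16, 16)
F.interpolate(x4d, size=(16, 12), mode='nearest')     # Tuple[int, int]
F.interpolate(x5d, size=(8, 16, 16), mode='nearest')  # Tuple[int, int, int]
```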
```diff
-    return _functions.thnn.UpsamplingBilinear2d(size, scale_factor)(input)
+    assert input.dim() == 4, "4D tensors expected in input"
+    assert type(size) == int or len(size) == 2, '4D tensors expect size as int or Tuple[int, int]'
+    return _functions.thnn.UpsamplingBilinear2d(_pair(size), scale_factor)(input)
```
""" | ||
assert input.dim() == 5, "5D tensors expected in input" | ||
assert type(size) == int or len(size) == 3, '5D tensors expect size as int or Tuple[int, int, int]' | ||
return _functions.thnn.UpsamplingTrilinear3d(_triple(size), scale_factor)(input) |
```diff
+    >>> inputs = autograd.Variable(torch.randn(1, 10, 4, 4))
+    >>> F.subsample(inputs, weight, bias, 2, stride=2)
+    """
+    if input.dim() == 4:
```
```diff
@@ -64,6 +64,8 @@ class UpsamplingNearest2d(_UpsamplingBase):
     [torch.FloatTensor of size 1x1x4x4]

     """
+    def __init__(self, size=None, scale_factor=None):
+        super(UpsamplingNearest2d, self).__init__(_pair(size), scale_factor)
```
```diff
@@ -109,6 +111,104 @@ class UpsamplingBilinear2d(_UpsamplingBase):
     [torch.FloatTensor of size 1x1x4x4]

     """
+    def __init__(self, size=None, scale_factor=None):
+        super(UpsamplingBilinear2d, self).__init__(_pair(size), scale_factor)
```
```diff
+
+    """
+    def __init__(self, size=None, scale_factor=None):
+        super(UpsamplingNearest3d, self).__init__(_triple(size), scale_factor)
```
```diff
+
+    """
+    def __init__(self, size=None, scale_factor=None):
+        super(UpsamplingTrilinear3d, self).__init__(_triple(size), scale_factor)
```
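Taken together, these constructors just normalize `size` with `_pair`/`_triple` before delegating to `_UpsamplingBase`. A usage sketch of the module-style API; the calls below go through current PyTorch's `nn.Upsample`, which folded in the classes added here, so the names are stand-ins rather than the PR's own:

```python
import torch
import torch.nn as nn

up2d = nn.Upsample(scale_factor=2, mode='nearest')      # cf. UpsamplingNearest2d
up3d = nn.Upsample(size=(8, 16, 16), mode='trilinear')  # cf. UpsamplingTrilinear3d

y2d = up2d(torch.randn(1, 3, 4, 4))     # -> (1, 3, 8, 8)
y3d = up3d(torch.randn(1, 3, 4, 8, 8))  # -> (1, 3, 8, 16, 16)
```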
I also didn't do any commenting on the 3d upsampling code, but I think the pattern is clear enough.

Hey, this is a big PR! :)
@andrewgiessel Thank you! This is super-helpful.
@fmassa Yeah, I know and I feel bad about it :-) It rabbit-holed from needing those upsampling modules for 3d stuff.
Almost: it's like a Conv2d with groups == nInputChannels == nOutputChannels and kernels in which all weight and bias elements are equal. The dim of weight and bias is 1 and the size is (nInputChannels,).
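A minimal sketch of that equivalence, assuming the classic Torch `SpatialSubSampling` semantics (a per-channel scalar weight and bias applied to a summed k×k window); the helper name is hypothetical:

```python
import torch
import torch.nn.functional as F

def subsample2d_via_grouped_conv(input, weight, bias, kernel_size, stride):
    # weight and bias are 1d tensors of size (C,): one scalar per channel.
    n, c, h, w = input.shape
    # groups == C, and every element of a channel's kernel equals that channel's
    # scalar weight, so the conv computes weight[c] * window_sum + bias[c].
    kernel = weight.view(c, 1, 1, 1).expand(c, 1, kernel_size, kernel_size).contiguous()
    return F.conv2d(input, kernel, bias=bias, stride=stride, groups=c)

x = torch.randn(1, 10, 4, 4)
w, b = torch.randn(10), torch.randn(10)
y = subsample2d_via_grouped_conv(x, w, b, kernel_size=2, stride=2)  # -> (1, 10, 2, 2)
```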
@fmassa see also torch/nn#944
I'm sorry I'm late to the party, but isn't VolumetricUpsamplingNearest the same as the inverse of VolumetricAveragePooling (possibly with a multiplicative coefficient; cudnn has that as a parameter)? In forward upsampling you put the same input value into a window of the output tensor, which is what updateGradInput of average pooling does. And in updateGradInput of upsampling you compute an average (or a sum) of gradOutput over some window and put it into gradInput, which is what average pooling's forward does. So you could probably just reuse the average pooling kernels?
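The observation is easy to check numerically. A sketch with current APIs (`F.interpolate` stands in for this PR's `upsample_nearest`):

```python
import torch
import torch.nn.functional as F

s = 2  # upsampling factor == pooling window
x = torch.randn(1, 1, 4, 4, 4)

# Average pooling's updateGradInput: feed x in as the "gradOutput" of avg_pool3d.
z = torch.zeros(1, 1, 4 * s, 4 * s, 4 * s, requires_grad=True)
F.avg_pool3d(z, kernel_size=s).backward(x)

# The backward pass spreads x / s^3 uniformly over each s^3 window; rescaling
# by the window volume reproduces nearest upsampling's forward pass exactly.
up = F.interpolate(x, scale_factor=s, mode='nearest')
assert torch.allclose(z.grad * s ** 3, up)
```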
@ngimel Aha, I see.

I'm not closely familiar with the THCUNN structure, but it should definitely be possible to expose the average pooling kernels so that you could call them. I'm just for avoiding code repetition wherever possible :-) You could also call cuDNN, that's true.

You've got a good point, for sure.
@andrewgiessel I have a question on the changes: with your strategy, if you call the module directly from functional, e.g. …

@andrewgiessel I see now, thanks for the pointer.

Merge conflicts left a few tests failing. I'll fix them ASAP.

@lantiga any update on the remaining conflicts? It would be awesome to have this as part of standard PyTorch.

I had to take a break from PRs due to work, but I'm back now. @soumith, are you willing to review this one after I fix the conflicts?

I'll take a look at this today, make the necessary changes, and merge it in. You don't have to make any additional changes.

Great! Thank you @soumith

Superseded by #1676 and a yet-to-be-named PR for subsampling.
As anticipated on Slack #general on Apr 20, this PR includes:

- `VolumetricUpsamplingNearest`
- `VolumetricUpsamplingTrilinear`
- `VolumetricSubsampling`

`nn` (and `functional`) modules:

- `UpsamplingNearest2d`, `UpsamplingNearest3d` (functional: `upsample_nearest`)
- `UpsamplingTrilinear3d` (functional: `upsample_trilinear`)
- `Subsampling2d`, `Subsampling3d` (functional: `subsample`)
- tests (`gradcheck`)

Commits may need to be squashed, in which case I'd need some direction.