
Add flexible bilinear upsampling aspect ratio redux #1317

Merged · 10 commits · May 3, 2017

Conversation

andrewgiessel
Contributor

This PR addresses issue #1257 by adding non-coupled (per-axis) scaling factors for bilinear 2d upsampling. I've added two new tests, and everything currently passes. This is a relatively simple change that only touches Python code.

Furthermore, this is a re-do of PR #1279, in which I messed up a rebase. There are some in-line comments there that may be worth skimming as well; I tried to address most of the concerns raised there. In particular, I kept the common base class for all the upsampling methods.

Tests pass on both GPU and CPU.

tagging @apaszke @fmassa @soumith. Thanks in advance everyone.

P.S. I'll probably need a pointer on how best to rebase onto master, eventually ;)

-self.size = size
+if scale_factor is not None and not isinstance(scale_factor, (Integral, tuple)):
+    raise ValueError('scale_factor must be of integer type or tuple of integer types')
+self.size = _pair(size)


# we have to be a tuple at this point
try:
    assert len(self.scale_factor) == 2
    for i in self.scale_factor:
        assert isinstance(i, Integral)
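The validation excerpted above can be sketched as a standalone helper (the function name here is hypothetical; in the PR the check lives inside the module's `__init__` and raises the `ValueError` quoted in the diff):

```python
from numbers import Integral


def check_scale_factor_2d(scale_factor):
    """Validate a 2d scale factor: one integer, or a pair of integers.

    Hypothetical standalone version of the check in this PR: a scalar
    is broadcast to both axes; a tuple must have exactly two integer
    entries, one per spatial axis.
    """
    if isinstance(scale_factor, Integral):
        return (scale_factor, scale_factor)  # same factor for H and W
    try:
        assert len(scale_factor) == 2
        for i in scale_factor:
            assert isinstance(i, Integral)
    except (TypeError, AssertionError):
        raise ValueError(
            'scale_factor must be of integer type or tuple of integer types')
    return tuple(scale_factor)
```

For example, `check_scale_factor_2d(2)` yields `(2, 2)`, while `(2, 3)` passes through unchanged and `(2, 3, 4)` is rejected.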


 self.output_size = (
-    input.size(2) * self.scale_factor,
-    input.size(3) * self.scale_factor,
+    input.size(2) * self.scale_factor[0],


@@ -110,5 +115,21 @@ class UpsamplingBilinear2d(_UpsamplingBase):

"""

def __init__(self, size=None, scale_factor=None):
super(UpsamplingBilinear2d, self).__init__(size, scale_factor)


-if scale_factor is not None and not isinstance(scale_factor, Integral):
-    raise ValueError('scale_factor must be of integer type')
+if scale_factor is not None and not isinstance(scale_factor, (Integral, tuple)):
+    raise ValueError('scale_factor must be of integer type or tuple of integer types')
 self.size = _pair(size)



if self.scale_factor is not None:
    self.scale_factor = _pair(self.scale_factor)
    # we have to be a tuple at this point


@andrewgiessel
Contributor Author

Thanks for the comments @apaszke! I think I made all the changes you asked for. Please let me know if I didn't understand something (in particular, I just copy-pasted your __setstate__() function).

This allows the base class to be used for upsampling routines other
than 2d. I also renamed _check_bilinear_2d_scale_factor().
@andrewgiessel
Contributor Author

Please see the comment I made here on #1348 explaining the most recent commit.

@soumith soumith merged commit 2e7635b into pytorch:master May 3, 2017
@soumith
Member

soumith commented May 3, 2017

thanks Andrew!

Jiaming-Liu pushed a commit to Jiaming-Liu/pytorch that referenced this pull request May 18, 2017
houseroad added a commit to houseroad/pytorch that referenced this pull request Sep 6, 2018
…8ffb52 (pytorch#11346)

Summary:
Pull Request resolved: pytorch#11346

Previous import was 1b09eb14c2c781fae078fa6b1c0390ba6fc0898c

Included changes:
- **[bff0b88](onnx/onnx@bff0b88)**: Add DynamicSlice experimental op (pytorch#1377) <James Reed>
- **[91a7b8e](onnx/onnx@91a7b8e)**: statCoverage(model) (pytorch#1246) <Akshay Chalana>
- **[36643c6](onnx/onnx@36643c6)**: fix the doc for softmax (pytorch#1374) <Lu Fang>
- **[8c64acd](onnx/onnx@8c64acd)**: Silence unused result warning in ONNXIFI wrapper cleanup. Fix pytorch#1344 (pytorch#1371) <Marat Dukhan>
- **[53b20f6](onnx/onnx@53b20f6)**: Add the ability to deprecate an OpSchema (pytorch#1317) <Ryan Hill>
- **[8aec4e2](onnx/onnx@8aec4e2)**: [Anderspapitto patch] fix the shape inference for broadcasting (pytorch#1368) <Lu Fang>

Reviewed By: jamesr66a

Differential Revision: D9691533

fbshipit-source-id: 1a8c22262ae4946897e4be030d3f1cf3a3ad58b6
facebook-github-bot pushed a commit that referenced this pull request Sep 7, 2018
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022
… and `fused_weight_gradient_mlp_cuda` is missing (pytorch#1317)