Merge from upstream #39
Commits on Jul 16, 2018
Fix out-of-range error for test_neg (pytorch#9431)
Summary: `test_neg` sometimes fails internally because `random_()` can generate an out-of-range value for CharTensor. This PR fixes it. Pull Request resolved: pytorch#9431 Reviewed By: SsnL Differential Revision: D8843284 Pulled By: yf225 fbshipit-source-id: bf516cceb8f780e133fa54f7364c77821eb7c013
Commit 52abcdd
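As an aside, a minimal sketch of the underlying point (assuming PyTorch ≥ 0.4 with the `torch.int8` dtype; the bounds shown are illustrative, not necessarily the exact change made in pytorch#9431): passing explicit bounds to `random_()` keeps every sampled value representable in a signed 8-bit tensor.

```python
import torch

# Illustrative only (not the exact fix from pytorch#9431): when filling a
# signed 8-bit tensor, pass explicit bounds to random_() so every sampled
# value is representable. random_(from, to) samples integers in [from, to).
t = torch.empty(100, dtype=torch.int8)
t.random_(-128, 128)
assert -128 <= int(t.min()) and int(t.max()) <= 127
```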
Add peephole optimization for type_as operators. (pytorch#9316)
Summary: If the type_as operator takes in two values with the same type, remove that operator. Pull Request resolved: pytorch#9316 Reviewed By: zdevito Differential Revision: D8808355 fbshipit-source-id: 2d5710a6380b22f4568fc38a439061b5340c4eb1
Commit 66fe3b5
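A small plain-Python illustration of why such a node is safe to drop (this is not the C++ peephole pass itself): when both tensors already share a type, `type_as` is effectively an identity.

```python
import torch

x = torch.randn(4)   # float32
y = torch.randn(4)   # float32, same type as x

# type_as performs no conversion here, so a graph node computing
# x.type_as(y) can be replaced by x without changing the result.
z = x.type_as(y)
assert z.dtype == x.dtype and torch.equal(z, x)
```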
Add test case for segmentation fault fix in grad_fn (pytorch#9457)
Reviewed By: apaszke Differential Revision: D8863572 Pulled By: ezyang fbshipit-source-id: 13749f51320a4e403644674b0335aed4987fa887
Commit b0c5c86
Nuke TestCollectEnv (pytorch#9459)
Summary: The tests were too flaky, and the procedure for legitimately updating versions of software too onerous, to warrant continually testing these. Signed-off-by: Edward Z. Yang <ezyang@fb.com> Pull Request resolved: pytorch#9459 Reviewed By: zou3519 Differential Revision: D8852357 Pulled By: ezyang fbshipit-source-id: 24e99cd00b4252cdeec2a1d9af92456b4a54912a
Commit 9413fab
Implement tensor weak references (pytorch#9363)
Summary: Add `WeakTensor`, a `Tensor` counterpart that doesn't keep the data (or any other expensive resources) alive. Calling `.lock()` returns an `at::optional<Tensor>` that holds the tensor if it is still alive. Pull Request resolved: pytorch#9363 Reviewed By: ezyang Differential Revision: D8815434 Pulled By: apaszke fbshipit-source-id: 1b3e96503c1285d78ef124c585e65c7630f3253e
Commit 9ae77cc
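For intuition, here is a Python `weakref` analogue of the `lock()` semantics (the `Holder` wrapper is invented for this sketch; the real API is the C++ `at::WeakTensor`):

```python
import weakref
import torch

class Holder:
    """Hypothetical strong handle standing in for an owning Tensor reference."""
    def __init__(self, t):
        self.t = t

strong = Holder(torch.ones(3))
weak = weakref.ref(strong)   # does not keep the data alive

assert weak() is not None    # analogue of WeakTensor::lock() succeeding
del strong                   # last strong reference goes away
assert weak() is None        # lock() would now return an empty optional
```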
Add a tagged union type that replaces tensor in the interpreter. (pytorch#9368)
Summary: IValue is short for "interpreter value". It is used frequently, so a short name is important. This will allow us to implement more non-tensor types in an efficient way and remove many hacks from the compiler. This PR is limited: it only introduces IValue and changes the interpreter to use it. Follow-up PRs will: * Change the way aten_ops consume non-tensor types so that integer lists are no longer represented as Tensors. * Introduce TensorList as a fundamental type and remove all vararg handling in gen_jit_dispatch. * Change the compiler to implement math on primitive numbers rather than converting to tensors. jamesr66a apaszke Pull Request resolved: pytorch#9368 Reviewed By: ezyang Differential Revision: D8817598 Pulled By: zdevito fbshipit-source-id: 29dce80611ce5f6384234de9d12a67861d2b112f
Commit 9ed2190
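As a rough conceptual analogue in Python (the real `IValue` is a C++ type; the class and tags below are invented for illustration), a tagged union lets one interpreter register hold either a tensor or a primitive without boxing everything as a Tensor:

```python
import torch

class Value:
    """Toy tagged union: one slot carrying a Tensor, an int, or a double."""
    def __init__(self, tag, payload):
        self.tag = tag          # e.g. "Tensor", "Int", "Double"
        self.payload = payload

    def to_tensor(self):
        assert self.tag == "Tensor", "not a Tensor value"
        return self.payload

# Previously the integer 3 would have had to live in a 0-dim tensor;
# with a tagged union it can stay a plain int in the same register file.
registers = [Value("Tensor", torch.ones(2)), Value("Int", 3)]
```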
Eliminate storage views. (pytorch#9466)
Summary: Storage views were previously used to implement CUDA IPC sharing, but they weren't necessary. The new strategy is described in Note [CUDA IPC and the caching allocator]. This also fixes an unrelated bug, where we weren't actually using the Tensor forking pickler, because we didn't register a pickler for torch.Tensor. Fixes pytorch#9447. Fixes ROCm#46. Signed-off-by: Edward Z. Yang <ezyang@fb.com> CC apaszke Pull Request resolved: pytorch#9466 Reviewed By: apaszke Differential Revision: D8859698 Pulled By: ezyang fbshipit-source-id: 3362cb92f6ae4aa37084c57d79b31004bd0b4a97
Commit 976f925
Skip PyTorch ROCm tests in the script. (pytorch#9467)
Summary: Signed-off-by: Edward Z. Yang <ezyang@fb.com> Pull Request resolved: pytorch#9467 Reviewed By: houseroad Differential Revision: D8860794 Pulled By: ezyang fbshipit-source-id: 9b11475d9bb4b3361973865d7f68e562bffbf9d8
Commit 80160f6
Move batchop import to init to avoid debugging confusion (pytorch#9425)
Summary: fixes pytorch#9409 Pull Request resolved: pytorch#9425 Reviewed By: ezyang Differential Revision: D8842844 Pulled By: wanchaol fbshipit-source-id: 3c6b26470d59d8d1fc5f79caa70252b9de7290e4
Commit 5ff6866
Commits on Jul 17, 2018
Update onnx-tensorrt module to the latest (pytorch#9469)
Summary: Update onnx-tensorrt to follow up on recent changes. Pull Request resolved: pytorch#9469 Reviewed By: Maratyszcza Differential Revision: D8866704 Pulled By: yinghai fbshipit-source-id: 3b96ec2fa28470f0d4b5a7c62ab332eeba4bdb12
Commit 4514036
Summary: Pull Request resolved: pytorch#9473 Reviewed By: houseroad Differential Revision: D8865754 Pulled By: bddppq fbshipit-source-id: 406eda6c145f03a0ee35c4643ec1ec0092fbce88
Commit 7df48d0
Additional operator information values (pytorch#9153)
Summary: Pull Request resolved: pytorch#9153 Closes pytorch#9153 Modified the values reported by the benchmarking platform to include tensor_shape and op_args. These values have a different naming scheme from values like flops and latency. Reviewed By: sf-wind Differential Revision: D8729791 fbshipit-source-id: f050200be01c6d0794bf5faaa6e8cef12a00affe
Commit c4bff25
Pass THDRequest as void* pointer to THDRequest_free (pytorch#9398)
Summary: This fixes pytorch#9054. Pull Request resolved: pytorch#9398 Reviewed By: ezyang Differential Revision: D8827778 Pulled By: yf225 fbshipit-source-id: 862287802cb69c6ac71ff4df19cadb89b1face1d
Commit ad74006
Clip horizontal bounding boxes during rotated detection for backward compatibility (pytorch#9403)
Summary: Pull Request resolved: pytorch#9403 In the BBoxTransform and GenerateProposal ops, clip_boxes makes sure the bbox fits within the image. For rotated boxes this doesn't always make sense, as there could be multiple ways to clip a rotated box within an image boundary. Moreover, clipping to a horizontal box means we potentially leave out pixels of interest. Therefore, we clip only boxes with angle almost equal to 0 (within a specified `angle_thresh` tolerance). Reviewed By: pjh5 Differential Revision: D8828588 fbshipit-source-id: 39c1eafdb5d39d383780faa0a47e76149145e50c
Commit 9235ff5
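A hedged NumPy sketch of the idea (the function name, the `(ctr_x, ctr_y, w, h, angle)` layout, and the default `angle_thresh` are assumptions for illustration, not the Caffe2 implementation):

```python
import numpy as np

def clip_nearly_horizontal_boxes(boxes, im_h, im_w, angle_thresh=1.0):
    """Clip only boxes whose angle is approximately 0 degrees to the image.

    boxes: (N, 5) array of (ctr_x, ctr_y, w, h, angle_in_degrees).
    Boxes with |angle| > angle_thresh are left untouched, since there is no
    single natural way to clip a rotated box to a rectangular image.
    """
    boxes = boxes.copy()
    keep = np.abs(boxes[:, 4]) <= angle_thresh
    # Convert the near-horizontal boxes to corner form, clamp, convert back.
    x1 = np.clip(boxes[keep, 0] - boxes[keep, 2] / 2.0, 0, im_w - 1)
    y1 = np.clip(boxes[keep, 1] - boxes[keep, 3] / 2.0, 0, im_h - 1)
    x2 = np.clip(boxes[keep, 0] + boxes[keep, 2] / 2.0, 0, im_w - 1)
    y2 = np.clip(boxes[keep, 1] + boxes[keep, 3] / 2.0, 0, im_h - 1)
    boxes[keep, 0] = (x1 + x2) / 2.0   # new center x
    boxes[keep, 1] = (y1 + y2) / 2.0   # new center y
    boxes[keep, 2] = x2 - x1           # new width
    boxes[keep, 3] = y2 - y1           # new height
    return boxes
```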
Enable Conv fusion optimizations in optimizeForIdeep (pytorch#9255)
Summary: Enable fusion for IDEEP in optimizeForIdeep, including Conv+ReLU, Conv+Sum, Conv+Sum+ReLU, and Conv+BN. Pull Request resolved: pytorch#9255 Reviewed By: bddppq Differential Revision: D8809030 Pulled By: yinghai fbshipit-source-id: af30bad3b96cb965bd26a4dfa810370faec4bb88
Commit e8b8c38
Fix Sequential::clone() (pytorch#9372)
Summary: I noticed that `Sequential::clone()` does not work. This is because `Sequential` does not use `reset()`, which is normally where modules have to initialize and register their submodules. In turn, this is because of the way `Sequential` allows its modules to be passed in the constructor, which doesn't work with `reset()` (since it does "late" initialization). I've added better error messages inside `Cloneable::clone()`, which make this kind of mistake clearer for other users, as well as tests for `Sequential::clone()`. I also had to give `AnyModule` a deep `clone()` method. ebetica ezyang Pull Request resolved: pytorch#9372 Differential Revision: D8865189 Pulled By: goldsborough fbshipit-source-id: b81586e0d3157cd3c4265b19ac8dd87c5d8dcf94
Commit ae44a6b
Update onnx to onnx/onnx@b2817a6 (pytorch#9476)
Summary: onnx/onnx@b2817a6 Pull Request resolved: pytorch#9476 Reviewed By: houseroad Differential Revision: D8868253 Pulled By: bddppq fbshipit-source-id: b1f14bab47f020f0bc0239da7e2bbf959a407d6a
Commit 4ff636a
Remove HTML tags from README.md (pytorch#9296)
Summary: This change makes README.md compatible with both GitHub and VSTS markdown engines. Images can be reduced if necessary. Pull Request resolved: pytorch#9296 Differential Revision: D8874931 Pulled By: soumith fbshipit-source-id: 0c530c1e00b06fc891301644c92c33007060bf27
Commit 11fc16d
Commit b29376c