Improve documentation and warning message for creation of a tensor with from_numpy() #49516
Conversation
💊 CI failures summary and remediations
As of commit 888f69c (more details on the Dr. CI page):
🕵️ 3 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages:
- pytorch_linux_xenial_py3_6_gcc5_4_build (1/3), Step: "(Optional) Merge target branch"
Hey @leonvol, could you please update the PR title and summary to be more informative? Thanks!
Hey @leonvol! Thanks for this PR. I suggested a couple of clarifications for consistency that should be simple to apply.
Thanks @leonvol!
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Had to unland this, sorry @leonvol, looks like it acquired a merge conflict that affected the doc formatting and is now throwing an error:
@leonvol, would you try rebasing this and resolving the doc build complaint?
) Summary:
Pull Request resolved: pytorch#50004
Test Plan: Imported from OSS
Reviewed By: lw
Differential Revision: D25750602
Pulled By: mrshenli
fbshipit-source-id: 06854a77f4fb5cc4c34a1ede843301157ebf7309

…ytorch#50005) Summary:
Pull Request resolved: pytorch#50005
Test Plan: Imported from OSS
Reviewed By: lw
Differential Revision: D25750663
Pulled By: mrshenli
fbshipit-source-id: 6d97156b61d82aa19dd0567ca72fe04bd7b5d1e7

) Summary:
Pull Request resolved: pytorch#50020
Test Plan: Imported from OSS
Reviewed By: lw
Differential Revision: D25752968
Pulled By: mrshenli
fbshipit-source-id: 138d37e204b6f9a584633cfc79fd44c8c9c00f41

Summary:
Pull Request resolved: pytorch#50023
Test Plan: Imported from OSS
Reviewed By: lw
Differential Revision: D25753217
Pulled By: mrshenli
fbshipit-source-id: 5a98473c17535c8f92043abe143064e7fca4413b

Summary:
Pull Request resolved: pytorch#50024
Test Plan: Imported from OSS
Reviewed By: lw
Differential Revision: D25753386
Pulled By: mrshenli
fbshipit-source-id: fdca051b805762a2c88f965ceb3edf1c25d40a56

…#50025) Summary:
Pull Request resolved: pytorch#50025
Test Plan: Imported from OSS
Reviewed By: lw
Differential Revision: D25753587
Pulled By: mrshenli
fbshipit-source-id: a5d4106a10d1b0d3e4c406751795f19af8afd120
Summary:
Pull Request resolved: pytorch#48719
Attempt to break this PR (pytorch#33019) into two parts. As per our discussion with eellison, the first part is to make sure our aten::slice operator takes optional parameters for begin/step/end. This will help with refactoring ir_emitter.cpp for generic handling of list and slice striding. Once this PR is merged, we will submit a second PR with the compiler change.
Test Plan: None for this PR, but new tests will be added for the second part. Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D25929902
fbshipit-source-id: 5385df04e6d61ded0699b09bbfec6691396b56c3
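The optional begin/end/step semantics described above broadly mirror Python slicing, where `None` falls back to the default. A minimal Python sketch of that behavior (the function name is hypothetical, not the real aten::slice implementation):

```python
def slice_like(xs, begin=None, end=None, step=None):
    # None for any parameter means "use the default", exactly as in xs[begin:end:step].
    return xs[slice(begin, end, step)]

print(slice_like([0, 1, 2, 3, 4], None, None, 2))  # [0, 2, 4]
print(slice_like([0, 1, 2, 3], 1))                 # [1, 2, 3]
```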
Summary: This PR helps with pytorch#50513 by reducing the complexity of our `mypy` test suite and making it easier to reproduce on the command line. Previously, to reproduce how `mypy` was actually run on tracked source files (ignoring the doctest typechecking) in CI, you technically needed to run 9 different commands with various arguments:

```
$ mypy --cache-dir=.mypy_cache/normal --check-untyped-defs --follow-imports silent
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/module_list.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/namedtuple.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/opt_size.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/size.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/tensor_copy.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/torch_cuda_random.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/torch_optim.py
$ mypy --cache-dir=.mypy_cache/strict --config mypy-strict.ini
```

Now you only have to run 2 much simpler commands:

```
$ mypy
$ mypy --config mypy-strict.ini
```

One reason this is useful is that it will make it easier to integrate PyTorch's `mypy` setup into editors (remaining work on this to be done in a followup PR). Also, as shown in the test plan, this reduces the time it takes to run `test/test_type_hints.py` incrementally, by reducing the number of times `mypy` is invoked while still checking the same set of files with the same configs.

(Because this PR merges `test_type_hint_examples` (added in pytorch#34595) into `test_run_mypy` (added in pytorch#36584), I've added some people involved in those PRs as reviewers, in case there's a specific reason they weren't combined in the first place.)

Pull Request resolved: pytorch#50631

Test Plan: Run this twice (the first time is to warm the cache):

```
$ python test/test_type_hints.py -v
```

- *Before:*

```
test_doc_examples (__main__.TestTypeHints)
Run documentation examples through mypy. ... ok
test_run_mypy (__main__.TestTypeHints)
Runs mypy over all files specified in mypy.ini ... ok
test_run_mypy_strict (__main__.TestTypeHints)
Runs mypy over all files specified in mypy-strict.ini ... ok
test_type_hint_examples (__main__.TestTypeHints)
Runs mypy over all the test examples present in ... ok

----------------------------------------------------------------------
Ran 4 tests in 5.090s

OK
```

You can also just run `mypy` to see how many files it checks:

```
$ mypy --cache-dir=.mypy_cache/normal --check-untyped-defs --follow-imports silent
Success: no issues found in 1192 source files
```

- *After:*

```
test_doc_examples (__main__.TestTypeHints)
Run documentation examples through mypy. ... ok
test_run_mypy (__main__.TestTypeHints)
Runs mypy over all files specified in mypy.ini ... ok
test_run_mypy_strict (__main__.TestTypeHints)
Runs mypy over all files specified in mypy-strict.ini ... ok

----------------------------------------------------------------------
Ran 3 tests in 2.404s

OK
```

Now `mypy` checks 7 more files, which is the number in `test/type_hint_tests`:

```
$ mypy
Success: no issues found in 1199 source files
```

Reviewed By: zou3519
Differential Revision: D25932660
Pulled By: samestep
fbshipit-source-id: 26c6f00f338e7b44954e5ed89522ce24e2fdc5f0
Summary:
Pull Request resolved: pytorch#50615
The method tests for some of the ops have been ported to the new OpInfo-based tests. This PR removes those op names from `complex_list` in `test_autograd.py`
Test Plan: Imported from OSS
Reviewed By: mruberry
Differential Revision: D25931268
Pulled By: anjali411
fbshipit-source-id: 4d08626431c61c34cdca18044933e4f5b9b25232

Summary:
Pull Request resolved: pytorch#50733
Test Plan: Ensure that CI jobs succeed on GitHub before landing.
Reviewed By: beauby
Differential Revision: D25954026
fbshipit-source-id: 44d21768379b301144518aafc9c68147db49d931

Summary:
Pull Request resolved: pytorch#50299
Reviewed By: zhangguanheng66
Differential Revision: D25865423
Pulled By: ezyang
fbshipit-source-id: e2af5f00f99de3c0d38b6b6fedfd9b0027ed9b0b
Summary: Using `insert` instead of `append` to add the torch root directory to `sys.path`, to fix `ModuleNotFoundError: No module named 'tools.codegen'`, as mentioned in pytorch#47553
Pull Request resolved: pytorch#50353
Reviewed By: mrshenli
Differential Revision: D25893827
Pulled By: ezyang
fbshipit-source-id: 841e28898fee5502495f3890801b49d9b442f9d6
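The insert-vs-append distinction matters because `sys.path` is searched front to back. A minimal sketch of the idea (the repo path below is a hypothetical placeholder, not the actual layout):

```python
import sys

repo_root = "/path/to/pytorch"  # hypothetical checkout location

# sys.path.append(repo_root) would search the repo last, so any other
# importable package named `tools` could shadow the in-repo one, making
# `import tools.codegen` fail. Prepending makes the in-repo package win:
sys.path.insert(0, repo_root)
print(sys.path[0])  # "/path/to/pytorch"
```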
…ytorch#50580) Summary:
Pull Request resolved: pytorch#50580
Due to what looked like a bug in CUDA, TensorPipe was sometimes failing to auto-detect the device of a CUDA pointer. A workaround, on the PyTorch side, was to always initialize a CUDA context on device 0. Now that TensorPipe has fixed that, we can undo the workaround.
Reviewed By: mrshenli
Differential Revision: D25952929
fbshipit-source-id: 57a5f73241f7371661855c767e44a64ca3b84a74
Summary: `torch.fx.wrap()` could not be used as a decorator (as its docstring claimed it could) because it returned None.
Pull Request resolved: pytorch#50677
Test Plan: Added `test_wrapped_via_decorator`, which used to fail with `'NoneType' object is not callable` and now passes
Reviewed By: jamesr66a
Differential Revision: D25949313
Pulled By: jansel
fbshipit-source-id: 02d0f9adeed812f58ec94c94dd4adc43578f21ce
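The pitfall behind this fix is generic to Python: a decorator that returns None replaces the function it decorates with None. A standalone sketch of the pattern (the registry here is a hypothetical stand-in, not torch.fx's real internals):

```python
_WRAPPED = set()  # hypothetical registry standing in for torch.fx's bookkeeping

def wrap(fn):
    """Record fn, then return it so @wrap leaves the function callable."""
    _WRAPPED.add(fn.__name__)
    return fn  # if this returned None, @wrap-decorated names would be None

@wrap
def my_len(x):
    return len(x)

print(my_len([1, 2, 3]))  # 3; without `return fn`, calling my_len would raise
                          # TypeError: 'NoneType' object is not callable
```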
Can someone please take over this PR? I cannot build the docs locally and I'm having issues rebasing.
Hi @SmrutiSikha, thanks for your interest. The actual change is pretty simple; it's currently buried under a ton of unrelated commits, however. Find the original change and open a new PR containing just that change.
Implements the simple changes suggested in the short discussion of the issue. Updates the documentation to inform users that creating a tensor from a memory-mapped, read-only NumPy array can crash the program. The displayed warning message was also updated to mention the issues with using a memory-mapped, read-only NumPy array. Closes #46741.
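A minimal sketch of the situation the updated documentation warns about. The read-only array below is a stand-in for one produced by `np.load(..., mmap_mode="r")`, and the torch import is guarded so the sketch runs even without PyTorch installed:

```python
import warnings
import numpy as np

# Stand-in for np.load(..., mmap_mode="r"): setflags(write=False) makes the
# array read-only the same way a read-only memory map does.
arr = np.arange(4, dtype=np.float32)
arr.setflags(write=False)

try:
    import torch
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        t = torch.from_numpy(arr)  # shares memory; the tensor does not track read-only state
    # PyTorch warns that the array is not writable; writing through `t` on a
    # truly memory-mapped read-only file can crash the process.
    print(any("writable" in str(w.message) for w in caught))
except ImportError:
    pass  # sketch degrades gracefully when PyTorch is unavailable

# The safe pattern either way: copy first, so the tensor owns writable memory.
safe = arr.copy()
assert safe.flags.writeable
```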