
Improve documentation and warning message for creation of a tensor with from_numpy() #49516

Closed
wants to merge 633 commits

Conversation

@leonvol (Contributor) commented Dec 16, 2020

Implements the simple changes suggested in the issue discussion: the documentation now warns that creating a tensor from a read-only memory-mapped NumPy array and then writing through it will likely crash the program, and the runtime warning message was updated to mention the same issue. Closes #46741.
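To illustrate the hazard the updated docs describe (a sketch, not the PR's actual code — the file name is illustrative): arrays opened with `mmap_mode='r'` are write-protected, and `torch.from_numpy` shares their memory, so an in-place write through the tensor touches read-only pages. Copying into ordinary memory first is the safe pattern.

```python
import os
import tempfile

import numpy as np
import torch

# Save an array, then reopen it as a read-only memory map.
path = os.path.join(tempfile.mkdtemp(), "data.npy")
np.save(path, np.arange(4, dtype=np.float32))
arr = np.load(path, mmap_mode="r")
assert not arr.flags.writeable  # the mapping is write-protected

# torch.from_numpy(arr) would share the read-only pages; an in-place write
# through the resulting tensor can then crash the process.

t = torch.from_numpy(np.array(arr))  # copy into writable memory first
t += 1                               # safe: the tensor owns its storage
print(t.tolist())
```

The copy costs memory, but it is the only way to get a writable tensor from a read-only mapping without risking a segfault.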

@facebook-github-bot (Contributor) commented Dec 16, 2020

💊 CI failures summary and remediations

As of commit 888f69c (more details on the Dr. CI page):


  • 4/5 failures possibly* introduced in this PR
    • 1/4 non-CircleCI failure(s)
  • 1/5 broken upstream at merge base 399b07a on Dec 16 from 10:17am to 5:52pm

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_build (1/3)

Step: "(Optional) Merge target branch"

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
Auto-merging .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
CONFLICT (add/add): Merge conflict in .circleci/scripts/binary_linux_test.sh
Auto-merging .circleci/scripts/binary_linux_test.sh
CONFLICT (add/add): Merge conflict in .circleci/docker/common/install_conda.sh
Auto-merging .circleci/docker/common/install_conda.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
CONFLICT (add/add): Merge conflict in .circleci/cimodel/data/dimensions.py
Auto-merging .circleci/cimodel/data/dimensions.py
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1

See CircleCI build pytorch_linux_xenial_cuda9_2_cudnn7_py3_gcc5_4_build (2/3)

Step: "(Optional) Merge target branch"

(Identical add/add merge-conflict log as build 1/3; exited with code 1.)

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (3/3)

Step: "(Optional) Merge target branch"

(Identical add/add merge-conflict log as build 1/3; exited with code 1.)


🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


This comment was automatically generated by Dr. CI.

@mrshenli mrshenli added module: numpy Related to numpy support, and also numpy compatibility of our operators triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Dec 18, 2020
@mrshenli (Contributor) commented:

Hey @leonvol, could you please update the PR title and summary to be more informative? Thanks!

@leonvol leonvol changed the title Fixes #46741 Improve documentation and warning message for creation of a tensor with from_numpy() Dec 18, 2020
Review thread on torch/_torch_docs.py (outdated, resolved)
@mruberry (Collaborator) commented:

Hey @leonvol! Thanks for this PR. I suggested a couple of clarifications for consistency that should be simple to apply.

Review thread on torch/_torch_docs.py (outdated, resolved)
@leonvol leonvol requested a review from mruberry January 1, 2021 15:26
@mruberry (Collaborator) left a review:

Thanks @leonvol!

@facebook-github-bot (Contributor) left a comment:

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):

@mruberry merged this pull request in 4a6c178.

@mruberry (Collaborator) commented Jan 6, 2021

Had to unland this, sorry @leonvol, looks like it acquired a merge conflict that affected the doc formatting and is now throwing an error:

Jan 06 00:10:52 /opt/conda/lib/python3.6/site-packages/torch/__init__.py:docstring of torch.from_numpy:15: WARNING: Explicit markup ends without a blank line; unexpected unindent.

@leonvol, would you try rebasing this and resolve the doc build complaint?
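For reference, that Sphinx warning ("Explicit markup ends without a blank line; unexpected unindent") usually means a directive body runs straight into the following paragraph. A sketch of the shape the docstring needs (illustrative text, not the PR's actual wording):

```rst
.. warning::
   Writing to a tensor created from a read-only NumPy array is unsupported
   and may crash the program.

The blank line above is required to close the directive before this paragraph.
```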

@mruberry mruberry reopened this Jan 6, 2021
)

Summary: Pull Request resolved: pytorch#50004

Test Plan: Imported from OSS

Reviewed By: lw

Differential Revision: D25750602

Pulled By: mrshenli

fbshipit-source-id: 06854a77f4fb5cc4c34a1ede843301157ebf7309
…ytorch#50005)

Summary: Pull Request resolved: pytorch#50005

Test Plan: Imported from OSS

Reviewed By: lw

Differential Revision: D25750663

Pulled By: mrshenli

fbshipit-source-id: 6d97156b61d82aa19dd0567ca72fe04bd7b5d1e7
)

Summary: Pull Request resolved: pytorch#50020

Test Plan: Imported from OSS

Reviewed By: lw

Differential Revision: D25752968

Pulled By: mrshenli

fbshipit-source-id: 138d37e204b6f9a584633cfc79fd44c8c9c00f41
Summary: Pull Request resolved: pytorch#50023

Test Plan: Imported from OSS

Reviewed By: lw

Differential Revision: D25753217

Pulled By: mrshenli

fbshipit-source-id: 5a98473c17535c8f92043abe143064e7fca4413b
Summary: Pull Request resolved: pytorch#50024

Test Plan: Imported from OSS

Reviewed By: lw

Differential Revision: D25753386

Pulled By: mrshenli

fbshipit-source-id: fdca051b805762a2c88f965ceb3edf1c25d40a56
…#50025)

Summary: Pull Request resolved: pytorch#50025

Test Plan: Imported from OSS

Reviewed By: lw

Differential Revision: D25753587

Pulled By: mrshenli

fbshipit-source-id: a5d4106a10d1b0d3e4c406751795f19af8afd120
tugsbayasgalan and others added 15 commits January 19, 2021 09:30
Summary:
Pull Request resolved: pytorch#48719

Attempt to break this PR (pytorch#33019) into two parts. As per our discussion with eellison, the first part makes sure our aten::slice operator takes optional parameters for begin/end/step. This will help with refactoring ir_emitter.cpp for generic handling of list and slice striding. Once this PR is merged, we will submit a second PR with the compiler change.

Test Plan:
None for this PR, but new tests will be added for the second part.

Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D25929902

fbshipit-source-id: 5385df04e6d61ded0699b09bbfec6691396b56c3
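The optional-parameter semantics this commit describes can be sketched in plain Python (this mirrors, rather than calls, the aten::slice operator — the helper name is hypothetical): passing None for begin/end/step falls back to the same defaults as Python slicing.

```python
def slice_like(xs, begin=None, end=None, step=None):
    """Slice with optional begin/end/step; None means 'use the default',
    exactly as in xs[begin:end:step]."""
    return xs[slice(begin, end, step)]

print(slice_like([0, 1, 2, 3, 4], None, None, 2))  # [0, 2, 4]
print(slice_like([0, 1, 2, 3, 4], 1, 4))           # [1, 2, 3]
```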
Summary:
This PR helps with pytorch#50513 by reducing the complexity of our `mypy` test suite and making it easier to reproduce on the command line. Previously, to reproduce how `mypy` was actually run on tracked source files (ignoring the doctest typechecking) in CI, you technically needed to run 9 different commands with various arguments:
```
$ mypy --cache-dir=.mypy_cache/normal --check-untyped-defs --follow-imports silent
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/module_list.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/namedtuple.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/opt_size.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/size.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/tensor_copy.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/torch_cuda_random.py
$ mypy --cache-dir=.mypy_cache/examples --follow-imports silent --check-untyped-defs test/type_hint_tests/torch_optim.py
$ mypy --cache-dir=.mypy_cache/strict --config mypy-strict.ini
```
Now you only have to run 2 much simpler commands:
```
$ mypy
$ mypy --config mypy-strict.ini
```
One reason this is useful is because it will make it easier to integrate PyTorch's `mypy` setup into editors (remaining work on this to be done in a followup PR).

Also, as shown in the test plan, this also reduces the time it takes to run `test/test_type_hints.py` incrementally, by reducing the number of times `mypy` is invoked while still checking the same set of files with the same configs.

(Because this PR merges `test_type_hint_examples` (added in pytorch#34595) into `test_run_mypy` (added in pytorch#36584), I've added some people involved in those PRs as reviewers, in case there's a specific reason they weren't combined in the first place.)
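A plausible sketch of how the consolidation works (the file lists and options below are assumptions for illustration, not the PR's actual contents): moving the per-invocation flags and file lists into `mypy.ini` lets a bare `mypy` pick them up automatically.

```ini
; mypy.ini (illustrative sketch; the real file lists differ)
[mypy]
check_untyped_defs = True
follow_imports = silent
cache_dir = .mypy_cache/normal
files =
    torch,
    test/type_hint_tests
```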

Pull Request resolved: pytorch#50631

Test Plan:
Run this twice (the first time is to warm the cache):
```
$ python test/test_type_hints.py -v
```

- *Before:*
  ```
  test_doc_examples (__main__.TestTypeHints)
  Run documentation examples through mypy. ... ok
  test_run_mypy (__main__.TestTypeHints)
  Runs mypy over all files specified in mypy.ini ... ok
  test_run_mypy_strict (__main__.TestTypeHints)
  Runs mypy over all files specified in mypy-strict.ini ... ok
  test_type_hint_examples (__main__.TestTypeHints)
  Runs mypy over all the test examples present in ... ok

  ----------------------------------------------------------------------
  Ran 4 tests in 5.090s

  OK
  ```
  You can also just run `mypy` to see how many files it checks:
  ```
  $ mypy --cache-dir=.mypy_cache/normal --check-untyped-defs --follow-imports silent
  Success: no issues found in 1192 source files
  ```
- *After:*
  ```
  test_doc_examples (__main__.TestTypeHints)
  Run documentation examples through mypy. ... ok
  test_run_mypy (__main__.TestTypeHints)
  Runs mypy over all files specified in mypy.ini ... ok
  test_run_mypy_strict (__main__.TestTypeHints)
  Runs mypy over all files specified in mypy-strict.ini ... ok

  ----------------------------------------------------------------------
  Ran 3 tests in 2.404s

  OK
  ```
  Now `mypy` checks 7 more files, which is the number in `test/type_hint_tests`:
  ```
  $ mypy
  Success: no issues found in 1199 source files
  ```

Reviewed By: zou3519

Differential Revision: D25932660

Pulled By: samestep

fbshipit-source-id: 26c6f00f338e7b44954e5ed89522ce24e2fdc5f0
Summary:
Pull Request resolved: pytorch#50615

The method tests for some of the ops have been ported to the new OpInfo based tests. This PR removes those op names from `complex_list` in `test_autograd.py`

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25931268

Pulled By: anjali411

fbshipit-source-id: 4d08626431c61c34cdca18044933e4f5b9b25232
Summary: Pull Request resolved: pytorch#50733

Test Plan: Ensure that CI jobs succeed on GitHub before landing.

Reviewed By: beauby

Differential Revision: D25954026

fbshipit-source-id: 44d21768379b301144518aafc9c68147db49d931
Summary: Pull Request resolved: pytorch#50299

Reviewed By: zhangguanheng66

Differential Revision: D25865423

Pulled By: ezyang

fbshipit-source-id: e2af5f00f99de3c0d38b6b6fedfd9b0027ed9b0b
Summary:
Using `insert` instead of `append` to add the torch root directory to `sys.path`, fixing `ModuleNotFoundError: No module named 'tools.codegen'`, as mentioned in pytorch#47553

Pull Request resolved: pytorch#50353

Reviewed By: mrshenli

Differential Revision: D25893827

Pulled By: ezyang

fbshipit-source-id: 841e28898fee5502495f3890801b49d9b442f9d6
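The insert-vs-append fix above can be sketched as follows (the path is illustrative): entries earlier in `sys.path` win, so prepending the repo root keeps an unrelated installed `tools` package from shadowing the in-tree `tools.codegen`.

```python
import sys

repo_root = "/path/to/pytorch"  # illustrative path, not a real checkout

# sys.path.append(repo_root) would consult the repo *last*, so any installed
# package named 'tools' shadows the in-tree one; insert(0, ...) gives the
# repo's modules highest import priority.
if repo_root in sys.path:
    sys.path.remove(repo_root)
sys.path.insert(0, repo_root)
print(sys.path[0])
```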
…ytorch#50580)

Summary:
Pull Request resolved: pytorch#50580

Due to what looked like a bug in CUDA, TensorPipe was sometimes failing to auto-detect the device of a CUDA pointer. A workaround, on the PyTorch side, was to always initialize a CUDA context on device 0. Now that TensorPipe has fixed that we can undo the workaround.

Reviewed By: mrshenli

Differential Revision: D25952929

fbshipit-source-id: 57a5f73241f7371661855c767e44a64ca3b84a74
Summary:
`torch.fx.wrap()` could not be used as a decorator as the docstring claimed because it returned None.

Pull Request resolved: pytorch#50677

Test Plan: Added `test_wrapped_via_decorator` which used to fail with `'NoneType' object is not callable` and now passes

Reviewed By: jamesr66a

Differential Revision: D25949313

Pulled By: jansel

fbshipit-source-id: 02d0f9adeed812f58ec94c94dd4adc43578f21ce
@leonvol (Contributor, Author) commented Jan 20, 2021

Can someone please take over this PR? I cannot build the docs locally and have issues rebasing.

@SmrutiSikha commented:

@leonvol @mruberry I would like to take over this PR. Can you help me set it up locally, and tell me what problems you ran into?

@mruberry (Collaborator) commented Feb 5, 2021

Hi @SmrutiSikha, thanks for your interest. The actual change is pretty simple; it is currently buried under a ton of unrelated commits, however. Find the original change and create a new PR containing just that change.

Labels
cla signed Merged module: numpy Related to numpy support, and also numpy compatibility of our operators open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Development

Successfully merging this pull request may close these issues.

Tensor with read only memory mapped numpy array causes seg fault.