Conversation

@zou3519 (Contributor) commented Jun 15, 2020

Stack from ghstack:

We have this call `native::size` directly. Some alternatives I considered
were (see the sketches after this list):

  • Call `VariableType::size` directly. That seems isomorphic to what we're
    doing now.
  • When creating a BatchedTensor from a regular tensor, put all of the
    keys on that tensor into the BatchedTensor's dispatch key set and use
    the dispatcher's fallthrough mechanism. That seems weird because
    BatchedTensor is a tensor wrapper, and it is also error-prone: if
    BatchedTensor gets the VariableType key and something goes wrong, an
    AutogradMeta could get created on it...
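
For illustration, a minimal sketch of what "call `native::size` directly" means; `batched_size` is a hypothetical helper, not code from this diff:

```cpp
// Sketch only -- illustrates the chosen approach, not the PR's actual code.
// Calling at::native::size bypasses the dispatcher entirely, so neither the
// VariableType (autograd) key nor the Batched key intercepts the call.
#include <ATen/ATen.h>
#include <ATen/NativeFunctions.h>

int64_t batched_size(const at::Tensor& t, int64_t dim) {
  // Reads the size straight off the TensorImpl; no dispatch, no autograd.
  return at::native::size(t, dim);
}
```

And a sketch of the fallthrough mechanism the second alternative refers to, had it been taken (this shows the general dispatcher API, not code from this diff): registering a fallthrough fallback for a key makes the dispatcher skip that key and continue to the next key in the tensor's dispatch key set.

```cpp
#include <torch/library.h>

// If BatchedTensor carried the wrapped tensor's full key set, a fallthrough
// like this would let ops without batching rules skip the Batched key and
// dispatch to the underlying kernels instead.
TORCH_LIBRARY_IMPL(_, Batched, m) {
  m.fallback(torch::CppFunction::makeFallthrough());
}
```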

Test Plan:

  • `./build/bin/vmap_test`

Differential Revision: D22070655

@zou3519 requested review from cpuhrsch and ezyang June 15, 2020 15:44
@dr-ci bot commented Jun 15, 2020

💊 CI failures summary and remediations

As of commit 69ec2ca (more details on the Dr. CI page):

None of the CI failures appear to be your fault 💚

❄️ 3 failures tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_windows_vs2019_py36_cuda10.1_on_cpu_test1 (1/3)

Step: "Checkout code" (full log | diagnosis details | 🔁 rerun) ❄️

Writing SSH key for checkout to id_rsa
Creating .ssh directory
Adding the following entries to known_hosts:
[ssh-rsa host keys for github.com and bitbucket.org elided]

Writing SSH key for checkout to id_rsa

See CircleCI build pytorch_windows_vs2019_py36_cpu_test1 (2/3)

Step: "Checkout code" (full log | diagnosis details | 🔁 rerun) ❄️

(Same checkout-step log as build 1/3; ssh-rsa host keys elided.)

See CircleCI build pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test (3/3)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun) ❄️

Jun 16 19:01:21 ConnectionResetError: [Errno 104] Connection reset by peer
Jun 16 19:01:21   File "/opt/conda/lib/python3.6/multiprocessing/connection.py", line 493, in Client 
Jun 16 19:01:21     answer_challenge(c, authkey) 
Jun 16 19:01:21   File "/opt/conda/lib/python3.6/multiprocessing/connection.py", line 737, in answer_challenge 
Jun 16 19:01:21     response = connection.recv_bytes(256)        # reject large message 
Jun 16 19:01:21   File "/opt/conda/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes 
Jun 16 19:01:21     buf = self._recv_bytes(maxlength) 
Jun 16 19:01:21   File "/opt/conda/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes 
Jun 16 19:01:21     buf = self._recv(4) 
Jun 16 19:01:21   File "/opt/conda/lib/python3.6/multiprocessing/connection.py", line 379, in _recv 
Jun 16 19:01:21     chunk = read(handle, remaining) 
Jun 16 19:01:21 ConnectionResetError: [Errno 104] Connection reset by peer 
Jun 16 19:01:21  
Jun 16 19:01:21 Process ErrorTrackingProcess-120: 
Jun 16 19:01:21 Traceback (most recent call last): 
Jun 16 19:01:21   File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap 
Jun 16 19:01:21     self.run() 
Jun 16 19:01:21   File "/var/lib/jenkins/workspace/test/test_dataloader.py", line 360, in run 
Jun 16 19:01:21     super(ErrorTrackingProcess, self).run() 
Jun 16 19:01:21   File "/opt/conda/lib/python3.6/multiprocessing/process.py", line 93, in run 
Jun 16 19:01:21     self._target(*self._args, **self._kwargs) 
Jun 16 19:01:21   File "/var/lib/jenkins/workspace/test/test_dataloader.py", line 628, in _test_proper_exit 

This comment was automatically generated by Dr. CI.

@facebook-github-bot (Contributor)

@zou3519 merged this pull request in 161fd5f.

xwang233 pushed a commit to xwang233/pytorch that referenced this pull request Jun 20, 2020
Summary:
Pull Request resolved: pytorch#40028

Test Plan: - `./build/bin/vmap_test`

Differential Revision: D22070655

Pulled By: zou3519

fbshipit-source-id: 18530579ad41f3c4f96589da41eb24a46caf7af9
@facebook-github-bot deleted the gh/zou3519/259/head branch June 21, 2020 14:17