
Make out ops c10-full (with hacky-wrapper) #48912

Closed
wants to merge 24 commits into from

Conversation


@smessmer smessmer commented Dec 7, 2020

Stack from ghstack:

Benchmark:

Old (i.e. codegenerated unboxing wrapper + no hacky_wrapper):

```
<torch.utils.benchmark.utils.valgrind_wrapper.timer_interface.CallgrindStats object at 0x7f64d03ebcd0>
torch.absolute(t, out=o)
setup:
  t = torch.empty([1])
  o = torch.empty([1])

                           All          Noisy symbols removed
    Instructions:       657204                     634396
    Baseline:             4192                       3786
100 runs per measurement, 1 thread
```

New (i.e. templated unboxing wrapper + hacky_wrapper):

```
<torch.utils.benchmark.utils.valgrind_wrapper.timer_interface.CallgrindStats object at 0x7fa7de211cd0>
torch.absolute(t, out=o)
setup:
  t = torch.empty([1])
  o = torch.empty([1])

                           All          Noisy symbols removed
    Instructions:       658160                     633996
    Baseline:             4210                       3786
100 runs per measurement, 1 thread
```
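The two runs above are very close; as a quick sanity check, the per-column deltas implied by the quoted figures (pure Python, using only the numbers in this description) are:

```python
# Instruction counts copied from the two Callgrind runs above
# (old = codegenerated unboxing wrapper, new = templated + hacky_wrapper).
old = {"all": 657204, "noisy symbols removed": 634396}
new = {"all": 658160, "noisy symbols removed": 633996}

for key in old:
    delta = new[key] - old[key]
    pct = 100.0 * delta / old[key]
    print(f"{key}: {delta:+d} instructions ({pct:+.3f}%)")
# → all: +956 instructions (+0.145%)
# → noisy symbols removed: -400 instructions (-0.063%)
```

Both deltas are on the order of 0.1%, i.e. roughly in the noise for Callgrind instruction counts at this scale.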

Differential Revision: D25363335


dr-ci bot commented Dec 7, 2020

💊 CI failures summary and remediations

As of commit c3cd1d8 (more details on the Dr. CI page):



🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_macos_10_13_py3_test (1/2)

Step: "Test"

Dec 15 19:37:11 RuntimeError: test_utils failed!
Dec 15 19:37:11 Generated XML report: test-reports/dist-gloo/TEST-TestFFI-20201215193630.xml 
Dec 15 19:37:11 Generated XML report: test-reports/dist-gloo/TEST-TestHipify-20201215193630.xml 
Dec 15 19:37:11 Generated XML report: test-reports/dist-gloo/TEST-TestHub-20201215193630.xml 
Dec 15 19:37:11 Generated XML report: test-reports/dist-gloo/TEST-TestONNXUtils-20201215193630.xml 
Dec 15 19:37:11 Generated XML report: test-reports/dist-gloo/TEST-TestStandaloneCPPJIT-20201215193630.xml 
Dec 15 19:37:11 Traceback (most recent call last): 
Dec 15 19:37:11   File "test/run_test.py", line 896, in <module> 
Dec 15 19:37:11     main() 
Dec 15 19:37:11   File "test/run_test.py", line 879, in main 
Dec 15 19:37:11     raise RuntimeError(err_message) 
Dec 15 19:37:11 RuntimeError: test_utils failed! 
Dec 15 19:37:12 + cleanup 
Dec 15 19:37:12 + retcode=1 
Dec 15 19:37:12 + set +x 

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (2/2)

Step: "Build"

Dec 15 18:22:24 Failed to generate ATEN bindings: ['/var/lib/jenkins/workspace/xla/scripts/generate_code.sh']
Dec 15 18:22:23 AtenXlaType function missed override: Tensor& bitwise_or_out(Tensor& out, const Tensor& self, Scalar other); // bitwise_or_out(Tensor,Tensor,Scalar)->Tensor 
Dec 15 18:22:23 AtenXlaType function missed override: Tensor& bitwise_xor_out(Tensor& out, const Tensor& self, Scalar other); // bitwise_xor_out(Tensor,Tensor,Scalar)->Tensor 
Dec 15 18:22:23 AtenXlaType function missed override: Tensor& eye_out(Tensor& out, int64_t n); // eye_out(Tensor,int64_t)->Tensor 
Dec 15 18:22:23 AtenXlaType function missed override: Tensor& eye_out(Tensor& out, int64_t n, int64_t m); // eye_out(Tensor,int64_t,int64_t)->Tensor 
Dec 15 18:22:23 Traceback (most recent call last): 
Dec 15 18:22:23   File "/var/lib/jenkins/workspace/xla/scripts/gen.py", line 1172, in <module> 
Dec 15 18:22:23     generate(args) 
Dec 15 18:22:23   File "/var/lib/jenkins/workspace/xla/scripts/gen.py", line 1142, in generate 
Dec 15 18:22:23     assert check_overrides(overrides, overridden) 
Dec 15 18:22:23 AssertionError 
Dec 15 18:22:24 Failed to generate ATEN bindings: ['/var/lib/jenkins/workspace/xla/scripts/generate_code.sh'] 
Dec 15 18:22:24 Building torch_xla version: 1.6 
Dec 15 18:22:24 XLA Commit ID: fe89172b2bd1c9a1c104bd2cbe565f30e6c8e328 
Dec 15 18:22:24 PyTorch Commit ID: f406f17a822af0798cd5aedc8de30af2de5a5736 
Dec 15 18:22:24 + cleanup 
Dec 15 18:22:24 + retcode=1 
Dec 15 18:22:24 + set +x 
Dec 15 18:22:24 =================== sccache compilation log =================== 
Dec 15 18:22:24 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Dec 15 18:22:24 Compile requests                    4562 
Dec 15 18:22:24 Compile requests executed           4268 
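The XLA build failure above is an override-completeness assertion in `xla/scripts/gen.py`: the new `out=` signatures change the expected override set, so `check_overrides` reports the ops whose `AtenXlaType` overrides no longer match. A simplified, hypothetical sketch of what such a check does (the names and signatures here are illustrative, not the actual gen.py code):

```python
# Hypothetical sketch of an override-completeness check: every op
# signature the backend is expected to override must actually appear
# among the implemented overrides; otherwise report it and fail.
def check_overrides(expected, implemented):
    missing = sorted(expected - implemented)
    for sig in missing:
        print(f"AtenXlaType function missed override: {sig}")
    return not missing

expected = {
    "bitwise_or_out(Tensor,Tensor,Scalar)->Tensor",
    "eye_out(Tensor,int64_t)->Tensor",
}
implemented = {"bitwise_or_out(Tensor,Tensor,Scalar)->Tensor"}

# eye_out is missing, so the check fails (gen.py asserts on this result)
assert check_overrides(expected, implemented) is False
```

Under this model, the fix is to update the XLA side's override list (or implementations) to match the changed `out=` signatures.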

❄️ 1 failure tentatively classified as flaky

but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_windows_vs2019_py36_cuda10.1_build (1/1)

Step: "Checkout code" ❄️

Writing SSH key for checkout to id_rsa
Creating .ssh directory
Adding the following entries to known_hosts:
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==

Writing SSH key for checkout to id_rsa

This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.

See how this bot performed.

This comment has been revised 110 times.

smessmer added a commit that referenced this pull request Dec 8, 2020
Pull Request resolved: #48912


ghstack-source-id: 118059483

Differential Revision: [D25363335](https://our.internmc.facebook.com/intern/diff/D25363335/)
@smessmer
Contributor Author

Added a benchmark to the PR description

Benchmark:
---
Old (i.e. codegenerated unboxing wrapper + no hacky_wrapper):
```
<torch.utils.benchmark.utils.valgrind_wrapper.timer_interface.CallgrindStats object at 0x7f64d03ebcd0>
torch.absolute(t, out=o)
setup:
  t = torch.empty([1])
  o = torch.empty([1])

                           All          Noisy symbols removed
    Instructions:       657204                     634396
    Baseline:             4192                       3786
100 runs per measurement, 1 thread
```

New (i.e. templated unboxing wrapper + hacky_wrapper):
```
<torch.utils.benchmark.utils.valgrind_wrapper.timer_interface.CallgrindStats object at 0x7fa7de211cd0>
torch.absolute(t, out=o)
setup:
  t = torch.empty([1])
  o = torch.empty([1])

                           All          Noisy symbols removed
    Instructions:       658160                     633996
    Baseline:             4210                       3786
100 runs per measurement, 1 thread
```

Differential Revision: [D25363335](https://our.internmc.facebook.com/intern/diff/D25363335/)

[ghstack-poisoned]
@facebook-github-bot
Contributor

This pull request has been merged in 40a02e2.

@facebook-github-bot facebook-github-bot deleted the gh/smessmer/272/head branch December 19, 2020 15:18