
Conversation

@alanwaketan
Collaborator

Summary:
Code-generated mean.dim and mean.out, the out variant of mean.

Test Plan:
lazy_tensor_core/test/cpp/build/test_ptltc --gtest_filter=AtenLtcTsTensorTest.TestMean*

Fixes #65576.
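For readers unfamiliar with the overload naming: mean.dim is the overload of mean that reduces over the given dimensions, and mean.out is the variant that writes its result into a caller-supplied output buffer instead of allocating a new tensor. A minimal pure-Python sketch of the functional/out-variant pattern (illustrative names only, not the generated lazy-tensor code):

```python
def mean_dim(rows, dim):
    """Functional variant: allocates and returns a new result.

    Handles only dim 0 and 1 of a 2-D list-of-lists; this sketches
    the calling convention, not the real codegen.
    """
    if dim == 0:  # mean over rows -> one value per column
        n = len(rows)
        return [sum(col) / n for col in zip(*rows)]
    n = len(rows[0])  # dim == 1: mean over columns -> one value per row
    return [sum(row) / n for row in rows]


def mean_out(rows, dim, out):
    """Out variant: computes into the caller-provided buffer `out`
    and returns it, mirroring ATen's functional/out pairing."""
    out[:] = mean_dim(rows, dim)  # write in place instead of allocating
    return out
```

The out variant exists so callers can reuse a preallocated buffer across iterations; the functional form is just the allocating wrapper around the same computation.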

@pytorch-probot

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/117ab940bc96c551c370fa8e08e497a44ce1f040/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Triggered Workflows

| Workflow | Labels | Status |
| --- | --- | --- |
| linux-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla | ✅ triggered |
| linux-vulkan-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan | ✅ triggered |
| linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-clang7-asan | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers | ✅ triggered |
| linux-xenial-py3.6-clang7-onnx | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx | ✅ triggered |
| linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/win | ✅ triggered |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/win | ✅ triggered |

Skipped Workflows

| Workflow | Labels | Status |
| --- | --- | --- |
| libtorch-linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| libtorch-linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| parallelnative-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck | 🚫 skipped |
| periodic-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped |
| puretorch-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor

facebook-github-bot commented Oct 25, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 117ab94 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (1/2)

Step: "Test"

2021-10-25T07:02:14.8806042Z processing existing schema:  alltoall_base(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor _2, int[] _3, int[] _4) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-25T07:02:14.8807400Z processing existing schema:  alltoall(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-25T07:02:14.8808699Z processing existing schema:  send(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-25T07:02:14.8809991Z processing existing schema:  recv(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-25T07:02:14.8811296Z processing existing schema:  recv_anysource(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-25T07:02:14.8812564Z processing existing schema:  barrier(__torch__.torch.classes.dist_c10d.ProcessGroup _0) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-10-25T07:02:14.8813632Z processing existing schema:  __init__(__torch__.torch.classes.dist_c10d.frontend _0) -> (NoneType _0)
2021-10-25T07:02:14.8815030Z processing existing schema:  new_process_group_helper(__torch__.torch.classes.dist_c10d.frontend _0, int _1, int _2, int[] _3, str _4, __torch__.torch.classes.dist_c10d.Store _5, str? _6, int _7) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
2021-10-25T07:02:14.8816588Z processing existing schema:  get_process_group_by_name(__torch__.torch.classes.dist_c10d.frontend _0, str _1) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
2021-10-25T07:02:14.8817961Z processing existing schema:  get_name_of_process_group(__torch__.torch.classes.dist_c10d.frontend _0, __torch__.torch.classes.dist_c10d.ProcessGroup _1) -> (str _0)
2021-10-25T07:02:14.8819115Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2021-10-25T07:02:14.8819715Z 
2021-10-25T07:02:14.8819975Z Broken ops: [
2021-10-25T07:02:14.8820597Z 	aten::_torch_cuda_cu_linker_symbol_op(Tensor self) -> (Tensor)
2021-10-25T07:02:14.8821477Z 	aten::_histogramdd_from_bin_tensors(Tensor self, Tensor[] bins, *, Tensor? weight=None, bool density=False) -> (Tensor)
2021-10-25T07:02:14.8822539Z 	aten::_histogramdd_bin_edges(Tensor self, int[] bins, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor[])
2021-10-25T07:02:14.8823645Z 	aten::_histogramdd_from_bin_cts(Tensor self, int[] bins, *, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor)
2021-10-25T07:02:14.8824193Z ]
2021-10-25T07:02:14.8824450Z + cleanup
2021-10-25T07:02:14.8824717Z + retcode=1
2021-10-25T07:02:14.8824990Z + set +x
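The backwards_compat job above works by comparing the operator schemas registered in the base commit against those in the PR; a schema that disappears or changes is reported under "Broken ops". A toy sketch of that comparison (hypothetical helper, much cruder than the real check, which also allows BC-safe changes such as newly added defaulted arguments):

```python
def find_broken_ops(old_schemas, new_schemas):
    """Report schemas present in the old operator registry that no
    longer appear verbatim in the new one. Any textual difference is
    flagged; the real PyTorch check is far more permissive."""
    new_by_name = {}
    for s in new_schemas:
        # "aten::mean.dim(Tensor self, ...) -> Tensor" -> "aten::mean.dim"
        name = s.split("(", 1)[0]
        new_by_name.setdefault(name, set()).add(s)
    broken = []
    for s in old_schemas:
        name = s.split("(", 1)[0]
        if s not in new_by_name.get(name, set()):
            broken.append(s)
    return broken
```

Under this model, the four aten ops listed above are ones whose old schema string no longer matches any schema in the PR's registry, which is why the job asks the PyTorch team to confirm the change is intentional.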

See GitHub Actions build Test tools / test (2/2)

Step: "Test tools"

2021-10-25T06:54:38.1725013Z   File "/opt/hostedtoolcache/Python/3.10.0/x64/lib/python3.10/unittest/async_case.py", line 65, in _callTestMethod
2021-10-25T06:54:38.1725958Z     self._callMaybeAsync(method)
2021-10-25T06:54:38.1726944Z   File "/opt/hostedtoolcache/Python/3.10.0/x64/lib/python3.10/unittest/async_case.py", line 88, in _callMaybeAsync
2021-10-25T06:54:38.1728028Z     return self._asyncioTestLoop.run_until_complete(fut)
2021-10-25T06:54:38.1729138Z   File "/opt/hostedtoolcache/Python/3.10.0/x64/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete
2021-10-25T06:54:38.1730025Z     return future.result()
2021-10-25T06:54:38.1731018Z   File "/opt/hostedtoolcache/Python/3.10.0/x64/lib/python3.10/unittest/async_case.py", line 102, in _asyncioLoopRunner
2021-10-25T06:54:38.1732028Z     ret = await awaitable
2021-10-25T06:54:38.1732791Z   File "/home/runner/work/pytorch/pytorch/tools/test/test_actions_local_runner.py", line 187, in test_mypy
2021-10-25T06:54:38.1733673Z     self.assertEqual(expected, f.getvalue())
2021-10-25T06:54:38.1735021Z AssertionError: 'x my[29 chars]on)\ntorch/some_stubs.pyi:3:17: error: Incompa[788 chars]t]\n' != 'x my[29 chars]on)\nTraceback (most recent call last):\n  Fil[154 chars]\'\n'
2021-10-25T06:54:38.1735998Z   x mypy (skipped typestub generation)
2021-10-25T06:54:38.1736588Z + Traceback (most recent call last):
2021-10-25T06:54:38.1737336Z +   File "/home/runner/work/pytorch/pytorch/tools/linter/mypy_wrapper.py", line 27, in <module>
2021-10-25T06:54:38.1738030Z +     import mypy.api
2021-10-25T06:54:38.1738780Z + ModuleNotFoundError: No module named 'mypy'
2021-10-25T06:54:38.1740033Z - torch/some_stubs.pyi:3:17: error: Incompatible types in assignment (expression has type "None", variable has type "str")  [assignment]
2021-10-25T06:54:38.1741550Z - torch/some_stubs.pyi:4:17: error: Incompatible types in assignment (expression has type "float", variable has type "str")  [assignment]
2021-10-25T06:54:38.1743087Z - torch/some_cool_file.py:3:17: error: Incompatible types in assignment (expression has type "None", variable has type "str")  [assignment]
2021-10-25T06:54:38.1744893Z - torch/some_cool_file.py:4:17: error: Incompatible types in assignment (expression has type "float", variable has type "str")  [assignment]
2021-10-25T06:54:38.1746765Z - caffe2/some_cool_file.py:3:17: error: Incompatible types in assignment (expression has type "None", variable has type "str")  [assignment]

This comment was automatically generated by Dr. CI.

@alanwaketan alanwaketan requested a review from wconstab October 25, 2021 06:45
@alanwaketan
Collaborator Author

Thanks @wconstab for approving the PR. Merged.
