
Make pytorch clang-tidy clean #60649

Closed · wants to merge 1 commit

Conversation

@1ntEgr8 (Contributor) commented Jun 24, 2021

This PR suppresses clang-tidy warnings in the codebase (for now) so that we can re-enable clang-tidy checks on master.

I ran this script to add the NOLINTNEXTLINE comments (on a devserver):

python3 setup.py develop

# Uses same script that's run on CI and adds the -j (parallel), -s (add comments), -k (continue if diagnostic errors are found) options
python3 tools/clang_tidy.py \
  -j \
  -s \
  -k \
  -v \
  --paths torch/csrc/ \
  -g"-torch/csrc/jit/passes/onnx/helper.cpp" \
  -g"-torch/csrc/jit/passes/onnx/shape_type_inference.cpp" \
  -g"-torch/csrc/jit/serialization/onnx.cpp" \
  -g"-torch/csrc/jit/serialization/export.cpp" \
  -g"-torch/csrc/jit/serialization/import.cpp" \
  -g"-torch/csrc/jit/serialization/import_legacy.cpp" \
  -g"-torch/csrc/onnx/init.cpp" \
  -g"-torch/csrc/cuda/nccl.*" \
  -g"-torch/csrc/cuda/python_nccl.cpp" \
  -g"-torch/csrc/autograd/FunctionsManual.cpp" \
  -g"-torch/csrc/generic/*.cpp" \
  -g"-torch/csrc/jit/codegen/cuda/runtime/*" \
  -g"-torch/csrc/deploy/interpreter/interpreter.cpp" \
  -g"-torch/csrc/deploy/interpreter/interpreter.h" \
  -g"-torch/csrc/deploy/interpreter/interpreter_impl.h" \
  -g"-torch/csrc/deploy/interpreter/test_main.cpp"

Test plan:
Verified changes by re-running the script (without the -s option) and seeing no warnings/errors.

@1ntEgr8 1ntEgr8 requested review from malfet, driazati and a team June 24, 2021 14:23
@facebook-github-bot facebook-github-bot added oncall: jit Add this issue/PR to JIT oncall triage queue cla signed oncall: distributed Add this issue/PR to distributed oncall triage queue labels Jun 24, 2021
@facebook-github-bot (Contributor) commented Jun 24, 2021

💊 CI failures summary and remediations

As of commit 52d4671 (more details on the Dr. CI page and at hud.pytorch.org/pr/60649):



🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build Lint / clang-tidy (1/1)

Step: "Run clang-tidy"

2021-06-30T23:03:12.2052671Z stderr:
2021-06-30T23:03:12.2053006Z 6898 warnings and 1 error generated.
2021-06-30T23:03:12.2053575Z Error while processing /__w/pytorch/pytorch/torch/csrc/jit/runtime/register_prim_ops.cpp.
2021-06-30T23:03:12.2054378Z Suppressed 7011 warnings (6895 in non-user code, 3 due to line filter, 113 NOLINT).
2021-06-30T23:03:12.2055345Z Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
2021-06-30T23:03:12.2056043Z Found compiler error(s).
2021-06-30T23:03:12.2056270Z 
2021-06-30T23:03:12.2056485Z <<<
2021-06-30T23:03:12.2056715Z >>>
2021-06-30T23:03:12.2056947Z stdout:
2021-06-30T23:03:12.2057641Z ../aten/src/ATen/ParallelOpenMP.h:11:10: error: 'omp.h' file not found [clang-diagnostic-error]
2021-06-30T23:03:12.2058220Z #include <omp.h>
2021-06-30T23:03:12.2058491Z          ^
2021-06-30T23:03:12.2058657Z 
2021-06-30T23:03:12.2058889Z stderr:
2021-06-30T23:03:12.2059227Z 6901 warnings and 1 error generated.
2021-06-30T23:03:12.2059827Z Error while processing /__w/pytorch/pytorch/torch/csrc/jit/runtime/register_prim_ops_fulljit.cpp.
2021-06-30T23:03:12.2060659Z Suppressed 6976 warnings (6895 in non-user code, 6 due to line filter, 75 NOLINT).
2021-06-30T23:03:12.2061627Z Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
2021-06-30T23:03:12.2062327Z Found compiler error(s).
2021-06-30T23:03:12.2062554Z 

❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_linux_bionic_py3_6_clang9_noarch_test (1/1)

Step: "Run tests" ❄️

Jul 01 00:43:16 RuntimeError: Process 1 terminated or timed out after 105.0793685913086 seconds
Jul 01 00:43:16 ======================================================================
Jul 01 00:43:16 ERROR [105.113s]: test_ddp_invalid_comm_hook_init (__main__.DistributedDataParallelTest)
Jul 01 00:43:16 ----------------------------------------------------------------------
Jul 01 00:43:16 Traceback (most recent call last):
Jul 01 00:43:16   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 399, in wrapper
Jul 01 00:43:16     self._join_processes(fn)
Jul 01 00:43:16   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 606, in _join_processes
Jul 01 00:43:16     self._check_return_codes(elapsed_time)
Jul 01 00:43:16   File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_distributed.py", line 654, in _check_return_codes
Jul 01 00:43:16     raise RuntimeError('Process {} terminated or timed out after {} seconds'.format(i, elapsed_time))
Jul 01 00:43:16 RuntimeError: Process 1 terminated or timed out after 105.0793685913086 seconds
Jul 01 00:43:16 
Jul 01 00:43:16 ----------------------------------------------------------------------
Jul 01 00:43:16 Ran 83 tests in 131.452s
Jul 01 00:43:16 
Jul 01 00:43:16 FAILED (errors=1, skipped=31)
Jul 01 00:43:16 
Jul 01 00:43:16 Generating XML reports...
Jul 01 00:43:16 Generated XML report: test-reports/python-unittest/distributed.test_c10d_gloo/TEST-CommTest-20210701004104.xml
Jul 01 00:43:16 Generated XML report: test-reports/python-unittest/distributed.test_c10d_gloo/TEST-DistributedDataParallelTest-20210701004104.xml
Jul 01 00:43:16 Generated XML report: test-reports/python-unittest/distributed.test_c10d_gloo/TEST-ProcessGroupGlooTest-20210701004104.xml

This comment was automatically generated by Dr. CI.

@janeyx99 (Contributor) commented:
From the lint job, it looks like there are still some clang-tidy errors that should be fixed, like in torch/csrc/deploy/interpreter/interpreter_impl.cpp

@driazati (Contributor) left a comment:

lgtm, some minor formatting weirdness but I don't want this to get stuck rebasing over and over

torch/csrc/autograd/forward_grad.h (review comment, resolved)
@facebook-github-bot (Contributor) commented:
@1ntEgr8 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Summary:
Fixes pytorch#60187

Pull Request resolved: pytorch#60842

Reviewed By: gdankel

Differential Revision: D29498498

Pulled By: malfet

fbshipit-source-id: 78e1b25a2e6bdfd3ba0c988d023c7a7f79a22cf4
@facebook-github-bot (Contributor) commented:
@1ntEgr8 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented:
@1ntEgr8 merged this pull request in 6ecc1a4.

@malfet (Contributor) commented Jul 2, 2021:

Hmm, this PR introduced a lot of false-positive clang-tidy warnings, for example:

```cpp
  // NOLINTNEXTLINE(cppcoreguidelines-init-variables)
  std::unique_ptr<CudaIPCSentData> sent_data(
      static_cast<CudaIPCSentData*>(ptr));
```

jjsjann123 pushed a commit to jjsjann123/nvfuser that referenced this pull request Oct 29, 2022
Summary:
(The commit message repeats the PR description and script above.)

Pull Request resolved: pytorch/pytorch#60649

Test Plan: Verified changes by re-running the script (without the `-s` option) and seeing no warnings/errors.

Reviewed By: walterddr, janeyx99

Differential Revision: D29504258

Pulled By: 1ntEgr8

fbshipit-source-id: 78310b30ee8213b73ddb4771ad874665323e7a4e
jjsjann123 pushed a commit to jjsjann123/nvfuser that referenced this pull request Nov 10, 2022
(Commit message and metadata identical to the entry above.)