
Conversation

@hlu1 (Contributor) commented Oct 20, 2021

Summary: The reported ratio of 'out' variant nodes to total nodes is now 100% for all models, which is obviously incorrect.

Differential Revision: D31783028

fbshipit-source-id: 2e6b38aef0467fd72d017eabefd53f245861868b
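The summary above describes a reporting bug: the "out variant nodes / total nodes" metric prints 100% for every model. As a purely illustrative sketch (the struct and function names below are hypothetical and not the actual PyTorch static runtime code), this is the kind of counting mistake that produces such a constant ratio, shown next to a corrected version:

    // Hypothetical illustration only; not the code changed by this PR.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct NodeInfo {
      bool has_out_variant;  // whether the op was replaced by its out= variant
    };

    // Buggy version: the total is only incremented in the same branch that
    // counts out-variant nodes, so the ratio is always 1.0 (100%).
    double out_variant_ratio_buggy(const std::vector<NodeInfo>& nodes) {
      std::size_t out_nodes = 0;
      std::size_t total_nodes = 0;
      for (const auto& n : nodes) {
        if (n.has_out_variant) {
          ++out_nodes;
          ++total_nodes;  // bug: nodes without an out variant are never counted
        }
      }
      return total_nodes == 0 ? 0.0
                              : static_cast<double>(out_nodes) / total_nodes;
    }

    // Fixed version: every node contributes to the denominator.
    double out_variant_ratio_fixed(const std::vector<NodeInfo>& nodes) {
      std::size_t out_nodes = 0;
      for (const auto& n : nodes) {
        if (n.has_out_variant) {
          ++out_nodes;
        }
      }
      return nodes.empty() ? 0.0
                           : static_cast<double>(out_nodes) / nodes.size();
    }

    int main() {
      std::vector<NodeInfo> model = {{true}, {false}, {true}, {false}};
      std::cout << "buggy: " << out_variant_ratio_buggy(model) << "\n";  // 1.0
      std::cout << "fixed: " << out_variant_ratio_fixed(model) << "\n";  // 0.5
      return 0;
    }

With the corrected denominator, models that still contain ops without out variants report a ratio below 100%, which matches the behavior the summary says was lost.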
@pytorch-probot

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/hlu1/pytorch/blob/79f0d321233300a2787f39a3c66552e3306d341c/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Each row lists the workflow, its ciflow labels (bold = enabled), and its trigger status.
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-dynamic ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-py3-clang5-mobile-code-analysis ciflow/all, ciflow/linux, ciflow/mobile 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
puretorch-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot (Contributor) commented Oct 20, 2021


💊 CI failures summary and remediations

As of commit 79f0d32 (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



❄️ 2 failures tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_test (1/2)

Step: "Test" (full log | diagnosis details | 🔁 rerun) ❄️

Oct 20 02:30:24 unknown file: Failure
Oct 20 02:30:24 
Oct 20 02:30:24 [----------] 2 tests from ScriptProfileTest
Oct 20 02:30:24 [ RUN      ] ScriptProfileTest.Basic
Oct 20 02:30:24 [       OK ] ScriptProfileTest.Basic (0 ms)
Oct 20 02:30:24 [ RUN      ] ScriptProfileTest.CallingOrder
Oct 20 02:30:24 [       OK ] ScriptProfileTest.CallingOrder (2 ms)
Oct 20 02:30:24 [----------] 2 tests from ScriptProfileTest (2 ms total)
Oct 20 02:30:24 
Oct 20 02:30:24 [----------] 1 test from ShapeAnalysisTest
Oct 20 02:30:24 [ RUN      ] ShapeAnalysisTest.DynamicShapesFusion
Oct 20 02:30:24 unknown file: Failure
Oct 20 02:30:24 C++ exception with description "tuple->elements().at(0).isInt()INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/test/cpp/jit/test_shape_analysis.cpp":229, please report a bug to PyTorch. 
Oct 20 02:30:24 Exception raised from TestBody at /var/lib/jenkins/workspace/test/cpp/jit/test_shape_analysis.cpp:229 (most recent call first):
Oct 20 02:30:24 frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f86cc185229 in /opt/conda/lib/python3.6/site-packages/torch/bin/libc10.so)
Oct 20 02:30:24 frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0xc5 (0x7f86cc181955 in /opt/conda/lib/python3.6/site-packages/torch/bin/libc10.so)
Oct 20 02:30:24 frame #2: torch::jit::ShapeAnalysisTest_DynamicShapesFusion_Test::TestBody() + 0x378f (0x68298f in /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit)
Oct 20 02:30:24 frame #3: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x43 (0x6b8023 in /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit)
Oct 20 02:30:24 frame #4: /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit() [0x6a6e4c]
Oct 20 02:30:24 frame #5: /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit() [0x6a70ea]
Oct 20 02:30:24 frame #6: /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit() [0x6a793d]
Oct 20 02:30:24 frame #7: testing::internal::UnitTestImpl::RunAllTests() + 0xe1e (0x6b140e in /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit)

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (default, 2, 2, linux.2xlarge) (2/2)

Step: "Test" (full log | diagnosis details | 🔁 rerun) ❄️

2021-10-20T01:46:45.7563409Z unknown file: Failure
2021-10-20T01:46:45.7142196Z 
2021-10-20T01:46:45.7142647Z [----------] 2 tests from ScriptProfileTest
2021-10-20T01:46:45.7143281Z [ RUN      ] ScriptProfileTest.Basic
2021-10-20T01:46:45.7143938Z [       OK ] ScriptProfileTest.Basic (0 ms)
2021-10-20T01:46:45.7144658Z [ RUN      ] ScriptProfileTest.CallingOrder
2021-10-20T01:46:45.7145500Z [       OK ] ScriptProfileTest.CallingOrder (1 ms)
2021-10-20T01:46:45.7146242Z [----------] 2 tests from ScriptProfileTest (2 ms total)
2021-10-20T01:46:45.7146577Z 
2021-10-20T01:46:45.7147041Z [----------] 1 test from ShapeAnalysisTest
2021-10-20T01:46:45.7147822Z [ RUN      ] ShapeAnalysisTest.DynamicShapesFusion
2021-10-20T01:46:45.7563409Z unknown file: Failure
2021-10-20T01:46:45.7564614Z C++ exception with description "tuple->elements().at(0).isInt()INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/test/cpp/jit/test_shape_analysis.cpp":229, please report a bug to PyTorch. 
2021-10-20T01:46:45.7565716Z Exception raised from TestBody at /var/lib/jenkins/workspace/test/cpp/jit/test_shape_analysis.cpp:229 (most recent call first):
2021-10-20T01:46:45.7567534Z frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f72b16a4229 in /opt/conda/lib/python3.6/site-packages/torch/bin/libc10.so)
2021-10-20T01:46:45.7568920Z frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0xc5 (0x7f72b16a0955 in /opt/conda/lib/python3.6/site-packages/torch/bin/libc10.so)
2021-10-20T01:46:45.7570328Z frame #2: torch::jit::ShapeAnalysisTest_DynamicShapesFusion_Test::TestBody() + 0x378f (0x68298f in /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit)
2021-10-20T01:46:45.7571988Z frame #3: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x43 (0x6b8023 in /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit)
2021-10-20T01:46:45.7573218Z frame #4: /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit() [0x6a6e4c]
2021-10-20T01:46:45.7573944Z frame #5: /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit() [0x6a70ea]
2021-10-20T01:46:45.7574667Z frame #6: /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit() [0x6a793d]
2021-10-20T01:46:45.7575673Z frame #7: testing::internal::UnitTestImpl::RunAllTests() + 0xe1e (0x6b140e in /opt/conda/lib/python3.6/site-packages/torch/bin/test_jit)

This comment was automatically generated by Dr. CI.

@facebook-github-bot added the oncall: jit (Add this issue/PR to JIT oncall triage queue) and fb-exported labels on Oct 20, 2021
@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D31783028

facebook-github-bot pushed a commit that referenced this pull request Oct 20, 2021
…er (#66917)

Summary:
Pull Request resolved: #66917

The reported ratio of 'out' variant nodes to total nodes is now 100% for all models, which is obviously incorrect.

Reviewed By: swolchok, mikeiovine

Differential Revision: D31783028

fbshipit-source-id: e0bc2c6614aa3c3a235283c9125de1b339f42585

Labels

cla signed, fb-exported, oncall: jit (Add this issue/PR to JIT oncall triage queue)


2 participants