
[Don't Review] Test for #49639 #49651

Closed · wants to merge 1 commit

Conversation

@wayi1 wayi1 (Contributor) commented Dec 20, 2020

Test for #49639

Summary:
Test for #49639

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:
@facebook-github-bot facebook-github-bot (Contributor) commented Dec 20, 2020

💊 CI failures summary and remediations

As of commit 8063b2c (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_windows_vs2019_py36_cuda10.1_on_cpu_test1 (1/1)

Step: "Test"

ls: cannot access '/c/Users/circleci/project/build/win_tmp/ci_scripts/*': No such file or directory
+ TEST_DIR_WIN='C:\Users\circleci\project\test'
+ export PYTORCH_FINAL_PACKAGE_DIR=/c/users/circleci/workspace/build-results
+ PYTORCH_FINAL_PACKAGE_DIR=/c/users/circleci/workspace/build-results
++ cygpath -w /c/users/circleci/workspace/build-results
+ export 'PYTORCH_FINAL_PACKAGE_DIR_WIN=C:\users\circleci\workspace\build-results'
+ PYTORCH_FINAL_PACKAGE_DIR_WIN='C:\users\circleci\workspace\build-results'
+ mkdir -p /c/Users/circleci/project/build/win_tmp/build/torch
+ CI_SCRIPTS_DIR=/c/Users/circleci/project/build/win_tmp/ci_scripts
+ mkdir -p /c/Users/circleci/project/build/win_tmp/ci_scripts
++ ls '/c/Users/circleci/project/build/win_tmp/ci_scripts/*'
ls: cannot access '/c/Users/circleci/project/build/win_tmp/ci_scripts/*': No such file or directory
+ '[' -n '' ']'
+ export SCRIPT_HELPERS_DIR=/c/Users/circleci/project/.jenkins/pytorch/win-test-helpers
+ SCRIPT_HELPERS_DIR=/c/Users/circleci/project/.jenkins/pytorch/win-test-helpers
+ '[' -n https://github.com/pytorch/pytorch/pull/49651 ']'
+ DETERMINE_FROM=/c/Users/circleci/project/build/win_tmp/determine_from
+ file_diff_from_base /c/Users/circleci/project/build/win_tmp/determine_from
+ set +e
+ git fetch origin master --quiet
Warning: Permanently added the RSA host key for IP address '140.82.113.3' to the list of known hosts.
+ set -e

This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

This comment has been revised 2 times.

@wayi1 wayi1 (Contributor, Author) commented Dec 20, 2020

Verified that pytorch_linux_xenial_cuda10_2_cudnn7_py3_multigpu_test (which failed on #49417) passes on CI:
https://app.circleci.com/pipelines/github/pytorch/pytorch/253644/workflows/c1c02b70-0877-40e6-8b4c-61f60f6b70ed/jobs/9768079

@wayi1 wayi1 closed this Dec 20, 2020
facebook-github-bot pushed a commit that referenced this pull request Dec 20, 2020
…erSGD (#49639)

Summary:
Pull Request resolved: #49639

Resubmit #49417 with a fix for distributed_test.

The previous submission broke a multi-GPU test that runs on 4 GPUs. Since this test only runs on master, the breakage couldn't be detected before submission.

The real diff is:
4ca1014

This time I have verified that the previously failing test `pytorch_linux_xenial_cuda10_2_cudnn7_py3_multigpu_test` passes after creating a PR (#49651) from a separate branch:
https://app.circleci.com/pipelines/github/pytorch/pytorch/253644/workflows/c1c02b70-0877-40e6-8b4c-61f60f6b70ed/jobs/9768079

ghstack-source-id: 118969912

Test Plan: buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: mrshenli

Differential Revision: D25654961

fbshipit-source-id: 2a45c8ceb9bdb54ff7309a8b66ec87e913e0150e
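
For context, the Test Plan above exercises PyTorch's PowerSGD gradient-compression DDP communication hook. Below is a minimal sketch of how such a hook is typically registered on a `DistributedDataParallel` model, assuming the `torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook` module from this stack is available; it is an illustration only, not the exact code or test exercised by this PR.

```python
# Minimal sketch (assumption: powerSGD_hook is available in this build of PyTorch).
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# Assumes rank/world-size environment variables are set by the launcher.
dist.init_process_group(backend="nccl")
rank = dist.get_rank()

model = DDP(torch.nn.Linear(1024, 1024).cuda(rank), device_ids=[rank])

# PowerSGD compresses gradients via a low-rank approximation during allreduce;
# matrix_approximation_rank trades off accuracy against communication volume.
state = powerSGD.PowerSGDState(process_group=None, matrix_approximation_rank=1)
model.register_comm_hook(state, powerSGD.powerSGD_hook)

# Training then proceeds as usual; the hook runs transparently on each backward pass.
```

The buck target named in the Test Plan (`distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook`) runs the multi-GPU test that covers this code path.
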
hwangdeyu pushed a commit to hwangdeyu/pytorch that referenced this pull request Jan 6, 2021
…erSGD (pytorch#49639)

(Same commit message as above.)
@facebook-github-bot facebook-github-bot deleted the ci-all/wayi branch January 27, 2021 18:26
Labels: cla signed, oncall: distributed

2 participants