
Remove old references to 9.2 in documentation #65059

Closed

janeyx99 wants to merge 1 commit

Conversation

janeyx99
Contributor

Removes references to 9.2 in the .rst docs and README.md, and in comments in the Dockerfile.

@pytorch-probot

pytorch-probot bot commented Sep 15, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/janeyx99/pytorch/blob/640b2170895ce527d6c6c47aa2b61e6390aca84f/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default,ciflow/win

Triggered Workflows

| Workflow | Labels (bold = enabled on this PR) | Status |
| --- | --- | --- |
| linux-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, **ciflow/default**, ciflow/linux, ciflow/noarch, ciflow/xla | ✅ triggered |
| linux-bionic-py3.8-gcc9-coverage | ciflow/all, ciflow/coverage, ciflow/cpu, **ciflow/default**, ciflow/linux | ✅ triggered |
| linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, **ciflow/default**, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, **ciflow/default**, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, **ciflow/default**, ciflow/linux | ✅ triggered |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, **ciflow/win** | ✅ triggered |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, **ciflow/default**, **ciflow/win** | ✅ triggered |
| win-vs2019-cuda10.2-py3 | ciflow/all, ciflow/cuda, **ciflow/win** | ✅ triggered |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, **ciflow/default**, **ciflow/win** | ✅ triggered |

Skipped Workflows

| Workflow | Labels | Status |
| --- | --- | --- |
| libtorch-linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| libtorch-linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| parallelnative-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| paralleltbb-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| puretorch-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
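The rule the table reflects is simple set intersection: a workflow runs if at least one of the PR's ciflow labels appears in that workflow's label set. Here is a minimal illustrative sketch of that rule, using a hypothetical `ruleset` dict with three rows copied from the table above (the actual schema of generated-ciflow-ruleset.json may differ):

```python
# Sketch of the label-matching rule implied by the table above.
# `ruleset` is a hypothetical simplification: workflow name -> its ciflow labels.
ruleset = {
    "linux-xenial-py3.6-gcc5.4": {"ciflow/all", "ciflow/cpu", "ciflow/default", "ciflow/linux"},
    "win-vs2019-cuda10.2-py3": {"ciflow/all", "ciflow/cuda", "ciflow/win"},
    "linux-xenial-cuda10.2-py3.6-gcc7": {"ciflow/all", "ciflow/cuda", "ciflow/linux", "ciflow/slow"},
}

pr_labels = {"ciflow/default", "ciflow/win"}  # this PR's ciflow labels

for workflow, labels in sorted(ruleset.items()):
    # A non-empty intersection means the workflow is triggered.
    status = "triggered" if labels & pr_labels else "skipped"
    print(f"{workflow}: {status}")
# linux-xenial-cuda10.2-py3.6-gcc7: skipped
# linux-xenial-py3.6-gcc5.4: triggered
# win-vs2019-cuda10.2-py3: triggered
```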

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor

facebook-github-bot commented Sep 15, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 640b217 (more details on the Dr. CI page):


  • 4/4 failures introduced in this PR

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-bionic-py3.8-gcc9-coverage / test (distributed, 1, 1, linux.2xlarge) (1/3)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2021-09-16T16:03:33.9418764Z test_udf_remote_...yUniqueId(created_on=0, local_id=0) to be created.
2021-09-16T16:02:52.8459285Z frame #15: <unknown function> + 0x486ea (0x7f489fc206ea in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
2021-09-16T16:02:52.8460999Z frame #16: <unknown function> + 0xc9039 (0x7f489fb2c039 in /opt/conda/lib/libstdc++.so.6)
2021-09-16T16:02:52.8462865Z frame #17: <unknown function> + 0x76db (0x7f48c377e6db in /lib/x86_64-linux-gnu/libpthread.so.0)
2021-09-16T16:02:52.8464571Z frame #18: clone + 0x3f (0x7f48c34a771f in /lib/x86_64-linux-gnu/libc.so.6)
2021-09-16T16:02:52.8465310Z 
2021-09-16T16:02:53.3085478Z ok (3.823s)
2021-09-16T16:03:08.5566769Z   test_rpc_builtin_timeout (__main__.FaultyFaultyAgentRpcTest) ... ok (15.248s)
2021-09-16T16:03:17.8934453Z   test_rpc_script_timeout (__main__.FaultyFaultyAgentRpcTest) ... ok (9.337s)
2021-09-16T16:03:21.7172617Z   test_rref_to_here_timeout (__main__.FaultyFaultyAgentRpcTest) ... ok (3.824s)
2021-09-16T16:03:29.5479910Z   test_udf_remote_message_delay_timeout (__main__.FaultyFaultyAgentRpcTest) ... ok (7.831s)
2021-09-16T16:03:33.9418764Z   test_udf_remote_message_delay_timeout_to_self (__main__.FaultyFaultyAgentRpcTest) ... [E request_callback_no_python.cpp:559] Received error while processing request type 261: falseINTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/torch/csrc/distributed/rpc/rref_context.cpp":385, please report a bug to PyTorch. Expected OwnerRRef with id GloballyUniqueId(created_on=0, local_id=0) to be created.
2021-09-16T16:03:33.9421690Z Exception raised from getOwnerRRef at /var/lib/jenkins/workspace/torch/csrc/distributed/rpc/rref_context.cpp:385 (most recent call first):
2021-09-16T16:03:33.9423816Z frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x59 (0x7fb17271cf59 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
2021-09-16T16:03:33.9425965Z frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xa3 (0x7fb1726f3b34 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
2021-09-16T16:03:33.9428440Z frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x61 (0x7fb17271a341 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
2021-09-16T16:03:33.9430608Z frame #3: torch::distributed::rpc::RRefContext::getOwnerRRef(torch::distributed::rpc::GloballyUniqueId const&, bool) + 0x628 (0x7fb17bcae608 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
2021-09-16T16:03:33.9433406Z frame #4: torch::distributed::rpc::RequestCallbackNoPython::assignOwnerRRef(torch::distributed::rpc::GloballyUniqueId const&, torch::distributed::rpc::GloballyUniqueId const&, c10::intrusive_ptr<c10::ivalue::Future, c10::detail::intrusive_target_default_null_type<c10::ivalue::Future> >) const + 0x8c (0x7fb17bc94e6c in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
2021-09-16T16:03:33.9436329Z frame #5: torch::distributed::rpc::RequestCallbackImpl::processPythonRemoteCall(torch::distributed::rpc::RpcCommandBase&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0xf5 (0x7fb18c609e15 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
2021-09-16T16:03:33.9439329Z frame #6: torch::distributed::rpc::RequestCallbackNoPython::processRpc(torch::distributed::rpc::RpcCommandBase&, torch::distributed::rpc::MessageType const&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0x1f0 (0x7fb17bc9b9f0 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
2021-09-16T16:03:33.9442194Z frame #7: torch::distributed::rpc::RequestCallbackImpl::processRpcWithErrors(torch::distributed::rpc::RpcCommandBase&, torch::distributed::rpc::MessageType const&, std::vector<c10::Stream, std::allocator<c10::Stream> >) const + 0x60 (0x7fb18c6096e0 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
2021-09-16T16:03:33.9444151Z frame #8: <unknown function> + 0x93069a0 (0x7fb17bc909a0 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 2, 2, linux.8xlarge.nvidia.gpu) (2/3)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2021-09-16T16:04:44.0488698Z Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
2021-09-16T16:04:43.7198290Z Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
2021-09-16T16:04:43.7235487Z ok (0.168s)
2021-09-16T16:04:43.8459973Z   test_cond_cuda_float32 (__main__.TestLinalgCUDA) ... 
2021-09-16T16:04:43.8460743Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-09-16T16:04:43.8461460Z 
2021-09-16T16:04:43.8462038Z Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
2021-09-16T16:04:43.8497554Z ok (0.126s)
2021-09-16T16:04:44.0486596Z   test_cond_cuda_float64 (__main__.TestLinalgCUDA) ... 
2021-09-16T16:04:44.0487651Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-09-16T16:04:44.0488146Z 
2021-09-16T16:04:44.0488698Z Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
2021-09-16T16:04:44.0524044Z ok (0.203s)
2021-09-16T16:04:44.1187215Z   test_cond_errors_and_warnings_cuda_complex128 (__main__.TestLinalgCUDA) ... ok (0.066s)
2021-09-16T16:04:44.1848177Z   test_cond_errors_and_warnings_cuda_complex64 (__main__.TestLinalgCUDA) ... ok (0.066s)
2021-09-16T16:04:44.2500244Z   test_cond_errors_and_warnings_cuda_float32 (__main__.TestLinalgCUDA) ... ok (0.065s)
2021-09-16T16:04:44.3169657Z   test_cond_errors_and_warnings_cuda_float64 (__main__.TestLinalgCUDA) ... ok (0.067s)
2021-09-16T16:04:44.3181803Z   test_cross_cuda_float32 (__main__.TestLinalgCUDA) ... skip (0.001s)
2021-09-16T16:04:44.3493322Z   test_cross_errors_cuda (__main__.TestLinalgCUDA) ... ok (0.031s)
2021-09-16T16:04:44.3505974Z   test_cross_with_and_without_dim_cuda_float32 (__main__.TestLinalgCUDA) ... skip (0.001s)
2021-09-16T16:04:44.4032691Z   test_det_cuda_complex128 (__main__.TestLinalgCUDA) ... ok (0.053s)
2021-09-16T16:04:44.4341341Z   test_det_cuda_float64 (__main__.TestLinalgCUDA) ... ok (0.031s)

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 1, 2, linux.8xlarge.nvidia.gpu) (3/3)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2021-09-16T15:44:09.2666993Z CONTINUE_THROUGH_ERROR: false
  "cla signed",
  "ciflow/default",
  "ciflow/win"
]
2021-09-16T15:44:09.2662574Z   DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7:74e757e8b0cf750d2f91db6aa4c29640abce32ea
2021-09-16T15:44:09.2664355Z   JOB_BASE_NAME: linux-xenial-cuda11.3-py3.6-gcc7-test
2021-09-16T15:44:09.2665118Z   TEST_CONFIG: default
2021-09-16T15:44:09.2665562Z   SHARD_NUMBER: 1
2021-09-16T15:44:09.2665963Z   NUM_TEST_SHARDS: 2
2021-09-16T15:44:09.2666462Z   PYTORCH_IGNORE_DISABLED_ISSUES: 
2021-09-16T15:44:09.2666993Z   CONTINUE_THROUGH_ERROR: false
2021-09-16T15:44:09.2667481Z   GPU_FLAG: --gpus all
2021-09-16T15:44:09.2667877Z   SHM_SIZE: 2g
2021-09-16T15:44:09.2668268Z   PR_NUMBER: 65059
2021-09-16T15:44:09.2668682Z ##[endgroup]
2021-09-16T15:44:32.4736131Z Processing ./dist/torch-1.10.0a0+git8f135be-cp36-cp36m-linux_x86_64.whl
2021-09-16T15:44:32.5137661Z Requirement already satisfied: dataclasses in /opt/conda/lib/python3.6/site-packages (from torch==1.10.0a0+git8f135be) (0.8)
2021-09-16T15:44:32.5143701Z Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.6/site-packages (from torch==1.10.0a0+git8f135be) (3.10.0.0)
2021-09-16T15:44:32.9284177Z Installing collected packages: torch
2021-09-16T15:44:42.9407541Z Successfully installed torch-1.10.0a0+git8f135be
2021-09-16T15:44:43.0419927Z ++++ dirname .jenkins/pytorch/common.sh

1 failure not recognized by patterns:

| Job | Step | Action |
| --- | --- | --- |
| GitHub Actions: linux-xenial-py3.6-gcc5.4 / build-docs (python) | Unknown | 🔁 rerun |

This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

Click here to manually regenerate this comment.

@facebook-github-bot
Contributor

@janeyx99 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


janeyx99 requested a review from a team on September 16, 2021 at 15:09
@codecov

codecov bot commented Sep 16, 2021

Codecov Report

Merging #65059 (640b217) into master (8800a8b) will decrease coverage by 4.46%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master   #65059      +/-   ##
==========================================
- Coverage   66.37%   61.91%   -4.47%     
==========================================
  Files         727      727              
  Lines       93571    93571              
==========================================
- Hits        62109    57931    -4178     
- Misses      31462    35640    +4178     
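As a quick check of the arithmetic above (a hypothetical snippet, not part of Codecov): coverage is hits divided by total lines, so the 4178 lost hits account for the full drop. The exact delta of about -4.465% also reconciles the two reported figures, since it truncates to the 4.46% in the summary line and rounds to the -4.47% in the table.

```python
# Hypothetical sanity check of the Codecov table: coverage = hits / lines.
lines = 93571
base_hits, head_hits = 62109, 57931

base = base_hits / lines   # ~0.663764 -> reported as 66.37%
head = head_hits / lines   # ~0.619113 -> reported as 61.91%
delta = head - base        # ~-0.044651 -> reported as -4.47%

print(f"base:  {base:.4%}")
print(f"head:  {head:.4%}")
print(f"delta: {delta:+.4%}")
```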

@facebook-github-bot
Contributor

@janeyx99 merged this pull request in 4c4c031.
