
Conversation

@malfet (Contributor) commented on Sep 2, 2021

The ParallelTBB move is separated out into #64193, as it requires some further investigation.

@pytorch-probot bot commented on Sep 2, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/malfet/pytorch/blob/d15874fa470da1297b46a9c4c9775d95549b5841/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default, ciflow/linux

Triggered Workflows

| Workflow | Labels (bold = enabled on this PR) | Status |
| --- | --- | --- |
| libtorch-linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, **ciflow/linux** | ✅ triggered |
| libtorch-linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, **ciflow/linux** | ✅ triggered |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, **ciflow/linux**, ciflow/slow | ✅ triggered |
| linux-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, **ciflow/default**, **ciflow/linux**, ciflow/noarch, ciflow/xla | ✅ triggered |
| linux-bionic-py3.8-gcc9-coverage | ciflow/all, ciflow/coverage, ciflow/cpu, **ciflow/default**, **ciflow/linux** | ✅ triggered |
| linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, **ciflow/linux**, ciflow/slow | ✅ triggered |
| linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, **ciflow/default**, **ciflow/linux** | ✅ triggered |
| linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, **ciflow/default**, **ciflow/linux** | ✅ triggered |
| linux-xenial-py3.6-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, **ciflow/default**, **ciflow/linux** | ✅ triggered |
| parallelnative-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, **ciflow/linux** | ✅ triggered |
| periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, **ciflow/linux**, ciflow/scheduled | ✅ triggered |
| periodic-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, **ciflow/linux**, ciflow/scheduled | ✅ triggered |
| puretorch-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, **ciflow/linux** | ✅ triggered |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, **ciflow/default**, ciflow/win | ✅ triggered |
| win-vs2019-cuda10.1-py3 | ciflow/all, ciflow/cuda, **ciflow/default**, ciflow/win | ✅ triggered |

Skipped Workflows

| Workflow | Labels (bold = enabled on this PR) | Status |
| --- | --- | --- |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/win | 🚫 skipped |
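
How the table is derived, for readers unfamiliar with CI Flow: each workflow declares a set of ciflow/* labels in the generated ruleset file linked above, and a workflow is triggered exactly when that set overlaps the PR's current ciflow labels (here ciflow/default and ciflow/linux). Below is a minimal sketch of that matching rule; the variable names and the two-entry ruleset excerpt are illustrative, not the actual probot implementation:

```python
# Minimal sketch of the CI Flow matching rule inferred from the table above;
# names and the ruleset excerpt are illustrative, not the actual probot code.
pr_labels = {"ciflow/default", "ciflow/linux"}

# Excerpt of a workflow -> label-set mapping, as in generated-ciflow-ruleset.json
ruleset = {
    "parallelnative-linux-xenial-py3.6-gcc5.4": {"ciflow/all", "ciflow/cpu", "ciflow/linux"},
    "win-vs2019-cuda11.3-py3": {"ciflow/all", "ciflow/cuda", "ciflow/win"},
}

def status(workflow_labels: set) -> str:
    # A workflow triggers when it shares at least one label with the PR.
    return "triggered" if workflow_labels & pr_labels else "skipped"

for name, labels in sorted(ruleset.items()):
    print(f"{name}: {status(labels)}")
# parallelnative-linux-xenial-py3.6-gcc5.4: triggered  (via ciflow/linux)
# win-vs2019-cuda11.3-py3: skipped  (no overlap with the PR's labels)
```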

You can add a comment to the PR and tag @pytorchbot with the following commands:

```
# ciflow rerun; "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow
```

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot (Contributor) commented on Sep 2, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit d15874f (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your pull requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@facebook-github-bot (Contributor) commented

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@malfet (Contributor, Author) commented on Sep 2, 2021

@pytorchbot ciflow rerun -l ciflow/linux

pytorch-probot bot assigned pytorchbot and unassigned pytorchbot on Sep 2, 2021
@janeyx99 (Contributor) left a comment:

I may add logic to remove the confusion with on_pull_request

Suggested change (flake8):

```diff
-#CIWorkflow(
+# CIWorkflow(
```

Suggested change (flake8):

```diff
-#),
+# ),
```
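
Both suggested changes fix the same lint complaint: the flake8 failures here are presumably pycodestyle's E265 ("block comment should start with '# '"), which fires on commented-out code written without a space after the hash. A minimal illustration:

```python
# Minimal illustration of the rule behind both suggestions above. flake8's
# E265 check ("block comment should start with '# '") fires when a block
# comment has no space after the leading hash, which commonly happens when
# code is commented out verbatim:

#CIWorkflow(      # flagged by flake8: E265
# CIWorkflow(     # clean: a space follows the hash
#),               # flagged by flake8: E265
# ),              # clean
```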

@codecov bot commented on Sep 2, 2021

Codecov Report

Merging #64452 (add55c5) into master (c19bd05) will decrease coverage by 0.07%.
The diff coverage is n/a.

❗ Current head add55c5 differs from the pull request's most recent head 1eb15ef. Consider uploading reports for commit 1eb15ef to get more accurate results.

```diff
@@            Coverage Diff             @@
##           master   #64452      +/-   ##
==========================================
- Coverage   66.77%   66.70%   -0.08%
==========================================
  Files         707      708       +1
  Lines       92316    92305      -11
==========================================
- Hits        61642    61568      -74
- Misses      30674    30737      +63
```
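
As a sanity check, the headline percentages follow directly from the Hits and Lines rows above; here is an illustrative re-derivation, not Codecov's own tooling. The exact delta is about -0.072%, so the summary's "0.07%" and the table's "-0.08%" differ only in Codecov's rounding:

```python
# Re-derive the coverage percentages from the Hits/Lines rows above.
base_hits, base_lines = 61642, 92316   # master @ c19bd05
head_hits, head_lines = 61568, 92305   # this PR @ add55c5

base = 100 * base_hits / base_lines    # ~66.77%
head = 100 * head_hits / head_lines    # ~66.70%
print(f"base={base:.2f}%  head={head:.2f}%  delta={head - base:.2f}%")
# base=66.77%  head=66.70%  delta=-0.07%
```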

@malfet force-pushed the malfet/move-parallelnative-to-GHA branch from add55c5 to 1eb15ef on September 3, 2021 at 17:54
pytorch-probot bot assigned pytorchbot and unassigned pytorchbot on Sep 3, 2021
@facebook-github-bot (Contributor) commented

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@malfet force-pushed the malfet/move-parallelnative-to-GHA branch from 1eb15ef to b97136d on September 6, 2021 at 17:10
pytorch-probot bot assigned pytorchbot and unassigned pytorchbot on Sep 6, 2021
@facebook-github-bot (Contributor) commented

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented

@malfet merged this pull request in 571a2be.

@malfet deleted the malfet/move-parallelnative-to-GHA branch on September 7, 2021 at 13:44
