
[WIP] Nightly automated benchmark #4414

Merged: 43 commits merged into rapidsai:branch-22.04 on Feb 9, 2022

Conversation

viclafargue
Contributor

This PR contains the code enabling nightly automated benchmark runs for cuML.

@viclafargue viclafargue requested a review from a team as a code owner December 1, 2021 14:46
@github-actions github-actions bot added the Cython / Python Cython or Python issue label Dec 1, 2021
Member

@dantegd dantegd left a comment

Had a few initial comments

@caryr35 caryr35 added this to PR-WIP in v22.02 Release via automation Dec 8, 2021
@dantegd dantegd added improvement Improvement / enhancement to an existing function non-breaking Non-breaking change labels Jan 13, 2022
Member

@dantegd dantegd left a comment

Just have one question before we merge

@@ -97,7 +101,7 @@ def __init__(
         cpu_data_prep_hook=None,
         cuml_data_prep_hook=None,
         accuracy_function=None,
-        bench_func=fit,
+        bench_func=fit_transform,
Member

Question: Why change the default to fit_transform? What if I want to benchmark only training without inference?

Contributor Author

The idea was simply for the benchmark to cover both training and inference by default, so that regressions on the inference side won't be missed. It is, however, still possible to parameterize an AlgorithmPair with another function to benchmark. Please tell me if you would like to revert that change.
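
For reference, here is a minimal sketch of the parameterization described above. It assumes the AlgorithmPair constructor shown in the diff and a fit helper in cuml.benchmark.bench_helper_funcs with the usual (model, data) shape; it is illustrative only, not code from this PR.

# Illustrative sketch only: argument names are assumptions based on the diff above.
from sklearn.decomposition import PCA as skPCA

import cuml
from cuml.benchmark import bench_helper_funcs
from cuml.benchmark.algorithms import AlgorithmPair

# Benchmark training only by passing bench_func explicitly instead of relying on
# the default (fit_transform in this PR, fit before it).
pca_fit_only = AlgorithmPair(
    skPCA,                              # CPU implementation
    cuml.PCA,                           # GPU implementation
    shared_args={"n_components": 10},
    name="PCA",
    accepts_labels=False,
    bench_func=bench_helper_funcs.fit,
)

# A runner such as cuml.benchmark.runners.SpeedupComparisonRunner would then time
# pca_fit_only.run_cpu(...) and pca_fit_only.run_cuml(...) on generated data.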

Member

I think we want to revert that change and, if possible, divide training and inference into separate benchmarks. That will make triaging regressions easier and keep each benchmark as simple as possible while still being useful.
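
A hedged sketch of the split suggested here: two benchmark entries per algorithm, one for training and one focused on inference. transform_only is a hypothetical helper, and the assumption that bench_func receives the model and input data like the fit/fit_transform helpers do is mine, not the PR's.

# Illustrative sketch only: separate entries so training and inference regressions
# can be triaged independently.
from sklearn.decomposition import PCA as skPCA

import cuml
from cuml.benchmark import bench_helper_funcs
from cuml.benchmark.algorithms import AlgorithmPair


def transform_only(model, X, y=None):
    # Hypothetical bench_func focused on inference. The fit call is kept here
    # only to make the sketch self-contained; a real split would move fitting
    # to a setup hook so that only transform() falls inside the timed region.
    model.fit(X)
    return model.transform(X)


pca_train = AlgorithmPair(
    skPCA, cuml.PCA, shared_args={"n_components": 10},
    name="PCA-train", accepts_labels=False,
    bench_func=bench_helper_funcs.fit,   # training only
)
pca_infer = AlgorithmPair(
    skPCA, cuml.PCA, shared_args={"n_components": 10},
    name="PCA-infer", accepts_labels=False,
    bench_func=transform_only,           # inference-focused entry
)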

v22.02 Release automation moved this from PR-WIP to PR-Needs review Jan 13, 2022
@ajschmidt8 ajschmidt8 removed the request for review from a team January 18, 2022 19:23
@ajschmidt8
Member

Removing ops-codeowners from the required reviews since it doesn't seem there are any file changes that we're responsible for. Feel free to add us back if necessary.

@dantegd
Member

dantegd commented Jan 24, 2022

rerun tests

@viclafargue viclafargue changed the base branch from branch-22.02 to branch-22.04 January 28, 2022 16:30
@caryr35 caryr35 added this to PR-WIP in v22.04 Release via automation Feb 7, 2022
@caryr35 caryr35 moved this from PR-WIP to PR-Needs review in v22.04 Release Feb 7, 2022
@caryr35 caryr35 removed this from PR-Needs review in v22.02 Release Feb 7, 2022
@dantegd
Member

dantegd commented Feb 9, 2022

rerun tests

@codecov-commenter

Codecov Report

❗ No coverage uploaded for pull request base (branch-22.04@3ccf77f).
The diff coverage is n/a.

@@               Coverage Diff               @@
##             branch-22.04    #4414   +/-   ##
===============================================
  Coverage                ?   84.32%           
===============================================
  Files                   ?      250           
  Lines                   ?    20421           
  Branches                ?        0           
===============================================
  Hits                    ?    17220           
  Misses                  ?     3201           
  Partials                ?        0           
Flag       Coverage Δ
dask       45.15% <0.00%> (?)
non-dask   77.56% <0.00%> (?)

Flags with carried forward coverage won't be shown.


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3ccf77f...7e488ee.

v22.04 Release automation moved this from PR-Needs review to PR-Reviewer approved Feb 9, 2022
@dantegd
Member

dantegd commented Feb 9, 2022

@gpucibot merge

@rapids-bot rapids-bot bot merged commit 1dd32dc into rapidsai:branch-22.04 Feb 9, 2022
v22.04 Release automation moved this from PR-Reviewer approved to Done Feb 9, 2022
vimarsh6739 pushed a commit to vimarsh6739/cuml that referenced this pull request Oct 9, 2023
This PR contains the code enabling nightly automated benchmark runs for `cuML`.

Authors:
  - Victor Lafargue (https://github.com/viclafargue)
  - Nanthini (https://github.com/Nanthini10)

Approvers:
  - Dante Gama Dessavre (https://github.com/dantegd)

URL: rapidsai#4414