
Refactor component test for different environments #2957

Merged
merged 33 commits into main from 2882_refactor_comp_env_tests on Oct 28, 2021

Conversation

@angela97lin (Contributor) commented Oct 25, 2021

Closes #2882.

Unfortunately, the codecov/patch check will largely fail: our conda/windows/py39 cases are not covered by codecov, and I've changed those lines :')

@angela97lin self-assigned this Oct 25, 2021
@codecov bot commented Oct 25, 2021

Codecov Report

Merging #2957 (95129d1) into main (6b204dc) will increase coverage by 0.1%.
The diff coverage is 73.4%.


@@           Coverage Diff           @@
##            main   #2957     +/-   ##
=======================================
+ Coverage   99.7%   99.7%   +0.1%     
=======================================
  Files        307     307             
  Lines      29257   29265      +8     
=======================================
+ Hits       29166   29174      +8     
  Misses        91      91             
Impacted Files                                  Coverage Δ
evalml/tests/component_tests/test_utils.py      95.7% <73.4%> (+0.5%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@@ -1,8 +1,8 @@
name: Nightly unit tests, linux

on:
schedule:

@angela97lin (Contributor Author) commented:

This file was updated just for testing purposes. Will revert before merging! :)

@@ -1,8 +1,8 @@
name: Nightly unit tests, windows

on:
schedule:

@angela97lin (Contributor Author) commented:

Same with this file, updated just for testing purposes. Will revert before merging! :)

@@ -47,11 +47,11 @@ git-test-minimal-deps-parallel:

.PHONY: git-test-automl-core
git-test-automl-core:
pytest evalml/tests/automl_tests evalml/tests/tuner_tests -n 2 --ignore=evalml/tests/automl_tests/parallel_tests --durations 0 --timeout 300 --doctest-modules --cov=evalml --junitxml=test-reports/git-test-automl-core-junit.xml --has-minimal-dependencies

@angela97lin (Contributor Author) commented:

As mentioned in review... found some more unnecessary calls to --doctest-modules; cleaning them up here, though it's unrelated to the PR.

@eccabay (Contributor) left a comment:

Love this, it's going to make adding/changing components so much easier!

Comment on lines 121 to 125
expected_components = [
component
for component in minimum_dependencies_list + requirements_list
if component not in not_supported_in_conda
]

@eccabay (Contributor) commented:

Nitpick suggestion: what if we used sets for these instead of lists? Checking membership in a list is O(n), while a set lookup is O(1) on average, so it would be faster (and hopefully no less readable) to replace this with something like:

expected_components = minimum_dependencies_list.union(requirements_list).difference(not_supported_in_conda)

For sanity checking I did a quick timeit run comparing the two: with n=10000, the list version took 1.6 seconds and the set version took 0.1!

To make it even faster and more readable, we could also save the combined collection (minimum_dependencies_list + requirements_list for lists, or minimum_dependencies_list.union(requirements_list) for sets) in its own variable, say all_requirements_list or something.
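
For illustration, here's a minimal, runnable version of that comparison; the data is a hypothetical stand-in, since the real collections come from evalml's dependency files:

import timeit

# Hypothetical stand-in data for the real dependency collections.
n = 10_000
minimum_dependencies_list = [f"pkg{i}" for i in range(n)]
requirements_list = [f"pkg{i}" for i in range(n // 2, n + n // 2)]
not_supported_in_conda = [f"pkg{i}" for i in range(0, n, 7)]

def with_lists():
    # Each `not in` check scans the whole list: O(n) per element.
    return [
        component
        for component in minimum_dependencies_list + requirements_list
        if component not in not_supported_in_conda
    ]

def with_sets():
    # Hashing makes each membership check O(1) on average.
    return set(minimum_dependencies_list).union(requirements_list).difference(
        not_supported_in_conda
    )

print("lists:", timeit.timeit(with_lists, number=10))
print("sets: ", timeit.timeit(with_sets, number=10))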

@angela97lin (Contributor Author) replied:

Yup yup, this is a great suggestion. Will update 😁

@freddyaboulton (Contributor) left a comment:

@angela97lin Looks good to me! I think keeping track of the expected components with a list will make the error more informative if the test ever fails, so although it's more verbose I think it's worth it.

I think @eccabay's suggestion to use sets is worth considering, especially since there won't be duplicates.
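
To illustrate why the list bookkeeping helps (hypothetical component names, not the actual test data): pytest's assertion rewriting prints an element-by-element diff when two sorted lists differ, so a failure points directly at the missing or extra component.

def test_expected_components_illustration():
    # Hypothetical component names standing in for the real ones.
    expected = sorted(["ARIMA Regressor", "CatBoost Classifier", "Prophet"])
    actual = sorted(["ARIMA Regressor", "CatBoost Classifier"])
    # On failure, pytest shows exactly which element differs in the
    # sorted comparison, e.g. that "Prophet" is missing from `actual`.
    assert actual == expected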

evalml/tests/component_tests/test_utils.py (outdated review thread, resolved)
@angela97lin angela97lin merged commit 474f3fb into main Oct 28, 2021
@angela97lin angela97lin deleted the 2882_refactor_comp_env_tests branch October 28, 2021 20:35
Merging this pull request may close: Refactor component environment tests (#2882).