
Workflow - Linux Unit Tests #1846

Merged — 38 commits merged into main on Mar 1, 2021
Conversation

ParthivNaresh (Contributor)

Fixes #1825

@ParthivNaresh ParthivNaresh self-assigned this Feb 15, 2021

gsheni commented Mar 1, 2021

Codecov can be finicky, and it might be worth disabling it as a CI check until this gets merged into main (then re-enabling it later on).
For this branch, the codecov reports are being generated:
https://app.codecov.io/gh/alteryx/evalml/branch/1825-Linux-Python-Unit-Tests


gsheni commented Mar 1, 2021

To clarify, I would suggest the following steps:

  1. The Codecov CI check for main has to be disabled by @dsherry. As an example, see below (look at the last two unchecked boxes):

[Screenshot: CI status-check settings, Mar 1, 2021, with the last two checkboxes unchecked]

  2. Merge this PR into main.
  3. Make sure the codecov report generates for main at https://app.codecov.io/gh/alteryx/evalml/branch/main.
  4. Re-enable the Codecov CI check for main, which also has to be done by @dsherry.

codecov: false
- python_version: "3.8"
core_dependencies: false
codecov: false
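For context, codecov here is a per-entry flag in the job matrix. A matrix of this shape might look like the following (a sketch only; the job name and any keys beyond those quoted in the diff are illustrative, not evalml's actual workflow file):

```yaml
jobs:
  unit-tests:
    strategy:
      matrix:
        include:
          # Illustrative entries; only the key names mirror the diff above.
          - python_version: "3.7"
            core_dependencies: false
            codecov: false
          - python_version: "3.8"
            core_dependencies: true
            codecov: true
          - python_version: "3.8"
            core_dependencies: false
            codecov: true  # both 3.8 runs upload coverage so codecov.io can merge them
```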
Contributor

@ParthivNaresh this should be true, right? Our old CI collects coverage report data on python 3.8 both with and without core_dependencies. That way, we cover if statements which are only checked in one of the two cases, and then codecov.io merges the two coverage reports. Does that make sense?

Contributor Author

@gsheni does the fact that codecov.io merges these reports make it better? Just want to make sure we're all on the same page haha

gsheni (Contributor) · Mar 1, 2021

Oh, I wasn't aware that CodeCov merges reports. Yes, that is fine, and @dsherry is correct.

Contributor

Yeah, our CI doesn't make it super obvious that that's what's going on under the hood, lol. We just ship both the core_deps=false and core_deps=true runs over to codecov.io, and they handle merging (summing) the reports. But yep, we need both at the moment!
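A toy sketch of what "merging (summing)" two line-coverage reports means here — the dict-of-hit-counts shape is an illustration, not codecov.io's actual data model:

```python
from collections import Counter

def merge_coverage(report_a, report_b):
    """Merge two coverage reports that map 'file:line' -> hit count.

    A line is covered in the merged report if either run hit it, and
    hit counts are summed, mirroring how the core_deps=true and
    core_deps=false uploads get combined into one report.
    """
    merged = Counter(report_a)
    merged.update(report_b)  # Counter.update adds counts rather than replacing
    return dict(merged)

# Line 12 is only exercised in the core_deps=false run, but it still
# ends up covered once the two reports are merged.
core_true = {"evalml/utils.py:10": 3, "evalml/utils.py:12": 0}
core_false = {"evalml/utils.py:10": 1, "evalml/utils.py:12": 2}
merged = merge_coverage(core_true, core_false)
```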

.coveragerc (outdated diff)
if self._verbose:
if verbose:
if profile:
pytest.skip
Contributor

@ParthivNaresh would you mind explaining why it's necessary to add these lines? If they're not necessary, perhaps we can delete this change?

Contributor Author

I can absolutely get rid of it. I was trying to emulate the featuretools approach to the .coveragerc file as best I could, but I can delete these lines if they seem unnecessary in our case.
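For reference, lines like the ones quoted in this diff are coverage-exclusion patterns; in a .coveragerc they would typically sit under exclude_lines, roughly like this (a sketch modeled on the diff above, not either repo's actual file):

```ini
[report]
exclude_lines =
    if self._verbose:
    if verbose:
    if profile:
    pytest.skip
```

Any source line matching one of these patterns is omitted from the coverage denominator, so they are only worth keeping if evalml actually has such lines.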

@@ -3,5 +3,5 @@ pytest-xdist==1.26.1
pytest-cov==2.6.1
nbval==0.9.3
IPython>=5.0.0
codecov==2.1.0
codecov==2.1.8
Contributor

👍

dsherry (Contributor) left a comment

@ParthivNaresh cool!! Looking close.

I had one blocking comment: We need to enable coverage report for both runs of python 3.8, not just for core_dependencies=true. Otherwise a few of our core_dependencies=false unit tests won't be included in the combined coverage report.

And one other comment about whether the changes to .coveragerc are actually necessary.

@@ -1,5 +1,6 @@
[run]
source=evalml
Contributor

@ParthivNaresh what does this do?

Contributor Author

@dsherry This specifies the source directory that coverage is going to be measured against.
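Concretely, with this setting coverage.py limits measurement to code under the named package instead of everything that gets imported during the test run (a minimal sketch; only the source line comes from the diff above):

```ini
[run]
source = evalml
```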

dsherry (Contributor) left a comment

@ParthivNaresh thanks!! 🚢 😁

@ParthivNaresh ParthivNaresh merged commit 0367dee into main Mar 1, 2021
@@ -22,11 +22,11 @@ test:

.PHONY: git-test
git-test:
pytest evalml/ -n 8 --doctest-modules --cov=evalml --junitxml=test-reports/junit.xml --doctest-continue-on-failure
pytest evalml/ -n 4 --doctest-modules --cov=evalml --junitxml=test-reports/junit.xml --doctest-continue-on-failure
Contributor

Did we need to reduce the number of parallel processes because the GitHub VMs have less memory?

Contributor Author

It seems that having 8 processes causes one of them to crash during execution, and I do think it has something to do with there being less memory. It requires deeper digging, though, because I don't think the tests should require all the space they've been given.
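One way to pick -n with less guesswork is to bound the worker count by both CPU count and an assumed per-worker memory budget. This is a sketch only: the ~7 GB figure for GitHub-hosted Linux runners and the 1.5 GB per-worker budget are assumptions, not measured values.

```python
import os

def xdist_worker_count(runner_mem_gb=7.0, mem_per_worker_gb=1.5):
    """Bound the pytest-xdist worker count by memory and CPU count.

    Both defaults are assumptions: ~7 GB was the advertised RAM of
    GitHub-hosted Linux runners around this time, and 1.5 GB is a
    guessed per-worker budget for the test suite.
    """
    by_memory = max(1, int(runner_mem_gb // mem_per_worker_gb))
    by_cpu = os.cpu_count() or 1
    return min(by_memory, by_cpu)
```

Under these assumptions the cap works out to at most 4 workers, which lines up with the -n 4 in the diff above.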

Successfully merging this pull request may close these issues.

GitHub Actions - Workflow - linux python X.X unit tests