Workflow - Linux Unit Tests #1846
Conversation
# Conflicts:
#	.circleci/config.yml
#	docs/source/release_notes.rst
…yx/evalml into 1825-Linux-Python-Unit-Tests
codecov can be finicky, and it might be worth disabling it as a CI check until this gets merged into main (then re-enabling it later on).
To clarify, I would suggest the following steps.
…yx/evalml into 1825-Linux-Python-Unit-Tests
    codecov: false
  - python_version: "3.8"
    core_dependencies: false
    codecov: false
@ParthivNaresh this should be true, right? Our old CI collects coverage report data on python 3.8 both with and without core_dependencies. That way, we cover if statements which are only checked in one of the two cases, and then codecov.io merges the two coverage reports. Does that make sense?
@gsheni does the fact that codecov.io merges these reports make it better? Just want to make sure we're all on the same page haha
Oh, I wasn't aware that CodeCov merges reports. Yes, that is fine, and @dsherry is correct.
Yeah our CI doesn't make it super obvious that that's what's going on under the hood, lol. Because we just ship both the core_deps=false and core_deps=true runs over to codecov.io and then they handle merging (summing) the reports. But yep, we need both at the moment!
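To illustrate the point about merging (this is a hypothetical sketch, not evalml code): a single branch can be exercised by only one of the two CI configurations, so each run's coverage report marks a different line as hit, and the combined report covers both.

```python
# Hypothetical sketch of a branch that only one CI configuration
# exercises. The core_dependencies=true run hits one line and the
# core_dependencies=false run hits the other; merging (summing) the
# two coverage reports, as codecov.io does, covers both lines.
def describe_install(has_core_deps: bool) -> str:
    if has_core_deps:
        # Hit only by the core_dependencies=true run.
        return "full install"
    # Hit only by the core_dependencies=false run.
    return "minimal install"
```

With coverage enabled for only one of the two runs, one of these return statements would show up as uncovered.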
.coveragerc
    if self._verbose:
    if verbose:
    if profile:
    pytest.skip
@ParthivNaresh would you mind explaining why it's necessary to add these lines? If they're not necessary, perhaps we can delete this change?
I can absolutely get rid of it. I was trying to emulate the featuretools approach to the .coveragerc file as best as I could, but I can delete these lines if they seem unnecessary in our case.
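For context, the lines in the diff above look like coverage.py `exclude_lines` patterns, which tell coverage to ignore any source line matching them. A hedged sketch of how they might appear in `.coveragerc` (the pattern list is taken from the diff; the section layout is an assumption):

```ini
[report]
exclude_lines =
    if self._verbose:
    if verbose:
    if profile:
    pytest.skip
```

Lines matching these patterns would be omitted from the coverage report, which is why they're only worth keeping if the repo actually wants those branches excluded.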
    nbval==0.9.3
    IPython>=5.0.0
    codecov==2.1.0
    codecov==2.1.8
👍
dsherry
left a comment
@ParthivNaresh cool!! Looking close.
I had one blocking comment: We need to enable coverage report for both runs of python 3.8, not just for core_dependencies=true. Otherwise a few of our core_dependencies=false unit tests won't be included in the combined coverage report.
And one other comment about whether the changes to .coveragerc are actually necessary.
@@ -1,5 +1,6 @@
 [run]
 source=evalml
@ParthivNaresh what does this do?
@dsherry This specifies the source directory that coverage is going to measure against.
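In other words, a minimal `.coveragerc` fragment like the one in the diff scopes measurement to the package (the comment is mine, not from the repo):

```ini
[run]
# Limit measurement to the evalml package; without this, coverage may
# also report on third-party code imported during the test run.
source=evalml
```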
dsherry
left a comment
@ParthivNaresh thanks!! 🚢 😁
.PHONY: git-test
git-test:
	pytest evalml/ -n 8 --doctest-modules --cov=evalml --junitxml=test-reports/junit.xml --doctest-continue-on-failure
	pytest evalml/ -n 4 --doctest-modules --cov=evalml --junitxml=test-reports/junit.xml --doctest-continue-on-failure
Did we need to reduce the number of parallel processes because the github vms have less memory?
It seems that having 8 processes causes one of them to crash during execution, and I do think it has something to do with there being less memory. It requires deeper digging, though, because I don't think it should require all the space it's been given.
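A hypothetical illustration of the memory/parallelism trade-off being discussed: each pytest-xdist worker is a separate Python process, so the worker count could be bounded by available RAM as well as CPUs. The helper name and the 1.5 GiB-per-worker figure below are assumptions for illustration, not measurements from this repo.

```python
# Hypothetical helper: cap the pytest-xdist worker count by available
# memory as well as CPU count. The 1.5 GiB-per-worker figure is an
# assumed estimate, not a measured value.
def pick_worker_count(total_mem_gib: float, cpus: int,
                      mem_per_worker_gib: float = 1.5) -> int:
    by_memory = int(total_mem_gib // mem_per_worker_gib)
    return max(1, min(by_memory, cpus))

# On a small CI VM with ~7 GiB RAM and 2 CPUs:
print(pick_worker_count(7.0, 2))   # -> 2
# On an 8-CPU box with 16 GiB, memory is no longer the constraint:
print(pick_worker_count(16.0, 8))  # -> 8
```

Under this kind of bound, `-n 8` on a small VM would oversubscribe memory, which is consistent with one worker crashing mid-run.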

Fixes #1825