
Update our Logger to use python's logging package, revised #763

Merged
18 commits merged from 690_logging_re into master on May 21, 2020

Conversation

@angela97lin (Contributor) commented May 9, 2020

Closes #690, continuation of work in #694

Note:

log_subtitle and log_title will log using name=logger.py since that's where the call to logger.info is.
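A minimal sketch of why this happens (module and helper names here are illustrative stand-ins, not the actual evalml code): a log record carries the name of the logger it was emitted on, so a helper that calls a logger defined in logger.py always reports that module as the source, regardless of who called the helper.

```python
import logging

# Illustrative stand-in for evalml/utils/logger.py: the helper emits on a
# logger created in this module, so every record reports this logger's name.
logger = logging.getLogger("evalml.utils.logger")

def log_title(title):
    # No matter which module calls log_title(), the record's ``name``
    # attribute is still "evalml.utils.logger".
    logger.info("* %s *" % title)
```

One common workaround, if the caller's name matters, is to have callers pass in their own `logging.getLogger(__name__)` instance.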


codecov bot commented May 10, 2020

Codecov Report

Merging #763 into master will increase coverage by 0.08%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #763      +/-   ##
==========================================
+ Coverage   99.42%   99.51%   +0.08%     
==========================================
  Files         150      150              
  Lines        5709     5718       +9     
==========================================
+ Hits         5676     5690      +14     
+ Misses         33       28       -5     
Impacted Files Coverage Δ
evalml/automl/auto_search_base.py 98.38% <100.00%> (+0.61%) ⬆️
evalml/pipelines/components/component_base.py 100.00% <100.00%> (+5.88%) ⬆️
evalml/pipelines/pipeline_base.py 100.00% <100.00%> (ø)
...ts/automl_tests/test_auto_classification_search.py 100.00% <100.00%> (ø)
.../tests/automl_tests/test_auto_regression_search.py 100.00% <100.00%> (ø)
evalml/tests/automl_tests/test_autobase.py 100.00% <100.00%> (ø)
evalml/tests/component_tests/test_components.py 100.00% <100.00%> (ø)
evalml/tests/pipeline_tests/test_pipelines.py 99.73% <100.00%> (ø)
evalml/tests/utils_tests/test_cli_utils.py 100.00% <100.00%> (ø)
evalml/tests/utils_tests/test_logger.py 100.00% <100.00%> (ø)
... and 3 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update ed68580...1f32b56.

@angela97lin angela97lin requested a review from dsherry May 18, 2020
@angela97lin angela97lin marked this pull request as ready for review May 18, 2020
assert "EvalML version:" in caplog.text
assert "EvalML installation directory:" in caplog.text
assert "SYSTEM INFO" in caplog.text
assert "INSTALLED VERSIONS" in caplog.text
@dsherry (Collaborator) commented May 19, 2020
Shouldn't this still use capsys, since this is testing a CLI?

@angela97lin (Contributor, author) commented May 21, 2020

Seems like caplog grabs the output of the logger, which has a handler to stdout and hence prints to stdout. Similar to the other comment about capsys/caplog.

def test_logger_critical(caplog):
    logger.critical("Test critical")
    assert "Test critical" in caplog.text
    assert "CRITICAL" in caplog.text
@dsherry (Collaborator) commented May 19, 2020
@angela97lin these tests are helpful! Love it.

I suggest you also add capsys to them, and add another similar test for DEBUG-level. That way, we can ensure everything goes into the log file, and that INFO and above make it into stdout while DEBUG doesn't.
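The intent behind the DEBUG suggestion can be sketched with plain `logging` handlers, using in-memory streams as stand-ins for stdout and the log file (the handler levels are assumptions about the setup under discussion, not the confirmed evalml configuration):

```python
import io
import logging

# Stand-ins: one INFO-level "stdout" handler, one DEBUG-level "file" handler.
stdout_buf, file_buf = io.StringIO(), io.StringIO()

logger = logging.getLogger("evalml.sketch")
logger.setLevel(logging.DEBUG)
logger.propagate = False

stdout_handler = logging.StreamHandler(stdout_buf)
stdout_handler.setLevel(logging.INFO)
file_handler = logging.StreamHandler(file_buf)
file_handler.setLevel(logging.DEBUG)
logger.addHandler(stdout_handler)
logger.addHandler(file_handler)

logger.debug("debug message")
logger.info("info message")

# DEBUG reaches only the "file"; INFO reaches both.
assert "debug message" not in stdout_buf.getvalue()
assert "debug message" in file_buf.getvalue()
assert "info message" in stdout_buf.getvalue()
```

A DEBUG-level test built on this idea would assert the same split: the message appears in the log file but not in captured stdout.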

@angela97lin (Contributor, author) commented May 20, 2020

@dsherry I thought about this, but I don't think I'm able to add capsys fixture for logging. I can share screenshots of what I've tried:

[screenshots of the attempted capsys usage and its captured output]

So oddly, it's printed out in the "Captured stdout call" section, but `out` returns an empty string. I can keep poking around though!

@dsherry (Collaborator) commented May 21, 2020

Ah, got it. No problem! That's so weird.

@angela97lin (Contributor, author) commented May 21, 2020

Yeah... I saw a potentially related issue: pytest-dev/pytest#5997

I manually tested by removing one handler and checking that tests still passed but 🤷‍♀️
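A plausible mechanism behind that pytest issue, sketched with StringIO stand-ins (this illustrates stream binding in general, not the actual pytest internals): a `StreamHandler` keeps a reference to the stream object it was constructed with, so swapping `sys.stdout` afterwards, as capsys does, is invisible to it.

```python
import io
import logging

# The handler binds this stream object at construction time.
original_stream = io.StringIO()   # stand-in for the real sys.stdout
handler = logging.StreamHandler(original_stream)

logger = logging.getLogger("capsys.sketch")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)

replacement = io.StringIO()       # stand-in for capsys's capture buffer
# Even if sys.stdout were reassigned to ``replacement`` at this point,
# the handler would keep writing to ``original_stream``.
logger.info("hello")

assert "hello" in original_stream.getvalue()
assert replacement.getvalue() == ""
```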


@dsherry dsherry left a comment

@angela97lin , this rocks!! 😁 So happy this is coming in time for the release too. Great stuff. I left one testing request and some other minor comments. Go ahead and merge once you feel those are addressed!


dsherry commented May 21, 2020

@angela97lin : I responded to a few of your comments. This is still ready to go from my perspective!

elif self.max_pipelines:
    if num_pipelines >= self.max_pipelines:
        return False
elif self.max_time and elapsed >= self.max_time:
    logger.log("\n\nMax time elapsed. Stopping search early.")
@angela97lin (Contributor, author) commented May 21, 2020

Removed the elif and cleaned this up because I don't think this was reachable code (it would have hit the first if statement).
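The dead branch can be sketched like this (a simplified stand-in built from the condition names in the snippet; the surrounding evalml method is assumed, not reproduced): once an earlier branch returns on the max_time condition, a later elif on the same condition can never run.

```python
# Simplified stand-in for the stopping check; not the actual evalml method.
def should_continue(elapsed, max_time, num_pipelines, max_pipelines):
    if max_time and elapsed >= max_time:
        return False                      # the max_time case exits here...
    elif max_pipelines and num_pipelines >= max_pipelines:
        return False
    # ...so a trailing "elif max_time and elapsed >= max_time:" branch
    # placed here would be unreachable dead code.
    return True
```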

@dsherry (Collaborator) commented May 21, 2020

Nice! I saw this in Codecov a couple weeks ago and tried to clean it up, but it kept failing unit tests. Cool that you got it passing!

@angela97lin angela97lin merged commit 8b1024a into master May 21, 2020
2 checks passed
@dsherry dsherry deleted the 690_logging_re branch Oct 29, 2020
Linked issue: Update logging: use python's logging package