
Logging client #1017

Merged 14 commits into automl:development on Dec 17, 2020
Conversation

franchuterivera (Contributor)

  • Move to the logging client exclusively and remove the other logging support

  • Add a checker to make sure the expected messages are in the log file
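
As a rough illustration of the second point, a checker along these lines could scan a run's log file for an expected message; assert_msg_in_log and the example log path are hypothetical names for illustration, not the PR's actual API:

    import re

    def assert_msg_in_log(log_path: str, pattern: str) -> None:
        """Fail loudly if nothing in the log file matches the given regex."""
        with open(log_path) as f:
            content = f.read()
        assert re.search(pattern, content) is not None, (
            f"Expected pattern {pattern!r} not found in {log_path}"
        )

    # Example: make sure the run actually reported a return value.
    # assert_msg_in_log("tmp/autosklearn.log", r"return value:\s+")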


codecov bot commented Nov 26, 2020

Codecov Report

Merging #1017 (d2b317c) into development (7ef86da) will increase coverage by 0.15%.
The diff coverage is 95.83%.


@@               Coverage Diff               @@
##           development    #1017      +/-   ##
===============================================
+ Coverage        85.36%   85.51%   +0.15%     
===============================================
  Files              125      127       +2     
  Lines             9891    10145     +254     
===============================================
+ Hits              8443     8676     +233     
- Misses            1448     1469      +21     
Impacted Files Coverage Δ
autosklearn/evaluation/test_evaluator.py 90.90% <ø> (ø)
autosklearn/evaluation/train_evaluator.py 72.74% <ø> (ø)
autosklearn/metalearning/mismbo.py 100.00% <ø> (ø)
autosklearn/metalearning/metalearning/meta_base.py 87.83% <50.00%> (ø)
autosklearn/util/logging_.py 89.68% <81.81%> (-4.63%) ⬇️
autosklearn/smbo.py 82.53% <85.71%> (+0.24%) ⬆️
autosklearn/evaluation/abstract_evaluator.py 88.44% <87.50%> (+0.01%) ⬆️
autosklearn/util/backend.py 76.15% <94.44%> (+1.98%) ⬆️
autosklearn/automl.py 85.02% <100.00%> (+0.14%) ⬆️
autosklearn/ensemble_builder.py 76.56% <100.00%> (ø)
... and 28 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

mfeurer (Contributor) left a comment


First batch of review, but only the simple stuff so far...

Review comments on:
autosklearn/metalearning/metafeatures/metafeatures.py
autosklearn/metalearning/metafeatures/metafeature.py
autosklearn/metalearning/metalearning/meta_base.py
autosklearn/evaluation/abstract_evaluator.py
.pre-commit-config.yaml
test/test_evaluation/test_train_evaluator.py
test/test_evaluation/test_test_evaluator.py
test/test_evaluation/test_evaluation.py
mfeurer (Contributor) left a comment


Part 2.

Review comments on:
autosklearn/evaluation/__init__.py
autosklearn/util/backend.py
test/test_automl/automl_utils.py
mfeurer (Contributor) left a comment


Review 3/3.

Did you check if this works in Singularity? Could you maybe add a test that the log files are not created in the current working directory when logging is set up correctly?

Review comments on:
test/test_automl/automl_utils.py
autosklearn/util/logging_.py
mfeurer (Contributor) commented Nov 30, 2020

Thanks for the updates @franchuterivera, the changes look good, but I'll hold off on approving until all tests are green.

When looking at the tests I realized that there's a lot of output now, which makes finding the failed tests quite hard. Do you know where it comes from and whether it can be reduced again?

franchuterivera (Contributor, Author)

Sorry, I also forgot to add the sparse file, and there were a few bugs here and there.

I have fixed the pytest output.

I have added a new test that checks what files are left behind by pytest, to catch whether we will be able to run in Singularity. It will likely fail if we miss some file; once I see the failure I will fix it.
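
A minimal sketch of what such a check could look like as an autouse pytest fixture; the fixture name and the idea of snapshotting the working directory are illustrative assumptions, not necessarily the PR's actual test:

    import os
    import pytest

    @pytest.fixture(autouse=True)
    def check_no_files_left_behind():
        """Snapshot the working directory and fail if a test leaves new files."""
        before = set(os.listdir("."))
        yield
        leftovers = set(os.listdir(".")) - before
        assert not leftovers, f"Test left files behind: {sorted(leftovers)}"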

mfeurer (Contributor) left a comment


This now looks good to me. However, there are still several files under autosklearn.metalearning which have logger=None in the constructor. Could you please make this a mandatory argument?
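
A sketch of the requested change on a constructor; the class name here is hypothetical and only illustrates the before/after shape:

    import logging
    from typing import Optional

    # Before: the logger can silently be omitted.
    class MetaBase:
        def __init__(self, logger: Optional[logging.Logger] = None):
            self.logger = logger if logger is not None else logging.getLogger(__name__)

    # After: every caller must pass a logger explicitly.
    class MetaBase:
        def __init__(self, logger: logging.Logger):
            self.logger = logger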

franchuterivera marked this pull request as ready for review December 14, 2020 16:04
# Make sure that running jobs are properly tracked. Killed runs do not always
# print the return value to the log file (yet the run history has this information)
if run_value.status == StatusType.SUCCESS:
# Success is not sufficient to guarantee a return message in the log file
Contributor:

Just checking: if the run doesn't print r'pynisher]\s+return value:\s+', it should print either "Your function call closed the pipe prematurely -> Subprocess probably got an uncatchable signal." or "Something else went wrong, sorry." in the controlling process, so we could actually check for that, right?
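
A sketch of that check, assuming the log has been read into a string; the helper name is an illustrative assumption:

    import re

    # The three outcome messages discussed above; a finished run should
    # have printed exactly one of them to the controlling process's log.
    PYNISHER_OUTCOMES = (
        r"pynisher\]\s+return value:\s+",
        re.escape("Your function call closed the pipe prematurely -> "
                  "Subprocess probably got an uncatchable signal."),
        re.escape("Something else went wrong, sorry."),
    )

    def run_outcome_logged(log_content: str) -> bool:
        """Return True if the log contains one of the pynisher outcome messages."""
        return any(re.search(p, log_content) for p in PYNISHER_OUTCOMES)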

Contributor (Author):

If a failure happens here we would get a double-counted message, because the pipe could break while the message is being sent.

However, we could make this deterministic by adding a message at the end of the pynisher function call (inside pynisher), for instance here.

Contributor:

But aren't the two error messages I mentioned exactly the error messages showing up in case of a failure to write to the pipe?

@@ -261,8 +266,15 @@ def run(
if self.init_params is not None:
init_params.update(self.init_params)

if self.port is None:
Contributor:

The handling of port is None is different for this logger and the logger above. Can this be unified?

Contributor (Author):

The only reason this is not unified is to get a different name for the TAE and pynisher call. Is it ok if we only get TAE in the message name?

Contributor:

Sorry, I meant that in one case it checks if the port is None, and in the other it doesn't.
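
One way the two call sites could share the same None-handling is a small helper; the function name, the fallback behavior, and the use of SocketHandler are assumptions for illustration, not the PR's final code:

    import logging
    import logging.handlers
    from typing import Optional

    def make_client_logger(name: str, port: Optional[int]) -> logging.Logger:
        """Return a logger that forwards records to the logging server when a
        port is given, and falls back to a plain local logger otherwise."""
        logger = logging.getLogger(name)
        if port is not None:
            # Forward records over TCP to the central logging server.
            logger.addHandler(logging.handlers.SocketHandler("localhost", port))
        return logger

    # Both paths keep their distinct names but share the port handling:
    # tae_logger = make_client_logger("TAE", port=self.port)
    # pynisher_logger = make_client_logger("pynisher", port=self.port)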

franchuterivera (Contributor, Author)

I have created #1039 to make the testing more restrictive once the running task left over at the end of SMAC is resolved.

mfeurer merged commit 4c6bef0 into automl:development on Dec 17, 2020
github-actions bot pushed a commit that referenced this pull request Dec 17, 2020