Logging client #1017
Codecov Report
@@ Coverage Diff @@
## development #1017 +/- ##
===============================================
+ Coverage 85.36% 85.51% +0.15%
===============================================
Files 125 127 +2
Lines 9891 10145 +254
===============================================
+ Hits 8443 8676 +233
- Misses 1448 1469 +21
Continue to review full report at Codecov.
First batch of review, but only the simple stuff so far...
autosklearn/metalearning/optimizers/metalearn_optimizer/metalearner.py
Part 2.
Review 3/3.
Did you check whether this works in Singularity? Could you maybe add a test that log files are not created in the current working directory when logging is set up correctly?
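The requested test could be sketched roughly like this (the names `assert_no_logs_in_cwd` and `well_behaved_run` are hypothetical placeholders, not auto-sklearn API): snapshot the working directory before and after the run and assert that no new `*.log` files appeared in it.

```python
import glob
import os
import tempfile

def assert_no_logs_in_cwd(run):
    """Call `run` and assert it left no new *.log files in the CWD.

    `run` is a stand-in for whatever triggers the logging setup; a real
    test would start AutoML with its tmp folder pointing elsewhere.
    """
    before = set(glob.glob("*.log"))
    run()
    leftover = set(glob.glob("*.log")) - before
    assert not leftover, f"log files created in CWD: {sorted(leftover)}"

def well_behaved_run():
    # A correctly configured run writes its log under a temp dir, not the CWD.
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "automl.log"), "w") as fh:
            fh.write("ok")

assert_no_logs_in_cwd(well_behaved_run)
```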
Thanks for the updates @franchuterivera, the changes look good, but I'll wait to approve until all tests are green. When looking at the tests I realized that there's a lot of output now and I'm not sure where it comes from, but it makes finding the failed tests quite hard. Do you know where it comes from and whether it can be reduced again?
Sorry, I also forgot to add the sparse file, and there were a few bugs here and there. I have fixed the pytest output. I have added a new test that checks which files are left behind by pytest, to catch whether we will be able to run Singularity. It is likely to fail if we miss some file; once I see it, I will fix it.
Force-pushed 9f87fe3 to a406274
This now looks good to me. However, there are still several files under autosklearn.metalearning which have `logger=None` in the constructor. Could you please make this a mandatory argument?
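The requested change might look like the following sketch (class names are illustrative, not the actual auto-sklearn classes): the logger moves from an optional `logger=None` keyword to a mandatory, validated argument, so a missing logger fails loudly at construction time instead of later at the first log call.

```python
import logging

class MetaLearnerOptional:
    # Current pattern in autosklearn.metalearning (illustrative name):
    # the logger silently defaults to None, so a missing logger only
    # surfaces later, when a log call fails.
    def __init__(self, logger=None):
        self.logger = logger

class MetaLearnerMandatory:
    # Requested pattern: the logger is a required argument and is
    # validated up front.
    def __init__(self, logger: logging.Logger):
        if logger is None:
            raise ValueError("logger is a mandatory argument")
        self.logger = logger

logger = logging.getLogger("autosklearn.metalearning")
learner = MetaLearnerMandatory(logger)
assert learner.logger is logger
```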
Force-pushed 9b2ca34 to 3ba5604
Force-pushed 63c45b9 to 9d9762d
test/test_automl/test_automl.py
# Make sure that running jobs are properly tracked. Killed runs do not always
# print the return value to the log file (yet the run history has this information)
if run_value.status == StatusType.SUCCESS:
    # Success is not sufficient to guarantee a return message in the log file
Just checking: if the run doesn't print r'pynisher]\s+return value:\s+', it should print either "Your function call closed the pipe prematurely -> Subprocess probably got an uncatchable signal." or "Something else went wrong, sorry." in the controlling process, so we could actually check that, right?
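The check described here could be sketched like this (the message strings are taken from the comment above and assumed to match pynisher's real output verbatim):

```python
import re

# Expected end-of-run messages, quoted from the review comment.
RETURN_VALUE = re.compile(r"pynisher]\s+return value:\s+")
PIPE_CLOSED = ("Your function call closed the pipe prematurely -> "
               "Subprocess probably got an uncatchable signal.")
OTHER_ERROR = "Something else went wrong, sorry."

def run_was_reported(log_text):
    """True if the log contains any of the expected end-of-run messages."""
    return (RETURN_VALUE.search(log_text) is not None
            or PIPE_CLOSED in log_text
            or OTHER_ERROR in log_text)

assert run_was_reported("[pynisher] return value: 0")
assert run_was_reported("Something else went wrong, sorry.")
assert not run_was_reported("everything finished cleanly")
```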
But aren't the two error messages I mentioned exactly the error messages showing up in case of a failure to write to the pipe?
@@ -261,8 +266,15 @@ def run(
        if self.init_params is not None:
            init_params.update(self.init_params)

        if self.port is None:
The handling of `port is None` is different for this logger and the logger above. Can this be unified?
The only reason this is not unified is to get a different name for the TAE and pynisher calls. Is it OK if we only get TAE in the message name?
Sorry, I meant that in one case it checks whether the port is None, and in the other it doesn't.
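One way the two call sites could be unified, as a hypothetical sketch (`get_named_logger` is an illustrative name, not the actual auto-sklearn helper): a single factory that falls back to a plain local logger when no port is given and otherwise forwards records to the logging server, while still letting each caller keep its own name, so TAE and pynisher stay distinguishable.

```python
import logging
import logging.handlers

def get_named_logger(name, port=None):
    """Hypothetical unified factory for both call sites.

    With no port, fall back to a plain local logger; with a port, attach
    a SocketHandler that forwards records to the logging server.
    """
    logger = logging.getLogger(name)
    if port is None:
        # Local fallback when no logging server is running.
        if not logger.handlers:
            logger.addHandler(logging.StreamHandler())
    else:
        # SocketHandler connects lazily on first emit, so construction
        # is cheap even if the server is not up yet.
        logger.addHandler(logging.handlers.SocketHandler("localhost", port))
    return logger

tae_logger = get_named_logger("TAE")
pynisher_logger = get_named_logger("pynisher")
assert tae_logger.name != pynisher_logger.name
```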
I have created #1039 to make testing more restrictive once the running task left behind at the end of SMAC is resolved.
Moving to log client exclusively and removed other support
Adding checker to make sure expected messages are in log file