Added support for unlim pipelines with max_time limit #70
Conversation
Looking good, just some comments
Codecov Report

@@            Coverage Diff             @@
##           master      #70      +/-   ##
==========================================
+ Coverage   95.52%   95.75%   +0.22%
==========================================
  Files          57       58       +1
  Lines        1498     1531      +33
==========================================
+ Hits         1431     1466      +35
+ Misses         67       65       -2

Continue to review full report at Codecov.
Progress bar needs some changes, and I think something is off with the PR after the merged-in changes!
evalml/models/auto_base.py
Outdated
start = time.time()
elapsed = 0
last_time = start
with tqdm(total=self.max_time, disable=not self.verbose, file=stdout) as pbar:
Currently, the progress bar may be misleading: if training starts at 99%, it could appear to hang depending on how long that training takes to finish. Maybe we can remove the progress bar and only show elapsed time using bar_format in tqdm, or otherwise make the bar reflect elapsed time rather than training iterations.
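As a rough illustration of that suggestion (a minimal sketch, not the PR's actual code; the description text and loop body are placeholders), tqdm's bar_format can limit the display to a description plus elapsed time:

```python
import time
from sys import stdout

from tqdm import tqdm

# Minimal sketch: bar_format drops the percentage bar entirely and shows only
# a description and the elapsed wall-clock time, so the display never implies
# how close the search is to finishing.
with tqdm(file=stdout, bar_format='{desc} | Elapsed: {elapsed}') as pbar:
    for i in range(3):
        pbar.set_description(f"Testing pipeline {i + 1}")
        time.sleep(1)      # stand-in for fitting and scoring a pipeline
        pbar.update(1)     # refresh the elapsed-time display
```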
@christopherbunn can we document this new functionality? i think the best place would be in this section https://evalml.featurelabs.com/en/latest/automl/pipeline_search.html#Limiting-Search-Time
almost there! two more comments
evalml/models/auto_base.py
Outdated
start = time.time()
elapsed = 0
pbar = tqdm(total=self.max_time, disable=not self.verbose, file=stdout, bar_format='{desc} | Elapsed:{elapsed}')
while elapsed <= self.max_time:
can we just make this while time.time() - start <= self.max_time: and remove the elapsed variable?
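That simplification might look something like the following (a hypothetical sketch with placeholder values and a stand-in loop body, not the final diff):

```python
import time
from sys import stdout

from tqdm import tqdm

max_time = 5  # placeholder time budget in seconds
start = time.time()
pbar = tqdm(file=stdout, bar_format='{desc} | Elapsed:{elapsed}')

# Reviewer's suggestion: derive the stopping condition from time.time()
# directly instead of maintaining a separate `elapsed` variable.
while time.time() - start <= max_time:
    pbar.set_description("Testing pipeline")
    time.sleep(1)      # stand-in for fitting and scoring one pipeline
    pbar.update(1)

pbar.close()
```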
LGTM
@christopherbunn if it's not too much of a hassle, could you move the changelog line to "Future Releases" instead of under v.0.5.0?
@angela97lin Sure
* Added support for unlim pipelines with max_time limit
* Fixed lint errors
* Increased number of test_binary_auto pipelines to 5
* Fixed max_pipelines=None behavior and removed extraneous comment
* Reverted some AutoClassifier tests to use max_pipelines=5
* Changed the format of the progress logs for max_time
* Changed to new pbar format and modified error msg
* Updated notebook example to include search limit
* Updated limit handling to allow for no time parameters
* Fixed lint errors
* Updated changelog
* Closed pbar on early termination and removed new_line
* Status bar changes
* Fixed lint error
* Updated test and removed elapsed variable
* Fixed position in changelog
* Removed duplicate line
Added support for the construction of unlimited pipelines, as long as a time limit is specified. If no limits are specified, an error is thrown. (#14)
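For illustration only, a rough usage sketch of the new behavior; the names AutoClassifier, max_pipelines, and max_time come from this PR's code and tests, but the exact import path, call signature, and error type are assumptions rather than a verified API:

```python
# Hypothetical sketch; AutoClassifier, max_pipelines, and max_time appear in
# this PR, but the exact import path and signature are assumptions.
from evalml import AutoClassifier

# Unlimited number of pipelines, bounded by a 60-second time budget.
automl = AutoClassifier(max_pipelines=None, max_time=60)

# Per the changelog entry above, specifying neither max_pipelines nor
# max_time results in an error rather than an unbounded search.
```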