Add pipeline.should_skip_featurization flag #3849
Conversation
Codecov Report
@@           Coverage Diff           @@
##            main   #3849     +/-   ##
=======================================
+ Coverage    99.7%   99.7%    +0.1%
=======================================
  Files         344     344
  Lines       36185   36190      +5
=======================================
+ Hits        36048   36053      +5
  Misses        137     137
    },
)
y = pd.Series(range(PERIODS))
if problem_type == ProblemTypes.TIME_SERIES_BINARY:
can you add the parametrization for these problem types?
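For reference, a parametrized version of this setup might look roughly like the sketch below. This is illustrative only: the test name, the `PERIODS` constant, and the target transformations for the binary and multiclass cases are assumptions, not the PR's actual code.

```python
import pandas as pd
import pytest

from evalml.problem_types import ProblemTypes

PERIODS = 30  # assumed constant from the test module


@pytest.mark.parametrize(
    "problem_type",
    [
        ProblemTypes.TIME_SERIES_BINARY,
        ProblemTypes.TIME_SERIES_MULTICLASS,
        ProblemTypes.TIME_SERIES_REGRESSION,
    ],
)
def test_should_skip_featurization(problem_type):
    y = pd.Series(range(PERIODS))
    if problem_type == ProblemTypes.TIME_SERIES_BINARY:
        y = y % 2  # binary targets need exactly two classes
    elif problem_type == ProblemTypes.TIME_SERIES_MULTICLASS:
        y = y % 3  # multiclass targets need more than two classes
    ...
```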
assert pipeline.should_skip_featurization
@patch(
do you think it's cleaner to add the pipeline fitting logic to test_can_run_automl_for_time_series_with_exclude_featurizers and check the flag after search is run? I think either works!
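The post-search check could look something like the sketch below. The training data, `problem_configuration` values, and the specific featurizer passed to `exclude_featurizers` are assumptions for illustration; the real test uses the repo's fixtures.

```python
import pandas as pd

from evalml.automl import AutoMLSearch

PERIODS = 30  # assumed to match the test setup above
X = pd.DataFrame({"date": pd.date_range("2020-01-01", periods=PERIODS)})
y = pd.Series(range(PERIODS), dtype="float64")

automl = AutoMLSearch(
    X_train=X,
    y_train=y,
    problem_type="time series regression",
    problem_configuration={
        "time_index": "date",
        "gap": 0,
        "max_delay": 5,
        "forecast_horizon": 2,
    },
    exclude_featurizers=["TimeSeriesFeaturizer"],
)
automl.search()

# With the featurizer excluded, the fitted pipeline should report that
# featurization can be skipped at predict time.
assert automl.best_pipeline.should_skip_featurization
```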
Great idea, done!
Fixes an issue where time series native estimators would double-featurize during predict when featurization had already happened before running evalml.
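To illustrate the intent of the fix, here is a toy model of how such a flag can gate featurization at predict time. This is a sketch of the idea only, not evalml's actual pipeline code; the class, its components, and the lag feature are all hypothetical.

```python
from typing import Callable, List

import pandas as pd


class TimeSeriesPipelineSketch:
    """Toy model of the flag's intent; not evalml's real pipeline class."""

    def __init__(self, component_names: List[str], estimator: Callable):
        self.component_names = component_names
        self.estimator = estimator
        # True when no featurizer component is present, e.g. because the
        # user already featurized the data before running evalml.
        self.should_skip_featurization = not any(
            "Featurizer" in name for name in component_names
        )

    def _featurize(self, X: pd.DataFrame) -> pd.DataFrame:
        # Stand-in for the pipeline's featurization step.
        return X.assign(lag_1=X.iloc[:, 0].shift(1))

    def predict(self, X: pd.DataFrame) -> pd.Series:
        # Guard against double-featurization: only featurize here when the
        # pipeline itself owns that step.
        if not self.should_skip_featurization:
            X = self._featurize(X)
        return self.estimator(X)
```

Under this sketch, a pipeline built without featurizer components passes the input straight through to the estimator, which is the behavior the flag is meant to enable.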