Fixes UnboundLocalError: local variable 'cv_pipeline' referenced before assignment when error in automl search #996

Merged · 10 commits · Jul 31, 2020
1 change: 1 addition & 0 deletions docs/source/api_reference.rst
@@ -179,6 +179,7 @@ Transformers are components that take in data as input and output transformed data
OneHotEncoder
PerColumnImputer
SimpleImputer
Imputer
StandardScaler
RFRegressorSelectFromModel
RFClassifierSelectFromModel
1 change: 1 addition & 0 deletions docs/source/release_notes.rst
@@ -14,6 +14,7 @@ Release Notes
* Removed incorrect parameter passed to pipeline classes in `_add_baseline_pipelines` :pr:`941`
* Added universal error for calling `predict`, `predict_proba`, `transform`, and `feature_importances` before fitting :pr:`969`, :pr:`994`
* Made `TextFeaturizer` component and pip dependencies `featuretools` and `nlp_primitives` optional :pr:`976`
* Fixed UnboundLocalError for `cv_pipeline` when automl search errors :pr:`996`
* Changes
* Moved `get_estimators` to `evalml.pipelines.components.utils` :pr:`934`
* Modified Pipelines to raise `PipelineScoreError` when they encounter an error during scoring :pr:`936`
1 change: 1 addition & 0 deletions evalml/automl/automl_search.py
@@ -540,6 +540,7 @@ def _compute_cv_scores(self, pipeline, X, y):
X_train, X_test = X.iloc[train], X.iloc[test]
y_train, y_test = y.iloc[train], y.iloc[test]
objectives_to_score = [self.objective] + self.additional_objectives
cv_pipeline = None
try:
X_threshold_tuning = None
y_threshold_tuning = None
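The one-line change above initializes `cv_pipeline` before the `try` block, so an exception raised before the pipeline is assigned no longer triggers a secondary `UnboundLocalError` when the variable is referenced later. A minimal standalone sketch of the pattern (simplified names; not the actual evalml implementation):

```python
def compute_cv_scores_buggy():
    # If the try body raises before `cv_pipeline` is bound, the later
    # reference fails with UnboundLocalError, masking the real error.
    try:
        raise RuntimeError("error during automl search")
        cv_pipeline = "fitted pipeline"  # never reached
    except Exception:
        pass
    return cv_pipeline  # UnboundLocalError: referenced before assignment


def compute_cv_scores_fixed():
    cv_pipeline = None  # the fix: bind the name before entering try
    try:
        raise RuntimeError("error during automl search")
        cv_pipeline = "fitted pipeline"  # never reached
    except Exception:
        pass
    return cv_pipeline  # now returns None instead of raising
```

With the initialization in place, downstream code can check `cv_pipeline is None` and record the underlying search error instead of crashing on the unbound name.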
2 changes: 1 addition & 1 deletion evalml/pipelines/prediction_explanations/__init__.py
@@ -1,2 +1,2 @@
# flake8:noqa
from .explainers import explain_prediction
from .explainers import explain_prediction
13 changes: 13 additions & 0 deletions evalml/tests/automl_tests/test_automl.py
@@ -888,3 +888,16 @@ def test_catch_keyboard_interrupt(mock_fit, mock_score, mock_input,
automl.search(X, y)

assert len(automl._results['pipeline_results']) == number_results


@patch('evalml.automl.automl_search.train_test_split')
@patch('evalml.pipelines.BinaryClassificationPipeline.score')
@patch('evalml.pipelines.BinaryClassificationPipeline.fit')
def test_error_during_train_test_split(mock_fit, mock_score, mock_train_test_split, X_y_binary):
X, y = X_y_binary
mock_score.return_value = {'Log Loss Binary': 1.0}
mock_train_test_split.side_effect = RuntimeError()
automl = AutoMLSearch(problem_type='binary', objective='accuracy_binary', max_pipelines=2, optimize_thresholds=True)
automl.search(X, y)
for pipeline in automl.results['pipeline_results'].values():
assert np.isnan(pipeline['score'])
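The test above drives the error path by patching `train_test_split` with a `side_effect` that raises, then asserts the recorded score is NaN rather than the search crashing. A standalone sketch of that mocking technique, with an illustrative `run_search` stand-in (not the real `AutoMLSearch` internals):

```python
import math
from unittest.mock import MagicMock

# A mock whose side_effect makes every call raise, simulating a failure
# inside the patched dependency (here standing in for train_test_split).
split = MagicMock(side_effect=RuntimeError("error during train/test split"))


def run_search(split_fn):
    # Mimics the desired automl behavior: when a pipeline errors,
    # record NaN as its score instead of propagating the exception.
    try:
        split_fn()
        return 1.0
    except Exception:
        return float("nan")


score = run_search(split)
assert math.isnan(score)
```

Because `side_effect` is an exception instance, the mock raises on every call, so the test exercises the `except` branch deterministically without depending on real data.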