
Crash with ValueError when ensemble=True #130

Closed
stepthom opened this issue Jul 8, 2021 · 11 comments · Fixed by #138

Comments

@stepthom
Collaborator

stepthom commented Jul 8, 2021

When I set ensemble=True, and my data has categorical features, I get the following error at the end of the FLAML run:

[flaml.automl: 07-08 09:40:44] {1141} INFO -  at 9373.5s,	best extra_tree's error=0.2056,	best rf's error=0.1950
[flaml.automl: 07-08 09:40:44] {993} INFO - iteration 52, current learner rf
[flaml.automl: 07-08 09:41:42] {1141} INFO -  at 9431.7s,	best rf's error=0.1950,	best rf's error=0.1950
[flaml.automl: 07-08 09:41:42] {993} INFO - iteration 53, current learner rf
[flaml.automl: 07-08 09:42:11] {1141} INFO -  at 9460.7s,	best rf's error=0.1950,	best rf's error=0.1950
[flaml.automl: 07-08 09:42:11] {993} INFO - iteration 54, current learner rf
[flaml.automl: 07-08 09:50:15] {1141} INFO -  at 9944.4s,	best rf's error=0.1949,	best rf's error=0.1949
[flaml.automl: 07-08 09:50:15] {1187} INFO - selected model: RandomForestClassifier(criterion='entropy', max_features=0.7294599478674504,
                       n_estimators=347, n_jobs=10)
[flaml.automl: 07-08 09:50:15] {1197} INFO - [('rf', <flaml.model.RandomForestEstimator object at 0x7fca69effaf0>), ('extra_tree', <flaml.model.ExtraTreeEstimator object at 0x7fca8cc1f8e0>), ('lgbm', <flaml.model.LGBMEstimator object at 0x7fc799985190>), ('catboost', <flaml.model.CatBoostEstimator object at 0x7fca8cc884f0>), ('xgboost', <flaml.model.XGBoostSklearnEstimator object at 0x7fca8cd0e610>)]
/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/xgboost/sklearn.py:888: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
  warnings.warn(label_encoder_deprecation_msg, UserWarning)
/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/xgboost/sklearn.py:888: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
  warnings.warn(label_encoder_deprecation_msg, UserWarning)
Traceback (most recent call last):
  File "search.py", line 212, in <module>
    dump_json(data_sheet_file, data_sheet)
  File "search.py", line 208, in main
    with open(data_sheet_file) as f:
  File "search.py", line 163, in run_data_sheet
    run['flaml_settings'] = jsonpickle.encode(automl_settings, unpicklable=False, keys=True)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/flaml/automl.py", line 943, in fit
    self._search()
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/flaml/automl.py", line 1212, in _search
    stacker.fit(self._X_train_all, self._y_train_all,
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/ensemble/_stacking.py", line 441, in fit
    return super().fit(X, self._le.transform(y), sample_weight)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/ensemble/_stacking.py", line 196, in fit
    _fit_single_estimator(self.final_estimator_, X_meta, y,
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/ensemble/_base.py", line 39, in _fit_single_estimator
    estimator.fit(X, y)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/flaml/model.py", line 296, in fit
    self._fit(X_train, y_train, **kwargs)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/flaml/model.py", line 78, in _fit
    model.fit(X_train, y_train, **kwargs)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/ensemble/_forest.py", line 304, in fit
    X, y = self._validate_data(X, y, multi_output=True,
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/base.py", line 433, in _validate_data
    X, y = check_X_y(X, y, **check_params)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/utils/validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/utils/validation.py", line 871, in check_X_y
    X = check_array(X, accept_sparse=accept_sparse,
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/utils/validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/sklearn/utils/validation.py", line 673, in check_array
    array = np.asarray(array, order=order, dtype=dtype)
  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/numpy/core/_asarray.py", line 83, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: could not convert string to float: '__OTHER__'

This error does not occur if ensemble=False, or if I remove (or encode) the categorical features from my dataset.

My guess is that FLAML properly encodes categorical features when training the base estimators (LGBM, RF, etc.), but not when training the stacking classifier.
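
For reference, a rough sketch of what I mean by "encode the categorical features" as a workaround: ordinal-encoding the string columns myself before calling fit (the helper name here is just illustrative):

import pandas as pd

def encode_categoricals(X: pd.DataFrame) -> pd.DataFrame:
    # Illustrative helper: replace object/category columns with integer codes
    # so the stacker never sees raw strings.
    X = X.copy()
    for col in X.select_dtypes(include=["object", "category"]).columns:
        X[col] = X[col].astype("category").cat.codes
    return X

# automl.fit(encode_categoricals(X), y, **automl_settings)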

@sonichi
Collaborator

sonichi commented Jul 8, 2021

Could you check whether this line is executed?

X[cat_columns] = X[cat_columns].apply(lambda x: x.cat.codes)

It is supposed to be executed to preprocess the categorical features before hitting

  File "/global/home/hpc3552/.conda/envs/myenv/lib/python3.8/site-packages/flaml/model.py", line 78, in _fit
    model.fit(X_train, y_train, **kwargs)

assuming you are using a pandas dataframe.
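
For context, a toy illustration of what that line does, assuming the columns carry the pandas category dtype:

import pandas as pd

X = pd.DataFrame({"f3": ["a", "b", "a", "c"]}).astype("category")
cat_columns = X.select_dtypes(include=["category"]).columns
X[cat_columns] = X[cat_columns].apply(lambda x: x.cat.codes)
print(X["f3"].tolist())  # [0, 1, 0, 2] -- strings replaced by integer codes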

@stepthom
Collaborator Author

stepthom commented Jul 9, 2021

Thanks for the response. While investigating this issue further, I realized that my FLAML installation was old. I updated to the latest (0.5.6) and have now run into a new error; I have created another issue for it (#133). Once that error is resolved, I will come back to this issue to see if it still exists.

@stepthom
Collaborator Author

stepthom commented Jul 9, 2021

I can confirm that this issue still exists for version 0.5.2.

@sonichi I added some print statements around the line you mention in model.py. That line is definitely being executed many times while FLAML builds the normal/individual estimators.

But that line does not appear to be executed when building the ensemble's version of the dataset. I added a print just before the call to .fit() for the stacker, around here:

stacker.fit(self._X_train_all, self._y_train_all,

I printed self._X_train_all.head(), and it shows that the data still contains categorical/string values. So somehow, self._X_train_all has not been put through the preprocessor.
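
As a standalone sketch of the check (the helper is hypothetical; it just captures what the head() printout showed):

import pandas as pd

def has_raw_strings(X: pd.DataFrame) -> bool:
    # True if any column still holds unencoded object/category values.
    return len(X.select_dtypes(include=["object", "category"]).columns) > 0

# For the dataframe handed to stacker.fit(), this returns True, i.e. the
# categorical columns were never run through FLAML's preprocessor.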

@sonichi
Collaborator

sonichi commented Jul 10, 2021

Could you check the latest version on GitHub? I just merged a PR that fixes #133.

@sonichi
Collaborator

sonichi commented Jul 12, 2021

It's also uploaded to PyPI as v0.5.7.

@stepthom
Collaborator Author

@sonichi I am still getting this error, even with the latest version 0.5.7.

@sonichi
Collaborator

sonichi commented Jul 13, 2021

Could you share a minimal example so that we can reproduce this error?

@stepthom
Collaborator Author

I am out of office for a few days, but I will send a test case next week.

@stepthom
Collaborator Author

@sonichi Here is a minimal example that causes the error; note that if you set ensemble to False, then there is no error.

pip install flaml==0.5.7

import pandas as pd
from flaml import AutoML

X = pd.DataFrame({
    'f1': [1, -2, 3, -4, 5, -6, -7, 8, -9, -10, -11, -12, -13, -14],
    'f2': [3., 16., 10., 12., 3., 14., 11., 12., 5., 14., 20., 16., 15., 11.],
    'f3': ['a', 'b', 'a', 'c', 'c', 'b', 'b', 'b', 'b', 'a', 'b', 'e', 'e', 'a'],
})

y = pd.Series([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])

automl_settings = {
    "time_budget": 60,
    "task": 'classification',
    "n_jobs": 1,
    "estimator_list": ['lgbm', 'xgboost', 'rf', 'extra_tree', 'catboost'],
    "eval_method": "cv",
    "n_splits": 3,
    "metric": "accuracy",
    "log_training_metric": True,
    "verbose": 1,
    "ensemble": True,
}
pipe = AutoML()
pipe.fit(X, y, **automl_settings)

Output:

[flaml.automl: 07-26 15:19:20] {911} INFO - Evaluation method: cv
[flaml.automl: 07-26 15:19:20] {606} INFO - Using StratifiedKFold
[flaml.automl: 07-26 15:19:20] {932} INFO - Minimizing error metric: 1-accuracy
[flaml.automl: 07-26 15:19:20] {952} INFO - List of ML learners in AutoML Run: ['lgbm', 'xgboost', 'rf', 'extra_tree', 'catboost']
[flaml.automl: 07-26 15:19:20] {1018} INFO - iteration 0, current learner lgbm
[flaml.automl: 07-26 15:19:20] {1178} INFO -  at 0.4s,	best lgbm's error=0.5000,	best lgbm's error=0.5000
[flaml.automl: 07-26 15:19:20] {1018} INFO - iteration 1, current learner lgbm
[flaml.automl: 07-26 15:19:20] {1178} INFO -  at 0.6s,	best lgbm's error=0.5000,	best lgbm's error=0.5000
[flaml.automl: 07-26 15:19:20] {1018} INFO - iteration 2, current learner lgbm
[flaml.automl: 07-26 15:19:21] {1178} INFO -  at 0.8s,	best lgbm's error=0.1190,	best lgbm's error=0.1190
[flaml.automl: 07-26 15:19:21] {1018} INFO - iteration 3, current learner xgboost
[flaml.automl: 07-26 15:19:21] {1178} INFO -  at 1.0s,	best xgboost's error=0.0714,	best xgboost's error=0.0714
[flaml.automl: 07-26 15:19:21] {1018} INFO - iteration 4, current learner lgbm
[flaml.automl: 07-26 15:19:21] {1178} INFO -  at 1.2s,	best lgbm's error=0.1190,	best xgboost's error=0.0714
[flaml.automl: 07-26 15:19:21] {1018} INFO - iteration 5, current learner lgbm
[flaml.automl: 07-26 15:19:21] {1178} INFO -  at 1.4s,	best lgbm's error=0.1190,	best xgboost's error=0.0714
[flaml.automl: 07-26 15:19:21] {1018} INFO - iteration 6, current learner lgbm
[flaml.automl: 07-26 15:19:21] {1178} INFO -  at 1.6s,	best lgbm's error=0.1190,	best xgboost's error=0.0714
[flaml.automl: 07-26 15:19:21] {1018} INFO - iteration 7, current learner lgbm
[flaml.automl: 07-26 15:19:22] {1178} INFO -  at 1.8s,	best lgbm's error=0.1190,	best xgboost's error=0.0714
[flaml.automl: 07-26 15:19:22] {1018} INFO - iteration 8, current learner lgbm
[flaml.automl: 07-26 15:19:22] {1178} INFO -  at 2.0s,	best lgbm's error=0.1190,	best xgboost's error=0.0714
[flaml.automl: 07-26 15:19:22] {1018} INFO - iteration 9, current learner xgboost
[flaml.automl: 07-26 15:19:22] {1178} INFO -  at 2.1s,	best xgboost's error=0.0714,	best xgboost's error=0.0714
[flaml.automl: 07-26 15:19:22] {1018} INFO - iteration 10, current learner xgboost
[flaml.automl: 07-26 15:19:22] {1178} INFO -  at 2.4s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:22] {1018} INFO - iteration 11, current learner xgboost
[flaml.automl: 07-26 15:19:22] {1178} INFO -  at 2.6s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:22] {1018} INFO - iteration 12, current learner xgboost
[flaml.automl: 07-26 15:19:23] {1178} INFO -  at 2.8s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:23] {1018} INFO - iteration 13, current learner xgboost
[flaml.automl: 07-26 15:19:23] {1178} INFO -  at 2.9s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:23] {1018} INFO - iteration 14, current learner extra_tree
[flaml.automl: 07-26 15:19:23] {1178} INFO -  at 3.2s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:23] {1018} INFO - iteration 15, current learner xgboost
[flaml.automl: 07-26 15:19:23] {1178} INFO -  at 3.4s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:23] {1018} INFO - iteration 16, current learner extra_tree
[flaml.automl: 07-26 15:19:23] {1178} INFO -  at 3.6s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:23] {1018} INFO - iteration 17, current learner lgbm
[flaml.automl: 07-26 15:19:24] {1178} INFO -  at 3.8s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:24] {1018} INFO - iteration 18, current learner extra_tree
[flaml.automl: 07-26 15:19:24] {1178} INFO -  at 4.0s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:24] {1018} INFO - iteration 19, current learner rf
[flaml.automl: 07-26 15:19:24] {1178} INFO -  at 4.2s,	best rf's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:24] {1018} INFO - iteration 20, current learner rf
[flaml.automl: 07-26 15:19:24] {1178} INFO -  at 4.5s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:24] {1018} INFO - iteration 21, current learner rf
[flaml.automl: 07-26 15:19:25] {1178} INFO -  at 4.7s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:25] {1018} INFO - iteration 22, current learner xgboost
[flaml.automl: 07-26 15:19:25] {1178} INFO -  at 4.9s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:25] {1018} INFO - iteration 23, current learner xgboost
[flaml.automl: 07-26 15:19:25] {1178} INFO -  at 5.1s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:25] {1018} INFO - iteration 24, current learner extra_tree
[flaml.automl: 07-26 15:19:25] {1178} INFO -  at 5.4s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:25] {1018} INFO - iteration 25, current learner extra_tree
[flaml.automl: 07-26 15:19:25] {1178} INFO -  at 5.6s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:25] {1018} INFO - iteration 26, current learner rf
[flaml.automl: 07-26 15:19:26] {1178} INFO -  at 5.9s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:26] {1018} INFO - iteration 27, current learner rf
[flaml.automl: 07-26 15:19:26] {1178} INFO -  at 6.1s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:26] {1018} INFO - iteration 28, current learner extra_tree
[flaml.automl: 07-26 15:19:26] {1178} INFO -  at 6.3s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:26] {1018} INFO - iteration 29, current learner extra_tree
[flaml.automl: 07-26 15:19:26] {1178} INFO -  at 6.5s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:26] {1018} INFO - iteration 30, current learner lgbm
[flaml.automl: 07-26 15:19:27] {1178} INFO -  at 6.7s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:27] {1018} INFO - iteration 31, current learner lgbm
[flaml.automl: 07-26 15:19:27] {1178} INFO -  at 6.8s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:27] {1018} INFO - iteration 32, current learner lgbm
[flaml.automl: 07-26 15:19:27] {1178} INFO -  at 7.0s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:27] {1018} INFO - iteration 33, current learner extra_tree
[flaml.automl: 07-26 15:19:27] {1178} INFO -  at 7.2s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:27] {1018} INFO - iteration 34, current learner lgbm
[flaml.automl: 07-26 15:19:27] {1178} INFO -  at 7.5s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:27] {1018} INFO - iteration 35, current learner rf
[flaml.automl: 07-26 15:19:28] {1178} INFO -  at 7.7s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:28] {1018} INFO - iteration 36, current learner extra_tree
[flaml.automl: 07-26 15:19:28] {1178} INFO -  at 7.9s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:28] {1018} INFO - iteration 37, current learner rf
[flaml.automl: 07-26 15:19:28] {1178} INFO -  at 8.2s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:28] {1018} INFO - iteration 38, current learner rf
[flaml.automl: 07-26 15:19:28] {1178} INFO -  at 8.4s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:28] {1018} INFO - iteration 39, current learner xgboost
[flaml.automl: 07-26 15:19:28] {1178} INFO -  at 8.6s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:28] {1018} INFO - iteration 40, current learner rf
[flaml.automl: 07-26 15:19:29] {1178} INFO -  at 8.8s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:29] {1018} INFO - iteration 41, current learner extra_tree
[flaml.automl: 07-26 15:19:29] {1178} INFO -  at 9.0s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:29] {1018} INFO - iteration 42, current learner lgbm
[flaml.automl: 07-26 15:19:29] {1178} INFO -  at 9.2s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:29] {1018} INFO - iteration 43, current learner rf
[flaml.automl: 07-26 15:19:29] {1178} INFO -  at 9.4s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:29] {1018} INFO - iteration 44, current learner extra_tree
[flaml.automl: 07-26 15:19:30] {1178} INFO -  at 9.7s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:30] {1018} INFO - iteration 45, current learner rf
[flaml.automl: 07-26 15:19:30] {1178} INFO -  at 9.9s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:30] {1018} INFO - iteration 46, current learner xgboost
[flaml.automl: 07-26 15:19:30] {1178} INFO -  at 10.1s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:30] {1018} INFO - iteration 47, current learner extra_tree
[flaml.automl: 07-26 15:19:30] {1178} INFO -  at 10.4s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:30] {1018} INFO - iteration 48, current learner lgbm
[flaml.automl: 07-26 15:19:30] {1178} INFO -  at 10.5s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:30] {1018} INFO - iteration 49, current learner xgboost
[flaml.automl: 07-26 15:19:31] {1178} INFO -  at 10.7s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:31] {1018} INFO - iteration 50, current learner extra_tree
[flaml.automl: 07-26 15:19:31] {1178} INFO -  at 11.0s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:31] {1018} INFO - iteration 51, current learner xgboost
[flaml.automl: 07-26 15:19:31] {1178} INFO -  at 11.2s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:31] {1018} INFO - iteration 52, current learner xgboost
[flaml.automl: 07-26 15:19:31] {1178} INFO -  at 11.3s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:31] {1018} INFO - iteration 53, current learner lgbm
[flaml.automl: 07-26 15:19:31] {1178} INFO -  at 11.5s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:31] {1018} INFO - iteration 54, current learner lgbm
[flaml.automl: 07-26 15:19:32] {1178} INFO -  at 11.7s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:32] {1018} INFO - iteration 55, current learner rf
[flaml.automl: 07-26 15:19:32] {1178} INFO -  at 12.0s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:32] {1018} INFO - iteration 56, current learner lgbm
[flaml.automl: 07-26 15:19:32] {1178} INFO -  at 12.2s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:32] {1018} INFO - iteration 57, current learner xgboost
[flaml.automl: 07-26 15:19:32] {1178} INFO -  at 12.4s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:32] {1018} INFO - iteration 58, current learner xgboost
[flaml.automl: 07-26 15:19:32] {1178} INFO -  at 12.6s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:32] {1018} INFO - iteration 59, current learner lgbm
[flaml.automl: 07-26 15:19:33] {1178} INFO -  at 12.8s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:33] {1018} INFO - iteration 60, current learner rf
[flaml.automl: 07-26 15:19:33] {1178} INFO -  at 13.0s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:33] {1018} INFO - iteration 61, current learner lgbm
[flaml.automl: 07-26 15:19:33] {1178} INFO -  at 13.2s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:33] {1018} INFO - iteration 62, current learner rf
[flaml.automl: 07-26 15:19:33] {1178} INFO -  at 13.5s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:33] {1018} INFO - iteration 63, current learner rf
[flaml.automl: 07-26 15:19:34] {1178} INFO -  at 13.7s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:34] {1018} INFO - iteration 64, current learner lgbm
[flaml.automl: 07-26 15:19:34] {1178} INFO -  at 13.9s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:34] {1018} INFO - iteration 65, current learner xgboost
[flaml.automl: 07-26 15:19:34] {1178} INFO -  at 14.2s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:34] {1018} INFO - iteration 66, current learner rf
[flaml.automl: 07-26 15:19:34] {1178} INFO -  at 14.4s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:34] {1018} INFO - iteration 67, current learner xgboost
[flaml.automl: 07-26 15:19:34] {1178} INFO -  at 14.6s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:34] {1018} INFO - iteration 68, current learner lgbm
[flaml.automl: 07-26 15:19:35] {1178} INFO -  at 14.8s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:35] {1018} INFO - iteration 69, current learner rf
[flaml.automl: 07-26 15:19:35] {1178} INFO -  at 15.1s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:35] {1018} INFO - iteration 70, current learner rf
[flaml.automl: 07-26 15:19:35] {1178} INFO -  at 15.5s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:35] {1018} INFO - iteration 71, current learner rf
[flaml.automl: 07-26 15:19:36] {1178} INFO -  at 15.8s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:36] {1018} INFO - iteration 72, current learner extra_tree
[flaml.automl: 07-26 15:19:36] {1178} INFO -  at 16.1s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:36] {1018} INFO - iteration 73, current learner rf
[flaml.automl: 07-26 15:19:36] {1178} INFO -  at 16.5s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:36] {1018} INFO - iteration 74, current learner extra_tree
[flaml.automl: 07-26 15:19:37] {1178} INFO -  at 16.7s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:37] {1018} INFO - iteration 75, current learner lgbm
[flaml.automl: 07-26 15:19:37] {1178} INFO -  at 16.9s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:37] {1018} INFO - iteration 76, current learner lgbm
[flaml.automl: 07-26 15:19:37] {1178} INFO -  at 17.0s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:37] {1018} INFO - iteration 77, current learner extra_tree
[flaml.automl: 07-26 15:19:37] {1178} INFO -  at 17.4s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:37] {1018} INFO - iteration 78, current learner xgboost
[flaml.automl: 07-26 15:19:37] {1178} INFO -  at 17.5s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:37] {1018} INFO - iteration 79, current learner lgbm
[flaml.automl: 07-26 15:19:38] {1178} INFO -  at 17.7s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:38] {1018} INFO - iteration 80, current learner extra_tree
[flaml.automl: 07-26 15:19:38] {1178} INFO -  at 18.1s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:38] {1018} INFO - iteration 81, current learner xgboost
[flaml.automl: 07-26 15:19:38] {1178} INFO -  at 18.3s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:38] {1018} INFO - iteration 82, current learner extra_tree
[flaml.automl: 07-26 15:19:38] {1178} INFO -  at 18.6s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:38] {1018} INFO - iteration 83, current learner rf
[flaml.automl: 07-26 15:19:39] {1178} INFO -  at 18.9s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:39] {1018} INFO - iteration 84, current learner extra_tree
[flaml.automl: 07-26 15:19:39] {1178} INFO -  at 19.2s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:39] {1018} INFO - iteration 85, current learner rf
[flaml.automl: 07-26 15:19:39] {1178} INFO -  at 19.6s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:39] {1018} INFO - iteration 86, current learner rf
[flaml.automl: 07-26 15:19:40] {1178} INFO -  at 19.9s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:40] {1018} INFO - iteration 87, current learner xgboost
[flaml.automl: 07-26 15:19:40] {1178} INFO -  at 20.1s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:40] {1018} INFO - iteration 88, current learner catboost
[flaml.automl: 07-26 15:19:43] {1178} INFO -  at 22.9s,	best catboost's error=0.0476,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:43] {1018} INFO - iteration 89, current learner xgboost
[flaml.automl: 07-26 15:19:43] {1178} INFO -  at 23.1s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:43] {1018} INFO - iteration 90, current learner catboost
[flaml.automl: 07-26 15:19:45] {1178} INFO -  at 25.2s,	best catboost's error=0.0476,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:45] {1018} INFO - iteration 91, current learner rf
[flaml.automl: 07-26 15:19:45] {1178} INFO -  at 25.6s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:45] {1018} INFO - iteration 92, current learner catboost
[flaml.automl: 07-26 15:19:47] {1178} INFO -  at 27.5s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:47] {1018} INFO - iteration 93, current learner extra_tree
[flaml.automl: 07-26 15:19:48] {1178} INFO -  at 27.8s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:48] {1018} INFO - iteration 94, current learner extra_tree
[flaml.automl: 07-26 15:19:48] {1178} INFO -  at 28.1s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:48] {1018} INFO - iteration 95, current learner lgbm
[flaml.automl: 07-26 15:19:48] {1178} INFO -  at 28.3s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:48] {1018} INFO - iteration 96, current learner catboost
[flaml.automl: 07-26 15:19:50] {1178} INFO -  at 30.1s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:50] {1018} INFO - iteration 97, current learner xgboost
[flaml.automl: 07-26 15:19:50] {1178} INFO -  at 30.3s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:50] {1018} INFO - iteration 98, current learner rf
[flaml.automl: 07-26 15:19:50] {1178} INFO -  at 30.6s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:50] {1018} INFO - iteration 99, current learner xgboost
[flaml.automl: 07-26 15:19:51] {1178} INFO -  at 30.8s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:51] {1018} INFO - iteration 100, current learner xgboost
[flaml.automl: 07-26 15:19:51] {1178} INFO -  at 31.0s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:51] {1018} INFO - iteration 101, current learner catboost
[flaml.automl: 07-26 15:19:53] {1178} INFO -  at 33.0s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:53] {1018} INFO - iteration 102, current learner rf
[flaml.automl: 07-26 15:19:53] {1178} INFO -  at 33.3s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:53] {1018} INFO - iteration 103, current learner lgbm
[flaml.automl: 07-26 15:19:53] {1178} INFO -  at 33.5s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:53] {1018} INFO - iteration 104, current learner extra_tree
[flaml.automl: 07-26 15:19:54] {1178} INFO -  at 33.9s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:54] {1018} INFO - iteration 105, current learner xgboost
[flaml.automl: 07-26 15:19:54] {1178} INFO -  at 34.1s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:54] {1018} INFO - iteration 106, current learner rf
[flaml.automl: 07-26 15:19:54] {1178} INFO -  at 34.4s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:54] {1018} INFO - iteration 107, current learner catboost
[flaml.automl: 07-26 15:19:55] {1178} INFO -  at 35.6s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:55] {1018} INFO - iteration 108, current learner xgboost
[flaml.automl: 07-26 15:19:56] {1178} INFO -  at 35.8s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:56] {1018} INFO - iteration 109, current learner catboost
[flaml.automl: 07-26 15:19:57] {1178} INFO -  at 37.1s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:57] {1018} INFO - iteration 110, current learner rf
[flaml.automl: 07-26 15:19:57] {1178} INFO -  at 37.3s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:57] {1018} INFO - iteration 111, current learner lgbm
[flaml.automl: 07-26 15:19:57] {1178} INFO -  at 37.5s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:57] {1018} INFO - iteration 112, current learner catboost
[flaml.automl: 07-26 15:19:59] {1178} INFO -  at 38.9s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:59] {1018} INFO - iteration 113, current learner extra_tree
[flaml.automl: 07-26 15:19:59] {1178} INFO -  at 39.2s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:19:59] {1018} INFO - iteration 114, current learner catboost
[flaml.automl: 07-26 15:20:01] {1178} INFO -  at 40.7s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:01] {1018} INFO - iteration 115, current learner lgbm
[flaml.automl: 07-26 15:20:01] {1178} INFO -  at 40.9s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:01] {1018} INFO - iteration 116, current learner lgbm
[flaml.automl: 07-26 15:20:01] {1178} INFO -  at 41.1s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:01] {1018} INFO - iteration 117, current learner catboost
[flaml.automl: 07-26 15:20:02] {1178} INFO -  at 42.3s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:02] {1018} INFO - iteration 118, current learner extra_tree
[flaml.automl: 07-26 15:20:02] {1178} INFO -  at 42.6s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:02] {1018} INFO - iteration 119, current learner lgbm
[flaml.automl: 07-26 15:20:03] {1178} INFO -  at 42.8s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:03] {1018} INFO - iteration 120, current learner extra_tree
[flaml.automl: 07-26 15:20:03] {1178} INFO -  at 43.1s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:03] {1018} INFO - iteration 121, current learner extra_tree
[flaml.automl: 07-26 15:20:03] {1178} INFO -  at 43.4s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:03] {1018} INFO - iteration 122, current learner catboost
[flaml.automl: 07-26 15:20:05] {1178} INFO -  at 44.7s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:05] {1018} INFO - iteration 123, current learner extra_tree
[flaml.automl: 07-26 15:20:05] {1178} INFO -  at 44.9s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:05] {1018} INFO - iteration 124, current learner lgbm
[flaml.automl: 07-26 15:20:05] {1178} INFO -  at 45.1s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:05] {1018} INFO - iteration 125, current learner lgbm
[flaml.automl: 07-26 15:20:05] {1178} INFO -  at 45.3s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:05] {1018} INFO - iteration 126, current learner lgbm
[flaml.automl: 07-26 15:20:05] {1178} INFO -  at 45.5s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:05] {1018} INFO - iteration 127, current learner lgbm
[flaml.automl: 07-26 15:20:06] {1178} INFO -  at 45.6s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:06] {1018} INFO - iteration 128, current learner xgboost
[flaml.automl: 07-26 15:20:06] {1178} INFO -  at 45.9s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:06] {1018} INFO - iteration 129, current learner extra_tree
[flaml.automl: 07-26 15:20:06] {1178} INFO -  at 46.2s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:06] {1018} INFO - iteration 130, current learner rf
[flaml.automl: 07-26 15:20:06] {1178} INFO -  at 46.6s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:06] {1018} INFO - iteration 131, current learner rf
[flaml.automl: 07-26 15:20:07] {1178} INFO -  at 46.9s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:07] {1018} INFO - iteration 132, current learner extra_tree
[flaml.automl: 07-26 15:20:07] {1178} INFO -  at 47.2s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:07] {1018} INFO - iteration 133, current learner lgbm
[flaml.automl: 07-26 15:20:07] {1178} INFO -  at 47.4s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:07] {1018} INFO - iteration 134, current learner xgboost
[flaml.automl: 07-26 15:20:07] {1178} INFO -  at 47.6s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:07] {1018} INFO - iteration 135, current learner rf
[flaml.automl: 07-26 15:20:08] {1178} INFO -  at 47.9s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:08] {1018} INFO - iteration 136, current learner catboost
[flaml.automl: 07-26 15:20:09] {1178} INFO -  at 49.4s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:09] {1018} INFO - iteration 137, current learner rf
[flaml.automl: 07-26 15:20:10] {1178} INFO -  at 49.7s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:10] {1018} INFO - iteration 138, current learner lgbm
[flaml.automl: 07-26 15:20:10] {1178} INFO -  at 49.9s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:10] {1018} INFO - iteration 139, current learner extra_tree
[flaml.automl: 07-26 15:20:10] {1178} INFO -  at 50.2s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:10] {1018} INFO - iteration 140, current learner lgbm
[flaml.automl: 07-26 15:20:10] {1178} INFO -  at 50.3s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:10] {1018} INFO - iteration 141, current learner extra_tree
[flaml.automl: 07-26 15:20:10] {1178} INFO -  at 50.6s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:10] {1018} INFO - iteration 142, current learner xgboost
[flaml.automl: 07-26 15:20:11] {1178} INFO -  at 50.8s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:11] {1018} INFO - iteration 143, current learner extra_tree
[flaml.automl: 07-26 15:20:11] {1178} INFO -  at 51.0s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:11] {1018} INFO - iteration 144, current learner extra_tree
[flaml.automl: 07-26 15:20:11] {1178} INFO -  at 51.3s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:11] {1018} INFO - iteration 145, current learner rf
[flaml.automl: 07-26 15:20:11] {1178} INFO -  at 51.6s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:11] {1018} INFO - iteration 146, current learner lgbm
[flaml.automl: 07-26 15:20:12] {1178} INFO -  at 51.8s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:12] {1018} INFO - iteration 147, current learner lgbm
[flaml.automl: 07-26 15:20:12] {1178} INFO -  at 52.0s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:12] {1018} INFO - iteration 148, current learner lgbm
[flaml.automl: 07-26 15:20:12] {1178} INFO -  at 52.1s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:12] {1018} INFO - iteration 149, current learner lgbm
[flaml.automl: 07-26 15:20:12] {1178} INFO -  at 52.3s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:12] {1018} INFO - iteration 150, current learner lgbm
[flaml.automl: 07-26 15:20:12] {1178} INFO -  at 52.5s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:12] {1018} INFO - iteration 151, current learner xgboost
[flaml.automl: 07-26 15:20:13] {1178} INFO -  at 52.7s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:13] {1018} INFO - iteration 152, current learner extra_tree
[flaml.automl: 07-26 15:20:13] {1178} INFO -  at 53.1s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:13] {1018} INFO - iteration 153, current learner rf
[flaml.automl: 07-26 15:20:13] {1178} INFO -  at 53.3s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:13] {1018} INFO - iteration 154, current learner lgbm
[flaml.automl: 07-26 15:20:13] {1178} INFO -  at 53.5s,	best lgbm's error=0.1190,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:13] {1018} INFO - iteration 155, current learner lgbm
[flaml.automl: 07-26 15:20:14] {1178} INFO -  at 53.7s,	best lgbm's error=0.0714,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:14] {1018} INFO - iteration 156, current learner catboost
[flaml.automl: 07-26 15:20:15] {1178} INFO -  at 55.0s,	best catboost's error=0.0238,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:15] {1018} INFO - iteration 157, current learner rf
[flaml.automl: 07-26 15:20:15] {1178} INFO -  at 55.4s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:15] {1018} INFO - iteration 158, current learner lgbm
[flaml.automl: 07-26 15:20:15] {1178} INFO -  at 55.6s,	best lgbm's error=0.0714,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:15] {1018} INFO - iteration 159, current learner xgboost
[flaml.automl: 07-26 15:20:16] {1178} INFO -  at 55.8s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:16] {1018} INFO - iteration 160, current learner extra_tree
[flaml.automl: 07-26 15:20:16] {1178} INFO -  at 56.1s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:16] {1018} INFO - iteration 161, current learner lgbm
[flaml.automl: 07-26 15:20:16] {1178} INFO -  at 56.3s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:16] {1018} INFO - iteration 162, current learner lgbm
[flaml.automl: 07-26 15:20:16] {1178} INFO -  at 56.5s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:16] {1018} INFO - iteration 163, current learner extra_tree
[flaml.automl: 07-26 15:20:17] {1178} INFO -  at 56.8s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:17] {1018} INFO - iteration 164, current learner lgbm
[flaml.automl: 07-26 15:20:17] {1178} INFO -  at 57.0s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:17] {1018} INFO - iteration 165, current learner lgbm
[flaml.automl: 07-26 15:20:17] {1178} INFO -  at 57.2s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:17] {1018} INFO - iteration 166, current learner lgbm
[flaml.automl: 07-26 15:20:17] {1178} INFO -  at 57.4s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:17] {1018} INFO - iteration 167, current learner lgbm
[flaml.automl: 07-26 15:20:17] {1178} INFO -  at 57.6s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:17] {1018} INFO - iteration 168, current learner rf
[flaml.automl: 07-26 15:20:18] {1178} INFO -  at 58.0s,	best rf's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:18] {1018} INFO - iteration 169, current learner lgbm
[flaml.automl: 07-26 15:20:18] {1178} INFO -  at 58.2s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:18] {1018} INFO - iteration 170, current learner lgbm
[flaml.automl: 07-26 15:20:18] {1178} INFO -  at 58.4s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:18] {1018} INFO - iteration 171, current learner lgbm
[flaml.automl: 07-26 15:20:18] {1178} INFO -  at 58.6s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:18] {1018} INFO - iteration 172, current learner lgbm
[flaml.automl: 07-26 15:20:19] {1178} INFO -  at 58.8s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:19] {1018} INFO - iteration 173, current learner xgboost
[flaml.automl: 07-26 15:20:19] {1178} INFO -  at 59.0s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:19] {1018} INFO - iteration 174, current learner lgbm
[flaml.automl: 07-26 15:20:19] {1178} INFO -  at 59.2s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:19] {1018} INFO - iteration 175, current learner lgbm
[flaml.automl: 07-26 15:20:19] {1178} INFO -  at 59.4s,	best lgbm's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:19] {1018} INFO - iteration 176, current learner extra_tree
[flaml.automl: 07-26 15:20:20] {1178} INFO -  at 59.8s,	best extra_tree's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:20] {1018} INFO - iteration 177, current learner xgboost
[flaml.automl: 07-26 15:20:20] {1178} INFO -  at 60.0s,	best xgboost's error=0.0000,	best xgboost's error=0.0000
[flaml.automl: 07-26 15:20:20] {1218} INFO - selected model: XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1.0,
              colsample_bynode=1, colsample_bytree=1.0, gamma=0, gpu_id=-1,
              grow_policy='lossguide', importance_type='gain',
              interaction_constraints='', learning_rate=0.25912534572860507,
              max_delta_step=0, max_depth=0, max_leaves=4,
              min_child_weight=0.2620811530815948, missing=nan,
              monotone_constraints='()', n_estimators=4, n_jobs=1,
              num_parallel_tree=1, random_state=0,
              reg_alpha=0.0013933617380144255, reg_lambda=0.18096917948292954,
              scale_pos_weight=1, subsample=0.9266743941610592,
              tree_method='hist', use_label_encoder=False,
              validate_parameters=1, verbosity=0)
[flaml.automl: 07-26 15:20:20] {1228} INFO - [('lgbm', <flaml.model.LGBMEstimator object at 0x7f9c4009bf28>), ('xgboost', <flaml.model.XGBoostSklearnEstimator object at 0x7f9c40106550>)]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-22-202f46474852> in <module>
     14 }
     15 pipe = AutoML()
---> 16 pipe.fit(X, y, **automl_settings)

~/.local/lib/python3.6/site-packages/flaml/automl.py in fit(self, X_train, y_train, dataframe, label, metric, task, n_jobs, log_file_name, estimator_list, time_budget, max_iter, sample, ensemble, eval_method, log_type, model_history, split_ratio, n_splits, log_training_metric, mem_thres, pred_time_limit, train_time_limit, X_val, y_val, sample_weight_val, groups, verbose, retrain_full, split_type, learner_selector, hpo_method, **fit_kwargs)
    965             self._save_model_history = model_history
    966             self._state.n_jobs = n_jobs
--> 967             self._search()
    968             logger.info("fit succeeded")
    969         if verbose == 0:

~/.local/lib/python3.6/site-packages/flaml/automl.py in _search(self)
   1242                         'sample_weight'] = self._sample_weight_full
   1243                 stacker.fit(self._X_train_all, self._y_train_all,
-> 1244                             **self._state.fit_kwargs)
   1245                 logger.info(f'ensemble: {stacker}')
   1246                 self._trained_estimator = stacker

~/.local/lib/python3.6/site-packages/sklearn/ensemble/_stacking.py in fit(self, X, y, sample_weight)
    437         self._le = LabelEncoder().fit(y)
    438         self.classes_ = self._le.classes_
--> 439         return super().fit(X, self._le.transform(y), sample_weight)
    440 
    441     @if_delegate_has_method(delegate='final_estimator_')

~/.local/lib/python3.6/site-packages/sklearn/ensemble/_stacking.py in fit(self, X, y, sample_weight)
    195         X_meta = self._concatenate_predictions(X, predictions)
    196         _fit_single_estimator(self.final_estimator_, X_meta, y,
--> 197                               sample_weight=sample_weight)
    198 
    199         return self

~/.local/lib/python3.6/site-packages/sklearn/ensemble/_base.py in _fit_single_estimator(estimator, X, y, sample_weight, message_clsname, message)
     37     else:
     38         with _print_elapsed_time(message_clsname, message):
---> 39             estimator.fit(X, y)
     40     return estimator
     41 

~/.local/lib/python3.6/site-packages/flaml/model.py in fit(self, X_train, y_train, budget, **kwargs)
    478         if issparse(X_train):
    479             self.params['tree_method'] = 'auto'
--> 480         return super().fit(X_train, y_train, budget, **kwargs)
    481 
    482 

~/.local/lib/python3.6/site-packages/flaml/model.py in fit(self, X_train, y_train, budget, **kwargs)
    302                 / self._time_per_iter + 1))
    303         if self.params["n_estimators"] > 0:
--> 304             self._fit(X_train, y_train, **kwargs)
    305         self.params["n_estimators"] = n_iter
    306         train_time = time.time() - start_time

~/.local/lib/python3.6/site-packages/flaml/model.py in _fit(self, X_train, y_train, **kwargs)
     83         X_train = self._preprocess(X_train)
     84         model = self.estimator_class(**self.params)
---> 85         model.fit(X_train, y_train, **kwargs)
     86         train_time = time.time() - current_time
     87         self._model = model

~/.local/lib/python3.6/site-packages/xgboost/core.py in inner_f(*args, **kwargs)
    434         for k, arg in zip(sig.parameters, args):
    435             kwargs[k] = arg
--> 436         return f(**kwargs)
    437 
    438     return inner_f

~/.local/lib/python3.6/site-packages/xgboost/sklearn.py in fit(self, X, y, sample_weight, base_margin, eval_set, eval_metric, early_stopping_rounds, verbose, xgb_model, sample_weight_eval_set, base_margin_eval_set, feature_weights, callbacks)
   1171             eval_qid=None,
   1172             create_dmatrix=lambda **kwargs: DMatrix(nthread=self.n_jobs, **kwargs),
-> 1173             label_transform=label_transform,
   1174         )
   1175 

~/.local/lib/python3.6/site-packages/xgboost/sklearn.py in _wrap_evaluation_matrices(missing, X, y, group, qid, sample_weight, base_margin, feature_weights, eval_set, sample_weight_eval_set, base_margin_eval_set, eval_group, eval_qid, create_dmatrix, label_transform)
    242         base_margin=base_margin,
    243         feature_weights=feature_weights,
--> 244         missing=missing,
    245     )
    246 

~/.local/lib/python3.6/site-packages/xgboost/sklearn.py in <lambda>(**kwargs)
   1170             eval_group=None,
   1171             eval_qid=None,
-> 1172             create_dmatrix=lambda **kwargs: DMatrix(nthread=self.n_jobs, **kwargs),
   1173             label_transform=label_transform,
   1174         )

~/.local/lib/python3.6/site-packages/xgboost/core.py in inner_f(*args, **kwargs)
    434         for k, arg in zip(sig.parameters, args):
    435             kwargs[k] = arg
--> 436         return f(**kwargs)
    437 
    438     return inner_f

~/.local/lib/python3.6/site-packages/xgboost/core.py in __init__(self, data, label, weight, base_margin, missing, silent, feature_names, feature_types, nthread, group, qid, label_lower_bound, label_upper_bound, feature_weights, enable_categorical)
    545             feature_names=feature_names,
    546             feature_types=feature_types,
--> 547             enable_categorical=enable_categorical,
    548         )
    549         assert handle is not None

~/.local/lib/python3.6/site-packages/xgboost/data.py in dispatch_data_backend(data, missing, threads, feature_names, feature_types, enable_categorical)
    563     if _is_numpy_array(data):
    564         return _from_numpy_array(data, missing, threads, feature_names,
--> 565                                  feature_types)
    566     if _is_uri(data):
    567         return _from_uri(data, missing, feature_names, feature_types)

~/.local/lib/python3.6/site-packages/xgboost/data.py in _from_numpy_array(data, missing, nthread, feature_names, feature_types)
    159 
    160     """
--> 161     flatten: np.ndarray = _transform_np_array(data)
    162     handle = ctypes.c_void_p()
    163     _check_call(_LIB.XGDMatrixCreateFromMat_omp(

~/.local/lib/python3.6/site-packages/xgboost/data.py in _transform_np_array(data)
    140     # explicitly tell np.array to try and avoid copying)
    141     flatten = np.array(data.reshape(data.size), copy=False,
--> 142                        dtype=np.float32)
    143     flatten = _maybe_np_slice(flatten, np.float32)
    144     _check_complex(data)

ValueError: could not convert string to float: 'a'


@sonichi
Collaborator

sonichi commented Jul 27, 2021

Thanks @stepthom. I'm able to reproduce it. Investigating.

@sonichi
Collaborator

sonichi commented Jul 27, 2021

I found the problem: the _concatenate_predictions function in sklearn's _stacking.py converts the pandas dataframe into a numpy array. When fitting the models with a numpy array, we assumed it was numeric and did not preprocess the data. Working on a fix.
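
A minimal sketch of why that bites (this only illustrates the dtype issue, not the fix): once a mixed-dtype dataframe like the repro above is converted to a numpy array, the strings survive as an object array and the later cast to float fails:

import numpy as np
import pandas as pd

X = pd.DataFrame({
    "f1": [1, -2, 3],
    "f3": ["a", "b", "a"],  # raw string column, as in the repro
})

arr = np.asarray(X)                 # mixed dtypes collapse into an object array
print(arr.dtype)                    # object
np.asarray(arr, dtype=np.float32)   # ValueError: could not convert string to float: 'a'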

sonichi mentioned this issue Jul 27, 2021