
AttributeError: 'int' object has no attribute 'calc_ders_range' #1312

Open
fonnesbeck opened this issue May 28, 2020 · 1 comment
Problem:

After running a long cross-validation on a CatBoostRegressor model, CatBoost raises an AttributeError when it tries to fit a model using the optimized hyperparameters:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
_catboost.pyx in _catboost._ObjectiveCalcDersRange()

AttributeError: 'int' object has no attribute 'calc_ders_range'

During handling of the above exception, another exception occurred:

CatBoostError                             Traceback (most recent call last)
<ipython-input-46-bd110940ccc0> in <module>
      2 hyperopt_iterations = 30
      3 
----> 4 model, params = train_best_model(
      5     X, y,
      6     const_params,

<ipython-input-45-34ac36c7d385> in train_best_model(X, y, const_params, max_evals)
     20 
     21     model = CatBoostRegressor(**hyper_params)
---> 22     model.fit(dataset, verbose=False)
     23 
     24     return model, hyper_params

~/anaconda3/envs/draft/lib/python3.8/site-packages/catboost/core.py in fit(self, X, y, cat_features, sample_weight, baseline, use_best_model, eval_set, verbose, logging_level, plot, column_description, verbose_eval, metric_period, silent, early_stopping_rounds, save_snapshot, snapshot_file, snapshot_interval, init_model)
   4657             self._check_is_regressor_loss(params['loss_function'])
   4658 
-> 4659         return self._fit(X, y, cat_features, None, None, sample_weight, None, None, None, None, baseline,
   4660                          use_best_model, eval_set, verbose, logging_level, plot, column_description,
   4661                          verbose_eval, metric_period, silent, early_stopping_rounds,

~/anaconda3/envs/draft/lib/python3.8/site-packages/catboost/core.py in _fit(self, X, y, cat_features, text_features, pairs, sample_weight, group_id, group_weight, subgroup_id, pairs_weight, baseline, use_best_model, eval_set, verbose, logging_level, plot, column_description, verbose_eval, metric_period, silent, early_stopping_rounds, save_snapshot, snapshot_file, snapshot_interval, init_model)
   1736 
   1737         with log_fixup(), plot_wrapper(plot, [_get_train_dir(self.get_params())]):
-> 1738             self._train(
   1739                 train_pool,
   1740                 train_params["eval_sets"],

~/anaconda3/envs/draft/lib/python3.8/site-packages/catboost/core.py in _train(self, train_pool, test_pool, params, allow_clear_pool, init_model)
   1228 
   1229     def _train(self, train_pool, test_pool, params, allow_clear_pool, init_model):
-> 1230         self._object._train(train_pool, test_pool, params, allow_clear_pool, init_model._object if init_model else None)
   1231         self._set_trained_model_attributes()
   1232 

_catboost.pyx in _catboost._CatBoost._train()

_catboost.pyx in _catboost._CatBoost._train()

CatBoostError: catboost/python-package/catboost/helpers.cpp:42: Traceback (most recent call last):
  File "_catboost.pyx", line 1850, in _catboost._ObjectiveCalcDersRange
AttributeError: 'int' object has no attribute 'calc_ders_range'

Has anyone come across this error before?

catboost version: 0.23.2
Operating System: Debian Linux
CPU: AMD

@fonnesbeck (Author)

Some more information: I am using CV to select the best Tweedie variance_power parameter:

'loss_function': hyperopt.hp.choice('loss_function', ['Tweedie:variance_power=1.1',
                                                      'Tweedie:variance_power=1.3',
                                                      'Tweedie:variance_power=1.5',
                                                      'Tweedie:variance_power=1.7'])

which works fine until the best hyperparameters are extracted. The resulting dictionary looks like this:

{'loss_function': 1, 'task_type': 'CPU', 'custom_metric': ['RMSE', 'R2'], 'eval_metric': 'RMSE', 'random_seed': 42}

So an integer (the index of the chosen option in the hp.choice list) is returned as the best loss function rather than the corresponding string, and CatBoost cannot interpret that integer as a loss function.
