Use customized evaluation function but still evaluate on rmse #3598
Comments
If you don't specify the parameter `eval_metric`, a default metric is added. And yes, currently you cannot have a colon inside the name of a custom evaluation function. We need to document this fact.
@Yiyiyimu We'll have to modify the C++ codebase to disable the default evaluation metric. In particular, we need to modify the lines (406 to 408 in commit 7c82dc9) that add a default metric when `eval_metric` is not specified. Is there a need to specifically disable the default metric?
@hcho3 Besides, I didn't find other people running into this problem; is it caused by the newest version? And could you tell me what I can do right now?
This behavior is consistent with previous versions. For now, you can comment out the quoted lines.
This takes some work, though, since the C++ code currently has no way of knowing whether `feval` is specified or not.
@Yiyiyimu A work-around is to add a new option, `disable_default_eval_metric`.
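If such an option were added, usage would look like the sketch below. This is a hypothetical sketch: `disable_default_eval_metric` is the proposed option name, and `my_metric` is a placeholder for a user-supplied custom evaluation function.

```python
# Hypothetical sketch: the proposed flag would be passed alongside normal
# training parameters, so only the custom feval's output gets reported.
param = {
    'max_depth': 3,
    'eta': 0.1,
    'disable_default_eval_metric': 1,  # proposed option; 1 = suppress default metric
}
# bst = xgb.train(param, dtrain, num_round, watchlist, feval=my_metric)
print(param['disable_default_eval_metric'])  # 1
```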
@hcho3 I cloned and installed the newest version, but it seems not to be working. The code and results are listed below.

```python
dtrain = xgb.DMatrix(X_train, label=y_green_train)
dtest = xgb.DMatrix(X_test, label=y_green_test)
param = {'max_depth': 3, 'eta': 0.1, 'silent': 1, 'min_child_weight': 1,
         'disable_default_eval_metric': 1, 'subsample': 0.8,
         'colsample_bytree': 0.6, 'gamma': 0.2, 'alpha': 0, 'lambda': 0.01}
num_round = 5
watchlist = [(dtrain, 'train'), (dtest, 'test')]
bst = xgb.train(param, dtrain, num_round, watchlist, feval=Prec)
```

```
[0] train-rmse:0.454391 test-rmse:0.454253 train-MaxPrec:0.046266 test-MaxPrec:0.0271
[1] train-rmse:0.413867 test-rmse:0.413755 train-MaxPrec:0.046296 test-MaxPrec:0.02681
[2] train-rmse:0.377606 test-rmse:0.377239 train-MaxPrec:0.046296 test-MaxPrec:0.02681
[3] train-rmse:0.344661 test-rmse:0.344605 train-MaxPrec:0.046296 test-MaxPrec:0.02681
[4] train-rmse:0.316434 test-rmse:0.316334 train-MaxPrec:0.046388 test-MaxPrec:0.02681
```

I still can't find what is wrong, and sorry to disturb you again.
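For reference, a custom evaluation function passed as `feval` to `xgb.train` takes `(preds, dtrain)` and returns a `(name, value)` pair; `Prec` above would follow the same shape. A minimal self-contained sketch, with a stand-in for `xgb.DMatrix` so it runs without XGBoost installed:

```python
class FakeDMatrix:
    """Stand-in for xgb.DMatrix, exposing only get_label()."""
    def __init__(self, labels):
        self._labels = labels

    def get_label(self):
        return self._labels

def error_rate(preds, dtrain):
    # feval contract: return (metric_name, metric_value).
    # Note the name must not contain ':' (see the colon issue below).
    labels = dtrain.get_label()
    wrong = sum(1 for p, y in zip(preds, labels) if (p > 0.5) != (y == 1))
    return 'my-error', wrong / len(labels)

dm = FakeDMatrix([1, 0, 1, 0])
print(error_rate([0.9, 0.2, 0.3, 0.8], dm))  # ('my-error', 0.5)
```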
@Yiyiyimu Did you re-compile XGBoost? Run …
@hcho3 Sorry, I'm not sure what … does.
@Yiyiyimu I changed the native code, on which the Python package depends.
@hcho3 Thank you! I followed the documentation to compile xgboost, but it's still the same: it evaluates on rmse. Is there something else I can show you to help locate the problem?
Looks like your Python is picking up an older version of XGBoost. Try setting … to make sure that the latest master version is being used.
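One way to confirm which copy of a package Python resolves is to print its `__file__` (and `__version__` where available), e.g. `python -c "import xgboost; print(xgboost.__file__)"`. The sketch below uses a stdlib module so it runs anywhere; substituting `xgboost` would show whether the freshly built copy or an older install is being picked up:

```python
import importlib

def locate(module_name):
    # Import the module and report where Python loaded it from.
    mod = importlib.import_module(module_name)
    return getattr(mod, '__file__', '<built-in>')

# With 'xgboost' here, the printed path reveals which install is used.
print(locate('json'))
```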
Hi,
First of all, thank you so much for making xgboost available, it is so great!
The problem is that when I use a customized evaluation function in xgboost, a built-in metric is also listed in front of the customized one. I also tried the sample function in custom_objective.py; the result is the same, an extra rmse appears at the front, so I'm not sure what is wrong.
I'm still new to this, so all I could find is that in training.py, the output of msg = bst_eval_set.decode() already contains the extra rmse. Maybe that would be of help.
The code is
But the result is
Besides, I think this is a bug: if there is a ':' in the return of the customized function, like

```python
return 'MaxPrec', (precision_recall_curve(labels, preds, pos_label=1))[0][0]
```

it reports an error. I think the code expects a number behind each ':', but a colon is already added by default, so the extra one is unnecessary. Maybe you should mention this when introducing custom evaluation functions.
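The colon restriction is consistent with how the eval log line is formed: each entry is printed as `name:value`, so a parser that splits on ':' misparses a name that itself contains a colon. A small sketch of that failure mode (this naive parser is an assumption for illustration, not the actual XGBoost code):

```python
def parse_pair(pair):
    # Naive parse assuming exactly one ':' per "name:value" entry.
    name, value = pair.split(':')
    return name, float(value)

# A plain metric name parses fine:
print(parse_pair('train-MaxPrec:0.046266'))  # ('train-MaxPrec', 0.046266)

# A metric name containing ':' yields three fields, and parsing fails:
try:
    parse_pair('train-Max:Prec:0.046266')
except ValueError as e:
    print('parse error:', e)
```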
Thank you for your help!
Working environment:
Windows 7_64
python 3.6.3
conda 4.5.9
xgboost 0.80