Support for mlr::WrappedModel
meta-model type
#46
You're training XGBoost models using the `mlr::WrappedModel` meta-model type. The list of supported model classes is given in the README file of the JPMML-R library. As you can see, the `mlr::WrappedModel` class is not among them. Maybe it's possible to extract the underlying native model object from the wrapper and convert that instead.
Thank you for clearing that up!
Reopening - I have the
I found a very convincing solution to the problem!
The disadvantage of unwrapping with `getLearnerModel(model, more.unwrap = TRUE)` is that if `predict.type` is set to probability, we won't get the good:bad probability percentages; we get only either 0 or 1. Is there any way to convert the mlr wrapped models to PMML?
@apremgeorge You can keep your original
Thanks for the reply. So I use `getLearnerModel`, and this works, but the resulting `randomForest` object gives only the response, not the truth, prob.0, prob.1, and response columns given by `rf_mod`. Thanks for any help.
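For reference, a small sketch of the unwrapping approach discussed above (it assumes the mlr and randomForest packages are installed; the dataset and learner choice are illustrative, not taken from this thread):

```r
library(mlr)
library(randomForest)

# Train a probability-predicting random forest through mlr.
task <- makeClassifTask(data = iris, target = "Species")
lrn  <- makeLearner("classif.randomForest", predict.type = "prob")
mod  <- train(lrn, task)

# Unwrap the WrappedModel down to the plain randomForest object.
rf <- getLearnerModel(mod, more.unwrap = TRUE)

# The unwrapped object can still produce class probabilities directly,
# by passing type = "prob" to randomForest's own predict method.
probs <- predict(rf, newdata = iris[1:5, ], type = "prob")
```

The point is that the probability columns are not lost by unwrapping; they just have to be requested through the native model's own `predict` method rather than through mlr's prediction object.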
@apremgeorge This issue tracker is about the
The issue is `no applicable method for object of class "c('FilterModel', 'BaseWrapperModel', 'WrappedModel')"` in the mlr wrapper.
Thank you
This would indeed be nice to have. I would expect it to be straightforward in most cases, just extracting the mlr model's underlying learner model. I wanted to point out one place where additional work would be needed. In mlr v2.16 or mlr3 with an xgboost binary classifier, in order to properly generate metrics for early stopping, the labels get switched before fitting the xgboost model: mlr-org/mlr#2644
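A minimal base-R sketch of that label inversion (assuming, per mlr-org/mlr#2644, that the wrapper flips the usual first-level-to-0 encoding before fitting; the variable names here are illustrative, not mlr's internals):

```r
# Binary target as an R factor; the conventional xgboost encoding maps
# the first factor level to 0 and the second to 1.
y <- factor(c("no", "yes", "yes", "no"), levels = c("no", "yes"))
usual <- as.integer(y) - 1L        # no -> 0, yes -> 1

# Hypothetical illustration of the switched labels described in
# mlr-org/mlr#2644: the 0/1 encoding is flipped before fitting.
switched <- 1L - usual             # no -> 1, yes -> 0
```

A PMML converter that only sees the fitted booster would therefore decode the class labels in the wrong order unless it compensates for the flip, which is what an option like `invert_levels = TRUE` exists for.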
I've been wondering how R people train XGBoost models (Python people have excellent Scikit-Learn wrapper classes). Seems like the mlr package fills that role.
This issue was fixed in jpmml/jpmml-r@496248c There's an updated R2PMML package version 0.24.1 available on GitHub. @bmreiniger The MLR+XGBoost example that you shared in #46 (comment) works now; pay attention to the following:

```r
mlr_model <- train(xgb_learner, task = task)
xgb_fmap <- r2pmml::genFMap(iris[, names(iris) != 'Species'])
r2pmml(mlr_model, "iris.pmml", fmap = xgb_fmap)
r2pmml(mlr_model, "iris-inverted.pmml", invert_levels = TRUE, fmap = xgb_fmap)
```

@bmreiniger The above example is about a dataset that contains only continuous features. How do you approach a mix of continuous plus categorical features in the MLR package? I'd like to expand the MLR+XGBoost integration, but it would be easier if there were some pointers about how it's normally done.
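On the continuous-plus-categorical question, a base-R sketch of the usual preprocessing: xgboost itself only accepts numeric matrices, so factor columns are typically one-hot encoded first. The toy data frame below is illustrative, not from this thread.

```r
df <- data.frame(
  len = c(1.2, 3.4, 5.6),
  col = factor(c("red", "green", "red"))
)

# model.matrix expands factors into dummy columns; "~ . - 1" drops the
# intercept so the factor gets a full set of indicator columns.
X <- model.matrix(~ . - 1, data = df)
```

The resulting numeric matrix can then be handed to xgboost; keeping track of which dummy column came from which factor level is exactly the job of a feature map such as the `fmap` argument above.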
Hi @vruusmann, I'm using the mlr package for fitting an xgboost model as described by i3Jesterhead above, but when I tried to apply the solution posted here, I realized that the function `genFMap` is no longer available in the package. Is there any equivalent solution that works without this function? Thanks very much for your help!
@LSym2 The `genFMap` function is no longer part of the package. More code examples here (not related to mlr, though):
Hi @vruusmann, OK, thanks very much, it works now!
Hi,
I am trying to convert an xgboost classification model from the MLR library in R to a PMML file.
When trying to convert the trained model, I get the following error message.
Can you make any sense of the error message?
The Java version should not be the problem, btw.
Thanks in advance!