Support for feature selection #15

Closed · geoHeil opened this issue Nov 6, 2016 · 5 comments


geoHeil commented Nov 6, 2016

Is it possible to directly use a regular sklearn pipeline with sklearn2pmml?

How can I specify the label encoding and the split into target / X for PMML?
How can I perform the selection of continuous / factor fields? Would I need to hard-code them?

# User-defined components are assumed here: Preprocessor, Enricher,
# ColumnExtractor, transformToXy and labelEncodeCategoricalData come from
# my own application code.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import OneHotEncoder
from xgboost import XGBClassifier
from sklearn2pmml import sklearn2pmml

# Preprocessing takes y and X as a single concatenated DataFrame,
# because some of the filters affect both.
prep_pipe = Pipeline([
    ('clean', Preprocessor()),
    ('enrich', Enricher()),
])

prep_pipe.fit(bigDf)
bigDf = prep_pipe.transform(bigDf)
X, y = transformToXy(bigDf)

CONTINUOUS_FIELDS = X.select_dtypes(include=['number']).columns
FACTOR_FIELDS = X.select_dtypes(include=['category']).columns
X = labelEncodeCategoricalData(X)

prediction_pipe = Pipeline([
    ('features', FeatureUnion([
        ('continuous', Pipeline([
            ('extract', ColumnExtractor(CONTINUOUS_FIELDS)),
        ])),
        ('factors', Pipeline([
            ('extract', ColumnExtractor(FACTOR_FIELDS)),
            ('onehot', OneHotEncoder()),
        ]))
    ], n_jobs=1)),
    ('clf', XGBClassifier())
])

prediction_pipe.fit(X, y)

sklearn2pmml(prediction_pipe, prep_pipe, "xgbPipeline.pmml", with_repr = True)

vruusmann (Member) commented Nov 6, 2016

Is it possible to directly use a regular sklearn pipeline with sklearn2pmml?

Not supported at the moment. See jpmml/jpmml-sklearn#3

Very likely, it will never be supported in the context of "feature preparation and pre-processing". You should adopt the DataFrameMapper approach for that instead.

How can I specify the label encoding for PMML?

JPMML-SkLearn supports LabelBinarizer and LabelEncoder transforms:

my_mapper = DataFrameMapper([
  ([<list of my categorical column names>], LabelBinarizer()),
  ('MyTargetFieldName', None)
])

How can I specify the split into target / X for PMML?

By JPMML-SkLearn conventions, if you are converting a supervised estimator object (e.g. some classification or regression model), then DataFrameMapper rows [0, n_rows - 2] are considered to be active fields, and the last row [n_rows - 1] is considered to be the target field. Also, the target field must be mapped to the None transform.

If you are converting an unsupervised estimator object (e.g. some clustering model), then all DataFrameMapper rows [0, n_rows - 1] are considered to be active fields.
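
A minimal sketch of these two conventions (the Color, Height and MyTargetFieldName columns are hypothetical, not from this thread):

from sklearn.preprocessing import LabelBinarizer
from sklearn_pandas import DataFrameMapper

# Supervised case: rows 0..(n_rows - 2) are active fields;
# the last row is the target field, and must map to None.
supervised_mapper = DataFrameMapper([
  ('Color', LabelBinarizer()),   # active field (categorical, binarized)
  (['Height'], None),            # active field (continuous, passed through)
  ('MyTargetFieldName', None)    # target field
])

# Unsupervised case: every row is an active field.
unsupervised_mapper = DataFrameMapper([
  ('Color', LabelBinarizer()),
  (['Height'], None)
])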

How can I perform the selection of continuous / factor fields? Would I need to hard-code them?

Currently there is no feature selection support. So, you would need to perform feature selection "manually" in your SkLearn code, and then construct an appropriate DataFrameMapper object based on those selection results.
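
For illustration, a hedged sketch of that manual workflow (SelectKBest, f_classif and the 'MyTargetFieldName' column are illustrative choices, not prescribed by JPMML-SkLearn; X is assumed to be a pandas DataFrame):

from sklearn.feature_selection import SelectKBest, f_classif
from sklearn_pandas import DataFrameMapper

# Score the columns of X against y, and keep the names of the survivors.
selector = SelectKBest(f_classif, k = 5).fit(X, y)
selected_columns = X.columns[selector.get_support()]

# Construct the mapper from the selection results; the target field goes last.
mapper = DataFrameMapper(
  [([column], None) for column in selected_columns] +
  [('MyTargetFieldName', None)]
)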

However, built-in support for feature selection seems like a desirable functionality, so perhaps the sklearn2pmml() function should take an additional parameter that would allow you to specify some sort of "filtered connection" between the "left-side" DataFrameMapper object and the "right-side" estimator object. Something like this:

def sklearn2pmml(estimator, mapper, pmml, mapper_to_estimator_connection = None):
    pass

The responsibility of the mapper_to_estimator_connection is to deal with the situation where the number of DataFrameMapper output columns differs from (and is typically greater than) the number of estimator input columns. It would operate on the transformed feature space. For example, it would be able to successfully deal with "column expansions" as performed by the OneHotEncoder transformer.
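
As a quick illustration of such a column expansion (hypothetical data; the integer-input OneHotEncoder API of that era is assumed):

import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([[0], [1], [2], [1]])                   # one label-encoded column
Xt = OneHotEncoder(sparse = False).fit_transform(X)
print(Xt.shape)                                      # (4, 3): one column became three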

In the beginning, it could be a single SkLearn feature selection transformer. Later on, if the concept is successfully validated, it could be a list of transformers or even a pipeline of transformers.

vruusmann (Member) commented Nov 6, 2016

What kind of feature selector classes do you need?

Specifically, the ColumnExtractor class seems to be provided by some third-party Python package:
http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html
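
For reference, a minimal sketch of what such a ColumnExtractor might look like (assuming pandas DataFrame input, in the spirit of the linked blog post):

from sklearn.base import BaseEstimator, TransformerMixin

class ColumnExtractor(BaseEstimator, TransformerMixin):
  """Selects the named columns from a pandas DataFrame."""

  def __init__(self, columns):
    self.columns = columns

  def fit(self, X, y = None):
    return self  # stateless: nothing to learn during fitting

  def transform(self, X):
    return X[self.columns]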

geoHeil (Author) commented Nov 6, 2016

Thank you very much for the quick response.
Indeed, a ColumnExtractor is good enough to feed only certain columns into the one-hot encoding.

geoHeil closed this as completed Nov 6, 2016

vruusmann (Member) commented Nov 6, 2016

I'm reopening this issue, because there needs to be support for feature selection in PMML conversion workflows.

If the feature selection happens in the transformed feature space (e.g. after categorical columns have been expanded using the OneHotEncoder transformer), then it's virtually impossible to backtrack it in Scikit-Learn application code. However, it's pretty straightforward in JPMML-SkLearn.
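
To make the "transformed feature space" point concrete, a hypothetical example (again assuming the integer-input OneHotEncoder API of that era):

import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import OneHotEncoder

X = np.array([[0], [1], [2], [1]])
y = np.array([0, 1, 1, 0])

Xt = OneHotEncoder(sparse = False).fit_transform(X)          # 1 column -> 3 columns
support = SelectKBest(chi2, k = 2).fit(Xt, y).get_support()
print(support)  # a mask over the *expanded* columns, not over the original field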

vruusmann reopened this Nov 6, 2016
vruusmann changed the title from "[question] how to use regular sklearn pipeline?" to "Support for feature selection" Nov 6, 2016
vruusmann added a commit to jpmml/jpmml-sklearn that referenced this issue Dec 20, 2016

vruusmann (Member) commented

JPMML-SkLearn version 1.2.0 added limited support for the sklearn.pipeline.Pipeline estimator type.

By definition, the pipeline contains a list of zero or more transformation steps, followed by a final estimator step. You cannot do arbitrary feature transformation work there (that should be encapsulated in the DataFrameMapper object), but you can do feature selection work. For example, selecting a subset of "best" features:

from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
from sklearn2pmml import sklearn2pmml

pipeline = Pipeline([
  ("selector", SelectKBest(k = 5)),  # keep the five best-scoring features
  ("estimator", ...)                 # the final estimator, e.g. a classifier
])

sklearn2pmml(pipeline, mapper, "Workflow.pmml")
