
Support for VotingClassifier #209

Closed
joaquinvanschoren opened this issue Mar 13, 2017 · 2 comments


The Python API doesn't seem to properly support VotingClassifier. This is likely because it is a classifier that wraps a list of other classifiers, each with its own user-chosen name, and those names are not correctly translated to a flow. Here's an example:

from openml import tasks, runs
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, VotingClassifier)

task = tasks.get_task(145677)

rf = RandomForestClassifier(n_estimators=10, max_features=0.1, criterion="entropy", n_jobs=-1)
et = ExtraTreesClassifier(n_estimators=10, random_state=444, n_jobs=-1)
gb = GradientBoostingClassifier(learning_rate=0.05, subsample=0.5, max_depth=6, n_estimators=10, random_state=555)

# estimators2 reuses the VotingClassifier's own hyperparameter names as
# sub-classifier names (see the note below)
estimators1 = [('rf', rf), ('et', et), ('gb', gb)]
estimators2 = [('estimators', rf), ('voting', et), ('weights', gb)]

clf = VotingClassifier(estimators=estimators1, voting='soft', weights=[1, 1, 1], n_jobs=-1)
run = runs.run_task(task, clf)
p = run.publish()

This will throw the error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-04d5e964224e> in <module>()
     13 clf = VotingClassifier(estimators=estimators1,voting='soft',weights=[1,1,1],n_jobs=-1)
     14 run = runs.run_task(task, clf)
---> 15 p = run.publish()

/Users/joa/anaconda/lib/python3.5/site-packages/openml/runs/run.py in publish(self)
    139 
    140         predictions = arff.dumps(self._generate_arff_dict())
--> 141         description_xml = self._create_description_xml()
    142 
    143         file_elements = {'predictions': ("predictions.arff", predictions),

/Users/joa/anaconda/lib/python3.5/site-packages/openml/runs/run.py in _create_description_xml(self)
    165         downloaded_flow = openml.flows.get_flow(self.flow_id)
    166 
--> 167         openml_param_settings = _parse_parameters(self.model, downloaded_flow)
    168 
    169         # as a tag, it must be of the form ([a-zA-Z0-9_\-\.])+

/Users/joa/anaconda/lib/python3.5/site-packages/openml/runs/run.py in _parse_parameters(model, flow)
    226                 pass
    227             else:
--> 228                 raise ValueError("parameter %s not in flow description of flow %s" %(param,flow.name))
    229 
    230     return openml_param_settings

ValueError: parameter et not in flow description of flow sklearn.ensemble.voting_classifier.VotingClassifier(rf=sklearn.ensemble.forest.RandomForestClassifier,et=sklearn.ensemble.forest.ExtraTreesClassifier,gb=sklearn.ensemble.gradient_boosting.GradientBoostingClassifier)

Note: you can 'trick' the API by naming the sub-classifiers after the hyperparameters of the VotingClassifier (i.e. replace estimators1 with estimators2 in the code above), but this likely won't store the flow correctly.
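The ambiguity is visible directly in scikit-learn: composition estimators such as VotingClassifier expose the user-chosen sub-estimator names through get_params() right next to their real hyperparameters, so a serializer that walks get_params() cannot tell the two apart by key alone. A minimal sketch (assuming a current scikit-learn; the exact parameter set may differ across versions):

```python
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, VotingClassifier

clf = VotingClassifier(
    estimators=[('rf', RandomForestClassifier()), ('et', ExtraTreesClassifier())],
    voting='soft',
)

params = clf.get_params(deep=True)

# the real hyperparameters...
print('estimators' in params, 'voting' in params)  # -> True True
# ...sit next to the user-chosen sub-estimator names
print('rf' in params, 'et' in params)              # -> True True
```

Naming a sub-estimator 'estimators' therefore makes its key collide with a genuine hyperparameter, which is exactly what the 'trick' above exploits.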

@janvanrijn janvanrijn added this to the pip-release milestone Mar 16, 2017
@janvanrijn janvanrijn self-assigned this Mar 16, 2017
janvanrijn (Member) commented:

I just authored a fix for this problem, and Voting Classifiers are now supported.

However, this raises two new questions:

  • You can indeed trick the API, and this worries me quite a bit. I will open a separate issue and try to push a fix for it.
  • Apparently, quite a few estimator objects allow users to define self-named 'parameters' (for lack of a better word). Due to the way the Python API is set up, we need to identify all of these individually. Currently we handle Pipeline (steps), FeatureUnion, and VotingClassifier, but there are probably more.
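The three composites named in the second bullet can be probed mechanically: each exposes its user-chosen component names as top-level keys of get_params(deep=True). A quick check (assuming a current scikit-learn; any estimator built on the same composition pattern should behave alike):

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

composites = {
    'Pipeline': Pipeline(steps=[('scale', StandardScaler()),
                                ('tree', DecisionTreeClassifier())]),
    'FeatureUnion': FeatureUnion(transformer_list=[('pca', PCA())]),
    'VotingClassifier': VotingClassifier(
        estimators=[('tree', DecisionTreeClassifier())]),
}

for label, est in composites.items():
    # user-chosen names appear among the top-level parameter keys
    named = [k for k in est.get_params(deep=True) if '__' not in k]
    print(label, sorted(named))
```

Any other estimator whose get_params() mixes user-chosen names into the key set would need the same special-casing in the serializer.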

janvanrijn (Member) commented Mar 20, 2017:

So I added an additional check, in branch issue209, that should ensure the serializer can no longer be tricked. I will ask Matthias to review my PR for this.
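A check of this kind can be sketched as a standalone guard: reject any user-chosen sub-estimator name that collides with the composite estimator's own constructor parameters. The function name and error message below are illustrative, not the actual openml-python implementation:

```python
def check_subestimator_names(names, reserved_params):
    """Raise if a user-chosen sub-estimator name shadows a real
    hyperparameter of the composite estimator (hypothetical helper)."""
    clash = sorted(set(names) & set(reserved_params))
    if clash:
        raise ValueError(
            "sub-estimator name(s) %s collide with hyperparameter names "
            "of the composite estimator" % ', '.join(clash))

# VotingClassifier's own constructor parameters (as of this issue)
reserved = {'estimators', 'voting', 'weights', 'n_jobs'}

check_subestimator_names(['rf', 'et', 'gb'], reserved)           # fine
# check_subestimator_names(['estimators', 'voting'], reserved)   # would raise
```

With such a guard in the serializer, the estimators2 variant from the original report would be rejected up front instead of producing a corrupt flow.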

@amueller The following is inconsistent scikit-learn behaviour, no?
This code raises an exception (IMO for a good reason):

steps = [
    ('Imputer', sklearn.preprocessing.Imputer(strategy='median')),
    ('OneHotEncoder', sklearn.preprocessing.OneHotEncoder(sparse=False, handle_unknown='ignore')),
    ('steps', sklearn.ensemble.BaggingClassifier(base_estimator=sklearn.tree.DecisionTreeClassifier)),
]

Whereas this is apparently allowed:

sklearn.ensemble.VotingClassifier(estimators=[
    ('estimators', sklearn.ensemble.RandomForestClassifier()),
    ('whatevs', sklearn.ensemble.ExtraTreesClassifier()),
])

What is the difference?
