
[WIP] Categorical split for decision tree #3346

Closed

Conversation


@MatthieuBizien MatthieuBizien commented Jul 5, 2014

Contrary to many algorithms that can only use dummy variables, decision trees can behave differently for categorical data: the children of a split node partition the categories. We can expect better accuracy in some cases, and it avoids a large number of dummy columns. This is the default behavior of the R randomForest package.

I am currently implementing this in sklearn, using the Cython classes. I propose adding a categorical_features option to the decision tree classes (DecisionTreeClassifier, DecisionTreeRegressor, ExtraTreeClassifier, ExtraTreeRegressor). This option could be, like in other modules of sklearn, None, 'all', a mask, or a list of features.
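
To make the proposed interface concrete, here is a minimal sketch of how it could be called (the categorical_features parameter does not exist yet; its name and accepted values are assumptions taken from this proposal):

```python
from sklearn.tree import DecisionTreeClassifier

# Column 0 holds integer category ids, column 1 is an ordinary numerical feature.
X = [[0, 5.1], [1, 3.4], [2, 2.0], [0, 7.8]]
y = [0, 1, 1, 0]

# Hypothetical usage of the proposed option:
clf = DecisionTreeClassifier(categorical_features=[0])      # list of column indices
# clf = DecisionTreeClassifier(categorical_features='all')  # every column is categorical
# clf = DecisionTreeClassifier(categorical_features=None)   # default: all numerical
clf.fit(X, y)
```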

Each feature could have up to 32 classes, because we will have to test all the combinations, i.e. up to 2**31 cases. This limit allows us to use a binary representation of a split. The same limit exists in R, I think for the same reason.

This is a work in progress and not ready to be merged. I prefer to release it early so I can get feedback.


@coveralls coveralls commented Jul 5, 2014


Coverage decreased (-0.04%) when pulling cc3eb48 on MatthieuBizien:categorical_split into 1775095 on scikit-learn:master.


@gallamine gallamine commented Jul 9, 2014

> Each feature could have up to 32 classes, because we will have to test all the combinations, i.e. up to 2**31 cases.

Can you explain this part a bit more? You're testing all permutations of subsets of the data?

Author

@MatthieuBizien MatthieuBizien commented Jul 10, 2014

Yes, I have to test all the combinations. Because of the symmetry of the problem, without loss of generality we can assume the first class is in the left leaf, so we have to test 2**31 cases in the worst case (i.e. 32 classes), and not 2**32.

2**31 is a lot, but it is still computable, and it is the worst case, when the user provides 32 classes. If the number of classes is smaller, or if the tree has already been split on this feature, the complexity is lower. I assume that for most real-world cases the number of classes will be small.
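
For illustration, a minimal pure-Python sketch of this brute-force search (not the actual Cython splitter), with category 0 fixed in the left leaf so only 2**(n_categories - 1) subsets are scored:

```python
import numpy as np

def best_categorical_split(x_cat, y, n_categories):
    """Score every category subset, encoded as a bitmask in an integer."""
    x_cat, y = np.asarray(x_cat), np.asarray(y)

    def gini(part):
        p = np.bincount(part) / len(part)
        return 1.0 - (p ** 2).sum()

    best_mask, best_impurity = None, np.inf
    for mask in range(2 ** (n_categories - 1)):
        left_set = (mask << 1) | 1           # bit k set => category k goes left; category 0 always left
        goes_left = (left_set >> x_cat) & 1  # per-sample membership test
        left, right = y[goes_left == 1], y[goes_left == 0]
        if len(left) == 0 or len(right) == 0:
            continue                         # not a valid split
        impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if impurity < best_impurity:
            best_mask, best_impurity = left_set, impurity
    return best_mask, best_impurity

# Example: the subset {0, 2} vs {1} separates the classes perfectly (mask 0b101).
print(best_categorical_split([0, 1, 2, 0, 1, 2], [1, 0, 1, 1, 0, 1], 3))  # (5, 0.0)
```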

We can imagine some heuristics if we have many classes (it is "just" discrete optimization), but it is, I think, too soon.


@gallamine gallamine commented Jul 10, 2014

Do you have any thoughts on how you'd handle the case when the user provides more than 32 categories? I'm thinking of my own work, where almost everything has more than 32 categories (e.g. country or postal codes).

Author

@MatthieuBizien MatthieuBizien commented Jul 10, 2014

At the beginning, I think it is easier not to handle that case: raise an exception and ask the user to use dummy variables. Once this pull request is working and merged, it will be possible to start working on heuristics for finding the best split without testing all combinations. I am not a specialist of discrete optimization, but I am sure there are efficient algorithms for that. The underlying structure will also need to be different, because we will no longer be able to store a split in an int32.
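
For reference, the dummy-variable workaround mentioned above is already possible today; a minimal sketch (the column names and data are made up):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({"country": ["FR", "US", "DE", "US"], "age": [31, 42, 27, 55]})
y = [0, 1, 1, 0]

# One indicator column per category; the trees then split on individual dummies.
X = pd.get_dummies(df, columns=["country"])
RandomForestClassifier(n_estimators=10).fit(X, y)
```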

Member

@jnothman jnothman commented Jul 10, 2014

> Contrary to many algorithms that can only use dummy variables, decision trees can behave differently for categorical data.

The expressive power of the tree is identical whether or not these are handled specially. As far as I can tell, the difference introduced by such a feature is that it can drastically affect max_depth criteria, etc.

Member

@glouppe glouppe commented Jul 11, 2014

> The expressive power of the tree is identical whether or not these are handled specially.

Not exactly. By assuming numerical features, we assume that categorical features are ordered, which restricts the sets of candidate splits and therefore the expressive power of the tree (for a finite learning set).

Member

@glouppe glouppe commented Jul 11, 2014

Thanks for your contribution @MatthieuBizien!

A few comments though before you proceed further:

  • The API for this has already been subject to debate. We have never settled on something that pleases everyone. I would like to hear some core developers' opinions on the proposed API. As I understand it, the interface here is similar to what we already have for OneHotEncoder. CC: @ogrisel @larsmans @jnothman @GaelVaroquaux
  • In terms of algorithms:
    i) 2**31 is way too large. In R, they restrict the number of combinations to 2**8. If the number of categories is larger, then 2**8 combinations are sampled at random.
    ii) In binary classification or in regression, there exists an optimal linear algorithm for finding the best split. It basically boils down to replacing the categories by their probability, using these probabilities as a new ordered feature, and applying the usual algorithm for finding the best split (see the sketch after this list). You can find details about this in Section 3.6.3.2 of http://orbi.ulg.ac.be/handle/2268/170309
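
A minimal sketch of the idea in (ii), assuming a binary target (this illustrates the ordering trick only, not the actual splitter code):

```python
import numpy as np

def order_categories_by_target(x_cat, y):
    """Replace each category by the empirical mean of the binary target within it.

    The encoded column can then be treated as an ordinary ordered feature:
    the optimal categorical split is found among the n-1 threshold splits
    on this encoding, instead of enumerating 2**(n-1) subsets.
    """
    x_cat, y = np.asarray(x_cat), np.asarray(y, dtype=float)
    means = {c: y[x_cat == c].mean() for c in np.unique(x_cat)}
    return np.array([means[c] for c in x_cat])

# Categories 0..3 get re-ordered by their empirical P(y=1).
x = [0, 0, 1, 1, 2, 2, 3, 3]
y = [0, 0, 1, 0, 1, 1, 0, 1]
print(order_categories_by_target(x, y))  # [0.  0.  0.5 0.5 1.  1.  0.5 0.5]
```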
Member

@glouppe glouppe commented Jul 11, 2014

In terms of internal interface, this may also be the opportunity to try to factor out code from Splitters. What is your opinion on this @arjoly ?

Author

@MatthieuBizien MatthieuBizien commented Jul 11, 2014

@glouppe You're welcome. Thanks for your advice on the algorithms, I will use that.

Member

@arjoly arjoly commented Jul 11, 2014

> In terms of internal interface, this may also be the opportunity to try to factor out code from Splitters. What is your opinion on this @arjoly ?

Yeah, this would be a great opportunity. This could already be done outside this pull request.

Member

@jnothman jnothman commented Jul 12, 2014

> Not exactly. By assuming numerical features, we assume that categorical features are ordered, which restricts the sets of candidate splits and therefore the expressive power of the tree.

(But assuming infinite depth is allowed, the expressiveness is identical.)

Member

@amueller amueller commented Jul 12, 2014

> (But assuming infinite depth is allowed, the expressiveness is identical.)

Yes. But even then, the resulting decision surface would most likely not be the same.

Member

@mblondel mblondel commented Jul 14, 2014

I'm enthusiastic about this feature. One use case is doing hyper-parameter optimization (as in hyperopt) over categorical hyper-parameters.

@GaelVaroquaux GaelVaroquaux changed the title from "Categorical split for decision tree" to "[WIP] Categorical split for decision tree" Jul 15, 2014
Member

@ogrisel ogrisel commented Aug 13, 2014

Note that pandas 0.15 will have a native data type for encoding categories:

http://pandas-docs.github.io/pandas-docs-travis/whatsnew.html#categoricals-in-series-dataframe

We could make the decision trees able to deal with dataframe features natively. That would make it more natural for the user: no need to pass a feature mask.

However, that would require some refactoring to support lazy, per-column __array__ conversion instead of doing it globally for the whole dataframe in the check_X_y call.
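
For reference, a minimal sketch of the pandas categorical dtype this refers to (pandas >= 0.15; the data is made up):

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "blue", "red", "green"],
                   "size": [1.0, 2.5, 0.3, 4.1]})
df["color"] = df["color"].astype("category")

print(df.dtypes)              # 'color' is now category, 'size' stays float64
print(df["color"].cat.codes)  # integer codes a splitter could consume per column
```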

Member

@ogrisel ogrisel commented Aug 13, 2014

> Yes. But even then, the resulting decision surface would most likely not be the same.

Also, it would make the graphical export of a single decision tree much easier to understand. Many users are interested in the structure of the learned trees when applied to categorical data.

Member

@pprett pprett commented Aug 13, 2014

totally - the same applies to partial dependence plots as well

@larsmans larsmans force-pushed the scikit-learn:master branch from 58a55ad to 4b82379 Aug 25, 2014
@MechCoder MechCoder force-pushed the scikit-learn:master branch from 6deaea0 to 3f49cee Nov 3, 2014

@spitz-dan-l spitz-dan-l commented Apr 28, 2015

Hello,

It seems like there hasn't been development on this PR in a while. Is there any idea of how far it is from completion? I would love to use it.

Contributor

@mjbommar mjbommar commented Apr 28, 2015

^ +1

Member

@amueller amueller commented Apr 28, 2015

I think this is a somewhat significant addition, and it doesn't look like anyone has worked on it recently. I think most sklearn people are excited about it, but no one has had the time to work on it. Help welcome.

Author

@MatthieuBizien MatthieuBizien commented Apr 29, 2015

I don't have time to work on it for the moment. It wasn't so far away from completion, but there have been some major changes in the master code.


@dedan dedan commented May 5, 2015

+1 for this. Especially in combination with the pandas categorical data type.

Member

@amueller amueller commented May 5, 2015

I am 90% certain that the input will not be the pandas categorical data type, at least in the first iteration. I'm sure @GaelVaroquaux has opinions about this ^^

Member

@GaelVaroquaux GaelVaroquaux commented May 7, 2015

Member

@amueller amueller commented May 7, 2015

Some people might argue that a dataframe is a much better common denominator, as mixed datatypes are the norm, and homogeneous datatypes are a special case that only appears in some obscure imaging techniques ;)


@elzurdo elzurdo commented Mar 31, 2016

Hi,
I was wondering if there has been any progress on the issue of telling a decision tree (or ensemble) which features are categorical, so it can split them differently from numerical ones?

Member

@jnothman jnothman commented Mar 31, 2016

#4899 is the latest news


@denson denson commented Aug 5, 2017

I have been working on fairly large (millions of rows) problems with features that have large numbers (thousands) of categories. H2O is working pretty well, but I would greatly prefer to stick with scikit-learn only.

I believe that H2O is using the algorithm described in A Streaming Parallel Decision Tree Algorithm. I am thinking it would probably be easier to add SPDT rather than modify random forest.

Contributor

@jimmywan jimmywan commented Sep 11, 2017

Should this be closed in favor of continuing the discussion in #4899 ?

Member

@jnothman jnothman commented Sep 11, 2017

I suppose so.

@jnothman jnothman closed this Sep 11, 2017

@jcharite-via jcharite-via commented Nov 2, 2017

Just checking in to see if there is any progress on this issue?

@scikit-learn scikit-learn deleted a comment from woodrujm Mar 13, 2019

@fcoclavero fcoclavero commented Aug 20, 2019

Any update on this issue?

Author

@MatthieuBizien MatthieuBizien commented Jun 4, 2020

Hi everyone, I was not able to work on this PR for a long time, and when I went back a lot of things had changed in the scikit-learn codebase. I also have less available time than when I started the PR, so I have to stop here. You can use catboost, wait for #12866 (or try to continue where I left off 😉).
