DecisionTreeClassifier has random behaviour with splitter="best"? #2386

Closed
ndawe opened this issue Aug 23, 2013 · 6 comments

@ndawe (Member) commented Aug 23, 2013

See below:

from sklearn.datasets import make_classification
from sklearn.grid_search import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import KFold


X, y = make_classification(random_state=0)

est_parameters = {
    "max_depth": range(2, 7)}

def best_est(X, y):
    cv = KFold(y.shape[0], n_folds=2, random_state=0)
    # default tree uses gini and best split
    est = DecisionTreeClassifier()
    grid_search = GridSearchCV(est, est_parameters, cv=cv)
    grid_search.fit(X, y)
    return grid_search.best_score_, grid_search.best_params_

for i in xrange(10):
    print(best_est(X, y))
    # why is the best tree different each time
    # unless I set the random_state in DecisionTreeClassifier() ?

output:

(0.71999999999999997, {'max_depth': 3})
(0.68000000000000005, {'max_depth': 3})
(0.73999999999999999, {'max_depth': 6})
(0.70999999999999996, {'max_depth': 3})
(0.70999999999999996, {'max_depth': 3})
(0.70999999999999996, {'max_depth': 3})
(0.72999999999999998, {'max_depth': 3})
(0.69999999999999996, {'max_depth': 6})
(0.69999999999999996, {'max_depth': 3})
(0.70999999999999996, {'max_depth': 5})

ping @glouppe

@arjoly (Member) commented Aug 23, 2013

During the search for the best split, the features are shuffled at each node, so the order in which they are evaluated can change from one run to the next. If several splits are tied, the chosen split may then differ between runs unless the random state is set.

If I set random_state to 0, I get:

(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
(0.67000000000000004, {'max_depth': 3})
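
For reference, the thread doesn't show the exact change; presumably it is just the estimator line from the snippet above, as in this minimal sketch:

# Presumed change (not shown in the thread): fixing the tree's internal RNG
# makes the per-node feature shuffle, and hence the tie-breaking, reproducible.
est = DecisionTreeClassifier(random_state=0)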
@ndawe (Member, Author) commented Aug 23, 2013

Yeah, I get deterministic behaviour too (and the same output as you) when setting the random state. I am a bit surprised to see any random behaviour at all in the default DecisionTreeClassifier. Good point, though: it could be that there are tied variables. Maybe the variables should not be shuffled? Any thoughts on that?
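
A quick way to see such ties (not from the thread; a hypothetical sketch that duplicates a feature, which guarantees tied splits):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(random_state=0)
# Append an exact copy of column 0: any split on the copy is tied with
# the same split on the original, so only the shuffle order decides.
X_tied = np.hstack([X, X[:, :1]])

trees = set()
for i in range(10):
    est = DecisionTreeClassifier(max_depth=3).fit(X_tied, y)
    trees.add(tuple(est.tree_.feature))  # feature id chosen at each node
print(len(trees))  # typically > 1: identical data, different trees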

@ndawe (Member, Author) commented Aug 23, 2013

Hmm... or maybe it is a nice feature to shuffle the variables at each node and I am just used to a more deterministic implementation elsewhere... If this really is where the random behaviour is manifesting itself, then I think I'm fine with it. I suppose shuffling the features reduces the greediness of the algorithm somewhat.

@pprett (Member) commented Aug 23, 2013

I think Arnaud is right: the order in which variables are searched is random. We could also use a different tie-selection rule; instead of selecting the first variable encountered in case of a tie, we could select the lowest feature id...
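
A hedged sketch of that alternative rule (hypothetical; pick_split and the tolerance are made up for illustration and are not sklearn's implementation):

def pick_split(candidates, tol=1e-7):
    # candidates: list of (feature_id, threshold, impurity_improvement)
    best_gain = max(gain for _, _, gain in candidates)
    # Keep every split within tol of the best improvement, then break
    # the tie by lowest feature id instead of shuffle order.
    tied = [c for c in candidates if best_gain - c[2] <= tol]
    return min(tied, key=lambda c: c[0])

print(pick_split([(7, 0.5, 0.30), (2, 1.2, 0.30), (4, 0.1, 0.25)]))
# (2, 1.2, 0.3): feature 2 wins the tie over feature 7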


@glouppe (Member) commented Aug 23, 2013

Yes, Arnaud is right; this comes from ties between the candidate features to split on (which happens more often than one might think).

@ndawe (Member, Author) commented Aug 23, 2013

OK, thanks for the confirmation! I'm happy with the feature shuffling. Closing this issue.
