
logit output of RandomForest #33

Closed
Howard-ll opened this issue Jul 5, 2021 · 4 comments

Comments

@Howard-ll

Unlike GBT, it seems that RandomForest does not have a logit output.

Logits are available for GBT in v0.1.7, but the signature is different from sklearn's. Training a Gradient Boosted Trees model that returns logits (assuming the dataset is a binary classification):

model = tfdf.keras.GradientBoostedTreesModel(apply_link_function=False)

@achoum
Collaborator

achoum commented Jul 6, 2021

Unlike the Gradient Boosted Trees learning algorithm, the Random Forest algorithm works with a "voting mechanism". In the case of classification, each tree casts a vote for one class (or for multiple classes, depending on the hyper-parameters). Therefore, the algorithm does not rely on any link function / logits. This is why the apply_link_function argument does not exist for the Random Forest model.
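The voting mechanism described above can be sketched in plain Python (this is an illustration, not TF-DF code; the function name is hypothetical):

```python
# Sketch of a Random Forest's voting mechanism: each tree votes for one
# class, and the forest's "probability" for a class is simply the fraction
# of trees that voted for it -- no link function is ever applied.
from collections import Counter

def forest_predict_proba(tree_votes, classes):
    """tree_votes: one predicted class per tree; returns class -> vote fraction."""
    counts = Counter(tree_votes)
    total = len(tree_votes)
    return {c: counts.get(c, 0) / total for c in classes}

# With 4 trees, every probability is a multiple of 1/4.
proba = forest_predict_proba(["a", "a", "b", "a"], classes=["a", "b"])
# proba == {"a": 0.75, "b": 0.25}
```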

I have rarely seen logits used with Random Forests. Out of curiosity, do you mind detailing your setup? :)

If a logit is what you need (i.e. the inverse of the logistic function), you can always compute it from the probabilities (be careful with numerical precision and proba=0 case).
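A minimal sketch of that computation, with clipping to handle the proba=0 case mentioned above (the helper name and epsilon are illustrative choices, not part of TF-DF):

```python
import math

def logit(p, eps=1e-15):
    """Inverse of the logistic function, clipping p to avoid log(0)."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

logit(0.5)  # 0.0
logit(0.0)  # large negative but finite, thanks to the clipping
```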

@Howard-ll
Author

Howard-ll commented Jul 6, 2021

Thanks for the answer. This request is about the number of unique output values.

I am trying to replace a library with tfdf. As you can see in the screen capture below, predict, predict_proba, and predict_log_proba give me different output values; I am talking about the number of unique output values.
For my project, I need predict_proba or predict_log_proba. I understand that tfdf's predict is similar to predict_proba, which is good. However, if I could get a larger number of unique output values, that would be really great. As you can see in pictures 2 & 3 below, predict_proba of sklearn produces a larger number of unique output values, while tfdf produces mostly 0s. If this feature can be supported, that would be great, because I do need it for my tasks.

In terms of the number of unique output values, computing logits from the probabilities may be of no use, because the number of unique values will stay the same after all.

[Image 1: screen capture]

[Image 2: Library S output range]

[Image 3: tfdf output range]

@Howard-ll
Author

I need to try more data-sets, but Random Forest produces a good enough number of unique output values on my current test data-sets. Thanks!

@achoum
Collaborator

achoum commented Jul 9, 2021

Some quick remarks.

  • The number of unique values would be the same for the logits and for the probabilities.
  • Some hyper-parameters and the diversity of the dataset examples impact the number of unique prediction values. For example, increasing the number and the depth of the trees (I see that you train the RF trees to a max depth of only 4) will help.
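The second remark can be illustrated with a small sketch (pure Python, not TF-DF code): since a voting Random Forest outputs vote fractions, every probability is a multiple of 1/num_trees, so the forest can emit at most num_trees + 1 distinct values; adding trees directly increases the granularity.

```python
# A voting Random Forest with num_trees trees can only output probabilities
# of the form votes / num_trees, i.e. at most num_trees + 1 distinct values.
def possible_probabilities(num_trees):
    return {votes / num_trees for votes in range(num_trees + 1)}

len(possible_probabilities(10))   # 11 distinct values
len(possible_probabilities(300))  # 301 -- more trees, finer granularity
```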

2 participants