
variable importance option #52

Closed
Howard-ll opened this issue Aug 16, 2021 · 3 comments
Labels
enhancement New feature or request

Comments

@Howard-ll

Hello!

First of all, I highly appreciate your efforts on TFDF.
I found that there are multiple options for variable importance, such as NUM_AS_ROOT:

variable_importance = model.make_inspector().variable_importances()['NUM_AS_ROOT']

  1. Could you let me know which option I should use to get an importance list similar to sklearn's?
  2. Where can I find detailed descriptions of those options? (How to use them, what they mean)

Thank you!

@tsachiblauamat

I ran the feature importance and compared the results to sklearn's output. Not only are the results different, but the results I'm getting from this implementation don't make sense (given what I know about my data).
For example, a feature that is constant is one of the most significant features (it got the highest value).

Maybe I don't know how to read the output properly?
("data:0.33" (1; #27), 235)
Does this mean that feature number 27 got a score of 235?

Tsachi

@janpfeifer
Contributor

Thanks @Howard-ll, we are happy to hear the tools are useful!

There are various definitions of "feature importance" -- they are all metrics about the model/dataset, but there is no absolute "truth" or single best one.

That said, we should have a clear documentation page listing all the feature importances we support -- with pointers to the papers that define some of them. Marking this as an "enhancement" for us to work on.

@janpfeifer janpfeifer added the enhancement New feature or request label Aug 18, 2021
@achoum
Collaborator

achoum commented Sep 23, 2021

The list of variable importances and their definitions is given here in the Yggdrasil user manual.

model.make_inspector().variable_importances() returns a list of tuples ([py_tree.dataspec.SimpleColumnSpec](https://www.tensorflow.org/decision_forests/api_docs/python/tfdf/inspector/SimpleColumnSpec), float) (see the doc).

"data:0.33" (1; #27) is the representation of a SimpleColumnSpec object with the format `"{feature name}" (type idx, #column_idx)" (see here). The last displayed value is the variable importance value (235 in this case).

Note that different variable importances have different semantics. Unless specified otherwise, the greater the value, the more important the feature. NUM_AS_ROOT is an example of an exception (see its definition).
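To make the tuple format concrete, here is a hedged sketch that uses a plain namedtuple as a stand-in for SimpleColumnSpec (the real class lives in tfdf.inspector; the names, types, and values below simply reproduce the example discussed in this thread and are illustrative):

```python
from collections import namedtuple

# Hypothetical stand-in for tfdf.inspector.SimpleColumnSpec.
SimpleColumnSpec = namedtuple("SimpleColumnSpec", ["name", "type", "col_idx"])

# variable_importances() returns a dict mapping a metric name to a list of
# (SimpleColumnSpec, value) tuples, e.g. (values are illustrative):
importances = {
    "NUM_AS_ROOT": [
        (SimpleColumnSpec(name="data:0.33", type=1, col_idx=27), 235.0),
        (SimpleColumnSpec(name="data:0.12", type=1, col_idx=12), 101.0),
    ]
}

# '("data:0.33" (1; #27), 235)' therefore reads as: the feature named
# "data:0.33" (semantic type 1, column index 27) has importance 235.
top_feature, top_value = importances["NUM_AS_ROOT"][0]
print(top_feature.name, top_feature.col_idx, top_value)
```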

Could you let me know which option I should use to get similar importance list as sklearn?

sklearn's "mean decrease in impurity" is likely close to the SUM_SCORE variable importance.
Similarly, sklearn's permutation-based importance is likely close to MEAN_DECREASE_IN_ACCURACY.
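For reference, here is a minimal sklearn sketch of the "mean decrease in impurity" side of that comparison, on a toy dataset (assuming sklearn is installed; the dataset and hyperparameters are illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy dataset: 200 samples, 5 features.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Impurity-based importances (one score per feature, normalized to sum to 1).
print(clf.feature_importances_)
```

Note that sklearn normalizes these scores so they sum to 1, whereas the TF-DF/Yggdrasil values are unnormalized, so compare rankings rather than raw values.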

Where can I get the detailed descriptions on those options? (How to use, what they mean)

The "Variable importance" section of the user documentation and the model-specific documentation (for example, Random Forest).

The variable_importances() method is used in both the beginner and advanced colabs. However, we have not yet published any example of using feature importances (e.g. for feature selection).

@achoum achoum closed this as completed Nov 1, 2021