In theory, feature importance for neural networks is a tricky question, and in general there is no nice trick or clear winning strategy like there is with trees, as far as I know (but I know very little about the subject).
There are some tricks, ranging from advanced plotting, to checking the derivative with respect to the inputs, to training with dropout, shutting off certain inputs, and evaluating on the test set. Maybe this question can get you started.
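To make the "derivative with respect to the inputs" idea concrete, here is a minimal, hedged sketch using finite differences against a black-box `predict` function. The toy model below is purely hypothetical (it is not a real WTTE-RNN); with a trained model you would substitute its prediction function, or use your framework's automatic differentiation instead of finite differences:

```python
import numpy as np

def predict(x):
    # Hypothetical stand-in model: depends strongly on x[0],
    # weakly on x[2], and ignores x[1] entirely.
    return np.tanh(3.0 * x[0]) + 0.2 * x[2] ** 2

def input_saliency(f, x, eps=1e-5):
    """Central-difference approximation of d f / d x_j for each input j."""
    grads = np.zeros_like(x)
    for j in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[j] += eps
        xm[j] -= eps
        grads[j] = (f(xp) - f(xm)) / (2 * eps)
    return grads

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))  # 100 sample points, 3 features

# Average the absolute gradient over samples: a larger value suggests
# the model's output is more sensitive to that feature.
saliency = np.mean([np.abs(input_saliency(predict, x)) for x in X], axis=0)
print(saliency)
```

Note this measures local sensitivity of the model, not causal importance in the data, and for a recurrent model you would need to decide how to aggregate gradients over timesteps.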
In practice, in my experience, a more informal analysis can be helpful with wtte-rnn: just eyeball plots of sequences of predictions and features lined up, and (depending on your problem/data) you can sometimes see quite clearly that some data makes an impact on the state of the RNN. This won't help with feature selection, of course.
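The "shutting off certain inputs and evaluating on the test set" trick mentioned above is essentially permutation/ablation importance. A minimal sketch, assuming only a fitted model exposed as a `predict(X)` function and a loss to evaluate (here a fake linear model and MSE so the example is self-contained; with a WTTE-RNN you would use its predictions and the Weibull log-likelihood instead):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # 200 samples, 3 features
true_w = np.array([2.0, 0.0, 0.5])     # feature 1 carries no signal
y = X @ true_w + rng.normal(scale=0.1, size=200)

def predict(X):
    # Hypothetical stand-in for model.predict on a trained model.
    return X @ true_w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
    importance.append(mse(y, predict(Xp)) - baseline)

# Larger increase in loss after permuting => more important feature.
print(importance)
```

For sequence data you would additionally have to choose what to permute (whole sequences, or values within timesteps), which changes what "importance" means.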
Is it possible to extract "feature importance" from a WTTE-RNN model?