[model] Deep model parameter interpretation #883
Conversation
Model Benchmark — training runs on PeytonManning, YosemiteTemps, AirPassengers
fyi
Codecov Report
@@ Coverage Diff @@
## main #883 +/- ##
==========================================
+ Coverage 90.26% 90.29% +0.03%
==========================================
Files 21 21
Lines 4737 4752 +15
==========================================
+ Hits 4276 4291 +15
Misses 461 461
great looking!
final task: find and update notebooks
@karl-richter Can you resolve merge conflict and fix flake8? Thx
@ourownstory You asked about some notebook updates - is this still open or can we merge this PR?
Most likely outdated, added a comment regarding current status.
LGTM - very clean and structured code, was good to review even without knowing all the specifics :)
Two minor remarks, see comments.
Status quo
When the NN uses hidden layers, the plot_parameters() function interprets only the model weights of the first layer. This can be misleading to a user who wants to interpret the model.
Change
Instead of using the weights of the first layer, we use a model attribution method to calculate the attributions of the lags w.r.t. each forecast. We use PyTorch's Captum library for saliency calculation.
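To illustrate the idea behind the change: saliency attributes each lag by the magnitude of the forecast's gradient w.r.t. that input, which for a deep net differs from the first-layer weights alone. The sketch below is not the PR's implementation (which uses Captum's Saliency on the actual model); it approximates the same gradient-magnitude attribution by central finite differences on a tiny hand-rolled two-layer network with made-up weights, purely for illustration.

```python
# Hedged sketch: gradient-magnitude attribution of lags w.r.t. one forecast.
# The PR uses captum.attr.Saliency on the real model; here we approximate
# |d forecast / d lag_i| by finite differences on a toy 2-layer network.

def forward(lags, w1, b1, w2, b2):
    # hidden layer with ReLU, then a linear output (one forecast step)
    hidden = [max(0.0, sum(w * x for w, x in zip(row, lags)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

def saliency(lags, w1, b1, w2, b2, eps=1e-5):
    # central finite difference per lag: |f(x+eps) - f(x-eps)| / (2*eps)
    grads = []
    for i in range(len(lags)):
        up = list(lags); up[i] += eps
        dn = list(lags); dn[i] -= eps
        grads.append(abs(forward(up, w1, b1, w2, b2)
                         - forward(dn, w1, b1, w2, b2)) / (2 * eps))
    return grads

# toy weights (illustrative only): 3 lags, 2 hidden units
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.4]]
b1 = [0.0, 0.0]
w2 = [1.0, 0.5]
b2 = 0.1
attributions = saliency([1.0, 2.0, 3.0], w1, b1, w2, b2)
```

Note that the attribution for each lag mixes both hidden units (via w2), so it generally does not match any single first-layer weight — which is exactly why plotting only first-layer weights can mislead.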