PERF Discard old layers immediately when making neural net predictions #17661
Conversation
Yes, it makes sense. Let's see what other reviewers think about this improvement in general.
That sounds reasonable to me. I have updated the pull request to rename the function.
Thanks @alexhenrie, some minor suggestions but LGTM
for i in range(self.n_layers_ - 1):
    activation = safe_sparse_dot(activation, self.coefs_[i])
    activation += self.intercepts_[i]
    if i != self.n_layers_ - 2:
Nit but for consistency with the other version:
-    if i != self.n_layers_ - 2:
+    if (i + 1) != (self.n_layers_ - 1):
The original version uses i + 1 several times, but this simplified version only uses i, so it's clearer here not to have the extra addition in the if statement.
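For concreteness, here is a small standalone illustration (an assumed example, not part of the patch) of which iterations apply the hidden activation under the simplified condition:

n_layers_ = 4  # e.g. input layer, two hidden layers, output layer
for i in range(n_layers_ - 1):  # i = 0, 1, 2 over the three weight matrices
    applies_hidden_activation = (i != n_layers_ - 2)
    print(i, applies_hidden_activation)  # prints: 0 True, 1 True, 2 False

Only the final iteration, which produces the output layer, skips the hidden activation function.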
@@ -115,6 +115,36 @@ def _forward_pass(self, activations):

        return activations

    def _forward_pass_fast(self, X):
        """Predict using the trained model
Please add a short description indicating how this differs from _forward_pass, e.g.:
This is the same as _forward_pass but does not record the activations of all layers and only returns the last layer's activation.
I have added the text you suggested. Thanks!
@alexhenrie There are some linting issues.
The linting issues are now fixed.
Thanks @alexhenrie!
Thank you!
This function does not need to hold any layer in memory except the current one, and discarding the previous layers immediately after use makes neural net predictions about 3% faster.
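The idea can be summarized with a short NumPy sketch (illustrative only; forward_pass_fast, coefs, and intercepts are assumed stand-ins for the estimator's coefs_ and intercepts_, not the exact scikit-learn code):

import numpy as np

def forward_pass_fast(X, coefs, intercepts, hidden_activation=np.tanh):
    # Keep only the current layer's activations instead of appending
    # every layer to an activations list as the training-time forward
    # pass does; earlier layers can be freed immediately after use.
    activation = X
    n_layers = len(coefs) + 1
    for i in range(n_layers - 1):
        activation = activation @ coefs[i] + intercepts[i]
        if i != n_layers - 2:  # skip the hidden activation for the output layer
            activation = hidden_activation(activation)
    return activation  # output-layer activation (output nonlinearity omitted for brevity)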
Test program:
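(The original test program is not reproduced on this page; a benchmark along the following lines, with assumed data sizes and parameters, exercises predict on a fitted MLPClassifier.)

from time import perf_counter

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Illustrative benchmark only, not the author's original test program.
X, y = make_classification(n_samples=20000, n_features=100, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(200, 200), max_iter=10).fit(X, y)

start = perf_counter()
for _ in range(100):
    clf.predict(X)
print(f"predict time: {perf_counter() - start:.2f} s")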
Before:
After: