
PERF Use np.maximum instead of np.clip in relu function #17609

Merged
1 commit merged into scikit-learn:master on Jun 16, 2020

Conversation

@alexhenrie (Contributor) commented on Jun 16, 2020

Neural net training is about 2% faster when relu skips checking whether any value is greater than the maximum possible value for the data type (a check that is always false, so there's no need to perform it).
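
For reference, the old relu body clipped against the dtype maximum, while the replacement only enforces the lower bound. The new line is not shown in the diff excerpt further down, so the second function below is a sketch inferred from the PR title:

import numpy as np

def relu_old(X):
    # Old: clip to [0, dtype max]; the upper bound never binds for finite values.
    np.clip(X, 0, np.finfo(X.dtype).max, out=X)
    return X

def relu_new(X):
    # New (inferred from the PR title): enforce only the lower bound, in place.
    np.maximum(X, 0, out=X)
    return X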

Test program:

from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from neurtu import delayed, Benchmark

# Use the first 10 pixel columns as features and pixel column 11 as an
# arbitrary discrete target.
digits = load_digits(return_X_y=True)
X = digits[0][:, :10]
y = digits[0][:, 11]

clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
                    hidden_layer_sizes=(5, 2), random_state=1, max_iter=1000)

# neurtu repeats the deferred fit 10 times and reports summary statistics.
train = delayed(clf).fit(X, y)
print(Benchmark(wall_time=True, cpu_time=True, repeat=10)(train))

Before:

      wall_time  cpu_time
mean   2.116878  2.108431
max    2.126080  2.115298
std    0.004060  0.005196

After:

      wall_time  cpu_time
mean   2.080088  2.069779
max    2.088962  2.074299
std    0.004015  0.002257
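
To isolate the array-level difference from the rest of training, here is a standalone timing sketch (not part of the PR; the array shape and repeat count are arbitrary choices):

import numpy as np
from timeit import timeit

# 1000x1000 float64 array; after the first in-place call the data is already
# non-negative, but the per-call work is the same either way.
X = np.random.default_rng(0).standard_normal((1000, 1000))

t_clip = timeit(lambda: np.clip(X, 0, np.finfo(X.dtype).max, out=X), number=500)
t_max = timeit(lambda: np.maximum(X, 0, out=X), number=500)
print(f"np.clip:    {t_clip:.3f} s")
print(f"np.maximum: {t_max:.3f} s")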

@alexhenrie changed the title from "PERF Use np.minimum instead of np.clip in relu function" to "PERF Use np.maximum instead of np.clip in relu function" on Jun 16, 2020
@thomasjpfan (Member) commented:

How does this scale with greater hidden_layer_sizes?

@alexhenrie (Contributor, Author) commented on Jun 16, 2020

np.maximum is faster in all cases, although you're right that the difference is not noticeable after increasing hidden_layer_sizes from (5, 2) to (10, 4). This change also makes the code simpler and easier to understand, so I think it should be accepted as "code maintenance" even if the performance improvement is deemed insignificant.
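
A sketch of that follow-up experiment, reusing the test program above and adding one larger, arbitrarily chosen layer size:

from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from neurtu import delayed, Benchmark

digits = load_digits(return_X_y=True)
X = digits[0][:, :10]
y = digits[0][:, 11]

# Time the fit for increasingly large hidden layers; the relative benefit of
# the cheaper relu shrinks as the matrix multiplications start to dominate.
for sizes in [(5, 2), (10, 4), (50, 20)]:
    clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
                        hidden_layer_sizes=sizes, random_state=1, max_iter=1000)
    train = delayed(clf).fit(X, y)
    print(sizes)
    print(Benchmark(wall_time=True, cpu_time=True, repeat=10)(train))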

@@ -72,7 +72,7 @@ def relu(X):
X_new : {array-like, sparse matrix}, shape (n_samples, n_features)
The transformed data.
"""
np.clip(X, 0, np.finfo(X.dtype).max, out=X)
Inline review comment (Member):
I'm surprised this wasn't np.clip(X, 0, out=X)
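
As an aside, np.clip does accept None as the upper bound, so a variant without the finfo lookup is possible; a minimal sketch (whether it matches np.maximum in speed is not tested here):

import numpy as np

X = np.array([-1.5, 0.0, 2.5])
np.clip(X, 0, None, out=X)   # same result as np.maximum(X, 0, out=X)
print(X)                     # [0.  0.  2.5]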

@jnothman merged commit e11dd51 into scikit-learn:master on Jun 16, 2020
@jnothman (Member) commented:

Thanks @alexhenrie

@alexhenrie (Contributor, Author) commented:

Thank you!

@alexhenrie deleted the minimum branch on June 17, 2020 at 19:03
rubywerman pushed a commit to MLH-Fellowship/scikit-learn that referenced this pull request Jun 24, 2020
viclafargue pushed a commit to viclafargue/scikit-learn that referenced this pull request Jun 26, 2020
jayzed82 pushed a commit to jayzed82/scikit-learn that referenced this pull request Oct 22, 2020