Vectorized the _activate function #41
Conversation
…fixed a bug in the unit tests where the weights were initialized with the wrong shape
Hi Austin,
Thanks for your contribution!
Please report the training time on the Iris and handwritten digits examples with and without your change.
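A quick way to compare the looped and vectorized activations is a sketch like the following (assumptions flagged: the `fast_norm` body mimics minisom's dot-product norm, and the 100x100 grid with 4-dimensional inputs is illustrative, not the Iris benchmark requested above):

```python
import time
import numpy as np

def fast_norm(x):
    # 1D Euclidean norm via a dot product (modeled on minisom's fast_norm)
    return np.sqrt(np.dot(x, x.T))

rng = np.random.RandomState(0)
weights = rng.rand(100, 100, 4)   # a 100x100 SOM with 4-dimensional weights
x = rng.rand(4)                   # one input sample

# Looped version: one fast_norm call per SOM unit
start = time.perf_counter()
act_loop = np.zeros((100, 100))
for i in range(100):
    for j in range(100):
        act_loop[i, j] = fast_norm(x - weights[i, j])
t_loop = time.perf_counter() - start

# Vectorized version: a single linalg.norm over the last axis
start = time.perf_counter()
act_vec = np.linalg.norm(x - weights, axis=-1)
t_vec = time.perf_counter() - start

print(t_loop / t_vec)  # speedup factor; both versions give the same result
```

The two activation maps should agree to floating-point precision, so the speedup comes purely from moving the loop into NumPy.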
Hi @JustGlowing, measuring the training time using the number of iterations per second: Using the
I tried to run the unit tests and I got this:
I have the latest version of numpy.
Are you sure you have the latest version of numpy? Looking at the docs, the keyword `keepdims` was introduced in version 1.10.0...
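For reference, `keepdims` makes `linalg.norm` return a size-1 trailing axis so the norms broadcast back against the original array, which is what the weight-normalization step needs. A minimal check (the array shape here is illustrative; requires NumPy >= 1.10 for `keepdims`, as noted above):

```python
import numpy as np

w = np.arange(24, dtype=float).reshape(2, 3, 4)

# Norm over the last axis, kept as a size-1 dimension so the result
# broadcasts back against w
norms = np.linalg.norm(w, axis=-1, keepdims=True)
print(norms.shape)  # (2, 3, 1)

# Dividing by the norms rescales each weight vector to unit length
unit = w / norms
print(np.linalg.norm(unit, axis=-1))  # each vector now has norm 1
```

On older NumPy versions the `keepdims` argument raises a `TypeError`, which would explain the test failure reported above.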
I managed to get the tests to pass, thanks for your amazing contribution!
Happy to contribute to a great library :)
You'll be mentioned in the notes of the following release ;)
hey @AustinT, I tried to recognize you publicly but I'm not sure you have a Twitter account: https://twitter.com/JustGlowing/status/1169607904896458758 You have also been mentioned in the release notes.
Thanks, I don't have Twitter but appreciate the acknowledgement :)
hi there, I'm thinking about changing the license of Minisom to MIT (or GPL). This would allow Minisom to have a paper in the Journal of Open Source Software. This WILL NOT affect the ownership of your contribution and does not imply that I will profit from Minisom. Your contribution was very welcome and I'd like to have your approval.
Hi, I'm more than happy for you to change the license: thanks for asking!
Great library, but I noticed that the training code for your SOMs is not vectorized. You use the `fast_norm` function a lot, which may be faster than `linalg.norm` for 1D arrays, but iterating over every spot in the SOM is *a lot* slower than just calling `linalg.norm`.

This pull request replaces `fast_norm` with `linalg.norm` in 2 places where I saw iteration over the whole SOM. Some simple testing with a 100x100 SOM showed a ~40x speedup on my laptop.

After making the changes, the unit tests failed, which I believe was caused by incorrectly setting up the testing weights as a 2D array rather than a 3D array. So I changed that too, and now the unit tests pass. I also did a few rough tests of my own, and the results of `self.winner(x)` and the training seem to be the same as before.
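For readers following along, the change described above can be sketched like this. The class below is a minimal illustrative stand-in, not the actual minisom code; the 3D `_weights` layout (grid x, grid y, input length) matches the fix to the test weights described in the pull request:

```python
import numpy as np

class TinySom:
    """Minimal SOM stand-in, for illustration only (not the minisom class)."""

    def __init__(self, x, y, input_len, seed=0):
        rng = np.random.RandomState(seed)
        # 3D weights: one input_len-dimensional vector per grid unit
        self._weights = rng.rand(x, y, input_len)
        self._activation_map = np.zeros((x, y))

    def _activate(self, x):
        # Vectorized: distances from x to every unit in one linalg.norm
        # call, instead of looping over the grid with a per-vector norm
        self._activation_map = np.linalg.norm(x - self._weights, axis=-1)

    def winner(self, x):
        # Grid coordinates of the best-matching unit (lowest activation)
        self._activate(x)
        return np.unravel_index(self._activation_map.argmin(),
                                self._activation_map.shape)

som = TinySom(5, 5, 3)
print(som.winner(np.ones(3)))  # coordinates of the unit closest to the input
```

Because `x - self._weights` broadcasts the input over the whole grid, the activation map is computed in a single call, which is where the reported speedup comes from.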