SOE is a lot slower than previously #46
Hi @Conzel,
Change 1 would, in theory, explain a 10x slowdown, and it would be interesting to see whether change 2 explains the remaining 2x slowdown. The idea behind change 1 was that "new users" don't end up with poor results. Experienced users can use the
Hey @dekuenstle, sorry, there was a typo in the initial message: it should've been 0.02 seconds for the old commit. I repeated the setup with
It seems that n_init causes the 10x slowdown, but the quadruplet change also causes a ~15x slowdown; combined, it's quite a lot. For completeness' sake, the code snippet, a bit more systematic.
Thank you, I will look into this ASAP. We will probably get the old performance back if we use separate loss functions for triplets and quadruplets.
While experimenting, I noticed unusually slow behaviour of SOE that I hadn't seen previously.
After some investigation, the cause seems to be commit 82fb66c, which changed the SOE loss and the initialisation mechanism.
I have used the following code to test my hypothesis:
This runs in around 4 seconds on the current main (cdbacb3). If commit 82fb66c is reverted (or we check out the commit immediately before it), the runtime drops to 0.2 seconds. The effect is especially noticeable with larger amounts of data (the slowdown might scale with the number of triplets?).
Tested on a 2020 M1 MacBook Air, using the CPU version of cblearn, not the PyTorch one.