
SOE is a lot slower than previously #46

Closed
Conzel opened this issue May 3, 2022 · 3 comments

Comments

@Conzel
Contributor

Conzel commented May 3, 2022

While experimenting, I noticed that SOE has become unusually slow. After some digging, the cause appears to be commit 82fb66c, which changed the SOE loss and initialisation mechanisms.

I have used the following code to test my hypothesis:

from cblearn.embedding import SOE
from cblearn.datasets import make_random_triplets
import numpy as np
import time

np.random.seed(42)
x = np.random.random((100, 2))
t, r = make_random_triplets(x, "list-boolean", 2000)
soe = SOE(n_components=2, random_state=2)

start = time.time()
soe.fit_transform(t, r)
print(f"{time.time() - start:.2f} seconds to fit 2000 triplets with SOE.")

This takes around 4 seconds on the current main (cdbacb3). If commit 82fb66c is reverted (or if I check out the commit immediately before it), the runtime drops to 0.2 seconds. The effect is especially noticeable with larger amounts of data (the slowdown might scale with the number of triplets?).

Tested on a 2020 M1 MacBook Air; cblearn was used with the CPU backend, not the PyTorch one.
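To check whether the slowdown really scales with the number of triplets, the snippet above could be extended along these lines (untested sketch, triplet counts chosen arbitrarily, same API as above):

from cblearn.embedding import SOE
from cblearn.datasets import make_random_triplets
import numpy as np
import time

np.random.seed(42)
x = np.random.random((100, 2))
# Time one fit for increasing numbers of random triplets.
for n_triplets in (500, 1000, 2000, 4000, 8000):
    t, r = make_random_triplets(x, "list-boolean", n_triplets)
    soe = SOE(n_components=2, random_state=2)
    start = time.time()
    soe.fit_transform(t, r)
    print(f"{n_triplets} triplets: {time.time() - start:.2f} seconds")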

@dekuenstle
Collaborator

dekuenstle commented May 4, 2022

Hi @Conzel,
thanks for your investigation.
Unfortunately, there were two changes in this commit:

  1. The default optimization now repeats 10 times from random initializations. This should prevent poor results from ending up in a local minimum (especially with 1d/2d embeddings).
  2. The loss/stress function was generalized for quadruplets.

Change 1 would, in theory, explain a 10x slowdown (see the sketch at the end of this comment), and it would be interesting to see whether change 2 explains the other 2x slowdown.
To separate the two changes, could you please rerun your analysis on the current version with soe = SOE(n_components=2, random_state=2, n_init=1)?

The idea behind change 1 was that "new users" don't end up with poor results; experienced users can adjust the n_init parameter. What are your thoughts about this design choice?
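For intuition, the n_init mechanism is just the usual multi-restart pattern, which is also why the cost grows roughly linearly with n_init. A minimal sketch of the idea (not the actual cblearn code; fit_once stands in for a single optimization run that returns an embedding and its stress):

import numpy as np

def fit_with_restarts(fit_once, n_init=10, random_state=None):
    # Run the optimizer n_init times from random initializations and
    # keep the embedding with the lowest stress; the runtime therefore
    # grows roughly linearly with n_init.
    rng = np.random.default_rng(random_state)
    best_embedding, best_stress = None, np.inf
    for _ in range(n_init):
        embedding, stress = fit_once(seed=rng.integers(2**32))
        if stress < best_stress:
            best_embedding, best_stress = embedding, stress
    return best_embedding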

@Conzel
Contributor Author

Conzel commented May 4, 2022

Hey @dekuenstle,

sorry, I had a typo in the initial message: it should have been 0.02 seconds for the old commit.

I repeated the setup with n_init=1 and a few more triplets (n = 10 000).

Commit              n_init=1   n_init=10
Pre-SOE-changes     0.08 s     -
Post-SOE-changes    1.12 s     11.82 s

It seems that n_init causes the 10x slowdown, but the quadruplet change alone also causes a ~15x slowdown (0.08 s vs. 1.12 s with n_init=1). Combined, it's quite a lot.

Increasing n_init by default seems reasonable on its own; maybe one could add a small note to the docs of SOE 👍
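Such a note could be as short as the following (hypothetical wording, not taken from the current docstring):

n_init : int, default=10
    Number of times the optimization is restarted from a random
    initialization; the embedding with the lowest stress is kept.
    Larger values reduce the risk of poor local minima but increase
    the runtime roughly linearly; set n_init=1 for the fastest fit.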

For completeness' sake, here is the code snippet again, a bit more systematic.

from cblearn.embedding import SOE
from cblearn.datasets import make_random_triplets
import numpy as np
import time

np.random.seed(42)
x = np.random.random((100, 2))
t, r = make_random_triplets(x, "list-boolean", 10000)
soe = SOE(n_components=2, random_state=2, n_init=1) # remove for old commit

times = []
for i in range(5):
    start = time.time()
    soe.fit_transform(t, r)
    times.append(time.time() - start)
print(f"{sum(times)/5:.2f} seconds to fit 10000 triplets with SOE.")

@dekuenstle
Collaborator

Thank you, I will look into this ASAP. Probably we will get the old performance if we use separate loss functions for triplets and quadruplets.
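One possible shape for that split, as a rough sketch (a generic hinge-style stress just for illustration, not the actual cblearn loss or API):

import numpy as np

def _dist(embedding, a, b):
    # Euclidean distances between rows a and b of the embedding.
    return np.linalg.norm(embedding[a] - embedding[b], axis=-1)

def triplet_stress(embedding, triplets, responses, margin=0.1):
    # Specialized triplet path: columns are (anchor, near, far).
    near = _dist(embedding, triplets[:, 0], triplets[:, 1])
    far = _dist(embedding, triplets[:, 0], triplets[:, 2])
    sign = np.where(responses, 1.0, -1.0)
    return np.sum(np.maximum(0.0, margin + sign * (near - far)) ** 2)

def quadruplet_stress(embedding, quadruplets, responses, margin=0.1):
    # General path: compare the pair (0, 1) against the pair (2, 3).
    near = _dist(embedding, quadruplets[:, 0], quadruplets[:, 1])
    far = _dist(embedding, quadruplets[:, 2], quadruplets[:, 3])
    sign = np.where(responses, 1.0, -1.0)
    return np.sum(np.maximum(0.0, margin + sign * (near - far)) ** 2)

def soe_stress(embedding, queries, responses):
    # Dispatch on the query width so triplets keep a specialized code
    # path instead of going through the general quadruplet formulation.
    if queries.shape[1] == 3:
        return triplet_stress(embedding, queries, responses)
    return quadruplet_stress(embedding, queries, responses)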
