
Addressing LFDA sign indeterminacy #326

Merged

Conversation

mvargas33
Contributor

Resolves #211

I've looked into this issue and was able to replicate the random behavior when computing the eigenvectors. In effect, some eigenvectors may come back sign-flipped when you execute the code at different times.

None of the scipy functions we use to compute the eigenvectors in LFDA (scipy.linalg.eigh, scipy.linalg.eig, and scipy.sparse.linalg.eigsh) supports anything like a random_state parameter.
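
For context, here is a minimal sketch (not part of this PR) of why the sign is arbitrary: if v is an eigenvector of A with eigenvalue w, then so is -v, so the solver is free to return either one.

```python
import numpy as np
from scipy.linalg import eigh

# Build a small symmetric matrix so eigh applies.
rng = np.random.RandomState(0)
A = rng.rand(4, 4)
A = A + A.T

w, V = eigh(A)  # eigenvectors are the columns of V
v = V[:, 0]

# Both v and -v satisfy the eigenvector equation A v = w v,
# so either sign is a valid answer for the solver to return.
assert np.allclose(A @ v, w[0] * v)
assert np.allclose(A @ -v, w[0] * -v)
```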

There are two alternatives:

  1. We implement some kind of "convention" to make the behavior deterministic. For instance: "the first value of every eigenvector must be positive; otherwise, flip the sign of the entire vector." (Flipping the sign of an eigenvector doesn't affect the eigenvector property; a sketch of this convention appears below.)
  2. Add a note to the documentation, just as sklearn does for SVD.

Option 1 decreases performance a bit, so option 2 seems more reasonable. Please consider the note I drafted.
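
For reference, a minimal sketch of the convention from option 1. The helper name is hypothetical and this code is not part of the PR (which went with the documentation note instead):

```python
import numpy as np

def enforce_sign_convention(V):
    """Flip each eigenvector (column of V) so that its first
    nonzero entry is positive.

    Hypothetical helper, not part of this PR: flipping a sign is
    harmless because A @ (-v) == w * (-v) whenever A @ v == w * v.
    """
    V = np.array(V, copy=True)
    for j in range(V.shape[1]):
        col = V[:, j]
        nonzero = np.flatnonzero(col)
        if nonzero.size and col[nonzero[0]] < 0:
            V[:, j] = -col
    return V
```

Applied after the eigendecomposition, such a convention would make repeated runs produce identical components up to numerical noise.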

perimosocordiae
Contributor

I agree that a documentation fix is the way to go here. Thanks!

perimosocordiae merged commit 10b6d25 into scikit-learn-contrib:master on Sep 15, 2021
Development

Successfully merging this pull request may close these issues:

lfda not deterministic (#211)
2 participants