fix(sqrt_inverse_matrix_semi_positive): Fix the eigh, delete preferred_linalg_library option #192
Conversation
refactor: Changed to use `TorchTestCase` functions and fixed pylance warnings
Codecov Report ✅ All modified and coverable lines are covered by tests.
Pull request overview
This PR fixes the `sqrt_inverse_matrix_semi_positive` function to handle ill-conditioned matrices that cause `torch.linalg.eigh` to fail. When a `LinAlgError` occurs due to numerical issues, the function now adds a small identity matrix perturbation (`1e-6`) to make the matrix positive definite and retries. The PR also removes the `preferred_linalg_library` parameter from this function, as the decision of which linear algebra library to use should be made at a higher execution level rather than within individual functions.
Changes:
- Modified error handling in `sqrt_inverse_matrix_semi_positive` to add an identity perturbation when `eigh` fails
- Removed the `preferred_linalg_library` parameter from the function signature and its usage
- Updated all tests to remove the `preferred_linalg_library` parameter and improved test code quality with better assertion methods
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| src/gromo/utils/tools.py | Removed preferred_linalg_library parameter, added LinAlgError recovery with identity matrix perturbation |
| tests/test_tools.py | Updated tests to remove preferred_linalg_library usage, improved assertion methods, moved manual_seed to setUp, removed obsolete mocked error tests |
| tests/torch_unittest.py | Added setUp method with torch.manual_seed(0) for consistent test behavior, minor f-string formatting fixes |
fix: Use the correct `dtype` for the additive identity
```python
# Sometimes, due to numerical issues, we get an error:
# The algorithm failed to converge because the input matrix is
# ill-conditioned or has too many repeated eigenvalues
matrix += 1e-6 * torch.eye(
```
Why is the factor the same for all float precisions? Shouldn't the `1e-6` value depend on whether the dtype is float16, float32, or float64?
@TheoRudkiewicz for future reference, there is a nice way of handling dynamic, dtype-dependent error correction, shown in the SPDLearn lib with the `get_epsilon` function.
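For illustration, a dtype-dependent perturbation could be derived from the machine epsilon via `torch.finfo`. This is only a sketch of the idea; it is not the SPDLearn `get_epsilon` implementation, and the helper name and scale factor are assumptions:

```python
import torch


def dtype_epsilon(dtype: torch.dtype, scale: float = 1e3) -> float:
    # Hypothetical helper: machine epsilon for the given dtype,
    # scaled up to a usable regularization constant. The scale
    # factor is a free choice, not taken from SPDLearn.
    return torch.finfo(dtype).eps * scale
```

With this, a float64 matrix would receive a much smaller perturbation than a float16 one, instead of a fixed `1e-6` for every precision.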
In cases where S is not positive definite, `torch.linalg.eigh` may fail with: `torch._C._LinAlgError: linalg.eigh: The algorithm failed to converge because the input matrix is ill-conditioned or has too many repeated eigenvalues (error code: 329464).` In case this happens, I add a small perturbation to make it positive definite. I could not cover those lines because I did not succeed in triggering the bug on purpose.
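The retry-on-failure pattern described above can be sketched as follows. This is a minimal illustration, not the actual gromo implementation: the helper name, the eigenvalue clamping, and the `torch.eye` arguments are assumptions.

```python
import torch


def sqrt_inverse_spd(matrix: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Sketch: if eigh fails on an ill-conditioned input, nudge the
    # matrix toward positive definiteness with a small multiple of
    # the identity (matching dtype and device) and try again.
    try:
        eigenvalues, eigenvectors = torch.linalg.eigh(matrix)
    except torch.linalg.LinAlgError:
        matrix = matrix + eps * torch.eye(
            matrix.shape[0], dtype=matrix.dtype, device=matrix.device
        )
        eigenvalues, eigenvectors = torch.linalg.eigh(matrix)
    # Clamp tiny or negative eigenvalues before the inverse square root.
    eigenvalues = eigenvalues.clamp(min=eps)
    return eigenvectors @ torch.diag(eigenvalues.rsqrt()) @ eigenvectors.T
```

Catching `torch.linalg.LinAlgError` keeps the happy path allocation-free: the identity is only built when `eigh` actually fails.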
I also removed the `preferred_linalg_library` option, as we do not use it and it does not make sense to set it at this level. We should switch libraries at the full-execution level (that's what I do in my hydra script).