Feature/338 matrix inverse #875
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master     #875      +/-   ##
==========================================
+ Coverage   95.47%   95.49%   +0.02%
==========================================
  Files          64       64
  Lines        9741     9793      +52
==========================================
+ Hits         9300     9352      +52
  Misses        441      441
```
So I've done a bit of testing on this branch:

```python
import heat as ht

ht.random.seed(42)
split = 1
a = ht.random.random((20, 20), dtype=ht.float64, split=split)
ainv = ht.linalg.inv(a)
i = ht.eye(a.shape, split=split, dtype=a.dtype)
print(ht.max((a @ ainv) - i))
print(ht.allclose(a @ ainv, i))
```

The above code gives this:

However, if I use split=0...

Is this some artifact of the algorithm?
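For context on what residual is reasonable at this size and dtype, a minimal single-process sketch of the same `A @ A⁻¹ - I` check using torch's own inverse (this snippet is not from the thread; it assumes the same 20x20 float64 setup as above and uses only plain torch, no distribution):

```python
import torch

# Same verification on a single process with torch.linalg.inv, as a
# rough baseline for the residual the distributed result should match.
torch.manual_seed(42)
a = torch.rand(20, 20, dtype=torch.float64)
ainv = torch.linalg.inv(a)
i = torch.eye(20, dtype=torch.float64)
print((a @ ainv - i).abs().max())
print(torch.allclose(a @ ainv, i))
```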
@coquelin77 This is some weird behaviour of the indexing methods of DNDarray and the division operation. It's now the same precision on both splits.
Description

An implementation of the matrix inverse using Gauss-Jordan elimination when the matrix is distributed. torch.linalg.inv is called on the local tensors otherwise.

Issue/s resolved: #338
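For intuition, here is a minimal single-process sketch of inversion by Gauss-Jordan elimination with partial pivoting, written against plain torch tensors. It illustrates the technique named in the description, not heat's actual distributed implementation; the function name `gauss_jordan_inv` is hypothetical, and the input is assumed square and non-singular:

```python
import torch

def gauss_jordan_inv(a: torch.Tensor) -> torch.Tensor:
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1]."""
    n = a.shape[0]
    # Augment A with the identity; all row operations apply to both halves.
    aug = torch.cat([a.to(torch.float64), torch.eye(n, dtype=torch.float64)], dim=1)
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry of this
        # column onto the diagonal for numerical stability.
        pivot = col + int(torch.argmax(aug[col:, col].abs()))
        if pivot != col:
            aug[[col, pivot]] = aug[[pivot, col]]
        # Scale the pivot row so the diagonal entry becomes 1 ...
        aug[col] = aug[col] / aug[col, col]
        # ... then zero out this column in every other row.
        for row in range(n):
            if row != col:
                aug[row] = aug[row] - aug[row, col] * aug[col]
    # The right half of the augmented matrix is now the inverse.
    return aug[:, n:]

a = torch.rand(20, 20, dtype=torch.float64)
ainv = gauss_jordan_inv(a)
print(torch.allclose(a @ ainv, torch.eye(20, dtype=torch.float64)))
```

In the distributed case the same row operations apply, but pivot search and row swaps require communication across the processes holding different chunks of the matrix, which is why the split axis can affect behaviour.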
Changes proposed:
Type of change
Due Diligence
Does this change modify the behaviour of other functions? If so, which?

No.