Update mp.matrix and linalg functions following np.ndarray APIs #754
Conversation
- We can't just drop public attributes from public classes. These should be deprecated first.
- Could you please factor out the cholesky_solve() bugfix into a separate PR?
- There are test failures (as well as problems with the scipy installation). Can't you use numpy.linalg for the tests?
Sorry, I don't think I can approve this, and I don't see how this could be improved. Count me -1.
This PR mixes code refactoring (using the shape attribute where rows/cols were used) with actual changes. It makes the returned type either matrix or ndarray. And it seems that we can't, in general, use a single code path for matrix and ndarray data internally.
Let's keep this open for a while; maybe someone else could review. @oscarbenjamin?
The to_numpy() helper could be added in a separate PR. Maybe the shape attribute too.
I don't quite understand what the model is here. Is the intention that it should be possible to use mpmath functions like det with a numpy array as input? It looks as if the code here does not convert the numpy arrays to mpmath and just computes everything using a pure Python implementation operating with the numpy scalar types. I don't see what would be the purpose of using mpmath in that scenario rather than just using e.g. numpy's det function.
That was my guess as well.
Actually, no code needs to convert numpy arrays to mpmath. Using numpy arrays internally is another thing. That does make sense to me in the way it was outlined by Fredrik: #217 (comment)
If someone is using mpmath's det rather than numpy's det, then they presumably want the benefit of mpmath's multiprecision support. That can only work if the arithmetic uses mpmath types, though. Otherwise, if mpmath's det function uses arithmetic with the numpy scalar types, then I don't see why anyone would want to use it instead of just using numpy's det function. Or is the intention that someone would have a numpy array with dtype=object and mpmath mpfs as the entries in the array?
Only this case makes sense to me. But the elements of the input numpy array should be converted to appropriate mpf's (i.e., the array should be recreated).
The intention is to use numpy arrays with mpf and mpc objects. As I stated in issue #753, it is a useful feature to allow a numpy array to manage a group of mpf objects: it lets us use numpy ufuncs and other features for high-precision math. However, the functions in mpmath's linear algebra modules are not compatible with numpy arrays, which makes the use of numpy arrays of mpf objects inconvenient. This PR updates the structure of the mpmath.matrix class and the implementation of the relevant linalg functions, which solves the compatibility issue. It does not use numpy internally; it just ensures the relevant functions do not fail when processing numpy arrays.
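The workflow described here can be sketched as follows (a minimal illustration; the array contents are made up for the example):

```python
import numpy as np
from mpmath import mp

# An object-dtype ndarray holding mpf entries: numpy manages the container,
# while mpmath does the arithmetic through mpf's operators.
a = np.array([[mp.mpf(1), mp.mpf(2)],
              [mp.mpf(3), mp.mpf(4)]], dtype=object)

b = a + a.T           # elementwise ops dispatch to mpf.__add__
print(b[0, 1])        # 5.0
print(b.dtype)        # object -- the entries stay mpf
```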
Exactly. The intention is to use numpy arrays with mpf and mpc objects.
But this is a numpy feature, not an mpmath one.
Could you provide a concrete example? Numpy arrays should be accepted by the matrix constructor. Maybe some mpmath functions just don't check argument types.
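For reference, the matrix constructor does accept numpy arrays, so the explicit conversion is a one-liner (a small sketch; the values are arbitrary):

```python
import numpy as np
from mpmath import mp

a = np.array([[1, 2], [3, 4]])
m = mp.matrix(a)          # explicit conversion from ndarray
print(m.rows, m.cols)     # 2 2
print(m[0, 1])            # entries are converted to mpf
```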
Numpy does support using mpf objects as array entries (via dtype=object). The issue is that the mpmath.matrix class uses a different API convention than numpy arrays, which causes the linalg functions to fail when processing numpy arrays. Here is a simple example.
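The failing call can be reproduced along these lines (a sketch; the exact exception depends on the mpmath version, since the linalg code reads matrix-only attributes such as .rows):

```python
import numpy as np
from mpmath import mp

a = np.array([[mp.mpf(1), mp.mpf(0)],
              [mp.mpf(0), mp.mpf(1)]], dtype=object)

# mpmath's eigen code expects an mp.matrix (e.g. it reads A.rows), which a
# plain ndarray does not provide, so passing the array directly fails.
try:
    mp.eigh(a)
    raised = False
except Exception:
    raised = True
print(raised)             # True without an explicit mp.matrix(a) conversion

# The explicit conversion works:
E, Q = mp.eigh(mp.matrix(a))
print(E[0], E[1])         # eigenvalues of the identity matrix
```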
Really? I don't think so.
Great! This concrete example shows where the problem is: mpmath/mpmath/matrices/eigen_symmetric.py, lines 556 to 560 in ea3ec6b.
Apparently, the code does assume that the input is a matrix instance. We could just do an automatic conversion if the input type is wrong. I don't understand why we should instead introduce the shape attribute, etc.
I don't understand why this library is so resistant to numpy. Numpy is almost the de facto standard for matrix and array objects. Leveraging the structure of numpy arrays, or a similar array structure, such as the attribute
The question is how exactly numpy and mpmath should interoperate. There are many different ways that might be expected to work. This PR supposes one possibility, but it is not clear why that is better than the others.
I don't think this is the case.
I don't see a big issue if you just add these attributes to the matrix type. The question is why we need all these changes like rows -> shape[0] across the whole codebase. Let's take your example again. Right now people can run it as:

```python
>>> from mpmath import mp
>>> import numpy as np
>>> a = np.array([[mp.mpf(1), mp.mpf(0)], [mp.mpf(0), mp.mpf(1)]])
>>> mp.eigh(mp.matrix(a))
(matrix(
[['1.0'],
 ['1.0']]), matrix(
[['1.0', '0.0'],
 ['0.0', '1.0']]))
```

I presume the "inconvenience" for users here is the need for an explicit conversion. As noted above, we could check inputs and do conversion to appropriate mpmath types: current code usually just silently assumes that arguments are mpmath types. That's one solution.

Another approach: what if we could handle both mp.matrix and np.matrix transparently? Perhaps this is doable if we add some missing attributes. For example, in the case of expm(), the following diff seems to be "working":

```diff
diff --git a/mpmath/matrices/calculus.py b/mpmath/matrices/calculus.py
index a3c7bb0..d415bb7 100644
--- a/mpmath/matrices/calculus.py
+++ b/mpmath/matrices/calculus.py
@@ -119,7 +119,6 @@ def expm(ctx, A, method='taylor'):
         finally:
             ctx.prec = prec
         return res
-    A = ctx.matrix(A)
     prec = ctx.prec
     j = int(max(1, ctx.mag(ctx.mnorm(A,'inf'))))
     j += int(0.5*prec**0.5)
```

```python
>>> import numpy as np
>>> from mpmath import mp
>>> mp.expm(mp.matrix([[1, 2], [3, 4]]))
matrix(
[['51.968956198705', '74.7365645670032'],
 ['112.104846850505', '164.07380304921']])
>>> mp.expm(np.matrix([[1, 2], [3, 4]], dtype=object))
matrix([[mpf('51.677495573354904'), mpf('74.311779826707408')],
        [mpf('111.46766974006113'), mpf('163.14516531341602')]],
       dtype=object)
>>> with mp.workprec(1000):
...     m = mp.expm(np.matrix([[1, 2], [3, 4]], dtype=object))
...
>>> m
matrix([[mpf('51.968956179359909'), mpf('74.736564538809021')],
        [mpf('112.10484680821353'), mpf('164.07380298757344')]],
       dtype=object)
```

As you can see, this hardly makes sense in general (maybe for the fp context). To get an accurate result, you must manually increase precision for the whole computation (a temporary increase of ctx.prec inside expm() will not affect np.matrix'es). Your solution is something in between: it creates the illusion that functions handle np.array's transparently, but that's not the case. Sometimes you do conversion to ctx.matrix and vice versa, sometimes not. Why do you think this code complication makes sense for mpmath?
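The precision point above can be seen without matrices at all: mpf operations use the context precision at the moment they execute, so precision must be raised around the whole computation, not afterwards (a minimal sketch):

```python
from mpmath import mp

x = mp.mpf(2)

lo = mp.sqrt(x)          # computed at the default 53-bit precision
with mp.workprec(200):
    hi = mp.sqrt(x)      # the same operation, computed at 200 bits

# The two results differ: raising precision after the fact cannot
# recover digits that were rounded away when the value was computed.
print(lo == hi)          # False
```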
There is no attribute
Creating np.array with
This is a compromise, since using numpy arrays internally in mpmath was rejected. Numpy arrays are designed better than mpmath.matrix in many aspects. The ideal solution would be to deprecate mp.matrix and use numpy arrays as the matrix container. If the library could accept this, the code would be much simpler.
You are right, my example should be corrected here. But the conclusion holds: the computation will lose precision.

```python
>>> mp.expm(mp.matrix([[1, 2], [3, 4]]))
matrix(
[['51.968956198705', '74.7365645670032'],
 ['112.104846850505', '164.07380304921']])
>>> mp.expm(np.matrix(mp.matrix([[1, 2], [3, 4]])).reshape(2,2))
matrix([[mpf('51.677495573354904'), mpf('74.311779826707408')],
        [mpf('111.46766974006113'), mpf('163.14516531341602')]],
       dtype=object)
>>> with mp.extraprec(1000):
...     m = mp.expm(np.matrix(mp.matrix([[1, 2], [3, 4]])).reshape(2,2))
...
>>> m
matrix([[mpf('51.968956193868668'), mpf('74.736564559954574')],
        [mpf('112.10484683993186'), mpf('164.07380303380053')]],
       dtype=object)
```
It seems this patch is not covered by tests. See the failed CI jobs.
The missing lines in coverage are all related to numpy. Unless numpy can be installed in the tests, this code cannot be tested.
It's allowed. Optional dependencies for CI include numpy.
Feel free to reopen this if you are ready to continue this work.
Following the discussion in Better integration with numpy #753, the mp.matrix class and the relevant linalg and arithmetic functions are updated to follow the array API conventions of numpy. mp.matrix can be converted to a numpy array using the .to_numpy() method. The mp-dtype numpy arrays can be passed as arguments to the affected functions. When these functions are called with numpy arrays, their return values are also numpy arrays.
This PR also fixes a bug in cholesky_solve for complex matrices.
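The proposed conversion is roughly equivalent to the following standalone sketch (the name to_numpy comes from this PR; this free function is illustrative, not the PR's actual implementation):

```python
import numpy as np
from mpmath import mp

def to_numpy(m):
    # Illustrative stand-in for the PR's proposed mp.matrix.to_numpy():
    # an object-dtype ndarray keeps the mpf/mpc entries exact.
    return np.array(m.tolist(), dtype=object)

a = to_numpy(mp.matrix([[1, 2], [3, 4]]))
print(a.shape)                 # (2, 2)
print(type(a[0, 0]).__name__)  # the entries remain mpmath scalars
```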