ENH: re-enabled old sv lapack routine as defaults #7879
Conversation
Let's see if it passes the tests through CI; it does so locally.
Please make, as a first step, a PR that does not add any new options but only reverts to the gesv+gecon approach. I don't think we want to add them to 1.0.0 at the least --- it's already branched and only bugfixes (such as the one concerning the performance) should go in. After that, we can discuss the possibility of having the added options.
Also, there are complex matrices that are symmetric but not Hermitian, and 'T' is used for these matrices for right inversion. Thus zsysv and zhesv solve different problems.
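The distinction can be checked directly against the low-level wrappers. This is a sketch, assuming the `scipy.linalg.lapack` wrappers for `zsysv` and `zhesv` are available (they are what the example matrix below exercises; the matrix itself is made up for illustration):

```python
import numpy as np
from scipy.linalg import lapack

# A complex *symmetric* (A == A.T) but non-Hermitian matrix.
A = np.array([[1 + 1j, 2j],
              [2j, 3 + 0j]])
b = np.ones(2, dtype=complex)

# zsysv assumes symmetry, zhesv assumes Hermitian-ness; both read only
# one triangle of A, so on this matrix they solve different problems.
_, _, x_sym, info_sym = lapack.zsysv(A, b)
_, _, x_her, info_her = lapack.zhesv(A, b)

print(np.allclose(A @ x_sym, b))   # True: the symmetric solver is correct here
print(np.allclose(A @ x_her, b))   # False: the Hermitian assumption does not hold
```

This is exactly why `assume_a='sym'` must not silently switch to the Hermitian driver for complex input.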
@ilayn for the
@pv But the change to the other routines has a regression on the precision of the algorithm. The only way to circumvent this is by using the `*svx` routines.
The *sv routines were used for many years without problems, so I doubt the precision issue is very important for most users.
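As a sketch of what the plain *sv path amounts to (assuming the `scipy.linalg.lapack` wrappers), `dgesv` is a single LU factorization plus triangular solves, with no iterative refinement step:

```python
import numpy as np
from scipy.linalg import lapack

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 100))   # many right-hand sides

# dgesv = getrf (LU) + getrs (triangular solves); the solve phase is
# O(n^2 * nrhs), with none of the refinement overhead of dgesvx.
lu, piv, x, info = lapack.dgesv(A, B)
print(info == 0 and np.allclose(A @ x, B))   # True
```

The performance gap discussed in this thread comes from `*svx` doing equilibration, condition estimation, and iterative refinement on top of this.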
The issue is that there is not much time to hash out the design and any other issues in time for the release, so I'm -1 on adding new features to 1.0.0 at this stage.
OK, I will amend; this forces me to remove the
@zerothi there's nothing wrong with this PR for master.
OK, will do. I will then just add a deprecation warning for the transposed keyword in this PR, to make it follow the PR I will create for 1.0.x.
Note that it's not possible to remove the `transposed` keyword in 1.0.0 since we'd have to deprecate it first.
It's possible to implement it via calling getrf+getrs instead of gesv --- that's actually what gesv does internally:
http://www.netlib.no/netlib/lapack/double/dgesv.f
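The getrf+getrs split mentioned above can be sketched with the `scipy.linalg.lapack` wrappers; `trans=1` requests the transposed system, which is exactly the capability `gesv` alone does not expose:

```python
import numpy as np
from scipy.linalg import lapack

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
b = rng.standard_normal(3)

# Factor once with getrf, then solve A.T @ x = b with getrs(trans=1);
# gesv is the same pair with trans fixed to 0 (no transpose).
lu, piv, info = lapack.dgetrf(A)
x, info = lapack.dgetrs(lu, piv, b, trans=1)
print(np.allclose(A.T @ x, b))   # True
```

A further advantage of the split is that the factorization can be reused for several right-hand sides or for both the plain and transposed systems.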
As I said, I won't remove it, simply deprecate it.
What I can do is: for … for … Or would you rather that …
"Luckily", there's a bug for complex so there are no backward
compatibility issues for what transposed means in that case:
```
>> scipy.linalg.solve(np.eye(3).astype(complex), np.ones(3),
transposed=True)
** On entry to ZGESVXUMEpsilonFull parameter number 2 had an illegal
value
ValueError: LAPACK reported an illegal value in 2-th argument.
```
For 1.0.0 it's probably best to just explicitly raise an error for the complex transposed case.
For 1.1.x we might want to reuse the keyword argument name, `transposed={True, 'C', 'T', 'N'}`, so that it's not necessary to deprecate the argument name. The behavior in the complex case for transposed=True can remain an error, since it's ambiguous which one was meant --- or it could be made to be 'C' to remain consistent with assume_a='her'.
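A minimal sketch of such an explicit error for 1.0.0 (the function name and message here are illustrative, not SciPy's actual private API):

```python
import numpy as np

def check_transposed(a, transposed):
    # Hypothetical guard: refuse transposed=True for complex input,
    # since it is ambiguous whether a^T x = b or a^H x = b was meant.
    if transposed and np.iscomplexobj(a):
        raise NotImplementedError(
            "transposed=True is not supported for complex matrices; "
            "it is ambiguous whether 'T' or 'C' was meant.")

check_transposed(np.eye(3), transposed=True)   # real input: fine
```

This replaces the opaque LAPACK "illegal value" failure shown above with a deliberate, documented error.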
@pv I think the transposed keyword was passing 'H' in the old scipy, whereas LAPACK only accepts 'C'; hence the argument error. So yes, a bug in the previous solve for complex cases. As for 1.1: I hardly think many people are aware of the keyword's existence (it only entered in 0.19), and we can retain it in 1.1 as a fallback. I much prefer fixing non-ideal keyword names rather than continuing with ambiguous names. So, for master I know what to do; for 1.0.x, please suggest your preferred route: still transposed={False, 'N', 'T', 'C'}, or, as I suggested, form={'none', 'trans', 'conj'}?
I'm having a hard time following the discussion, but I think I'm a bit closer to what @pv mentioned about reverting to the old version (if it is really the problem, since we also didn't have any complaints regarding the speed until now), because there is time pressure for 1.0 --- the alternative is not touching it and performing full surgery in 1.0.1 or 1.1. Yesterday I did some tests, and indeed the time is mostly spent on the refinement steps. Keyword names are of course open for improvement or renaming. But the transpose issue is simple: complex generic matrices can also take the 'T' keyword, and if sym is given for complex matrices it shouldn't switch to the Hermitian solution; otherwise it is not correct. I didn't test line by line, but I'm just mentioning it in case that's what happens.
I'll check on the bug as soon as I have some time.
@ilayn yes, I agree to the
This fixes the speed regression in scipy#7847. Since the default change from sv to svx, the solve routine suffered a huge performance penalty for anything but low-order NRHS. This commit fixes that issue by enabling the use of either 1) the svx or 2) the sv routines, with a default on the latter.

However, the main functionality gained by using svx was the easy check of the condition number to assert a non-singular matrix. This commit adds a call to the con routines to extract the appropriate condition number. This forces the addition of:

- lamch (machine precision extraction)
- gecon (condition number calculation from LU factorization)
- lange (1/I-norm of matrix)

This commit adds two arguments (and deprecates one):

- refine, bool, decides whether the svx (true) or sv (false) routines should be used.
- form = 'none', 'trans', 'conj', which refers to the form of the solution step. This is only valid if assume_a='gen'; a ValueError will be issued if not 'gen'. This keyword deprecates the "transposed" boolean, which only allowed 'T' (real) and 'C' (complex), but not 'T' for complex.

A fix for the deprecated sym_pos keyword, which wasn't used in the previous version. This has sadly resulted in two delocalized tests failing because of sym_pos; see the OptimizeWarning for scipy/optimize/tests/test_linprog.py.

A couple of additional tests have been added, mainly to check the arguments.

Signed-off-by: Nick Papior <nickpapior@gmail.com>
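The lamch/gecon/lange combination described in the commit message can be sketched by hand (assuming the `scipy.linalg.lapack` wrappers; the example matrix is made up to be nearly singular):

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[1.0, 2.0],
              [2.0, 4.0 + 1e-10]])        # nearly singular

anorm = lapack.dlange('1', A)             # lange: 1-norm of A
lu, piv, info = lapack.dgetrf(A)          # LU factorization, as gesv uses
rcond, _ = lapack.dgecon(lu, anorm)       # gecon: reciprocal condition number
eps = lapack.dlamch('e')                  # lamch: machine precision

# A solve wrapper can warn when rcond approaches eps; here we just inspect it.
print(rcond < 1e-8)                       # True: A is badly conditioned
```

This recovers the singularity diagnostic that `*svx` provided, at the cost of one extra norm and one condition estimate rather than full iterative refinement.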
(force-pushed from 890c99a to c71c26e)
The solve routine, when using the sym_pos keyword, can be inaccurate when the optimization approaches the result. Hence, the fallback should be allowed in the tests, i.e., a catch of OptimizeWarnings has been added. Signed-off-by: Nick Papior <nickpapior@gmail.com>
(force-pushed from c71c26e to c7370f3)
I have now squashed some testing commits into two commits, and here are some final remarks that need either approval or change, but at least require some comments from you:
Could you comment on the above points?
For the expm case, the bad conditioning possibly doesn't matter for the algorithm, so the warning can be ignored. This is easiest to do in the test itself, by
```
from scipy._lib._numpy_compat import suppress_warnings
...
def test_triangularity_perturbation(self):
...
with suppress_warnings() as sup:
sup.filter(RuntimeWarning,
"scipy.linalg.solve\nIll-conditioned.*")
A_expm_logm_perturbed = expm(A_logm_perturbed)
...
```
OK, I can add that, but what about 1.0.x?
Also, the expm test was written well before the change to gesvx, so I believe the test works as intended regardless of the bad conditioning.
Ah, yes, of course, I see. I will do that!
This fixes the speed regression in scipy#7847. Since the default change from sv to svx, the solve routine suffered a huge performance penalty for anything but low-order NRHS. This commit fixes that issue by converting to the sv routines, with condition number checking.

However, the main functionality gained by using svx was the easy check of the condition number to assert a non-singular matrix. This commit adds a call to the con routines to extract the appropriate condition number. This forces the addition of:

- lamch (machine precision extraction)
- gecon (condition number calculation from LU factorization)
- lange (1/I-norm of matrix)

This commit adds a ValueError issued for complex matrices and transposed=True (which was a bug in prior versions). The complex transposed case could be solved, but it is ambiguous whether the transpose or the Hermitian transpose is meant; hence, a ValueError is raised. See master (scipy#7879) for a fix.

A couple of additional tests have been added, mainly to check the arguments.

Signed-off-by: Nick Papior <nickpapior@gmail.com>
@zerothi I think that, after cherry-picking the commits, this is also now redundant, right?
Agreed, this PR was trying to retain both the sv and svx routines.
This fixes the speed regression in #7847. Since the default change from sv to svx, the solve routine suffered a huge performance penalty for anything but low-order NRHS.

This commit fixes that issue by enabling the use of either 1) the svx or 2) the sv routines, with a default on the latter.

However, the main functionality gained by using svx was the easy check of the condition number to assert a non-singular matrix. This commit adds a call to the con routines to extract the appropriate condition number. This forces the addition of:

- lamch (machine precision extraction)
- gecon (condition number calculation from LU factorization)
- lange (1/I-norm of matrix)

This commit adds two arguments (and deprecates one):

- refine, bool, decides whether the svx (true) or sv (false) routines should be used.
- form = 'none', 'trans', 'conj', which refers to the form of the solution step. This is only valid if refine=True; a ValueError will be issued if form is not 'none' and refine=False. This keyword deprecates the "transposed" boolean, which only allowed 'T' (real) and 'C' (complex), but not 'T' for complex.

A fix for the deprecated sym_pos keyword, which wasn't used in the previous version.

A couple of additional tests have been added, mainly to check the arguments.
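For reference, the assume_a keyword that this discussion revolves around (introduced in 0.19) selects which LAPACK driver family `scipy.linalg.solve` dispatches to. A minimal usage sketch, with a made-up positive definite matrix:

```python
import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A = A @ A.T + 4 * np.eye(4)        # symmetric positive definite
b = rng.standard_normal(4)

# assume_a picks the driver: 'gen' -> gesv, 'sym' -> sysv,
# 'her' -> hesv, 'pos' -> posv (this replaced the old sym_pos flag).
x = solve(A, b, assume_a='pos')
print(np.allclose(A @ x, b))       # True
```

Matching the assumption to the matrix structure is what lets `solve` use the cheaper specialized factorizations (Cholesky here) instead of a general LU.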