
Matrix cleanup stage 5 #12521

Merged
merged 8 commits into from May 11, 2017

Conversation

siefkenj
Contributor

@siefkenj siefkenj commented Apr 9, 2017

Added a MatrixReductions class which houses row/column
operations and row reduction. The new default algorithm
waits to normalize pivot rows until the end of the reduction.
This gives a tremendous speed improvement, especially for
matrices with symbols.

Also added are new methods:

  • `echelon_form`, which returns the echelon form of a matrix (often all that is needed to check a matrix's rank or solve an equation).
  • `is_echelon`, a property telling whether a matrix is in echelon form.
  • `elementary_row_op`, a nice interface to all the elementary row operations in one place.
  • `elementary_col_op`, a nice interface to all the elementary column operations in one place.
  • `row_del`, which is now part of `MatrixShaping`.
  • `col_del`, which is now part of `MatrixShaping`.

permuteBkwd and permuteFwd are deprecated.
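The normalize-last strategy can be sketched in plain Python over exact rationals (an illustration only, not the PR's actual `_row_reduce`; the function name here is made up). Elimination uses cross-multiplication instead of division, so intermediate entries stay division-free, and pivots are scaled to 1 only in a final pass:

```python
from fractions import Fraction

def rref_normalize_last(rows):
    """Row reduce, deferring pivot normalization to a final pass.

    Elimination uses cross-multiplication (row_i = p*row_i - v*row_p),
    which avoids division; for symbolic entries this sidesteps costly
    intermediate fraction simplification.
    """
    mat = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(mat), len(mat[0])
    pivots = []
    r = 0
    for c in range(ncols):
        if r >= nrows:
            break
        # find a nonzero pivot in column c at or below row r
        pr = next((i for i in range(r, nrows) if mat[i][c] != 0), None)
        if pr is None:
            continue
        mat[r], mat[pr] = mat[pr], mat[r]
        p = mat[r][c]
        for i in range(nrows):
            if i != r and mat[i][c] != 0:
                v = mat[i][c]
                mat[i] = [p * a - v * b for a, b in zip(mat[i], mat[r])]
        pivots.append(c)
        r += 1
    # normalize pivot rows only at the very end
    for r, c in enumerate(pivots):
        p = mat[r][c]
        mat[r] = [a / p for a in mat[r]]
    return mat, tuple(pivots)
```

With SymPy expressions in place of `Fraction`, deferring the divisions is where the speedup for symbolic matrices comes from.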

@siefkenj
Contributor Author

siefkenj commented Apr 9, 2017

Benchmarks on matrices with integer entries. First is the old row reduction algorithm (which normalized first) and second is the new one.

Size 2 Matrix([[3, 4], [42, 51]])
1000 loops, best of 3: 314 µs per loop
10000 loops, best of 3: 56.7 µs per loop

Size 3 Matrix([[37, 57, 20], [49, 34, 70], [36, 74, 38]])
1000 loops, best of 3: 733 µs per loop
10000 loops, best of 3: 140 µs per loop

Size 4 Matrix([[62, 96, 60, 54], [27, 66, 13, 15], [66, 39, 70, 51], [4, 42, 16, 53]])
1000 loops, best of 3: 1.37 ms per loop
1000 loops, best of 3: 295 µs per loop

Size 5 Matrix([[10, 96, 28, 83, 80], [20, 80, 32, 18, 97], [17, 26, 65, 9, 82], [47, 75, 31, 64, 90], [40, 40, 48, 62, 98]])
100 loops, best of 3: 2.21 ms per loop
1000 loops, best of 3: 551 µs per loop

Size 6 Matrix([[89, 85, 44, 37, 82, 93], [13, 20, 91, 96, 90, 53], [80, 69, 95, 38, 73, 38], [45, 4, 86, 9, 3, 91], [53, 56, 88, 93, 19, 89], [18, 62, 46, 18, 7, 92]])
100 loops, best of 3: 3.54 ms per loop
1000 loops, best of 3: 920 µs per loop

Size 7 Matrix([[69, 89, 93, 3, 87, 13, 99], [20, 78, 47, 95, 77, 63, 60], [33, 33, 97, 33, 13, 23, 83], [57, 15, 42, 0, 10, 36, 36], [3, 49, 57, 12, 46, 40, 18], [60, 72, 91, 48, 44, 3, 97], [82, 65, 1, 17, 15, 79, 69]])
100 loops, best of 3: 5.26 ms per loop
1000 loops, best of 3: 1.45 ms per loop

Benchmarks on matrices with symbols. First is the old row reduction algorithm (which normalized first) and second is the new one.

Size 3 Matrix([[x, 57, 20], [49, 34, x], [36, 74, 38]])
10 loops, best of 3: 30.8 ms per loop
10 loops, best of 3: 28.6 ms per loop

Size 4 Matrix([[x, 96, 60, 54], [27, 66, x, 15], [66, 39, 70, 51], [4, 42, 16, 53]])
10 loops, best of 3: 110 ms per loop
10 loops, best of 3: 76.2 ms per loop

Size 5 Matrix([[x, 96, 28, 83, 80], [20, 80, x, 18, 97], [17, 26, 65, 9, 82], [47, 75, 31, 64, 90], [40, 40, 48, 62, 98]])
1 loops, best of 3: 498 ms per loop
1 loops, best of 3: 317 ms per loop

Size 6 Matrix([[x, 85, 44, 37, 82, 93], [13, 20, x, 96, 90, 53], [80, 69, 95, 38, 73, 38], [45, 4, 86, 9, 3, 91], [53, 56, 88, 93, 19, 89], [18, 62, 46, 18, 7, 92]])
1 loops, best of 3: 2.38 s per loop
1 loops, best of 3: 1.19 s per loop

The code that was run was

from sympy import *
mms = [Matrix(0, 0, []), Matrix([[99]]), Matrix([
[ 3,  4],
[42, 51]]), Matrix([
[37, 57, 20],
[49, 34, 70],
[36, 74, 38]]), Matrix([
[62, 96, 60, 54],
[27, 66, 13, 15],
[66, 39, 70, 51],
[ 4, 42, 16, 53]]), Matrix([
[10, 96, 28, 83, 80],
[20, 80, 32, 18, 97],
[17, 26, 65,  9, 82],
[47, 75, 31, 64, 90],
[40, 40, 48, 62, 98]]), Matrix([
[89, 85, 44, 37, 82, 93],
[13, 20, 91, 96, 90, 53],
[80, 69, 95, 38, 73, 38],
[45,  4, 86,  9,  3, 91],
[53, 56, 88, 93, 19, 89],
[18, 62, 46, 18,  7, 92]]), Matrix([
[69, 89, 93,  3, 87, 13, 99],
[20, 78, 47, 95, 77, 63, 60],
[33, 33, 97, 33, 13, 23, 83],
[57, 15, 42,  0, 10, 36, 36],
[ 3, 49, 57, 12, 46, 40, 18],
[60, 72, 91, 48, 44,  3, 97],
[82, 65,  1, 17, 15, 79, 69]]), Matrix([
[66, 90, 13, 78, 98, 91, 49, 96],
[78, 95, 69, 64, 15, 24, 36,  8],
[49, 84, 48, 18, 64, 28, 16, 82],
[23, 66,  7, 73, 17, 42, 44, 12],
[89, 71, 82, 49, 30, 16, 33, 20],
[54, 17, 83, 53, 48, 92, 89, 79],
[98, 19, 24, 35, 45, 59, 91,  9],
[66, 69, 40, 89, 13, 50, 67, 87]]), Matrix([
[34, 81, 13, 15, 32, 32, 56, 41,  9],
[49, 15, 49, 56, 96, 52, 30,  5, 18],
[66, 33, 82,  7, 91, 90, 61, 12, 88],
[57, 46, 39, 48, 89, 45, 14, 82, 84],
[ 7, 33, 42, 86, 31, 60, 60, 34, 11],
[94, 26, 52, 14, 43, 53, 20, 94, 76],
[81, 79, 43,  1, 29, 41, 36, 90, 58],
[22, 57, 20, 46, 13,  3, 57, 53, 30],
[49, 24, 44, 90, 32, 36, 83,  2, 46]]), Matrix([
[81, 51, 45, 55, 92, 50, 42, 65, 43, 10],
[53, 77, 58, 13, 48, 10, 64, 29, 37, 36],
[51, 68,  0, 59, 96, 58, 25,  5, 41, 30],
[40, 49, 64, 39, 44, 13,  5, 56, 12, 96],
[12, 42, 13, 83, 94, 25, 52, 34, 50, 11],
[34, 60, 81, 75,  7, 92, 46, 94, 90, 34],
[84, 95, 18, 50, 96, 13, 51, 39, 65, 53],
[38, 17, 52, 84, 90, 10, 41, 40, 34, 75],
[87,  8, 80, 86, 76,  5, 44, 18, 31, 66],
[69, 19, 47, 49, 63, 52, 76, 46, 31,  0]])]

for i in range(2, 8):
    m = mms[i]
    print("\nSize", i, m)
    %timeit m.rref(lambda x: x.is_zero, lambda x:x)
    %timeit _eval_rref(m, lambda x: x.is_zero, lambda x:x)

for i in range(3, 8):
    m = mms[i]
    m = m.copy()
    m[0, 0] = Symbol('x')
    m[1, 2] = Symbol('x')
    print("\nSize", i, m)
    %timeit m.rref(lambda x: x.is_zero, lambda x:x)
    %timeit _eval_rref(m, lambda x: x.is_zero, lambda x:x)

The 7x7 symbolic case was too slow to finish row reducing with the old algorithm, but with the new one it takes ~4 seconds on my machine.

@jksuom
Member

jksuom commented Apr 9, 2017

This may not have much to do with this PR, but would it be possible to make the matrix module so general that the entries could belong to any ring, or at least to rings that are SymPy Domains?

@siefkenj
Contributor Author

@jksuom A lot of that should work already. SymPy calls the elements' multiply and add functions. The only place where care would be needed is inverting and the like. You can pass a custom iszerofunc to tell whether an element is zero for the row reduction procedures, etc. It shouldn't be hard to make a subclass of matrix over a particular ring that auto-sets everything that needs to be set!

@jksuom
Member

jksuom commented Apr 10, 2017

It also seems to me that most things should work. Only some methods that use division may have to check its availability and raise an exception if necessary. But one issue with the old implementation is that matrix elements are routinely sympified. Therefore elements of polys domains cannot be used in matrices. Perhaps sympification should be made optional.

@siefkenj
Contributor Author

@jksuom every method that simplifies elements should allow you to pass in a custom simplify function.

@jksuom
Member

jksuom commented Apr 10, 2017

MatrixBase currently has this line:

    _sympify = staticmethod(sympify)

where sympify is imported from core. In some places _sympify is imported as well. In addition to those, there should be the possibility of skipping sympification, preferably controlled by a keyword.
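The suggestion could look something like this (a purely hypothetical sketch; `RawMatrix` and the `converter` keyword are not part of this PR or of SymPy). Passing no converter leaves elements, such as polys-domain elements, untouched:

```python
class RawMatrix:
    """Hypothetical container where element conversion (e.g. sympify)
    is controlled by a keyword, so polys-domain elements can be stored
    without being sympified."""

    def __init__(self, rows, converter=None):
        # converter=None skips conversion entirely
        convert = converter if converter is not None else (lambda e: e)
        self._rows = [[convert(e) for e in row] for row in rows]

    def __getitem__(self, key):
        i, j = key
        return self._rows[i][j]
```

In SymPy itself the converter would default to `sympify`, preserving current behavior.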

@asmeurer
Member

I think we would also want to structure it like the polys where the computations happen at a low level on basic types, with minimal type checking, and the higher level provides the abstractions and type checking. That way we get the best performance possible.

And worth noting that domains would break the ability to do this:

>>> Matrix([1, 1.0])
Matrix([
[  1],
[1.0]])

since the elements would have to be unified to a single domain (either all floats or all rationals). It's not just an issue of avoiding sympify, the Matrix itself needs to know that its elements are from a given domain.

@siefkenj
Contributor Author

For immutable matrices, the Matrix constructor should be able to detect if all the elements are from the same domain and could automatically subclass to a homogeneously-typed matrix. I was already thinking there should be a FloatMatrix that would solve a lot of the numerical bugs that are filed against the matrix module and could dispatch to faster functions for their operations.

@asmeurer
Member

FloatMatrix would be a very special case. Aside from separating floats from rationals, the domains let you do things like assure that every element of a matrix is an element of some rational function field, with guaranteed zero equivalence testing (no need to call simplify, just put it over a common denominator and see if the numerator is 0). And even more advanced, but still simpler than full simplify, zero equivalence checking is possible for fields with algebraic extensions.

The polys also get performance by letting low-level operations take place on raw Python or gmpy integers or rationals, rather than SymPy Integer/Rational, which are much slower in comparison (especially to gmpy).
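The zero-equivalence test for rational functions mentioned above can be sketched with dense coefficient lists (an illustration only; the polys domains implement this far more generally): put both sides over a common denominator and check whether the numerator is identically zero.

```python
def poly_mul(a, b):
    """Multiply dense coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_sub(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def ratfunc_equal(num1, den1, num2, den2):
    """num1/den1 == num2/den2 iff num1*den2 - num2*den1 == 0:
    no call to a general simplify is needed, only polynomial
    arithmetic and a zero check on the numerator."""
    diff = poly_sub(poly_mul(num1, den2), poly_mul(num2, den1))
    return all(c == 0 for c in diff)
```

For example `(x + 1)/(x**2 - 1)` and `1/(x - 1)` compare equal this way without ever factoring or simplifying.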

@siefkenj
Contributor Author

For everything I've rewritten so far, iszerofunc is used before anything else is tried. If iszerofunc returns True or False and never None, no simplification is ever attempted.

@asmeurer Are you suggesting that the class be a DomainMatrix rather than several classes like FloatMatrix, RingMatrix, etc?

@asmeurer
Member

Here is a suggested design:

There are three levels, the top level looks and acts like it does now. If you access the elements you get SymPy objects. Below that you have the DomainMatrix, which knows that the elements are from a polys domain. That would be a wrapper for the lowest level, which is just the pure representation (list of lists or dict or whatever).

By the way, I don't think we should worry about non-field domains, for now at least. It's much simpler to always assume the domain is a field (so that you are dealing with a vector space, rather than an R-module).
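The three-level layering described here could be sketched like this (a hypothetical illustration of the suggested design, not code from SymPy; all class names are made up):

```python
class RawRep:
    """Lowest level (hypothetical): pure list-of-lists, no type checks.
    Computations would run here on raw Python/gmpy elements."""
    def __init__(self, rows):
        self.rows = rows

class DomainMatrix:
    """Middle level (hypothetical): wraps the raw representation and
    knows which polys domain all elements belong to."""
    def __init__(self, rows, domain):
        self.rep = RawRep(rows)
        self.domain = domain

class SymPyMatrix:
    """Top level (hypothetical): looks like today's Matrix; element
    access converts back to SymPy objects."""
    def __init__(self, dm, to_sympy=lambda e: e):
        self._dm = dm
        self._to_sympy = to_sympy

    def __getitem__(self, key):
        i, j = key
        return self._to_sympy(self._dm.rep.rows[i][j])
```

The point of the split is that the hot loops (addition, multiplication, row reduction) stay at the bottom level with minimal type checking.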

@siefkenj
Contributor Author

Interesting. That seems to be upside down from how I've been structuring it. I've been making all methods work in as much generality as possible and leaving it to subclasses to override those methods to meet their requirements. It sounds to me like you're suggesting having a Matrix whose args is a DomainMatrix and operations dispatched to methods of DomainMatrix.

@asmeurer
Member

The idea is to optimize for performance, especially for those operations that are used in algorithms that need to be fast (mainly basic addition and multiplication, and row reduction; other stuff is less important). The solve_lin_sys function in the polys module aims to partly do this now. The typical usage is in heurisch, and it is where that algorithm tends to spend most of its time.

@siefkenj
Contributor Author

I noticed that in the tests. heurisch seems to be quite fragile as well. Row reducing and then normalizing every row left things in the wrong form (even though it is at least 2x as fast) compared with naive row reduction where the rows are normalized along the way. Hence the introduction of the normalize_last keyword in this PR.

@asmeurer
Member

What do you mean the wrong form? Different answers from heurisch are acceptable so long as they are still correct.

@siefkenj
Contributor Author

siefkenj commented Apr 10, 2017

From the tests, the example was

mm = Matrix([
[    x**3/24 + 7/8, -x**3/6 + 1/2, -1, 0, x + 1],
[                0,             0,  0, 0,     0],
[x**4/24 + 13*x/12, -x**4/6 - x/3, -x, x,     0]])

which when row reduced in the naive way

mm.rref(simplify=cancel,normalize_last=False)[0] ==
Matrix([
[1, (-x**3/6 + 1/2)/(x**3/24 + 7/8), 1/(x**3/24 + 7/8), 0,  (x + 1)/(x**3/24 + 7/8)],
[0, 1, (-x + (x**4/24 + 13*x/12)/(x**3/24 + 7/8))/(-x**4/6 - x/3 - (-x**3/6 + 1/2)*(x**4/24 + 13*x/12)/(x**3/24 + 7/8)), x/(-x**4/6 - x/3 - (-x**3/6 + 1/2)*(x**4/24 + 13*x/12)/(x**3/24 + 7/8)), -(x + 1)*(x**4/24 + 13*x/12)/((x**3/24 + 7/8)*(-x**4/6 - x/3 - (-x**3/6 + 1/2)*(x**4/24 + 13*x/12)/(x**3/24 + 7/8)))],
[0, 0, 0, 0, 0]])

but when the rows are normalized last, you get

mm.rref(simplify=cancel,normalize_last=True)[0] ==
Matrix([
[1, (-x**3/6 + 1/2)/(x**3/24 + 7/8), -1/(x**3/24 + 7/8), 0, (x + 1)/(x**3/24 + 7/8)],
[0, 1, (x**4/24 - x*(x**3/24 + 7/8) + 13*x/12)/(-(-x**3/6 + 1/2)*(x**4/24 + 13*x/12) + (x**3/24 + 7/8)*(-x**4/6 - x/3)), x*(x**3/24 + 7/8)/(-(-x**3/6 + 1/2)*(x**4/24 + 13*x/12) + (x**3/24 + 7/8)*(-x**4/6 - x/3)), -(x + 1)*(x**4/24 + 13*x/12)/(-(-x**3/6 + 1/2)*(x**4/24 + 13*x/12) + (x**3/24 + 7/8)*(-x**4/6 - x/3))],
[0, 0, 0, 0, 0]])

Applying simplify to both these matrices shows that they are the same, however the difference is enough to break constant_system() in prde.py.

@asmeurer
Member

If simplify=cancel, shouldn't the terms in the answer be cancelled?

@siefkenj
Contributor Author

siefkenj commented Apr 11, 2017

@asmeurer not necessarily. simplify is only called if iszerofunc returns None for all the entries (technically, iszerofunc returns None or True and there is at least one None). If iszerofunc returns False for anything, it is used as a pivot and other elements aren't simplified.

Even so,

 cancel((-x + (x**4/24 + 13*x/12)/(x**3/24 + S(7)/8))/(-x**4/6 - x/3 - (-x**3/6 + S(1)/2)*(x**4/24 + 13*x/12)/(x**3/24 + S(7)/8))) == -S(1)/4

but the same entry for the other matrix

cancel((x**4/24 - x*(x**3/24 + S(7)/8) + 13*x/12)/(-(-x**3/6 + S(1)/2)*(x**4/24 + 13*x/12) + (x**3/24 + S(7)/8)*(-x**4/6 - x/3))) == 30/((x**3 - 3)*(x**3 + 26) - (x**3 + 2)*(x**3 + 21))

even though

sympy.simplify((x**4/24 - x*(x**3/24 + S(7)/8) + 13*x/12)/(-(-x**3/6 + S(1)/2)*(x**4/24 + 13*x/12) + (x**3/24 + S(7)/8)*(-x**4/6 - x/3))) == -S(1)/4
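The pivot-search behavior described at the top of this comment can be sketched as follows (a simplified illustration with made-up names; the actual code's `_find_reasonable_pivot` does more bookkeeping). `iszerofunc` is three-valued: `True` (zero), `False` (nonzero), or `None` (undecidable), and `simpfunc` is only tried as a fallback:

```python
def find_pivot(column, iszerofunc, simpfunc):
    """Return the index of the first entry known to be nonzero.

    If any entry gives a definite False, it becomes the pivot and no
    simplification is attempted.  simpfunc is tried only on the
    undecidable (None) candidates.
    """
    undecided = []
    for i, val in enumerate(column):
        z = iszerofunc(val)
        if z is False:
            return i          # definite pivot, no simplification needed
        if z is None:
            undecided.append(i)
    # fall back to simplifying the undecidable candidates
    for i in undecided:
        if iszerofunc(simpfunc(column[i])) is False:
            return i
    return None
```

This is why entries in the result need not be simplified: as long as some candidate tests definitely nonzero, `simpfunc` (e.g. `cancel`) is never called on the rest.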

@asmeurer
Member

Well that's a bug right there. cancel should always return a rational function with expanded numerator and denominator, with common factors cancelled out.

@asmeurer
Member

Also I think the expression should be -1/4, not 1/4. Was that just a typo on your part, or is it another thing that should be looked into?

@asmeurer
Member

I opened #12531 for this.

@siefkenj
Contributor Author

It was a typo. Should be -1/4.

@asmeurer
Member

That's good. A wrong result would be far more worrying than an improperly simplified result.

def _eval_col_del(self, col):
def entry(i, j):
i = i - 1 if i > col else i
return self[i, j]
Contributor

This is a nitpick, but how about return self[i-1, j] if i > col else self[i, j] instead of possibly setting i equal to itself when i <= col is True?


def _row_reduce(self, iszerofunc, simpfunc, normalize_last=True,
normalize=True, zero_above=True):
"""Row reduce `self` and return a typle (rref_matrix,
Contributor

typo: "typle"->"tuple"

zero if `iszerofunc` returns `None`
normalize_last : indicates where all row reduction should
happen in a fraction-free manner and then the rows are
normalized (so that the pivots are 1), or wheither
Contributor

typo: "wheither"->"whether"

# in the process. Let's not let them go to waste
for (offset, val) in newly_determined:
offset += piv_row
mat[offset*cols + piv_col] = val
Contributor

I believe this line of code can be omitted. This newly simplified entry is under the pivot position, so it will get zeroed when a multiple of the pivot row is added to the row containing this entry.

Contributor Author

Mathematically, that is true, but when the cross multiplication happens, it is better to do it with a simplified expression than with an unsimplified expression.

Contributor

Durr, you do use the simplified values I thought were just getting overwritten. I'm fired. Sorry about that.

if iszerofunc(val):
continue

cross_cancel(pivot_val, row, val, piv_row)
Contributor

Looks like you can get the same result with fewer calculations. When forming the linear combination of the pivot row with a row beneath it, it looks like you're explicitly computing the entry in the pivot column of the result, but this entry is zero by construction. When the pivot column is not zero, it looks like you're computing the entries to the left of the pivot in the linear combination of rows, but these are also zero by construction.

Contributor Author

I thought about that. It adds algorithmic complexity to only work with values on the right side of the matrix. I assumed multiplying by zero and adding zero were fast operations in sympy and so there would be little overhead. I didn't actually benchmark this though...

Eventually, integer/float-only matrices will be row reduced with mpmath, which is way faster, so if multiplying by zero really is fast, I don't think the speedup gained from avoiding a few multiplications would be worth it. Or, did I misunderstand your comment entirely?

Contributor

I didn't appreciate that in this variant of the elimination algorithm, one of the coefficients need not be one in the linear combination of rows that produces a zero in the desired position. Comment withdrawn.
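The linear combination under discussion (row_i = p*row_i - v*row_p, with p the pivot value and v the entry being eliminated) can be sketched as follows (illustrative only, not the PR's `cross_cancel`, which works on a flat list):

```python
def cross_cancel(pivot_row, other_row, piv_col):
    """Zero out other_row[piv_col] without dividing:
    other = p*other - v*pivot_row, where p is the pivot value and
    v is the entry to be eliminated."""
    p = pivot_row[piv_col]
    v = other_row[piv_col]
    return [p * a - v * b for a, b in zip(other_row, pivot_row)]
```

Note that the entry in the pivot column comes out `p*v - v*p = 0` by construction, and entries to the left of the pivot combine zeros with zeros, which is the redundant work the review comment pointed at; the question is whether skipping it is worth the added bookkeeping.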

@siefkenj
Contributor Author

@moorepants @jksuom would one of you mind reviewing this PR?

@@ -111,6 +114,11 @@ def __len__(self):
class MatrixShaping(MatrixRequired):
"""Provides basic matrix shaping and extracting of submatrices"""

def _eval_col_del(self, col):
def entry(i, j):
return self[i, j] if i <= col else self[i - 1, j]
Member

Does i denote the column index here?

Contributor Author

Good catch. And my test suite also let this one slide :-(.
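The bug caught here is that the shift was keyed on the row index `i` where the column index `j` was needed. A corrected entry function would presumably look like this (a hypothetical standalone sketch, not the committed fix):

```python
def col_del_entry(old, col):
    """Entry function for a matrix with column `col` deleted, keyed on
    the *column* index j: new[i][j] = old[i][j] for j < col, and
    old[i][j+1] for columns at or past the deleted one."""
    def entry(i, j):
        return old[i][j] if j < col else old[i][j + 1]
    return entry
```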

if op == "n->kn":
col = col if col is not None else col1
if col is None or k is None:
raise ValueError("For a {0} operation 'n->kn' you must proved the "
Member

proved -> provide?

algorithms start on the left, having complexity right-shifted
speeds things up.

returns a tuple (mat, perm) where perm is a permutation
Member

returns -> Returns

speeds things up.

returns a tuple (mat, perm) where perm is a permutation
of the columns to perform to shift the complex columns righ, and mat
Member

righ -> right

return mat

def elementary_col_op(self, op="n->kn", col=None, k=None, col1=None, col2=None):
"""Perfoms the elementary row operation `op`.
Member

The name refers to columns. Should there be two methods, one for columns and another for rows?

Contributor Author

There are indeed. Copy and paste error in the doc string.

@jksuom
Member

jksuom commented May 1, 2017

The matrix file is so large that I find it hard to form an overall view of its contents. Especially the MatrixBase class is very big. Am I correct in assuming that it should eventually be replaced by the new classes and reduced, or possibly totally removed? If not, then maybe it should be moved to another file.

It seems that my review will now primarily be restricted to trivial matters like typos. (Was the name _eval_is_hermetian chosen intentionally?) Otherwise I think this looks good.

@siefkenj
Contributor Author

siefkenj commented May 1, 2017

@jksuom The MatrixBase is indeed large. I am splitting out most of its methods into separate classes that will be inherited from. Eventually MatrixBase should have very few methods. I wasn't planning on splitting up the files though. matrices.py will still be large, though I can look into this if it is too large.

hermetian should be hermitian! (I struggle with spelling...)

The new tests in test_commonmatrix.py should cover all behavior that I moved to subclasses, and @twhunt has looked through my algorithms. After you both have reviewed, I think it should be good for a merge.

@jksuom
Member

jksuom commented May 11, 2017

This looks good to me. Thanks.

@jksuom jksuom merged commit 5ea9e02 into sympy:master May 11, 2017
@moorepants
Member

Seeing some performance regressions from this, see #14673.

@oscarbenjamin
Contributor

This PR caused a slowdown in solve_lin_sys as measured by the polys benchmarks:

$ git checkout 683e2c1b57e13e2971dd074e9578989082bbcd53
HEAD is now at 683e2c1b57 Matrix Cleanup Stage 5
$ time python -c 'from sympy.polys.benchmarks.bench_solvers import time_solve_lin_sys_165x165; time_solve_lin_sys_165x165()'
^CTraceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/enojb/current/sympy/sympy/sympy/polys/benchmarks/bench_solvers.py", line 250, in time_solve_lin_sys_165x165
    sol = solve_lin_sys(eqs, R_165)
  File "/Users/enojb/current/sympy/sympy/sympy/polys/solvers.py", line 38, in solve_lin_sys
    echelon, pivots = matrix.rref(iszerofunc=lambda x: not x, simplify=lambda x: x)
  File "/Users/enojb/current/sympy/sympy/sympy/matrices/matrices.py", line 2609, in rref
    normalize_last=normalize_last)
  File "/Users/enojb/current/sympy/sympy/sympy/matrices/matrices.py", line 2266, in _eval_rref
    zero_above=True)
  File "/Users/enojb/current/sympy/sympy/sympy/matrices/matrices.py", line 2422, in _row_reduce
    cross_cancel(pivot_val, row, val, piv_row)
  File "/Users/enojb/current/sympy/sympy/sympy/matrices/matrices.py", line 2373, in cross_cancel
    mat[p] = a*mat[p] - b*mat[p + q]
KeyboardInterrupt

real	7m33.707s
user	7m21.181s
sys	0m10.454s
$ git checkout HEAD^
Previous HEAD position was 683e2c1b57 Matrix Cleanup Stage 5
HEAD is now at 31957814e9 Merge remote-tracking branch 'upstream/master' into commonmatrix5
$ time python -c 'from sympy.polys.benchmarks.bench_solvers import time_solve_lin_sys_165x165; time_solve_lin_sys_165x165()'

real	0m11.345s
user	0m10.422s
sys	0m0.227s

The benchmark is made much faster under the PR #18844:

$ git checkout pr_domain_matrix
...
$ time python -c 'from sympy.polys.benchmarks.bench_solvers import time_solve_lin_sys_165x165; time_solve_lin_sys_165x165()'

real	0m4.593s
user	0m4.007s
sys	0m0.370s

The PR introduces a DomainMatrix class and has solve_lin_sys use that with its own rref implementation.
