Trac #16014: Improvements to discriminant computation
Currently (i.e. on Sage 6.1.1), the discriminant computation fails with a segmentation fault when the coefficients are too complicated. For example:

{{{
#!python
PR.<b,t1,t2,x1,y1,x2,y2> = QQ[]
PRmu.<mu> = PR[]
E1 = diagonal_matrix(PR, [1, b^2, -b^2])
RotScale = matrix(PR, [[1, -t1, 0], [t1, 1, 0], [0, 0, 1]])
E2 = diagonal_matrix(PR, [1, 1, 1+t1^2])*RotScale.transpose()*E1*RotScale
Transl = matrix(PR, [[1, 0, -x1], [0, 1, -y1], [0, 0, 1]])
E3 = Transl.transpose()*E2*Transl
E4 = E3.subs(t1=t2, x1=x2, y1=y2)
det(mu*E3 + E4).discriminant()
}}}

Apparently the Python interpreter reaches some limit on the depth of the call stack, since it seems to call `symtable_visit_expr` from `Python-2.7.6/Python/symtable.c:1191` over and over again. But this ticket is not about fixing Python. (If you want a ticket for that, feel free to create one and provide a reference here.)

Instead, one should observe that the above is only the discriminant of a third-degree polynomial. As such, it has an easy and well-known closed-form discriminant formula. Plugging the complicated coefficients into this easy formula leads to the solution rather quickly (considering the immense size of said solution). So perhaps we should employ this approach in the `discriminant` method.

I've [https://groups.google.com/d/topic/sage-support/fABfhHyioa0/discussion discussed this question on sage-support], and will probably continue the discussion. Right now, I'm unsure about when to choose a new method over the existing one. Should it be based on degree, using hardcoded versions of the well-known discriminants for lower degrees? Or should it rather be based on the base ring, thus employing a computation over some simple generators instead of the full coefficients if the base ring is a polynomial ring or some other fairly complex object? Here is a code snippet to illustrate the latter idea:

{{{
#!python
def generic_discriminant(p):
    """
    Compute discriminant of univariate (sparse or dense) polynomial.
    The discriminant is computed in two steps. First all non-zero
    coefficients of the polynomial are replaced with variables, and the
    discriminant is computed using these variables. Then the original
    coefficients are substituted into the result of this computation.

    This should be faster in many cases, particularly with big
    coefficients like complicated polynomials. In those cases, the matrix
    computation on simple generators is a lot faster, and the final
    substitution is cheaper than carrying all that information along at
    all times.
    """
    pr = p.parent()
    if not pr.ngens() == 1:
        raise ValueError("Must be univariate")
    ex = p.exponents()
    pr1 = PolynomialRing(ZZ, "x", len(ex))
    pr2.<y> = pr1[]
    py = sum(v*y^e for v, e in zip(pr1.gens(), ex))
    d = py.discriminant()
    return d(p.coefficients())
}}}

Personally, I think I prefer a case distinction based on the base ring, or a combination. If we do the case distinction based on base ring, then I'd also value some input as to which rings should make use of this new approach, and how to check for them. For example, I just noticed that #15061 discusses issues with discriminants of polynomials over power series, so I guess those might benefit from this approach as well.

URL: http://trac.sagemath.org/16014
Reported by: gagern
Ticket author(s): Martin von Gagern
Reviewer(s): Peter Bruin
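To make the degree-based idea concrete: for a cubic `a*x^3 + b*x^2 + c*x + d`, the well-known discriminant is `18*a*b*c*d - 4*b^3*d + b^2*c^2 - 4*a*c^3 - 27*a^2*d^2`. A minimal sketch in plain Python (ordinary integers stand in for the huge ring elements from the example above; the function name is mine, not Sage's):

{{{
#!python
def cubic_discriminant(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d via the classical formula.

    A hardcoded low-degree formula like this avoids building the full
    Sylvester matrix over complicated coefficients.
    """
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

# x^3 - 3*x + 2 = (x - 1)^2 * (x + 2) has a repeated root, so discriminant 0.
print(cubic_discriminant(1, 0, -3, 2))   # → 0
# x^3 - x = x*(x - 1)*(x + 1) has three distinct roots.
print(cubic_discriminant(1, 0, -1, 0))   # → 4
}}}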
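For context on what the generic route has to do, the discriminant is usually obtained from the resultant of p and its derivative, disc(p) = (-1)^(n(n-1)/2) * Res(p, p') / lc(p), where the resultant is a determinant of the (n+m)-sized Sylvester matrix. A self-contained pure-Python sketch of that computation (again my own illustrative names, not Sage code; coefficients are given as descending lists):

{{{
#!python
from fractions import Fraction

def determinant(m):
    """Determinant by Gaussian elimination over exact fractions."""
    m = [[Fraction(x) for x in row] for row in m]
    n = len(m)
    det = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return det

def sylvester(p, q):
    """Sylvester matrix of polynomials p (degree n) and q (degree m)."""
    n, m = len(p) - 1, len(q) - 1
    rows = [[0]*i + p + [0]*(m - 1 - i) for i in range(m)]
    rows += [[0]*i + q + [0]*(n - 1 - i) for i in range(n)]
    return rows

def discriminant(p):
    """disc(p) = (-1)^(n(n-1)/2) * Res(p, p') / lc(p)."""
    n = len(p) - 1
    dp = [c * (n - i) for i, c in enumerate(p[:-1])]  # derivative of p
    res = determinant(sylvester(p, dp))
    sign = -1 if (n * (n - 1) // 2) % 2 else 1
    return sign * res / Fraction(p[0])

print(discriminant([1, 0, -1, 0]))  # x^3 - x       → 4
print(discriminant([1, 2, 3]))      # x^2 + 2x + 3  → -8, i.e. b^2 - 4ac
}}}

When the coefficients are themselves enormous multivariate polynomials, every entry of this matrix is enormous, which is why computing the determinant over a handful of fresh generators and substituting afterwards (as in `generic_discriminant` above) looks attractive.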