
Multiplication of dense cyclotomic matrices should be faster #16116

Open

jplab opened this issue Apr 9, 2014 · 21 comments

Comments

@jplab

jplab commented Apr 9, 2014

This ticket organizes various improvements in order to get faster matrices over cyclotomic fields. The aim is to implement and compare three different ways to perform such computations:

  • libgap wrappers
  • the generic matrix class (matrix_generic_dense.Matrix_generic_dense)
  • the specialized, ill-suited class that is currently used by default in Sage (matrix_cyclo_dense.Matrix_cyclo_dense)

Concrete tickets:


Description from several years ago...

The multiplication of matrices with a (universal) cyclotomic field as base ring could be optimized, as the following profiling shows:

sage: def make_matrix1(R,a,b):
....:     return matrix(R, 3, [[-1, 1, 2*a],
....:            [-4*a*b - 1, 4*a*b + 4*b^2, 4*a*b + 2*a],
....:            [-2*a, 2*a + 2*b, 2*a]])
sage: PR.<x,y> = PolynomialRing(QQ)
sage: I = Ideal(x^2 - 1/2*x - 1/4, y^3 - 1/2*y^2 - 1/2*y + 1/8)
sage: Q = PR.quotient(I)
sage: elmt = make_matrix1(Q, x, y)
sage: %timeit elmt^2
1000 loops, best of 3: 1.17 ms per loop

sage: UCF.<E> = UniversalCyclotomicField()
sage: ae = (E(10)+~E(10))/2  #same value as a
sage: be = (E(14)+~E(14))/2  #same value as b
sage: m = make_matrix1(UCF, ae, be)
sage: %timeit m^2
100 loops, best of 3: 8.13 ms per loop

sage: CF.<F> = CyclotomicField(2*5*7)
sage: af = (F^7+~F^7)/2 #same value as a
sage: bf = (F^5+~F^5)/2 #same value as b
sage: m2 = make_matrix1(CF, af, bf)
sage: %timeit m2^2
100 loops, best of 3: 4.99 ms per loop

The three matrices elmt, m, and m2 are the same matrix encoded over three different base rings. One might naturally expect the cyclotomic field to be the optimal ring for such computations, but this does not seem to be the case in practice.

Here is a univariate example.

sage: def make_matrix2(R, a):
....:     return matrix(R, 3, [[-2*a, 1, 6*a+2],
....:               [-2*a, 2*a, 4*a+1],
....:               [0, 0, 1]])
sage: PR.<x> = PolynomialRing(QQ)
sage: I = Ideal(x^2 - 1/2*x - 1/4)
sage: Q = PR.quotient(I)
sage: elmt_uni = make_matrix2(Q, x)
sage: %timeit elmt_uni*elmt_uni
1000 loops, best of 3: 1.46 ms per loop

sage: CF.<F> = CyclotomicField(2*5)
sage: f5 = (F+~F)/2
sage: m = make_matrix2(CF, f5)
sage: type(m)
<type 'sage.matrix.matrix_cyclo_dense.Matrix_cyclo_dense'>
sage: m.parent()
Full MatrixSpace of 3 by 3 dense matrices over
Cyclotomic Field of order 10 and degree 4
sage: %timeit m*m
100 loops, best of 3: 1.98 ms per loop

Then I deactivated the check for cyclotomic fields on line 962 of the file src/sage/matrix/matrix_space.py in order to get a matrix_generic_dense instead of a matrix_cyclo_dense.

sage: CF.<F> = CyclotomicField(2*5)
sage: f5 = (F+~F)/2
sage: m = make_matrix2(CF, f5)
sage: m.parent()
Full MatrixSpace of 3 by 3 dense matrices over
Cyclotomic Field of order 10 and degree 4
sage: type(m)
<type 'sage.matrix.matrix_generic_dense.Matrix_generic_dense'>
sage: %timeit m*m
1000 loops, best of 3: 251 µs per loop

The gain is significant. Are there known use cases where the specialized implementation is faster than the generic one? If so, should we add a threshold test to choose between the two implementations? (A sketch of such a dispatch is given below.)
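A minimal sketch of such a dispatch (illustrative only, not existing Sage code; the helper name cyclo_matrix_class and the cutoff of 20 are assumptions that would have to be settled by benchmarking):

from sage.matrix import matrix_cyclo_dense, matrix_generic_dense

SMALL_CUTOFF = 20  # assumed threshold, to be determined by benchmarks

def cyclo_matrix_class(nrows, ncols):
    """Pick the dense matrix class for a cyclotomic base ring by size."""
    if max(nrows, ncols) <= SMALL_CUTOFF:
        # generic entrywise arithmetic wins for small matrices
        return matrix_generic_dense.Matrix_generic_dense
    # the specialized mod-p implementation wins for large ones
    return matrix_cyclo_dense.Matrix_cyclo_dense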

CC: @sagetrac-sage-combinat @nthiery @stumpc5 @videlec @williamstein

Component: linear algebra

Keywords: cyclotomic field, matrix, multiplication, benchmark, days57, days88

Issue created by migration from https://trac.sagemath.org/ticket/16116

@jplab jplab added this to the sage-6.2 milestone Apr 9, 2014

@sagetrac-vbraun-spam sagetrac-vbraun-spam mannequin modified the milestones: sage-6.2, sage-6.3 May 6, 2014
@sagetrac-vbraun-spam sagetrac-vbraun-spam mannequin modified the milestones: sage-6.3, sage-6.4 Aug 10, 2014
@videlec
Contributor

videlec commented Apr 11, 2015

comment:5

Hello,

With #18152, I got a 10x speedup

old version:

sage: %timeit m^2
100 loops, best of 3: 3.65 ms per loop

new version:

sage: %timeit m^2
The slowest run took 69.65 times longer than the fastest. This could mean that an intermediate result is being cached 
1000 loops, best of 3: 336 µs per loop

Vincent

@videlec
Contributor

videlec commented Apr 11, 2015

comment:6

And using libgap directly is even faster:

sage: M = m._libgap_()
sage: %timeit M^2
The slowest run took 9.57 times longer than the fastest. This could mean that an intermediate result is being cached 
1000 loops, best of 3: 183 µs per loop

So, as written at the bottom of the description of #18152, we should wrap GAP matrices to deal with dense cyclotomic matrices in Sage. A rough sketch of such a multiplication wrapper is given below.
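A rough sketch of such a wrapper (illustrative only; gap_matmul is a hypothetical helper, and the round-trip through _libgap_() and GapElement.matrix is assumed to behave as in current Sage):

def gap_matmul(A, B):
    """Multiply two Sage cyclotomic matrices by delegating to GAP."""
    R = A.base_ring()
    P = A._libgap_() * B._libgap_()   # the arithmetic happens inside GAP
    return P.matrix(R)                # convert the GAP matrix back to Sage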

Vincent

@videlec
Contributor

videlec commented Apr 11, 2015

comment:7

Hello,

I reformatted your examples so that they fit in fewer lines (they can easily be switched back to your original version if you do not like it).

I had a quick look at the code for dense cyclotomic matrices. The implementation is quite old and uses a lot of reduction mod p (even for multiplication). The code makes a lot of Python-level calls, such as creating a finite field or a matrix space, which are relatively slow compared to a small matrix multiplication. Did you try multiplying larger matrices (e.g. 10x10 or 15x15)? On the other hand, I am pretty sure that some cleaning could yield a great speedup. By cleaning I mean:

  • declare cdef variables wherever possible
  • keep imports inside the methods to a minimum
  • ...

You can also do some profiling on the code (using "%prun" and "%crun"); see A tutorial on profiling in Sage #17689, which is not yet in the current development release.
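For instance, profiling the product from the description could look like this (commands only, output omitted):

sage: m = make_matrix2(CF, f5)
sage: %prun -s cumulative for _ in range(1000): m*m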

Vincent

@videlec

This comment has been minimized.

@williamstein
Contributor

comment:8

Replying to @videlec:

> I had a quick look at the code for dense cyclotomic matrices. The implementation is quite old and uses a lot of reduction mod p (even for multiplication). [...] Did you try multiplying larger matrices (e.g. 10x10 or 15x15)?

I designed and implemented the algorithm for dense cyclotomic matrices. We were optimizing for larger matrices... which in the context of modular forms means at least 100 rows (and often much, much more). GAP/pari, on the other hand, optimize for relatively tiny matrices. The asymptotically fast algorithms for large matrices are totally different from those for small ones...

@videlec
Contributor

videlec commented Apr 11, 2015

comment:9

Replying to @williamstein:

> I designed and implemented the algorithm for dense cyclotomic matrices. We were optimizing for larger matrices... which in the context of modular forms means at least 100 rows (and often much, much more). GAP/pari, on the other hand, optimize for relatively tiny matrices.

All right. I now understand better what I read! I see two possibilities:

  1. [easy one] Add a test in the matrix space constructor:
     • if the size is small -> use the generic implementation of dense matrices
     • if the size is large -> use the optimized version
     This requires a bit of benchmarking.
  2. [better one] Wrap pari matrices for small sizes, or add another multiplication routine to the current datatype that is fast on small matrices (a rough sketch of the pari route is given below).
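A rough sketch of the pari route (illustrative only; pari_matmul is a hypothetical helper, and the round-trip through pari() and back through the field's element constructor is assumed to work):

def pari_matmul(A, B):
    """Multiply two matrices over a number field via PARI."""
    R = A.base_ring()
    P = pari(A) * pari(B)   # PARI t_MAT multiplication
    return matrix(R, A.nrows(), B.ncols(),
                  [R(P[i, j]) for i in range(A.nrows()) for j in range(B.ncols())])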

Vincent

@jplab
Author

jplab commented Apr 13, 2015

comment:10

Hi,

The case I'm interested in definitely involves small sizes: say up to 20-25 at the very biggest, and most commonly between 3 and 10. This is related to tickets #15703 and #16087.

How should one proceed to add another multiplication routine that could be used for small matrices?

Are there examples around of such a thing? I could look at it...

@williamstein
Contributor

comment:11

Replying to @jplab:

> How should one proceed to add another multiplication routine that could be used for small matrices? Are there examples around of such a thing? I could look at it...

Matrices over ZZ used to have special code for small versus large sizes. I think that variation in algorithms is now mostly hidden behind calls to FLINT. Look into it.

@videlec
Contributor

videlec commented Apr 13, 2015

comment:12

Replying to @williamstein:

> Matrices over ZZ used to have special code for small versus large sizes. I think that variation in algorithms is now mostly hidden behind calls to FLINT.

William, are you sure that the representation you used in Matrix_cyclo_dense is the thing we want for small sizes? If so, I would rather implement something for arbitrary number fields; I do not see why cyclotomic fields should be different. Did you have something in mind?

In the present case, I would rather modify MatrixSpace._get_matrix_class to choose the generic dense matrices for small sizes and see if something breaks.
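For experiments, one can also force the generic class per matrix space rather than patching matrix_space.py; a minimal sketch (commands only, output omitted), assuming the implementation keyword of MatrixSpace available in recent Sage versions:

sage: CF.<F> = CyclotomicField(10)
sage: MS = MatrixSpace(CF, 3, implementation='generic')
sage: m = MS(make_matrix2(CF, (F + ~F)/2))   # now a Matrix_generic_dense
sage: %timeit m*m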

@williamstein
Contributor

comment:13

> William, are you sure that the representation you used in Matrix_cyclo_dense is the thing we want for small sizes?

No. In fact, I'm pretty sure it is not what you would want for small sizes.

@tscrim
Collaborator

tscrim commented Sep 16, 2015

comment:14

We also have a related issue:

sage: R = CyclotomicField(12)
sage: M = matrix.random(R, 40,40)
sage: N = matrix.random(R, 3, 3)
sage: %time K = M.tensor_product(N)
CPU times: user 5.75 s, sys: 28.4 ms, total: 5.78 s
Wall time: 5.73 s
sage: R.defining_polynomial()
x^4 - x^2 + 1
sage: type(M)
<type 'sage.matrix.matrix_cyclo_dense.Matrix_cyclo_dense'>

sage: R = NumberField(x^4 - x^2 + 1, 'a')
sage: M = matrix.random(R, 40,40)
sage: N = matrix.random(R, 3, 3)
sage: %time K = M.tensor_product(N)
CPU times: user 225 ms, sys: 16.4 ms, total: 241 ms
Wall time: 232 ms
sage: type(M)
<type 'sage.matrix.matrix_generic_dense.Matrix_generic_dense'>

The issue comes from computing a scalar times a matrix. Here is some profiling info from doing it over the cyclotomic field:

   594202    1.806    0.000    3.620    0.000 number_field.py:9200(_element_constructor_)
   594202    0.816    0.000    1.171    0.000 number_field.py:6628(_coerce_non_number_field_element_in)
  4184691    0.569    0.000    0.569    0.000 {isinstance}

This is nowhere to be found when doing it over the plain number field. (For very small matrices this isn't a problem per se, but it is still visible when profiling.)

So my conclusion is that we are doing something wrong in how we handle scalar multiplication for cyclotomic matrices compared to the generic dense case. (A quick way to isolate that path is shown below.)
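One can profile the entrywise scalar-times-matrix operation that tensor_product performs in isolation (commands only, output omitted; R and N as above):

sage: a = R.random_element()
sage: %prun -s cumulative for _ in range(1000): a * N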

@tscrim
Collaborator

tscrim commented Sep 16, 2015

comment:15

I should note that I get very different profiling when I reverse the order of the tensor product, which surprises me given the naive implementation of the tensor product and how scalar multiplication ought to behave:

sage: R = CyclotomicField(2)
sage: M = matrix.random(R, 40,40)
sage: N = matrix.random(R, 3, 3)
sage: %time K = N.tensor_product(M)
CPU times: user 337 ms, sys: 20.6 ms, total: 358 ms
Wall time: 335 ms
sage: %time K = M.tensor_product(N)
CPU times: user 3.99 s, sys: 32.5 ms, total: 4.02 s
Wall time: 3.97 s

There are roughly 10x more calls to _element_constructor_ in one ordering:

48023 calls to _element_constructor_
577312 function calls (577311 primitive calls) in 0.421 seconds

versus

594198 calls to _element_constructor_
7240514 function calls (7240513 primitive calls) in 4.992 seconds

@tscrim
Collaborator

tscrim commented Sep 20, 2015

comment:16

The part which handles speeding up the tensor product is now #19258.

@tscrim tscrim modified the milestones: sage-6.4, sage-6.9 Sep 20, 2015
@tscrim
Collaborator

tscrim commented Jan 3, 2016

comment:17

It seems that matrix multiplication over the universal cyclotomic field is of the same order as over the polynomial quotient ring (probably because the UCF uses the generic matrix class):

sage: %timeit m * m
1000 loops, best of 3: 224 µs per loop
sage: %timeit elmt * elmt
1000 loops, best of 3: 207 µs per loop

However, for UCF matrices, I'm thinking we might benefit from either using (lib)GAP's matrix multiplication or internally storing the GAP element and only converting it to a Sage UCF element when necessary (a rough sketch of that idea is given below). See #19821 for a use case.
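A rough sketch of the lazy-storage idea (illustrative only; GapCycloMatrix is a hypothetical class, not Sage API, and the conversions via _libgap_() and GapElement.matrix are assumed to behave as in current Sage):

class GapCycloMatrix:
    """Keep the matrix as a GAP object; convert back to Sage only on demand."""
    def __init__(self, gap_mat, base_ring):
        self._gap = gap_mat      # libgap matrix; all arithmetic stays in GAP
        self._ring = base_ring   # Sage's UniversalCyclotomicField

    def __mul__(self, other):
        return GapCycloMatrix(self._gap * other._gap, self._ring)

    def to_sage(self):
        return self._gap.matrix(self._ring)

Usage would look like GapCycloMatrix(m._libgap_(), UCF); products stay on the GAP side until to_sage() is called.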

@tscrim tscrim modified the milestones: sage-6.9, sage-7.0 Jan 3, 2016
@videlec
Contributor

videlec commented Jan 3, 2016

comment:18

At least on sage-7.0.beta2, wrapping GAP matrices for the examples mentioned in the ticket description does not bring any magic:

sage: M = m._libgap_()
sage: %timeit A = M^2
1000 loops, best of 3: 181 µs per loop
sage: %timeit A = M^3
1000 loops, best of 3: 456 µs per loop

versus

sage: %timeit a = m^2
1000 loops, best of 3: 298 µs per loop
sage: %timeit a = m^3
1000 loops, best of 3: 690 µs per loop

That is less than a 2x speedup. But in this example the matrix is small and its coefficients are relatively dense (~25 nonzero coefficients). However, the gain is significant for 10x10 dense matrices with small coefficients (M1 and M2 below denote the corresponding libgap matrices):

sage: m1 = matrix(10, [E(randint(2,3)) for _ in range(100)]) 
sage: m2 = matrix(10, [E(randint(2,3)) for _ in range(100)])
sage: %timeit m1*m2
100 loops, best of 3: 4.51 ms per loop
sage: %timeit M1*M2
1000 loops, best of 3: 329 µs per loop

We might update the ticket description accordingly. Two concrete propositions are:

  • below a certain threshold (to be determined), use generic matrices for cyclotomic fields
  • wrap GAP matrices for the UCF

What do you think?

@tscrim
Collaborator

tscrim commented Jan 3, 2016

comment:19

A 30-40% reduction in running time is nothing to scoff at either. So from that data, I think that for dense matrices over the UCF we should always just wrap GAP.

@videlec
Contributor

videlec commented Aug 24, 2017

Changed keywords from cyclotomic field, matrix, multiplication, benchmark, days57 to cyclotomic field, matrix, multiplication, benchmark, days57, days88

@videlec videlec modified the milestones: sage-7.0, sage-8.1 Aug 24, 2017
