
Update a missing important speed improvement for subword complexes #20943

Closed

stumpc5 opened this issue Jul 5, 2016 · 36 comments

Comments

stumpc5 (Contributor) commented Jul 5, 2016

The method _flip_c currently uses

nr_ref = len(W.long_element(as_word=True))

which drastically slows down computations (here: the construction of cluster complexes using subword complexes through the greedy flip algorithm).

CC: @tscrim @fchapoton @nthiery

Component: combinatorics

Keywords: reflection group, coxeter group, subword complex, days80

Author: Christian Stump

Branch/Commit: 6f2172b

Reviewer: Frédéric Chapoton

Issue created by migration from https://trac.sagemath.org/ticket/20943

stumpc5 added this to the sage-7.3 milestone on Jul 5, 2016
stumpc5 (Contributor, Author) commented Jul 5, 2016

comment:1

I will provide code in a minute; any suggestions on how to properly get the number of reflections as fast as possible for WeylGroups or CoxeterGroups? Currently it is computed from the definition via the degrees, but that is too slow for something that should and can be fast.
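For reference, the two quantities being compared here agree for any finite Coxeter group: the number of reflections is the sum of (d - 1) over the degrees, which also equals the length of the longest element. A minimal sanity check in a Sage session, assuming W.degrees() is available (as the default number_of_reflections code discussed below suggests); both lines should return 63 for E7:

sage: W = WeylGroup(['E',7])
sage: sum(d - 1 for d in W.degrees())      # number of reflections via the degrees
63
sage: len(W.long_element(as_word=True))    # length of the longest element, as used in _flip_c
63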

stumpc5 (Contributor, Author) commented Jul 5, 2016

comment:2

For Weyl groups, I get

sage: W = WeylGroup(['E',7])
sage: %timeit len(W.long_element(as_word=True))
10 loops, best of 3: 98.6 ms per loop
sage: %timeit W.number_of_reflections()
1 loop, best of 3: 208 ms per loop

and for Coxeter groups, I get

sage: W = CoxeterGroup(['E',7])
sage: %timeit len(W.long_element(as_word=True))
1 loop, best of 3: 206 ms per loop
sage: %timeit W.number_of_reflections()
1 loop, best of 3: 378 ms per loop

so I will keep the current implementation for those (though this is not as clean as it should be).

stumpc5 (Contributor, Author) commented Jul 5, 2016

Branch: u/stumpc5/20943

stumpc5 (Contributor, Author) commented Jul 5, 2016

comment:4

sage: W = ReflectionGroup(['A',6]); c = list(W.index_set())
sage: Q = c+W.w0.coxeter_sorting_word(c)
sage: %time S = SubwordComplex(Q,W.w0,algorithm="greedy")

with the fix, I get

CPU times: user 222 ms, sys: 16.1 ms, total: 238 ms
Wall time: 248 ms

and without the fix, I get

CPU times: user 1.35 s, sys: 37.7 ms, total: 1.39 s
Wall time: 1.4 s

For the E types, it obviously becomes much worse.

stumpc5 (Contributor, Author) commented Jul 5, 2016

comment:5

I tried to push and to set the branch by hand, but I am not sure it actually worked...

A comment from stumpc5 has been minimized.

fchapoton (Contributor)

comment:7

Sorry, but I do not understand the logic of your change.

What is this attribute __number_of_reflections? Does it exist only in the complex reflection group implementation?

One should also replace the default code for number_of_reflections by the faster

from sage.rings.all import ZZ
return ZZ.sum(self.degrees() - 1)

which avoids using rank.


New commits:

dad4de4  fixed the bux
aaf771e  Merge branch 'develop' into u/stumpc5/20943

fchapoton (Contributor)

Commit: aaf771e

fchapoton (Contributor)

comment:8

The doctests are missing the # optional tag.

stumpc5 (Contributor, Author) commented Jul 6, 2016

comment:9

Okay, I guess you are right. I should and will do it properly:

  1. make the method ReflectionGroup.number_of_reflections use the attribute ReflectionGroup._number_of_reflections instead of recomputing it,

  2. make this flip code use the method instead of the attribute for ReflectionGroup and also use this attribute for WeylGroup and CoxeterGroup, and finally

  3. make WeylGroup and CoxeterGroup compute the number of reflections in the fastest possible way (a rough sketch of steps 1 and 2 is given below).
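A rough sketch of what steps 1 and 2 could look like, using the names from the discussion above; the actual code on the branch may differ:

# step 1: in the complex reflection group implementation, return the stored
# attribute instead of recomputing the number of reflections
def number_of_reflections(self):
    return self._number_of_reflections

# step 2: in the flip code, ask the group for the number of reflections
# instead of rebuilding the longest word on every call
nr_ref = W.number_of_reflections()   # was: len(W.long_element(as_word=True))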

sagetrac-git (Mannequin) commented Jul 6, 2016

Branch pushed to git repo; I updated commit sha1. New commits:

f17422a  working out the proper way to compute the number of reflections

sagetrac-git (Mannequin) commented Jul 6, 2016

Changed commit from aaf771e to f17422a

sagetrac-git (Mannequin) commented Jul 6, 2016

Changed commit from f17422a to 6e97757

sagetrac-git (Mannequin) commented Jul 6, 2016

Branch pushed to git repo; I updated commit sha1. New commits:

6e97757  updated the new doctests to be optional

fchapoton (Contributor)

comment:12

OK, looks much better.

Have you redone the timings to compare the new number_of_reflections with the previously used length of the longest word?

sagetrac-git (Mannequin) commented Jul 6, 2016

Changed commit from 6e97757 to 35a98d7

sagetrac-git (Mannequin) commented Jul 6, 2016

Branch pushed to git repo; I updated commit sha1. New commits:

35a98d7  fixed the new doctests, all tests seem to pass now

stumpc5 (Contributor, Author) commented Jul 6, 2016

comment:14

The timings should still be the old ones, as the change

-            return ZZ.sum(self.degrees()) - self.rank()
+            return ZZ.sum(deg-1 for deg in self.degrees())

should not make any difference... (just checked, no change there). It would be better to either speed up the degrees method for WeylGroup and CoxeterGroup, or make number_of_reflections there use the length of the longest element instead.
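Indeed the two sides of that diff agree for any finite reflection group, since subtracting the rank subtracts 1 once per degree. A quick check in a Sage session (ReflectionGroup needs the optional gap3 package, as elsewhere in this ticket):

sage: W = ReflectionGroup(['A',6])                                              # optional - gap3
sage: ZZ.sum(W.degrees()) - W.rank() == ZZ.sum(deg - 1 for deg in W.degrees())  # optional - gap3
True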

stumpc5 (Contributor, Author) commented Jul 6, 2016

comment:15

But I think that should go into a different ticket.

fchapoton (Contributor)

comment:16

Replying to @stumpc5:

But I think that should go into a different ticket.

ok, indeed.

fchapoton (Contributor)

comment:17

An idea: why not make number_of_reflections a cached_method in all cases?
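For illustration, that would roughly amount to decorating the default implementation with Sage's cached_method. A minimal sketch; MyGroup and its hard-coded degrees are illustrative stand-ins, not the code on the branch:

from sage.misc.cachefunc import cached_method
from sage.rings.all import ZZ

class MyGroup(object):
    # illustrative stand-in for the Coxeter/reflection group parent
    def degrees(self):
        return (2, 3, 4, 5, 6, 7)   # e.g. the degrees of type A6

    @cached_method
    def number_of_reflections(self):
        # computed once from the degrees; later calls are served from the cache
        return ZZ.sum(deg - 1 for deg in self.degrees())

Here MyGroup().number_of_reflections() would return 21 (the number of reflections in type A6), and any later call just reads the cached value.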

stumpc5 (Contributor, Author) commented Jul 6, 2016

comment:18

That is now #20956.

stumpc5 (Contributor, Author) commented Jul 6, 2016

comment:19

Replying to @fchapoton:

An idea: why not make number_of_reflection a cached_method in all cases?

Why not, we could also do that. Maybe that's even the "right" solution, as the caching takes hardly any memory while avoiding redoing a bigger computation again and again.

stumpc5 (Contributor, Author) commented Jul 6, 2016

comment:20

If you do not object, I would add this to this ticket and make the other one a duplicate.

sagetrac-git (Mannequin) commented Jul 6, 2016

Branch pushed to git repo; I updated commit sha1. New commits:

250aaee  added cached methods to number of reflections/hyperplanes and rank

sagetrac-git (Mannequin) commented Jul 6, 2016

Changed commit from 35a98d7 to 250aaee

fchapoton (Contributor)

comment:22

ok, then you can get rid of the attribute and the method in reflection_group_complex

stumpc5 (Contributor, Author) commented Jul 6, 2016

comment:23

Replying to @fchapoton:

ok, then you can get rid of the attribute and the method in reflection_group_complex

Out of curiosity: is it faster to use a cached method, or to use an attribute? Or is that difference never important?

fchapoton (Contributor)

comment:24

I do not know, sorry. I guess both are fast enough for our purposes, but maybe we can aim for simplicity for the moment?

sagetrac-git (Mannequin) commented Jul 6, 2016

Changed commit from 250aaee to 6f2172b

sagetrac-git (Mannequin) commented Jul 6, 2016

Branch pushed to git repo; I updated commit sha1. New commits:

6f2172b  replaced _number_of_reflections by number_of_reflections() everywhere

fchapoton (Contributor)

comment:26

ok, let it be.

fchapoton (Contributor)

Reviewer: Frédéric Chapoton

stumpc5 (Contributor, Author) commented Jul 6, 2016

comment:27

Replying to @fchapoton:

ok, let it be.

thanks!

tscrim (Collaborator) commented Jul 6, 2016

comment:28

Just FYI - calling a @cached_method is somewhat slower than a (Python) attribute lookup (about 35% in the timings below):

sage: class Foo(object):
....:     def __init__(self, x):
....:         self.x = x
....:         self._y = x
....:     @cached_method
....:     def y(self):
....:         return self._y
....:     
sage: F = Foo(5)
sage: %timeit F.x
The slowest run took 73.41 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 42.2 ns per loop
sage: %timeit F.y()
The slowest run took 2799.35 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 57.1 ns per loop

However, it is better code (IMO) to use the cached method mechanism, and it makes it easier to change/override it.

vbraun (Member) commented Jul 8, 2016

Changed branch from u/stumpc5/20943 to 6f2172b
