refresh similarity class #31520
Comments
Branch: u/chapoton/31520
Reviewer: Samuel Lelièvre
comment:2
Good improvements. Optional further steps, but feel free to consider them out of scope here.

Reindex to clarify and avoid subtractions:
```diff
- return prod(1 - q**(-i - 1) for i in range(n))
+ return prod(1 - q**i for i in range(-n, 0))
```

Use iterator rather than list for product:
```diff
- return q**centralizer_algebra_dim(la)*prod([fq(m, q=q) for m in la.to_exp()])
+ return q**centralizer_algebra_dim(la)*prod(fq(m, q=q) for m in la.to_exp())
```
or maybe even use product with starting point:
```diff
- return q**centralizer_algebra_dim(la)*prod([fq(m, q=q) for m in la.to_exp()])
+ return prod((fq(m, q=q) for m in la.to_exp()), q**centralizer_algebra_dim(la))
```

Use parentheses instead of end-of-line backslashes:
```diff
- return isinstance(other, PrimarySimilarityClassType) and \
-     self.degree() == other.degree() and \
-     self.partition() == other.partition()
+ return (isinstance(other, PrimarySimilarityClassType)
+         and self.degree() == other.degree()
+         and self.partition() == other.partition())
```

Use iterator rather than list for products:
```diff
- return prod([PT.centralizer_group_card(q=q) for PT in self])
+ return prod(PT.centralizer_group_card(q=q) for PT in self)
```
```diff
- numerator = prod([prod([primitives(d+1, invertible=invertible, q=q)-i for i in range(list_of_degrees.count(d+1))]) for d in range(maximum_degree)])
+ numerator = prod(prod(primitives(d+1, invertible=invertible, q=q) - i
+                       for i in range(list_of_degrees.count(d + 1)))
+                  for d in range(maximum_degree))
```
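A minimal plain-Python sketch of the two idioms above, using `math.prod` as a stand-in for Sage's `prod` (the `Fraction` values and the exponent `q**2` are illustrative only; note that `math.prod` takes the initial value as the keyword-only `start`, while Sage's `prod` takes it as a second positional argument):

```python
from fractions import Fraction
from math import prod

q, n = Fraction(3), 4  # toy values standing in for Sage elements

# list comprehension: materializes every factor before multiplying
with_list = prod([1 - q**i for i in range(-n, 0)])
# generator expression: same factors, consumed lazily one at a time
with_gen = prod(1 - q**i for i in range(-n, 0))
assert with_list == with_gen

# folding an extra factor in as the product's starting value
base = q**2
assert prod((1 - q**i for i in range(-n, 0)), start=base) == base * with_gen
```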
In that last case, would a double comprehension be faster than a product of products?
```diff
- numerator = prod([prod([primitives(d+1, invertible=invertible, q=q)-i for i in range(list_of_degrees.count(d+1))]) for d in range(maximum_degree)])
+ numerator = prod(primitives(d + 1, invertible=invertible, q=q) - i
+                  for d in range(maximum_degree)
+                  for i in range(list_of_degrees.count(d + 1)))
```
(or maybe a falling factorial or Pochhammer symbol?)

Iterator vs list for products and sums:
```diff
- return prod([PT.statistic(func, q=q) for PT in self])
+ return prod(PT.statistic(func, q=q) for PT in self)
```
```diff
- return sum([tau.statistic(stat, q=q)*tau.number_of_matrices(invertible=invertible, q=q) for tau in self])
+ return sum(tau.statistic(stat, q=q)
+            * tau.number_of_matrices(invertible=invertible, q=q)
+            for tau in self)
```
```diff
- return sum([tau.statistic(stat, q=q)*tau.number_of_classes(invertible=invertible, q=q) for tau in self])
+ return sum(tau.statistic(stat, q=q)
+            * tau.number_of_classes(invertible=invertible, q=q)
+            for tau in self)
```
```diff
- return sum([tau.statistic(stat, invertible=invertible, q=q) for tau in self])
+ return sum(tau.statistic(stat, invertible=invertible, q=q)
+            for tau in self)
```

Line break:
```diff
- yield (tau.centralizer_group_card(q=q), tau.number_of_classes(invertible=invertible, q=q))
+ yield (tau.centralizer_group_card(q=q),
+        tau.number_of_classes(invertible=invertible, q=q))
```

Iterator vs list in sums and products:
```diff
- return prod([ext_orbits(PT, q=q, selftranspose=selftranspose) for PT in tau])
+ return prod(ext_orbits(PT, q=q, selftranspose=selftranspose)
+             for PT in tau)
```
```diff
- return sum([tau.number_of_classes(invertible=invertible, q=q)*ext_orbits(tau, q=q, selftranspose=selftranspose) for tau in SimilarityClassTypes(n)])
+ return sum(tau.number_of_classes(invertible=invertible, q=q)
+            * ext_orbits(tau, q=q, selftranspose=selftranspose)
+            for tau in SimilarityClassTypes(n))
```
```diff
- size = prod([list(entry)[0] for entry in item])
- freq = prod([list(entry)[1] for entry in item])
+ size = prod(list(entry)[0] for entry in item)
+ freq = prod(list(entry)[1] for entry in item)
```

Grab pair components at iteration:
```diff
- for pair in ext_orbit_centralizers(tau, q=q, selftranspose=selftranspose):
-     yield (q**tau.centralizer_algebra_dim()*pair[0], tau.number_of_classes(invertible=invertible, q=q)*pair[1])
+ for a, b in ext_orbit_centralizers(tau, q=q, selftranspose=selftranspose):
+     yield (q**tau.centralizer_algebra_dim()*a,
+            tau.number_of_classes(invertible=invertible, q=q)*b)
```
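On the double-comprehension question: a single generator with two `for` clauses multiplies exactly the same factors as the product of inner products, so the two forms agree. A sketch with toy data (the `primitives` stub, `list_of_degrees`, and `maximum_degree` values here are hypothetical stand-ins, not the real Sage functions):

```python
from math import prod

list_of_degrees = [1, 1, 2]  # toy data
maximum_degree = 2

def primitives(d):  # stub standing in for primitives(d, invertible=..., q=...)
    return 10 * d

# product of inner products
nested = prod(prod(primitives(d + 1) - i
                   for i in range(list_of_degrees.count(d + 1)))
              for d in range(maximum_degree))
# single flattened double comprehension: same factors, one pass
flat = prod(primitives(d + 1) - i
            for d in range(maximum_degree)
            for i in range(list_of_degrees.count(d + 1)))
assert nested == flat
```

Each inner run `primitives(d+1) - i` for `i = 0, …, k-1` is indeed a falling factorial, which is what motivates the Pochhammer remark above.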
comment:3
Sure, there is still work to be done on this file. I don't really want to spend more time on it. The main goal is to get a more reasonable doctest.
comment:4
Then let's leave it at that for today.
Changed branch from u/chapoton/31520 to
Using autopep8 to clean up, and making the long-time doctest much shorter.
CC: @slel
Component: combinatorics
Author: Frédéric Chapoton
Branch/Commit: 39c3abf
Reviewer: Samuel Lelièvre
Issue created by migration from https://trac.sagemath.org/ticket/31520