Further functionality for hyper complexes #3021
Conversation
Codecov Report
Additional details and impacted files

```
@@            Coverage Diff             @@
##           master    #3021      +/-   ##
===========================================
+ Coverage   55.20%   80.87%   +25.66%
===========================================
  Files         514      546       +32
  Lines       69230    72519     +3289
===========================================
+ Hits        38221    58647    +20426
+ Misses      31009    13872    -17137
```
A good place to start checking out the new features is the test files. I added a few comments there. Part of the code for the simplification is imported from #3002. If its form here is convincing, I can refactor that PR.
Force-pushed from 23bb3da to 0dddd4c
```julia
mutable struct MultiIndicesOfDegree
  n::Int
  d::Int

  function MultiIndicesOfDegree(n::Int, d::Int)
    @assert n >= 0 "length of index must be non-negative"
    @assert d >= 0 "degree must be non-negative"
    return new(n, d)
  end
end
```
@joschmitt : I needed an iterator over all multi-indices `I = (i_1, i_2, ..., i_n)` with `i_k >= 0` and `i_1 + i_2 + ... + i_n = d` here. I didn't want to go through your `monomials_of_degree` things, because I would have to create a dummy ring for that and then extract the exponent vectors.

Question: Are you using such an iterator in the internals of your code? If not, do you think it would make sense to build your iterator on this one?
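For reference, the enumeration being asked for — all weak compositions of `d` into `n` non-negative parts — can be sketched with a short recursion. This is only an illustration in plain Python (so it runs without any Oscar dependency), not the implementation in this PR; the name `multi_indices` is made up for the example:

```python
from math import comb

def multi_indices(n, d):
    """Lazily yield all tuples (i_1, ..., i_n) with i_k >= 0 and sum d."""
    if n == 0:
        # By convention, the empty tuple is the only multi-index of degree 0.
        if d == 0:
            yield ()
        return
    if n == 1:
        yield (d,)
        return
    # Choose the first entry, then recurse on the remaining n - 1 entries.
    for first in range(d + 1):
        for rest in multi_indices(n - 1, d - first):
            yield (first,) + rest

indices = list(multi_indices(3, 2))
# Stars and bars: there are C(n + d - 1, d) such multi-indices.
assert len(indices) == comb(3 + 2 - 1, 2)  # 6 of them for n = 3, d = 2
assert all(sum(t) == 2 and all(i >= 0 for i in t) for t in indices)
```

Because this is a generator, a consumer that only needs a prefix of the enumeration never pays for the rest of it.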
I think in combinatorics this is called
(I didn't implement it like this back then because I hoped that some kind of combinatorics expert would come round and do it properly... For the same reason, I also have my own multiset partitions somewhere in the invariant theory code... 😞)

Caveat: Your algorithm appears to be different from what I implemented (for which I don't have a reference; maybe I just took what they have in Combinatorics.jl, I don't remember), so I would like to compare these two approaches runtime-wise before we delete "my" implementation. I can also do this once this pull request is merged.
Sounds good! I also simply did a straightforward (and probably not very efficient) implementation. But as long as we don't have a better one, I think it's better to have some implementation rather than none.

Comparing the speed of the two is good! Finding a combinatorics expert to tweak our stuff would be even better. But that can also be done later, since the user-facing part will not change.
There is also a native Julia version: `Iterators.ProductIterator((1:2, 1:3, 1:4))`.

This is not quite the same, as it does not limit the sum of the values.
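To make the difference concrete: one can bolt a sum constraint onto a product iterator by filtering, but that enumerates `(d + 1)^n` candidate tuples only to keep the `C(n + d - 1, d)` that have the right sum. A small Python sketch, purely for illustration:

```python
from itertools import product
from math import comb

n, d = 3, 4
# Naive approach: full Cartesian product of ranges, then filter by the sum.
candidates = list(product(range(d + 1), repeat=n))
filtered = [t for t in candidates if sum(t) == d]

assert len(candidates) == (d + 1) ** n      # 125 tuples enumerated
assert len(filtered) == comb(n + d - 1, d)  # only 15 of them survive
```

So product-then-filter is correct but increasingly wasteful as `n` and `d` grow.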
I don't understand how one gets indices which sum up to a given `d` with this?
```julia
julia> n = 5
5

julia> d = 3
3

julia> I = identity_matrix(ZZ, d)
[1   0   0]
[0   1   0]
[0   0   1]

julia> A = vcat(I, -I)
[ 1    0    0]
[ 0    1    0]
[ 0    0    1]
[-1    0    0]
[ 0   -1    0]
[ 0    0   -1]

julia> b = vcat(n*ones(Int, d), zeros(Int, d))
6-element Vector{Int64}:
 5
 5
 5
 0
 0
 0

julia> C = matrix(ZZ, [ones(Int, d)])
[1   1   1]

julia> dd = [n]
1-element Vector{Int64}:
 5

julia> P = polyhedron((A, b), (C, dd))
Polyhedron in ambient dimension 3

julia> lattice_points(P)
21-element SubObjectIterator{PointVector{ZZRingElem}}:
 [0, 0, 5]
 [0, 1, 4]
 [0, 2, 3]
 [0, 3, 2]
 [0, 4, 1]
 [0, 5, 0]
 [1, 0, 4]
 [1, 1, 3]
 [1, 2, 2]
 [1, 3, 1]
 [1, 4, 0]
 [2, 0, 3]
 [2, 1, 2]
 [2, 2, 1]
 [2, 3, 0]
 [3, 0, 2]
 [3, 1, 1]
 [3, 2, 0]
 [4, 0, 1]
 [4, 1, 0]
 [5, 0, 0]
```
Probably it is fastest if you just compute the partitions of your number of the right size(s) and then extend via orbit computation.
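The suggested strategy can be sketched as: enumerate the partitions of `d` with at most `n` parts, pad each with zeros to length `n`, and take its orbit under the symmetric group (all distinct permutations). This Python sketch only illustrates the idea, with made-up names, and is not tuned for speed:

```python
from itertools import permutations
from math import comb

def partitions_max_parts(d, n, largest=None):
    """Yield the partitions of d into at most n weakly decreasing parts."""
    if largest is None:
        largest = d
    if d == 0:
        yield ()
        return
    if n == 0:
        return
    for first in range(min(d, largest), 0, -1):
        for rest in partitions_max_parts(d - first, n - 1, first):
            yield (first,) + rest

def multi_indices_via_partitions(n, d):
    """All length-n tuples of non-negative integers summing to d."""
    for p in partitions_max_parts(d, n):
        padded = p + (0,) * (n - len(p))
        # The distinct permutations form the orbit of the padded partition
        # under the symmetric group S_n.
        yield from set(permutations(padded))

result = set(multi_indices_via_partitions(3, 2))
assert len(result) == comb(3 + 2 - 1, 2)  # 6 multi-indices for n = 3, d = 2
```

Note that `set(permutations(...))` itself generates `n!` tuples before deduplicating; a serious implementation would produce the distinct permutations of each partition directly.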
I quickly edited the existing function `monomials_of_degree` to not create monomials, but only return the exponent vectors, and compared the runtimes:
```julia
julia> function via_polymake(a::Int, b::Int)
         # a is the number of variables/indices, b the degree/sum
         I = identity_matrix(ZZ, a)
         A = vcat(I, -I)
         bb = vcat(b*ones(Int, a), zeros(Int, a))
         C = matrix(ZZ, [ones(Int, a)])
         dd = [b]
         P = polyhedron((A, bb), (C, dd))
         return lattice_points(P)
       end
via_polymake (generic function with 1 method)

julia> a = 5; b = 5; # a = number of variables, b = sum or degree

julia> R, x = polynomial_ring(QQ, a);

julia> @btime collect(monomials_of_degree(R, b)); # edited so it doesn't create monomials
  2.742 μs (255 allocations: 16.97 KiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  5.748 μs (449 allocations: 32.53 KiB)

julia> @btime via_polymake(a, b);
  14.657 ms (16320 allocations: 136.42 KiB)

julia> a = 5; b = 10;

julia> @btime collect(monomials_of_degree(R, b));
  21.440 μs (2005 allocations: 133.28 KiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  35.163 μs (2939 allocations: 209.47 KiB)

julia> @btime via_polymake(a, b);
  15.915 ms (61071 allocations: 486.05 KiB)

julia> a = 10; b = 10;

julia> R, x = polynomial_ring(QQ, a);

julia> @btime collect(monomials_of_degree(R, b));
  2.284 ms (184760 allocations: 16.21 MiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  5.424 ms (347856 allocations: 34.76 MiB)

julia> @btime via_polymake(a, b);
  367.535 ms (13538535 allocations: 103.31 MiB)

julia> a = 15; b = 10;

julia> R, x = polynomial_ring(QQ, a);

julia> @btime collect(monomials_of_degree(R, b));
  114.180 ms (3922516 allocations: 404.01 MiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  410.142 ms (8706790 allocations: 1.10 GiB)

julia> @btime via_polymake(a, b);
  16.894 s (660509300 allocations: 4.92 GiB)

julia> a = 15; b = 15;

julia> @btime collect(monomials_of_degree(R, b));
  8.642 s (155117524 allocations: 15.60 GiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  21.106 s (297865757 allocations: 36.89 GiB)

julia> @btime via_polymake(a, b);
# Killed; 32GB RAM were not enough :(
```
So, in my opinion, we should use what is implemented in `monomials_of_degree` for `MultiIndicesOfDegree` (and make `monomials_of_degree` use that iterator internally). I can do this once this pull request here is merged; I think I don't have the rights to push to other people's forks.

Besides the timings, for my application in invariant theory the code in `monomials_of_degree` has the advantage that it computes things one by one. So if I only need the first 3 monomials, nothing else is computed.
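The "computes things one by one" point is exactly what a lazy iterator buys. As a generic Python illustration (not Oscar code; the generator here is a made-up stand-in): taking a short prefix of a lazily produced enumeration does no work for the elements that are never requested.

```python
from itertools import islice

def multi_indices(n, d):
    """Lazily yield length-n tuples of non-negative integers summing to d."""
    if n == 1:
        yield (d,)
        return
    for first in range(d + 1):
        for rest in multi_indices(n - 1, d - first):
            yield (first,) + rest

# Only the first three multi-indices are ever materialized, even though the
# full enumeration for n = d = 10 would contain C(19, 10) = 92378 tuples.
first_three = list(islice(multi_indices(10, 10), 3))
assert len(first_three) == 3
assert all(sum(t) == 10 for t in first_three)
```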
Nicely done!

In principle you should be able to push directly to this PR if you check it out with `gh pr checkout 3021`. If you run the tests for the modules and those in `experimental/DoubleAndHyperComplexes/test/` before you push, then I think it's fine if you push your work here.

But it's also fine with me to do that later.
I'm fairly certain I can't do this because I only have 'triage rights', so I can't 'Push to (write) the person or team's assigned repositories' (see https://docs.github.com/en/organizations/managing-user-access-to-your-organizations-repositories/managing-repository-roles/repository-roles-for-an-organization ). I never tried though. But I don't want to complain about this; I'm not sure I want the responsibility :)
src/Modules/UngradedModules.jl (outdated)
```julia
# the two methods below are needed for the implementation of is_surjective
function (==)(G::SubquoModule, F::FreeMod)
  return F == G
end

function (==)(F::FreeMod, G::SubquoModule)
  base_ring(F) === base_ring(G) || return false
  ambient_free_module(G) === F || return false
  if isdefined(G, :quo)
    iszero(G.quo) || return false
  end
  all(e -> e in G, gens(F)) || return false
  return true
end
```
@jankoboehm : Without these methods, `is_surjective` would fail for homomorphisms with a `FreeMod` as codomain. Do you agree with adding these?
Fine.
src/Modules/UngradedModules.jl (outdated)
```julia
function hom(F::FreeMod, M::ModuleFP{T}, V::Vector{<:ModuleFPElem{T}}; check::Bool=true) where T
  base_ring(F) === base_ring(M) || return FreeModuleHom(F, M, V, base_ring(M))
  return FreeModuleHom(F, M, V)
end
```
@jankoboehm : I added a dummy `check` flag. It's not doing anything, but it allows calling the hom constructor for any kind of modules with the same signature now. I hope this is ok from your side?
What is it doing for other signatures? Does it apply here?
src/Modules/UngradedModules.jl (outdated)
```julia
function *(h::ModuleFPHom{T1, T2, Nothing}, g::ModuleFPHom{T2, T3, <:Any}) where {T1, T2, T3}
  @assert codomain(h) === domain(g)
  return hom(domain(h), codomain(g),
             Vector{elem_type(codomain(g))}([g(h(x)) for x = gens(domain(h))]),
             base_ring_map(g)
            )
end

function *(h::ModuleFPHom{T1, T2, <:Any}, g::ModuleFPHom{T2, T3, Nothing}) where {T1, T2, T3}
  @assert codomain(h) === domain(g)
  return hom(domain(h), codomain(g),
             Vector{elem_type(codomain(g))}([g(h(x)) for x = gens(domain(h))]),
             base_ring_map(h)
            )
end
```
These were necessary for composition of module homomorphisms where one has a non-trivial ring map and the other doesn't.
Force-pushed from f3142f9 to 422cb90
This has now been rebased on #3041 (and will hence be rebased again once that is merged). The templates are now taken care of for the moment, and hopefully the tests pass.
```julia
tmp_res = _direct_sum(summands)
@assert tmp_res isa Tuple{<:ChainType, <:Vector, <:Vector} "the output of `direct_sum` does not have the anticipated format; see the source code for details"
# If you got here because of the error message above:
# This is supposed to be generic code and it attempts to build the direct sum
# of `summands` using the method of `direct_sum` for your specific type.
# The convention in Oscar is that this should return a triple `(S, inj, pr)` consisting
# of the actual direct sum `S`, a `Vector` of injections `inj` and a `Vector` of
# projections `pr`.
# Unfortunately, it can not be assumed that the original method for `direct_sum`
# produces this output and it is an ongoing effort to streamline this throughout
# Oscar. Since such changes tend to happen with severe delay, if ever, we provide
# a little workaround here. If this code does not run for your type of chains,
# you may try two things:
#
# 1) Overwrite the method for `_direct_sum` below for your type and wrap the
#    original method for `direct_sum` so that the anticipated output is produced.
#
# 2) If that does not work, you may also overwrite the whole method for production
#    of the direct sum here.
```
@fingolfin :
This is where we stopped our discussion yesterday. I hope that adding this check and these comments resolves the problem. The point is: if the method for `direct_sum` is implemented correctly (meaning it produces the anticipated output), then the code should just run. But we cannot assume this, and this provides an easy and transparent workaround.

Edit: A few lines below you can see that the default implementation of `_direct_sum` just deflects to `direct_sum`, so that by default everything should work (but won't in 78.5% of all cases..). Same for `_tensor_product` below.
```julia
tmp_res = _tensor_product(factors...)
@assert tmp_res isa Tuple{<:ChainType, <:Map} "the output of `tensor_product` does not have the anticipated format; see the source code for details"
# If you got here because of the error message above:
# This is supposed to be generic code and it attempts to build the tensor product
# of `factors` using the method of `tensor_product` for your specific type.
# The convention in Oscar is that this should return a pair `(M, mult)` consisting
# of the actual tensor product `M` and a multiplication map `mult` which takes a
# `Tuple` of elements in the `factors` to its tensor product in `M`.
# Unfortunately, it can not be assumed that the original method for `tensor_product`
# produces this output and it is an ongoing effort to streamline this throughout
# Oscar. Since such changes tend to happen with severe delay, if ever, we provide
# a little workaround here. If this code does not run for your type of chains,
# you may try two things:
#
# 1) Overwrite the method for `_tensor_product` below for your type and wrap the
#    original method for `tensor_product` so that the anticipated output is produced.
#
# 2) If that does not work, you may also overwrite the whole method for production
#    of the tensor products here.
```
@fingolfin : same here.
Force-pushed from 24aabfb to 60699b8
experimental/DoubleAndHyperComplexes/src/Objects/tensor_products.jl (outdated; resolved)
```julia
  inc = -1
end
psi = Phi[Tuple(i+inc)]
dom_map = map(domain(Phi), 1, I)
```
This line really confused me until I realized that the standard Julia function `map` is being overloaded to retrieve "maps" from hypercomplexes, free resolutions, and even `FreeMod`/`MatElem` (in `src/Modules/UngradedModules/SubQuoHom.jl`). I think that's a really bad idea, and I would urge y'all to reconsider this name (CC @jankoboehm @fieker @thofma).
I took this from the `ComplexOfMorphisms`. But I agree! Actually, this has already led to quite confusing error messages, because sometimes it would try to use a method of the original `map` on my input.

@fieker : Your call for the `ComplexOfMorphisms`. I will then follow your lead.
experimental/DoubleAndHyperComplexes/src/Morphisms/cartan_eilenberg_resolutions.jl (resolved)
Force-pushed from 3ca781a to 2b24478
```julia
res, aug = free_resolution(Oscar.SimpleFreeResolution, M)

str, inc_str = Oscar.linear_strand(res)
```
Here you get the linear strand together with its inclusion morphism. There is an optional keyword argument `degree` to specify a different initial degree.
```julia
@test rank(str[0]) == 1
@test rank(str[1]) == 3
@test rank(str[2]) == 3
@test rank(str[3]) == 1
```
You get a homogeneous Koszul complex as expected. In fact, everything is equal to the linear strand here.
```julia
res, aug = free_resolution(Oscar.SimpleFreeResolution, M)

str, inc_str = Oscar.linear_strand(res)
```
This case is more interesting: A direct sum of the previous Koszul complex and a shift in degree of the same complex. The linear strand will only be the first one and the second will appear as the cokernel of the inclusion.
```julia
@test compose(inc_str[(2,)], map(res, 1, (2,))) == compose(map(str, 1, (2,)), inc_str[(1,)])
@test compose(inc_str[(3,)], map(res, 1, (3,))) == compose(map(str, 1, (3,)), inc_str[(2,)])

K, pr = cokernel(inc_str)
```
This call to `cokernel` will dispatch to a special method which creates all cokernels again as free modules. It returns the resulting complex and the projection map. In particular, you can ask for the `linear_strand` of `K` in the next appropriate degree.
Now even the printing of free resolutions, strands, and duals should look halfway nice. We can probably improve that further later.
@wdecker : You should now be able to call
@wdecker : Methods for … But compared to … What remains to be done is to hook up the Schreyer resolution and the like. But that's a different topic which has been postponed: @jankoboehm asked to look into this in order to also learn about how things work out here. In particular, this will then probably also allow computing resolutions only up to a certain step on request.
In a conversation with @jankoboehm and @HechtiDerLachs we concluded that this is ready for experimental.
This PR contains new functionality for hypercomplexes.

- Morphisms of hypercomplexes `f : c -> d`. These produce morphisms `c[i] -> d[i + offset]` on request. A template for how to implement your own morphism is provided.
- Lifting a map `c[0] -> d[0]` to a map of complexes `c -> d` (if possible). In particular, this implements liftings through projective resolutions.
- Free resolutions of type `SimpleFreeResolution` via `free_resolution(SimpleFreeResolution, M)` on your favorite module `M`. This will provide you with a hypercomplex of dimension 1 which is quasi-isomorphic to your module considered as a complex in degree 0. The augmentation map is also returned as a morphism of hypercomplexes where the codomain is a complex with just one entry `M` at `0`. All of the exported functionality for `FreeResolution` (the type used up to now) is now also available for `SimpleFreeResolution`s, e.g. `betti_table` and `minimal_betti_table`.
- For an `AbsSimpleComplex` (a hypercomplex of dimension 1) `c` of free modules you can now call `simplify(c)` and you get a simplified complex where there are no units in the representing matrices of the (co-)boundary maps. These simplifications come with maps in both directions inducing a homotopy equivalence.
- Koszul complexes via `koszul_complex(KoszulComplex, ...)`, i.e. by specifying the desired type of the output to be `KoszulComplex`.
- Total complexes via `total_complex`.
- For two hypercomplexes `c` and `d` you can call `hom(c, d)` to produce a new hypercomplex of corresponding hom-modules. For the case where either `c` or `d` is a single module concentrated in degree 0, this is the same as the usual hom.
- For `C = hom(c, d)` as above, any element `v` in the `k`-th homology of the total complex of `C` corresponds to a morphism of 1-dimensional complexes `phi : tot(c) -> tot(d)`. We provide an interpretation map which produces that morphism from `v`. This reduces to various classical ext-computations in the particular cases.
- For a hypercomplex `c` one can now take ranges and slices. For instance, `c[2, 4:7]` will produce a 1-dimensional complex with range `4:7` which is precisely the slice of `c` at index `2` in the first dimension in the range `4:7`. This can also be used to consider an `n`-dimensional hypercomplex as a hypercomplex of dimension `n+1`.
- A hypercomplex `c` can be shifted in its homological degrees.
- Functionality for `ComplexOfMorphisms` is to be rerouted to that generic implementation.

See the various test files to get a taste of the new functionality. Other than that, the documentation is rather poor. But everything is similar to the double complexes, which have some documentation already.