Further functionality for hyper complexes #3021

Merged

Conversation

HechtiDerLachs
Collaborator

@HechtiDerLachs HechtiDerLachs commented Nov 15, 2023

This PR contains new functionality for hypercomplexes.

  1. Morphisms of hypercomplexes f : c -> d. These produce morphisms c[i] -> d[i + offset] on request. A template for how to implement your own morphism is provided.
  2. As a first example we implement a class which lifts a given map c[0] -> d[0] to a map of complexes c -> d (if possible). In particular, this implements liftings through projective resolutions.
  3. Free resolutions of modules: You can now ask for a free resolution of type SimpleFreeResolution using free_resolution(SimpleFreeResolution, M) on your favorite module M. This will provide you with a hypercomplex of dimension 1 which is quasi-isomorphic to your module considered as a complex in degree 0. The augmentation map is also returned as a morphism of hypercomplexes where the codomain is a complex with just one entry M at 0. All of the exported functionality for FreeResolution (the type used up to now) is now also available for SimpleFreeResolutions, i.e. betti_table and minimal_betti_table.
  4. For hypercomplexes of graded free modules over a standard-graded ring one can now take strands, i.e. slices of a fixed degree. These come with the natural inclusion map given as a morphism of complexes.
  5. For any AbsSimpleComplex (hypercomplexes of dimension 1) c of free modules you can now call simplify(c) to get a simplified complex in which the representing matrices of the (co-)boundary maps contain no units. These simplifications come with maps in both directions inducing a homotopy equivalence.
  6. Koszul complexes can now also be built lazily using koszul_complex(KoszulComplex, ...), i.e. by specifying the desired type of the output to be KoszulComplex.
  7. You can take the total complex of any reasonably bounded hypercomplex using total_complex.
  8. For any two hypercomplexes c and d you can call hom(c, d) to produce a new hypercomplex of corresponding hom-modules. For the case where either c or d is a single module concentrated in degree 0, this is the same as the usual hom.
  9. For C = hom(c, d) as above, any element v in the k-th homology of the total complex of C corresponds to a morphism of 1-dimensional complexes phi : tot(c) -> tot(d). We provide an interpretation map which produces that morphism from v. This reduces to various classical ext-computations for the particular cases.
  10. Views of complexes: From any complex c one can now take ranges and slices. For instance c[2, 4:7] will produce a 1-dimensional complex with range 4:7 which is precisely the slice of c at index 2 in the first dimension in the range 4:7. This can also be used to consider an n-dimensional hypercomplex as a hypercomplex of dimension n+1.
  11. Shifted complexes: Any hypercomplex c can be shifted in its homological degrees.
  12. We provide a template for implementing your own new hypercomplex class.
  13. Tensor products of arbitrary hypercomplexes have also been implemented. We should think about whether or not we want the tensor products of ComplexOfMorphisms to be rerouted to that generic implementation.
  14. Linear strands of complexes according to Eisenbud: The Geometry of Syzygies, chapter 7. We provide methods to compute a linear strand and its inclusion and then also form the cokernel of this inclusion with the associated projection. The specialty is that this cokernel is again a complex of free modules and it is also realized as such so that one can proceed with linear strands inductively.

See the various test files to get a taste of the new functionality. Other than that, the documentation is rather poor. But everything is similar to the double complexes, which already have some documentation.
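To give a flavor of how the items above combine, here is a rough pseudocode sketch (untested; only function names mentioned in this PR description are used, and `M` stands for your favorite module):

```
# Untested sketch; `M` is a placeholder for a module over your ring.
res, aug = free_resolution(SimpleFreeResolution, M)  # item 3: 1-dim. hypercomplex + augmentation
betti_table(res)                                     # item 3: same API as for FreeResolution
C = hom(res, res)                                    # item 8: hypercomplex of hom-modules
T = total_complex(C)                                 # item 7: total complex of a bounded hypercomplex
```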


codecov bot commented Nov 15, 2023

Codecov Report

Merging #3021 (e1a5a1d) into master (13c3fc3) will increase coverage by 25.66%.
Report is 4 commits behind head on master.
The diff coverage is 90.36%.

Additional details and impacted files
@@             Coverage Diff             @@
##           master    #3021       +/-   ##
===========================================
+ Coverage   55.20%   80.87%   +25.66%     
===========================================
  Files         514      546       +32     
  Lines       69230    72519     +3289     
===========================================
+ Hits        38221    58647    +20426     
+ Misses      31009    13872    -17137     
Files Coverage Δ
...ubleAndHyperComplexes/src/Objects/vector_spaces.jl 90.24% <ø> (ø)
experimental/DoubleAndHyperComplexes/test/ext.jl 100.00% <100.00%> (ø)
...ubleAndHyperComplexes/test/simplified_complexes.jl 100.00% <100.00%> (ø)
...perimental/DoubleAndHyperComplexes/test/strands.jl 100.00% <100.00%> (ø)
...al/DoubleAndHyperComplexes/test/tensor_products.jl 100.00% <100.00%> (ø)
src/Combinatorics/OrderedMultiIndex.jl 98.46% <100.00%> (+23.97%) ⬆️
src/Modules/UngradedModules/FreeModuleHom.jl 89.20% <100.00%> (+3.49%) ⬆️
src/Modules/UngradedModules/SubquoModuleElem.jl 84.55% <100.00%> (+0.84%) ⬆️
src/Modules/missing_functionality.jl 87.03% <100.00%> (+34.86%) ⬆️
...oubleAndHyperComplexes/src/Objects/Constructors.jl 95.23% <95.83%> (ø)
... and 22 more

... and 325 files with indirect coverage changes

@HechtiDerLachs
Copy link
Collaborator Author

HechtiDerLachs commented Nov 16, 2023

A good place to start checking out the new features are the test files. I added a few comments there.

Part of the code for the simplification is imported from #3002 . If its form here is convincing, I can refactor that PR.

@HechtiDerLachs changed the title from "Morphisms of complexes" to "Further functionality for hyper complexes" on Nov 19, 2023
Comment on lines +186 to +195
mutable struct MultiIndicesOfDegree
n::Int
d::Int

function MultiIndicesOfDegree(n::Int, d::Int)
@assert n >= 0 "length of index must be non-negative"
@assert d >= 0 "degree must be non-negative"
return new(n, d)
end
end
Collaborator Author

@joschmitt : I needed an iterator over all multiindices I = (i_1, i_2, ..., i_n) with i_k >= 0 and i_1 + i_2 + ... + i_n = d here. I didn't want to go through your monomials_of_degree things, because I would have to create a dummy ring for that and then extract the exponent vectors.

Question: Are you using such an iterator in the internals of your code? If not, do you think it would make sense to build your iterator on this one?
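For readers unfamiliar with these objects: the iterator runs over the weak compositions of d into n parts, of which there are binomial(n + d - 1, d). A hypothetical, self-contained sketch in plain Julia (not the implementation from this PR; the function name is made up):

```julia
# All multiindices (i_1, ..., i_n) with i_k >= 0 and i_1 + ... + i_n == d.
# Eager recursive version for illustration; a production iterator would
# implement `Base.iterate` instead of materializing the whole list.
function weak_compositions(n::Int, d::Int)
    n == 0 && return d == 0 ? [Int[]] : Vector{Int}[]
    result = Vector{Int}[]
    for i in 0:d                                      # choose the first entry ...
        for tail in weak_compositions(n - 1, d - i)   # ... and recurse on the rest
            push!(result, pushfirst!(tail, i))
        end
    end
    return result
end
```

For n = 3 and d = 5 this produces the expected binomial(7, 5) = 21 multiindices.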

Member

I think in combinatorics this is called $d$-multicombinations of $n$, and yes, it makes sense to have such an iterator that produces the integer vectors; one can then, for example, build the monomials out of it.
(I didn't implement it like this back then because I hoped that some kind of combinatorics expert would come around and do it properly... For the same reason, I also have my own multiset partitions somewhere in the invariant theory code...😞)

Caveat: Your algorithm appears to be different from what I implemented (for which I don't have a reference; maybe I just took what they have in Combinatorics.jl, I don't remember), so I would like to compare these two approaches runtime-wise before we delete "my" implementation. I can also do this once this pull request is merged.

Collaborator Author

Sounds good! I also just did a straightforward (and probably far from optimal) implementation. But as long as we don't have a better one, I think some implementation is better than none.

Comparing the speed of the two is a good idea! Finding a combinatorics expert to tweak our stuff would be even better. But that can also be done later, since the user-facing part will not change.

Contributor

@fieker fieker Nov 22, 2023

There is also a native Julia version: Iterators.ProductIterator((1:2, 1:3, 1:4)).
This is not quite the same, as it does not limit the sum of the values.
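To make the difference concrete (plain Julia, just for illustration): the product iterator enumerates a full box, so the degree condition has to be imposed by filtering, which visits every tuple in the box.

```julia
# Triples in the box 0:3 × 0:3 × 0:3 with entry sum 3: the filter has to
# inspect all 4^3 = 64 tuples, while a dedicated iterator would only
# ever produce the 10 tuples of the prescribed degree.
hits = [t for t in Iterators.product(0:3, 0:3, 0:3) if sum(t) == 3]
```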

Member

I don't understand how one gets indices which sum up to a given d with this?

Member

julia> n = 5
5

julia> d = 3
3

julia> I = identity_matrix(ZZ,d)
[1   0   0]
[0   1   0]
[0   0   1]

julia> A = vcat(I, -I)
[ 1    0    0]
[ 0    1    0]
[ 0    0    1]
[-1    0    0]
[ 0   -1    0]
[ 0    0   -1]

julia> b = vcat(n*ones(Int, d),zeros(Int,d))
6-element Vector{Int64}:
 5
 5
 5
 0
 0
 0

julia> C = matrix(ZZ,[ones(Int,d)])
[1   1   1]

julia> dd = [n]
1-element Vector{Int64}:
 5

julia> P = polyhedron((A,b),(C,dd))
Polyhedron in ambient dimension 3

julia> lattice_points(P)
21-element SubObjectIterator{PointVector{ZZRingElem}}:
 [0, 0, 5]
 [0, 1, 4]
 [0, 2, 3]
 [0, 3, 2]
 [0, 4, 1]
 [0, 5, 0]
 [1, 0, 4]
 [1, 1, 3]
 [1, 2, 2]
 [1, 3, 1]
 [1, 4, 0]
 [2, 0, 3]
 [2, 1, 2]
 [2, 2, 1]
 [2, 3, 0]
 [3, 0, 2]
 [3, 1, 1]
 [3, 2, 0]
 [4, 0, 1]
 [4, 1, 0]
 [5, 0, 0]

Member

@lkastner lkastner Dec 6, 2023

Probably it is fastest if you just compute the partitions of your number of the right size(s) and then extend via orbit computation.
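The idea can be spelled out in a toy plain-Julia version (all names made up, not Oscar code): enumerate the partitions of d into at most n parts, and count each orbit under permutation of the entries via a multinomial coefficient; the orbit sizes then add up to the total number binomial(n + d - 1, d) of multiindices.

```julia
# Partitions of d into at most n parts (non-increasing), padded with
# zeros to length n; each one represents an orbit under permutation.
function padded_partitions(d::Int, n::Int, maxpart::Int = d)
    d == 0 && return [zeros(Int, n)]
    n == 0 && return Vector{Int}[]
    result = Vector{Int}[]
    for p in 1:min(d, maxpart)
        for tail in padded_partitions(d - p, n - 1, p)
            push!(result, pushfirst!(tail, p))
        end
    end
    return result
end

# Orbit size under permuting the n entries: n! divided by the factorials
# of the multiplicities of equal entries (a multinomial coefficient).
function orbit_size(v::Vector{Int})
    mult = Dict{Int,Int}()
    for x in v
        mult[x] = get(mult, x, 0) + 1
    end
    return factorial(length(v)) ÷ prod(factorial, values(mult))
end
```

For n = 3, d = 3 the padded partitions are [3,0,0], [2,1,0], [1,1,1] with orbits of sizes 3, 6, 1, adding up to binomial(5, 3) = 10.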

Member

I quickly edited the existing function monomials_of_degree to not create monomials, but only return the exponent vectors, and compared the runtime:

julia> function via_polymake(a::Int, b::Int)
          # a is the number of variables/indices, b the degree/sum
           I = identity_matrix(ZZ, a)
           A = vcat(I, -I)
           bb = vcat(b*ones(Int, a), zeros(Int, a))
           C = matrix(ZZ, [ones(Int, a)])
           dd = [b]
           P = polyhedron((A, bb), (C, dd))
           return lattice_points(P)
       end
via_polymake (generic function with 1 method)

julia> a = 5; b = 5; # a = number of variables, b = sum or degree

julia> R, x = polynomial_ring(QQ, a);

julia> @btime collect(monomials_of_degree(R, b)); # edited so it doesn't create monomials
  2.742 μs (255 allocations: 16.97 KiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  5.748 μs (449 allocations: 32.53 KiB)

julia> @btime via_polymake(a, b);
  14.657 ms (16320 allocations: 136.42 KiB)

julia> a = 5; b = 10;

julia> @btime collect(monomials_of_degree(R, b));
  21.440 μs (2005 allocations: 133.28 KiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  35.163 μs (2939 allocations: 209.47 KiB)

julia> @btime via_polymake(a, b);
  15.915 ms (61071 allocations: 486.05 KiB)

julia> a = 10; b = 10;

julia> R, x = polynomial_ring(QQ, a);

julia> @btime collect(monomials_of_degree(R, b));
  2.284 ms (184760 allocations: 16.21 MiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  5.424 ms (347856 allocations: 34.76 MiB)

julia> @btime via_polymake(a, b);
  367.535 ms (13538535 allocations: 103.31 MiB)

julia> a = 15; b = 10;

julia> R, x = polynomial_ring(QQ, a);

julia> @btime collect(monomials_of_degree(R, b));
  114.180 ms (3922516 allocations: 404.01 MiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  410.142 ms (8706790 allocations: 1.10 GiB)

julia> @btime via_polymake(a, b);
  16.894 s (660509300 allocations: 4.92 GiB)

julia> a = 15; b = 15;

julia> @btime collect(monomials_of_degree(R, b));
  8.642 s (155117524 allocations: 15.60 GiB)

julia> @btime collect(Oscar.MultiIndicesOfDegree(a, b));
  21.106 s (297865757 allocations: 36.89 GiB)

julia> @btime via_polymake(a, b);
# Killed; 32GB RAM were not enough :(

So, in my opinion, we should use what is implemented in monomials_of_degree for MultiIndicesOfDegree (and make monomials_of_degree use that iterator internally). I can do this once this pull request here is merged; I think I don't have the rights to push to other people's forks.
Besides the timings, for my application in invariant theory the code in monomials_of_degree has the advantage that it computes things one by one. So if I only need the first 3 monomials, nothing else is computed.
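That lazy behaviour is easy to see with a plain-Julia toy (unrelated to the Oscar code): a filtered iterator only inspects as many candidates as needed to deliver the requested prefix.

```julia
# Request only the first three triples with entry sum 3 and count how
# many candidate tuples the predicate actually has to look at.
inspected = Ref(0)
lazy = Iterators.filter(Iterators.product(0:3, 0:3, 0:3)) do t
    inspected[] += 1
    sum(t) == 3
end
first3 = collect(Iterators.take(lazy, 3))
# Only a prefix of the 64 tuples in the box is examined.
```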

Collaborator Author

Nicely done!

In principle you should be able to push directly to this PR if you check it out with gh pr checkout 3021. If you run the tests for the modules and those in experimental/DoubleAndHyperComplexes/test/ before you push, then I think it's fine if you push your work here.

But it's also fine with me to do that later.

Member

I'm fairly certain I can't do this, because I only have 'triage rights', so I can't 'Push to (write) the person or team's assigned repositories' (see https://docs.github.com/en/organizations/managing-user-access-to-your-organizations-repositories/managing-repository-roles/repository-roles-for-an-organization ). I never tried, though. But I don't want to complain about this; I'm not sure I want the responsibility :)

Comment on lines 183 to 196
# the two methods below are needed for the implementation of is_surjective
function (==)(G::SubquoModule, F::FreeMod)
return F == G
end

function (==)(F::FreeMod, G::SubquoModule)
base_ring(F) === base_ring(G) || return false
ambient_free_module(G) === F || return false
if isdefined(G, :quo)
iszero(G.quo) || return false
end
all(e -> e in G, gens(F)) || return false
return true
end
Collaborator Author

@jankoboehm : Without these methods is_surjective would fail for homomorphisms with a FreeMod as codomain. Do you agree with adding these?

Contributor

Fine.

Comment on lines 1043 to 1046
function hom(F::FreeMod, M::ModuleFP{T}, V::Vector{<:ModuleFPElem{T}}; check::Bool=true) where T
base_ring(F) === base_ring(M) || return FreeModuleHom(F, M, V, base_ring(M))
return FreeModuleHom(F, M, V)
end
Collaborator Author

@jankoboehm : I added a dummy check flag. It's not doing anything, but it allows calling the hom constructor for any kind of module with the same signature now. I hope this is ok from your side?

Contributor

What is it doing for other signatures? Does it apply here?

Comment on lines 6957 to 6973
function *(h::ModuleFPHom{T1, T2, Nothing}, g::ModuleFPHom{T2, T3, <:Any}) where {T1, T2, T3}
@assert codomain(h) === domain(g)
return hom(domain(h), codomain(g),
Vector{elem_type(codomain(g))}([g(h(x)) for x = gens(domain(h))]),
base_ring_map(g)
)

end

function *(h::ModuleFPHom{T1, T2, <:Any}, g::ModuleFPHom{T2, T3, Nothing}) where {T1, T2, T3}
@assert codomain(h) === domain(g)
return hom(domain(h), codomain(g),
Vector{elem_type(codomain(g))}([g(h(x)) for x = gens(domain(h))]),
base_ring_map(h)
)

end
Collaborator Author

These were necessary for composition of module homomorphisms where one has a non-trivial ring map and the other doesn't.

@HechtiDerLachs
Collaborator Author

This has now been rebased on #3041 (and will hence be rebased again once that is merged). The templates are now taken care of for the moment and hopefully the tests pass.

Comment on lines +55 to +73
tmp_res = _direct_sum(summands)
@assert tmp_res isa Tuple{<:ChainType, <:Vector, <:Vector} "the output of `direct_sum` does not have the anticipated format; see the source code for details"
# If you got here because of the error message above:
# This is supposed to be generic code and it attempts to build the direct sum
# of `summands` using the method of `direct_sum` for your specific type.
# The convention in Oscar is that this should return a triple `(S, inj, pr)` consisting
# of the actual direct sum `S`, a `Vector` of injections `inj` and a `Vector` of
# projections `pr`
# Unfortunately, it can not be assumed that the original method for `direct_sum`
# produces this output and it is an ongoing effort to streamline this throughout
# Oscar. Since such changes tend to happen with severe delay, if ever, we provide
# a little workaround here. If this code does not run for your type of chains,
# you may try two things:
#
# 1) Overwrite the method for `_direct_sum` below for your type and wrap the
# original method for `direct_sum` so that the anticipated output is produced
#
# 2) If that does not work, you may also overwrite the whole method for production
# of the direct sum here.
Collaborator Author

@fingolfin :
This is where we stopped our discussion yesterday. I hope that adding this check and these comments resolves the problem. The point is: if the method for direct_sum is implemented correctly (meaning it produces the anticipated output), then the code should just run. But we cannot assume this, and this provides an easy and transparent workaround.

Edit: A few lines below you can see that the default implementation of _direct_sum just deflects to direct_sum, so that by default everything should work (but won't in 78.5% of all cases..). Same for _tensor_product below.

Comment on lines +225 to +244
tmp_res = _tensor_product(factors...)
@assert tmp_res isa Tuple{<:ChainType, <:Map} "the output of `tensor_product` does not have the anticipated format; see the source code for details"
# If you got here because of the error message above:
# This is supposed to be generic code and it attempts to build the tensor product
# of `factors` using the method of `tensor_product` for your specific type.
# The convention in Oscar is that this should return a pair `(M, mult)` consisting
# of the actual tensor product `M` and a multiplication map `mult` which takes a
# `Tuple` of elements in the `factors` to its tensor product in `M`.
# Unfortunately, it can not be assumed that the original method for `tensor_product`
# produces this output and it is an ongoing effort to streamline this throughout
# Oscar. Since such changes tend to happen with severe delay, if ever, we provide
# a little workaround here. If this code does not run for your type of chains,
# you may try two things:
#
# 1) Overwrite the method for `_tensor_product` below for your type and wrap the
# original method for `tensor_product` so that the anticipated output is produced
#
# 2) If that does not work, you may also overwrite the whole method for production
# of the tensor products here.
Collaborator Author

@fingolfin : same here.

inc = -1
end
psi = Phi[Tuple(i+inc)]
dom_map = map(domain(Phi), 1, I)
Member

This line really confused me until I realized that the standard Julia function map is being overloaded to retrieve "maps" from hypercomplexes, free resolutions and even FreeMod / MatElem (in src/Modules/UngradedModules/SubQuoHom.jl). I think that's a really bad idea, and would urge y'all to reconsider this name (CC @jankoboehm @fieker @thofma).

Collaborator Author

I took this from the ComplexOfMorphisms. But I agree! Actually, this has already led to quite confusing error messages, because sometimes it would try to use a method of the original map on my input.

@fieker : Your call for the ComplexOfMorphisms. I will then follow your lead.
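The failure mode described above is easy to reproduce in isolation (a toy with a made-up type, not Oscar code): after overloading Base.map with lookup semantics, an accidental call to the generic map treats the object as a function and dies with a puzzling MethodError.

```julia
struct FakeComplex end  # hypothetical stand-in for a complex type

# Overload Base.map with "retrieve the i-th map" semantics:
Base.map(c::FakeComplex, i::Int) = "differential number $i"

map(FakeComplex(), 1)        # the intended lookup works ...

# ... but a small slip dispatches to the generic Base.map(f, A),
# which tries to call `FakeComplex()` as a function on each element:
err = try
    map(FakeComplex(), [1, 2, 3])
catch e
    e
end
```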


res, aug = free_resolution(Oscar.SimpleFreeResolution, M)

str, inc_str = Oscar.linear_strand(res)
Collaborator Author

Here you get the linear strand together with its inclusion morphism. There is an optional keyword argument degree to specify a different initial degree.

Comment on lines +12 to +15
@test rank(str[0]) == 1
@test rank(str[1]) == 3
@test rank(str[2]) == 3
@test rank(str[3]) == 1
Collaborator Author

You get a homogeneous Koszul complex as expected. In fact, everything is equal to the linear strand here.


res, aug = free_resolution(Oscar.SimpleFreeResolution, M)

str, inc_str = Oscar.linear_strand(res)
Collaborator Author

This case is more interesting: A direct sum of the previous Koszul complex and a shift in degree of the same complex. The linear strand will only be the first one and the second will appear as the cokernel of the inclusion.

@test compose(inc_str[(2,)], map(res, 1, (2,))) == compose(map(str, 1, (2,)), inc_str[(1,)])
@test compose(inc_str[(3,)], map(res, 1, (3,))) == compose(map(str, 1, (3,)), inc_str[(2,)])

K, pr = cokernel(inc_str)
Collaborator Author

This call to cokernel will dispatch to a special method which creates all cokernels again as free modules. It returns the resulting complex and the projection map. In particular, you can ask for the linear_strand of K in the next appropriate degree.

@HechtiDerLachs
Collaborator Author

Now even the printing of free resolutions, strands, and duals should look halfway decent. We can probably improve that further later.

@HechtiDerLachs
Collaborator Author

@wdecker : You should now be able to call linear_strand directly on your preferred free_resolution res. Note, however, that the inclusion map for that linear strand does not take values in res, because there are no maps for ComplexOfMorphisms and/or FreeResolutions. What happens internally is that res gets wrapped as an AbsHyperComplex and the inclusion takes values in that wrapper.

@HechtiDerLachs
Collaborator Author

@wdecker : Methods for betti_table and minimal_betti_table have been worked out, and I didn't find anything else that seemed relevant and had been exported for FreeResolution. Hence the types here should be at least as good as what we had regarding their user-facing functionality.

But compared to FreeResolution, all the other functionality like hom, tensor, dualizing, and simplify (!) now runs out of the box for SimpleFreeResolution, while one has to duplicate every such method for FreeResolution. In my opinion this is a clear advantage.

What remains to be done is to hook up the Schreyer resolution and the like. But that's a different topic, which has been postponed: @jankoboehm asked to look into this in order to also learn how things work out here. In particular, this will then probably also allow computing resolutions only up to a certain step on request.

@simonbrandhorst simonbrandhorst enabled auto-merge (squash) December 22, 2023 08:33
Collaborator

@simonbrandhorst simonbrandhorst left a comment


In a conversation with @jankoboehm and @HechtiDerLachs we concluded that this is ready for experimental.

@simonbrandhorst simonbrandhorst merged commit a5c9e37 into oscar-system:master Dec 22, 2023
18 of 22 checks passed