Provide Hilbert series implementation beyond Singular's limitations #26243
comment:5
The residue class ring of the polynomial ring in 11 x 4 variables modulo the 2x2 minors is a normal monoid ring. You can use Normaliz, but there is also an explicit formula. See MR1213858
A real-world example for the computation of Hilbert series
comment:6
Attachment: hilbert_example.sobj.gz

Replying to @w-bruns:

On #25091, you said that normaliz can only compute the Hilbert series of the integral closure of a monomial ideal. That may not be what is required here. Anyway: clearly I am not interested in that particular example. The example is just that: an example, which I borrowed from #20145. It is suitable as a doctest, as it can be set up in a short time; and it is instructive as a doc example, as it exposes limitations of Singular that make Singular unusable in "real world examples" for which there are no explicit formulae in the literature. Or would you know a formula for the Hilbert series of the mod-2 cohomology ring of group number 299 of order 256 (numbering according to the small groups library)? That cohomology ring is the smallest potential counterexample to a conjecture of Dave Benson. A small part of the attempt to verify that conjecture is the computation of the Hilbert series of several quotients of that ring. Unfortunately, that "small part" is too big for Singular or Frobby.

The leading ideal of the relation ideal of the above-mentioned cohomology ring is in the attachment hilbert_example.sobj. The computation of its Hilbert series actually shouldn't take very long:

However, Singular immediately reports that the example is too big, and Frobby needs to be killed after several minutes. There is no interface to CoCoA, and I don't know how to utilise the interface to normaliz. I also don't know how to install Macaulay2, so I can't test that either.
comment:7
All of these comments are about

Something that should give you better speed would be having it take another argument of the starting point to do the sum. That way you would not have to create all of the intermediate lists (e.g.,

I think this might also give you better C code (and hence, be faster):

Some other thoughts:

Does

It is faster to do

If you replace

You are probably going to be better off in terms of speed to directly do the low-level polynomial manipulation via flint; that way you have fewer intermediate (Sage) objects and less indirection.

Should those

The

You can avoid a few additions here:

```diff
- for i from 0 <= i < 2*m._nonzero by 2:
-     degree += m._data[i+1]
+ for i in range(1, 2*m._nonzero, 2):
+     degree += m._data[i]
```

You should have two functions
I don't see why
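A plain-Python sketch may clarify the loop conversion discussed above. An `ETuple` stores a sparse exponent vector as interleaved (position, exponent) pairs in a flat array with `_nonzero` nonzero entries; the post-change loop reads every odd slot. The names `data`, `nonzero`, and `unweighted_degree` below are illustrative stand-ins, not Sage's actual API:

```python
# Sketch of the degree loop over interleaved sparse-exponent storage.
# data = [pos0, exp0, pos1, exp1, ...]; nonzero = number of pairs.

def unweighted_degree(data, nonzero):
    """Sum of all exponents: every odd slot of `data` is an exponent."""
    degree = 0
    for i in range(1, 2 * nonzero, 2):  # mirrors `for i in range(1, 2*m._nonzero, 2)`
        degree += data[i]
    return degree

# Exponent vector x0^2 * x3^5 is stored as pairs (0, 2), (3, 5):
print(unweighted_degree([0, 2, 3, 5], 2))  # -> 7
```

The `range`-based form lets Cython generate a plain C `for` loop, which is the point of the diff above.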
comment:8
Dear Travis, first of all: Thank you for your thoughtful comments! Replying to @tscrim:
Right, I should try that.
Right, actually the first "if" is nonsense in my code: I assign to m1, but do not return it.
versus
Is the second one better?
Can you give me a pointer on how to use arrays of Cython types?
I also use append and pop. How to do these with arrays?
Cool! I thought that, again, it is equivalent. But the code becomes a lot shorter:
versus
I don't understand that. What bounds check are you talking about? Are you saying that
So, I should allocate the underlying flint C types and manipulate these, and only in the very end create a Sage polynomial? I guess that's related to another point below.
Probably. Since
Almost. I get
versus
So, in the non-deprecated version there is an additional variable assignment. Actually this is why I changed my original code (which did use
Right, good point!
Really? I thought it was clearer the other way around: In all cases, we have a computation that involves degree weights, and it is the job of the function to find out if the weights are trivial or not. One thing, though: At some point, one needs to check whether
Right.
This is related to the point above: In a

But just to be clear: I would need to do the memory management myself, wouldn't I? Basically, it is a binary search tree; is there code in Sage (I am sure there is!) where such a data structure is used and from where I can get inspiration?
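For the tree question, a plain-Python sketch of a node-based binary search tree may help; with a (c)def class, Python's reference counting handles the memory, so no manual malloc/free is needed. All names here are illustrative, not Sage code:

```python
# Minimal BST with an explicit node class (the cdef-class analogue).

class Node:
    __slots__ = ('key', 'value', 'left', 'right')  # fixed attributes, like cdef fields

    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.left = None
        self.right = None

def insert(root, key, value):
    """Insert (key, value) into the tree rooted at `root`; return the root."""
    if root is None:
        return Node(key, value)
    if key < root.key:
        root.left = insert(root.left, key, value)
    elif key > root.key:
        root.right = insert(root.right, key, value)
    else:
        root.value = value  # overwrite on equal key
    return root

def find(root, key):
    """Return the value stored under `key`, or None if absent."""
    while root is not None:
        if key < root.key:
            root = root.left
        elif key > root.key:
            root = root.right
        else:
            return root.value
    return None
```

In Cython, declaring `Node` as a `cdef class` with typed attributes gives direct C-level attribute access while the interpreter still frees dead nodes automatically.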
comment:9
Just to be sure: If I turn the ETuple functions into methods, they should probably be cpdef, not cdef, right? As long as all instances of ETuple in my code are cdef'd, a cpdef method wouldn't be less efficient than a cdef method, right?
comment:10
comment:11
Replying to @w-bruns:
Right. In most cases, I actually use the wording "Hilbert series of I" and "Poincaré series of R/I", which I guess isn't standard, but at least it makes the difference clear. But I think in my comments above I haven't been sufficiently careful with these wordings. EDIT: Note that Bigatti uses the notation "", which coincides with the Hilbert-Poincaré series of R/I. But that's just a notation, not a name. It seems to me that in practical applications, one most commonly has an ideal I and wants to compute the Hilbert-Poincaré series of R/I, not of "I as a module". Hence, from a practical standpoint, it actually makes sense to use a function name such as "hilb" or "hilbert_numerator" and so on for a function that tells something about R/I but takes I as an input.
That's unfortunate. I really need the Hilbert/Poincaré series of R/I.
comment:12
Replying to @simon-king-jena:
No problem. Sorry I am not always able to find the time to review your code.
Yes, you can see that it does not do the type check (which could result in a segfault if the result is not the correct type).
Right, you can't have an array of cdef classes. Forgot about that. There is a way around that by having an array of pointers to the objects: https://stackoverflow.com/questions/33851333/cython-how-do-you-create-an-array-of-cdef-class It is just really annoying to have to do all of the type-casting.
I missed that when going through the code. So then it is just better to use lists. Add lots of
Yes, if you want to squeeze out as much speed as possible. It is more work and makes the code a bit more obscure, but it should give you some more speed.
Hmm...curious. Although the compiler will almost certainly optimize those extra variables out anyways.
It seems to me like they want to be separate functions. Plus you could then require
I see; I didn't think it was going to be that bad overall. However, it's not a big issue for me the way things are now.
Yes, that is correct, but the
Yes, you would have to do some memory management. I'm not sure exactly how much from my relatively quick look, but it should not be too bad.
comment:13
Replying to @simon-king-jena:
No, a cpdef called from a Cython file is just a C function call IIRC. So it should not be any less efficient from within Cython code (although I think it can introduce a bit of extra overhead when doing a Python function call compared to a plain def).
(These timings are relatively consistent across multiple runs for me.)
comment:14
Just some questions to make sure I understand your suggestions: Replying to @tscrim:
So,
So, when I do

How to avoid a bound check in
In that case I should change it.
I do require

Anyway, do you think that I should duplicate the loop in
Maybe go an easy middle way, and have some
The overhead would be that Python objects will be allocated (unless I use a pool, similar to what Sage does with Integers...), but I guess the allocation is not more than the allocation of a dict, and moreover I wouldn't need to care about memory management. And accessing the cdef attributes of the node will be faster than looking up dictionary items, won't it?
comment:15
PS, concerning "pool": I recall that when I was young, there used to be a utility that automatically equipped a given cdef class with a pool/kill list, similar to what is hard coded for Integer. But I cannot find it now. Has it gone? It may actually be an idea to equip ETuple with a pool/kill list. In that way,
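To illustrate the pool/kill-list idea, here is a minimal free-list sketch in plain Python (Sage's Integer does this at the C level in its allocator hooks); all names below are made up for the illustration:

```python
# Free-list ("pool") sketch: instead of letting dead objects be freed,
# park them on a list and recycle them on the next allocation.

_pool = []
_POOL_SIZE = 100  # cap, so the pool cannot grow without bound

class Exponents:  # illustrative stand-in for ETuple
    def __init__(self, data=None):
        self.data = data

def acquire(data):
    """Get an Exponents object, reusing a pooled one if available."""
    if _pool:
        obj = _pool.pop()   # reuse a dead object: no allocation
        obj.data = data
        return obj
    return Exponents(data)  # pool empty: allocate normally

def release(obj):
    """Return an object to the pool instead of dropping it."""
    if len(_pool) < _POOL_SIZE:
        obj.data = None     # clear payload before recycling
        _pool.append(obj)
```

The win is that short-lived objects created and killed in a tight loop (as during interreduction) skip the allocator entirely; the cost is the pool bookkeeping on every acquire/release, which is why it only pays off when churn is high.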
comment:16
Replying to @simon-king-jena:
Found it: sage.misc.allocator |
comment:17
Replying to @simon-king-jena:
Since I am sure about bounds being correct, I guess I can use
comment:18
Replying to @simon-king-jena:
Yes, that is correct. If you want to do an explicit check in there that can raise an error, you can do
You have to also tell Cython not to do those checks. You can do this at the level of the file by adding
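For reference, Cython offers these checks-off switches in two standard forms, a file-level directive comment and per-function decorators; a sketch:

```cython
# (1) File-level: put this directive comment at the very top of the .pyx file:
# cython: boundscheck=False, wraparound=False

# (2) Per-function, via decorators:
cimport cython

@cython.boundscheck(False)   # skip index bounds checks on buffers/arrays
@cython.wraparound(False)    # skip negative-index handling
cdef int total(int* data, int n):
    cdef int i, s = 0
    for i in range(n):
        s += data[i]
    return s
```

With the checks disabled, an out-of-range index is undefined behaviour rather than an `IndexError`, so this should only be done where the bounds are provably correct.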
See above.
I bet that will make Jeroen happy. :)
I couldn't remember and was guessing.
I don't see why that one has to be split. It is only

```diff
- for m2 in Id:
-     if m is not m2:
-         Factor *= (1-t**quotient_degree(m2,m,w))
+ if w is None:
+     for m2 in Id:
+         if m is not m2:
+             Factor *= (1-t**quotient_degree(m2,m))
+ else:
+     for m2 in Id:
+         if m is not m2:
+             Factor *= (1-t**quotient_degree_graded(m2,m,w))
```

In some cases, this will result in fewer
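In plain Python, the two degree variants that the diff distinguishes might look like this (monomials as `{variable index: exponent}` dicts; the function bodies are an illustrative reconstruction of "degree of lcm(m2, m)/m", not the ticket's actual Cython code):

```python
# Degree of the quotient monomial lcm(m2, m) / m, which has exponent
# max(e2 - e, 0) in each variable.

def quotient_degree(m2, m):
    """Total degree of lcm(m2, m) / m, all weights equal to 1."""
    return sum(max(e - m.get(v, 0), 0) for v, e in m2.items())

def quotient_degree_graded(m2, m, w):
    """Same, but variable v counts with degree weight w[v]."""
    return sum(w[v] * max(e - m.get(v, 0), 0) for v, e in m2.items())
```

Hoisting the `w is None` test out of the inner loop, as in the diff, means each iteration calls the cheaper unweighted function directly instead of re-testing the weights on every monomial.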
Yes, it should be much faster (at that scale) and require less memory than a

Yes, you can use
comment:19
Replying to @tscrim:
I wasn't talking about a pool for Nodes (which just build a tree), but about a pool for ETuple, which I guess will be created and killed (during interreduction) more frequently.
comment:20
Replying to @simon-king-jena:
Ah, I see. Yea, that might be good to do. Although the use of the pool could end up having a fair bit of overhead when not having a lot of
comment:21
It seems to me that really
So, if the loop itself is timed, one sees that the

I guess I should use the deprecated syntax and post the above benchmark on cython-users, shouldn't I?
comment:22
Ouch. I had a typo in my benchmark (

I don't know if the 2% difference in the real-world example is significant.
Branch pushed to git repo; I updated commit sha1. New commits:
Branch pushed to git repo; I updated commit sha1. New commits:
comment:67
Okay, I just reverted using
comment:69
Am I correct in thinking that ETuple is designed for implementing polynomials, not Laurent polynomials? This, together with the comment on the failing test and the "ETuple free code" in comment:63, indicates that using negative exponents is wrong usage. So, what could we do about it? An obvious change (on a new ticket, of course) would be to change
comment:70
Replying to @simon-king-jena:
Too bad. ETuple is used for Laurent polynomials. In the Hilbert series code, I do assume that the exponents are non-negative, though. The assumption is explicitly documented in some places.
comment:71
Replying to @simon-king-jena:
I think the assumption is documented enough where it is used, so it is fine. If someone needs the negatives, then they can change it to, e.g.,
comment:72
Are the tests passing? Can this go back to positive review?
comment:73
Tests are passing for me. From #20145, don't you need another commit?
comment:75
Replying to @tscrim:
No, why? I was told occasionally that one should keep tickets small. In this case, there is one ticket for a new implementation of Hilbert series, and one ticket (#20145) for using the new implementation (also adding an implementation of Hilbert polynomial) in existing methods. Additionally, when you ask if I need the two commits from #20145: For my own applications, I just need the commits from here.
comment:76
Okay, I wasn't sure if the last commit from #20145 would be considered a bug from this part or not. Thank you for clarifying. So yes, all tests pass for me, and this is a positive review if my last change is good.
comment:77
Replying to @tscrim:
No, it wasn't a bug --- at least not a bug here. I thought it was needed to amend the sign in #20145, which I did in the first commit, but in fact it is not needed, and thus I reverted the sign amendment in the second commit.
comment:78
I see. Thanks for explaining. This is ready to be set back to positive review.
comment:79
Thanks!
comment:81
A little stupid trivial fix.
Changed branch from u/tscrim/hilbert_functions-26243 to
In my work on group cohomology, I was bitten by the fact that Singular limits the coefficients of Hilbert series to 32-bit integers. As a consequence, examples such as the following from #20145 fail:

It seems that upstream does not want to change Singular to use 64-bit integers by default and arbitrary-size integers when needed (that's what I would propose). Therefore I am providing an implementation of Hilbert series computation that does not suffer from such a limitation.
The above is the correct result according to #20145.
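To illustrate why arbitrary-precision coefficients matter, here is a naive inclusion-exclusion sketch of the first Hilbert series numerator of R/I for a monomial ideal, in plain Python, whose integers never overflow; this is a toy for illustration, exponentially slower than the algorithm implemented on this ticket:

```python
# Numerator of the Hilbert series of R/I as sum over subsets S of the
# generators: (-1)^|S| * t^deg(lcm S). Monomials are exponent tuples.

from itertools import combinations

def hilbert_numerator(gens, nvars):
    """Return {degree: coefficient} of the first Hilbert numerator."""
    coeffs = {}
    for k in range(len(gens) + 1):
        for subset in combinations(gens, k):
            lcm = [0] * nvars
            for g in subset:                      # lcm = componentwise max
                lcm = [max(a, b) for a, b in zip(lcm, g)]
            d = sum(lcm)
            coeffs[d] = coeffs.get(d, 0) + (-1) ** k
    return coeffs

# I = (x^2, x*y, y^3) in k[x, y]; nonzero coefficients give 1 - 2*t^2 + t^4,
# matching the Hilbert series (1 + 2t + t^2) / (1 - t)^2 of k[x,y]/I.
print(hilbert_numerator([(2, 0), (1, 1), (0, 3)], 2))
```

Because Python integers are unbounded, the coefficients of this numerator can never overflow, unlike Singular's 32-bit limitation described above.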
My implementation is a bit slower than libsingular's `hilb` function, which is why I do not propose to use it as a replacement. However, in big applications, it is a good way out.

CC: @w-bruns
Component: commutative algebra
Keywords: Hilbert series
Author: Simon King
Branch/Commit:
5637e07
Reviewer: Travis Scrimshaw
Issue created by migration from https://trac.sagemath.org/ticket/26243