
[examples] Create an example of using small fields #13

Closed
irakliyk opened this issue Apr 28, 2021 · 5 comments

Comments

@irakliyk
Collaborator

Currently, all our examples are done in a 128-bit field. It would be good to create an example in a 62-bit field. There are a few options here - but I think signature aggregation would be one of the more interesting ones.

@Nashtare
Contributor

Nashtare commented Jul 28, 2021

Is there anything to be done outside of the AIR and some helper functions?
I thought that both the prover and verifier were generic over the size of the field, so I tried to make a dummy example with f62 to get a grasp of the performance difference, but verification is failing, always at the same step (trace query deserialization, whatever value the program is holding):
Failed to verify proof: proof deserialization failed: trace query deserialization failed: invalid field element: value 5911323183101409168 is greater than or equal to the field modulus

I made a very basic example (by just copying the fib example in example/ and changing the field; nothing to be considered serious!). The code is here: Nashtare@5615d8d

While investigating, I realized that the mul function in both the f62 and f128 implementations mentions that a and b are assumed to be in [0, 2M), which is not ensured by other functions calling mul. I haven't checked further whether this could have a negative impact, or whether there are other such mismatches between the documentation and the actual code.
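For readers unfamiliar with the [0, 2M) convention mentioned in those doc comments, here is a minimal sketch of the idea (my own illustration, not Winterfell's actual f62/f128 code; the modulus below is an illustrative 61-bit Mersenne prime, not the real f62 modulus):

```rust
// Illustrative redundant-representation arithmetic: elements are allowed to
// live in [0, 2M) internally so that addition can skip a full reduction.
const M: u64 = (1 << 61) - 1; // illustrative prime, NOT Winterfell's f62 modulus

/// Assumes a and b are in [0, 2M), as the doc comments state;
/// returns a fully reduced result in [0, M).
fn mul(a: u64, b: u64) -> u64 {
    debug_assert!(a < 2 * M && b < 2 * M);
    ((a as u128 * b as u128) % (M as u128)) as u64
}

/// Keeps the result in [0, 2M) with only a conditional subtraction; this is
/// why values may reach other functions without being fully reduced first.
fn add(a: u64, b: u64) -> u64 {
    let s = a + b; // a, b < 2M < 2^62, so the sum cannot overflow u64
    if s >= 2 * M { s - 2 * M } else { s }
}

fn main() {
    // M + 2 represents the canonical value 2, so (M + 2) * 3 ≡ 6 (mod M).
    assert_eq!(mul(M + 2, 3), 6);
    // add can return a non-canonical value, e.g. M + 3 here, which mul accepts.
    assert_eq!(mul(add(M, 3), add(M, 4)), 12);
}
```

The point is that `add` may hand `mul` values in [M, 2M), which is fine as long as `mul` is documented (and tested) to accept that range.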

@irakliyk
Collaborator Author

Thank you for looking into this! Yes, the issue is exactly where you've identified.

Basically, when we serialize field elements, they are serialized according to their internal representation, which for the f62 field is in the range [0, 2M). But when we try to de-serialize them, we only accept elements in the range [0, M).

The proper way to handle this is to serialize elements according to their canonical representation (to make sure they are in the [0, M) range). We also need to make sure that when we hash elements, we hash them in canonical form as well (so that hashes are consistent between the prover and the verifier). There are a couple of potential ways to go about this, but I'm not sure which of them will have the least performance impact. So, there is still some work to be done before the f62 examples work.
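A hedged sketch of the fix described above (illustrative names and modulus, not Winterfell's actual serialization code): reduce to canonical form before writing bytes, so the deserializer's range check passes:

```rust
// Sketch only: reduce to canonical form on serialization, reject
// out-of-range values on deserialization (matching the error Nashtare saw).
const M: u64 = (1 << 61) - 1; // illustrative prime, NOT Winterfell's f62 modulus

/// Reduce an internal value in [0, 2M) to its canonical form in [0, M).
fn to_canonical(x: u64) -> u64 {
    if x >= M { x - M } else { x }
}

/// Serialize in canonical form so the verifier never sees a value >= M.
fn serialize(x: u64) -> [u8; 8] {
    to_canonical(x).to_le_bytes()
}

/// Deserialization accepts only [0, M), as the verifier does.
fn deserialize(bytes: [u8; 8]) -> Result<u64, String> {
    let v = u64::from_le_bytes(bytes);
    if v >= M {
        Err(format!("invalid field element: value {} is greater than or equal to the field modulus", v))
    } else {
        Ok(v)
    }
}

fn main() {
    // A non-canonical internal value round-trips to its canonical form.
    assert_eq!(deserialize(serialize(M + 5)).unwrap(), 5);
    // Serializing the raw internal representation would be rejected.
    assert!(deserialize((M + 5).to_le_bytes()).is_err());
}
```

Hashing would need the same `to_canonical` step before absorbing element bytes, so the prover and verifier hash identical byte strings.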

In terms of performance impact, I'd expect switching to f62 field to speed up proving time by a factor of 4x - 5x.

@Nashtare
Contributor

Nashtare commented Jul 29, 2021

Ah thanks for the explanation! I thought I was doing something wrong 😅
As for the performance comparison on the prover side, is it at the same level of security? Like the default f128 vs. a quadratic extension of f62? Because with the Fibonacci example, at the same security level I was getting a speed-up factor of only ~1.8x; or maybe that's because the AIR for the Fibonacci sequence is extremely simple? Would the speed-up factor of 4x/5x be similar between a 256-bit field and the current 128-bit field (granted we use a nice prime for f256)?

@irakliyk
Collaborator Author

I was getting a speed-up factor of only ~1.8x; or maybe that's because the AIR for the Fibonacci sequence is extremely simple?

Yes - that's right! In general, only the first 5 steps of the protocol are executed in the base field (from trace generation to constraint evaluation); the rest is executed in the extension field. And the complexity of the first 5 steps depends much more on the complexity of the AIR than that of all the remaining steps. For example, for something like the simple Fibonacci example, the first 5 steps take up about 50% of proving time, while for something like Lamport signature aggregation (one of the more complicated examples), the first 5 steps take up 85% - 90% of proving time.

So, I'd expect that as the AIR gets more complex, the benefit of using smaller fields increases - but I think it would be interesting to see whether it actually gets closer to the 4x - 5x I quoted, or whether it would be more in the 3x - 4x range.
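This reasoning can be made concrete with a back-of-the-envelope Amdahl's-law estimate (my own assumption, not from the thread: it supposes only the base-field portion of the work speeds up, by some fixed factor, while the extension-field portion is unchanged):

```rust
// Hypothetical estimate: if a fraction `base` of proving time runs in the
// base field and switching fields speeds up only that portion by `factor`,
// the overall speed-up is 1 / (base / factor + (1 - base)).
fn overall_speedup(base: f64, factor: f64) -> f64 {
    1.0 / (base / factor + (1.0 - base))
}

fn main() {
    // Simple Fibonacci: ~50% of time in the base field (per the thread).
    println!("{:.2}x", overall_speedup(0.5, 4.0)); // prints "1.60x"
    // Lamport aggregation: ~90% of time in the base field.
    println!("{:.2}x", overall_speedup(0.9, 4.0)); // prints "3.08x"
}
```

Under an assumed 4x speed-up of base-field work, this predicts roughly 1.6x for Fibonacci (close to the ~1.8x Nashtare measured) and about 3.1x for Lamport, consistent with the 3x - 4x range mentioned above.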

Would the speed-up factor of 4x/5x be similar between a 256-bit field and the current 128-bit field?

I'd expect something in a similar ballpark, but it's hard to say without running actual experiments. Btw, for a larger prime, it is probably beneficial to go with the 252-bit prime which is actively being used in various projects (primarily by StarkWare). The prime is 2^251 + 17 * 2^192 + 1.

Jasleen1 pushed a commit to Jasleen1/winterfell that referenced this issue Sep 8, 2021
Refactor verifier to use channel abstraction
@irakliyk
Collaborator Author

Closing this as we already have FibSmall example which uses f64 field.
