[WIP] Aggregate signature module implementation #461

Open · wants to merge 1 commit into base: master

7 participants

Conversation
apoelstra (Member) commented Jun 19, 2017

No description provided.

apoelstra (Member) commented Jun 19, 2017

In verification I use Bos-Coster to do the multi-exp, which is very fast (consistently about 830 iterations for ten random scalar/point pairs, an iteration being a point-add and a scalar-subtract). Interestingly, this does not require any pre-computation. However, if you use the aggregate verifier with only one or two points, the performance is relatively bad; in the one-point case it just breaks down to using a binary addition ladder. Fixing this would require changing the API to require a verify-enabled secp context, which sucks to require for just a special case. Thoughts?
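(For readers following along, a minimal sketch of the Bos-Coster reduction loop, with 64-bit toy scalars and integers under addition mod a small prime standing in for the group; the linear max-scan and all names are illustrative -- the PR uses a max-heap and real group operations:)

    #include <stdio.h>
    #include <stdint.h>

    #define M 1000003ULL  /* toy "group": integers under addition mod M */

    /* computes sum(s[i]*p[i]) mod M; a linear scan stands in for the heap */
    static uint64_t bos_coster(uint64_t *s, uint64_t *p, int n) {
        uint64_t acc = 0;
        int i;
        for (;;) {
            int max1 = 0, max2 = -1;
            for (i = 1; i < n; i++) {  /* find the two largest scalars */
                if (s[i] > s[max1]) { max2 = max1; max1 = i; }
                else if (max2 < 0 || s[i] > s[max2]) max2 = i;
            }
            if (max2 < 0 || s[max2] == 0) break;
            /* (s1,P1),(s2,P2) -> (s1-s2,P1),(s2,P1+P2): one scalar-subtract
             * and one point-add per iteration; the weighted sum is preserved */
            s[max1] -= s[max2];
            p[max2] = (p[max2] + p[max1]) % M;
        }
        for (i = 0; i < n; i++) {      /* finish remaining small terms naively */
            uint64_t k;
            for (k = 0; k < s[i]; k++) acc = (acc + p[i]) % M;
        }
        return acc;
    }

    int main(void) {
        uint64_t s[3] = { 23, 17, 5 }, p[3] = { 11, 29, 31 };
        printf("%llu\n", (unsigned long long)bos_coster(s, p, 3)); /* 901 */
        return 0;
    }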

apoelstra (Member) commented Jun 19, 2017

Should state-machine violations (e.g. trying to create two partial signatures with the same key/nonce) be ARG_CHECK errors or just return failure?
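(To make the two options concrete: a toy sketch with a stub ARG_CHECK and a hypothetical one-flag state machine; none of these names are from the PR:)

    #include <stdio.h>

    /* stub resembling the library's ARG_CHECK: report, then fail the call */
    #define ARG_CHECK(cond) do { \
        if (!(cond)) { fprintf(stderr, "illegal argument: %s\n", #cond); return 0; } \
    } while (0)

    typedef struct { int partial_done; } demo_ctx;

    /* Option A: a second partial signature with the same key/nonce is a
     * caller bug, reported through the illegal-argument path */
    static int sign_argcheck(demo_ctx *ctx) {
        ARG_CHECK(!ctx->partial_done);
        ctx->partial_done = 1;
        return 1;
    }

    /* Option B: the same condition is just an ordinary failure return */
    static int sign_soft(demo_ctx *ctx) {
        if (ctx->partial_done) return 0;
        ctx->partial_done = 1;
        return 1;
    }

    int main(void) {
        demo_ctx a = { 0 }, b = { 0 };
        int r1 = sign_argcheck(&a), r2 = sign_argcheck(&a); /* 1, then 0 + message */
        int r3 = sign_soft(&b), r4 = sign_soft(&b);         /* 1, then a silent 0 */
        printf("%d %d %d %d\n", r1, r2, r3, r4);            /* 1 0 1 0 */
        return 0;
    }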

gmaxwell (Member) commented Jun 19, 2017

verify-enabled secp context

I expected it to need this for the edge case.

What I haven't benchmarked but think might be interesting: switching to a boring multiexp once enough points have dropped out that Bos-Coster is not making much progress anymore. (Though this wouldn't be the existing verify code; it would be optimized for smaller numbers, e.g. no endomorphism, smaller precomp, perhaps "joint-point" precomp.)

gmaxwell (Member) commented Jun 19, 2017

In your heap, I believe you will get better performance if the heap itself holds indexes to entries which themselves don't move around; your comparison code would know how to dereference the indexes. Because you support only a max of 32 right now, a char would be fine.

apoelstra (Member) commented Jun 20, 2017

Heh, yeah, on 30-key signatures using indices in the heap gives a 15%+ improvement in verification speed, which means we now take only 47% as long to verify compared to 30 individual verifications. (These numbers are to be taken with a grain of salt, given I've only done a few measurements and I'm using my battery-powered laptop with a bunch of other stuff running. But they are pretty consistent.) Before, we were at 57%.

Using an index-only heap not only gets rid of a bunch of copying, it lets us do swaps with the triple-xor trick (x ^= y, y ^= x, x ^= y). Also, in ecmult_multi I was removing the second-to-max point, changing the point but not the scalar, and putting it back. This is an expensive no-op for a scalar-ordered heap, so I stopped doing that.

Next I'll try catching scalar 1's and adding their points to a running return value rather than spamming the heap with them, and using the endomorphism, but I may be in the air for the next 10+ hours and unable to report.
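(A toy illustration of the index-heap layout and the triple-xor swap; the names are made up. Note the xor swap is only safe because the two slots are always distinct:)

    #include <stdio.h>

    /* entries never move; the heap orders small indices into them */
    static unsigned long long scalar[5] = { 10, 90, 70, 30, 50 };
    static unsigned char heap[5] = { 0, 1, 2, 3, 4 };

    /* compare two heap slots by dereferencing their indices */
    static int greater(int a, int b) {
        return scalar[heap[a]] > scalar[heap[b]];
    }

    /* swap two index slots without a temporary (the triple-xor trick) */
    static void swapidx(int a, int b) {
        heap[a] ^= heap[b];
        heap[b] ^= heap[a];
        heap[a] ^= heap[b];
    }

    int main(void) {
        /* one sift-down from the root of a 5-element max-heap of indices */
        int i = 0, n = 5;
        for (;;) {
            int l = 2*i + 1, r = 2*i + 2, big = i;
            if (l < n && greater(l, big)) big = l;
            if (r < n && greater(r, big)) big = r;
            if (big == i) break;
            swapidx(i, big);   /* big != i here, so the xor swap is safe */
            i = big;
        }
        for (i = 0; i < n; i++) printf("%d ", heap[i]);  /* 1 4 2 3 0 */
        printf("\n");
        return 0;
    }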

gmaxwell (Member) commented Jun 20, 2017

You need a heapify: repeated insert is O(N log N) instead of O(N).

While working on this before, I came up with an optimization that halved the number of comparison operations; after googling a bunch I found it had a name, but I don't recall the name or the exact optimization, and searching again is failing. :(

I think (part of) it went like this: when starting, pop the first element and set it aside in a separate super-root position, effectively outside of the tree. Then in each iteration, read the super-root and root and compute the new values. Swap the root and super-root (since the root will now be the highest value), then rebalance the root. This replaces the rebalance in both the delete and the insert with a single rebalance.

I keep thinking there was another optimization which came from enforcing the heap property less strongly, which was possible because no insertion is needed (only delete and replace, which only sweep the heap in one direction), but I don't recall what that part was. :(

It would be useful to know what a profile looks like, e.g. how much of the time is spent in comparisons -- this would tell how much gain there would be from optimizations like the above, or from speedups in the comparison (e.g. storing the number of non-zero words in the scalars, so comparison can compare just those values, or if equal start on the non-zero words).

For your figures above, in the 10-point case 12% of your adds will be gej+ge, so it will likely be faster to detect and switch to the special case. (Stop me before I make some observation that there is some crazy effective-affine trick that can be done by tracing the computation depth of each point, and putting all points at the same depth on the same isomorphism.)
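(For reference, a bottom-up heapify sketch: sifting down each internal node, deepest first, costs O(N) total because most nodes sit near the leaves, versus O(N log N) for N repeated inserts. The super-root idea described above then replaces each delete+insert pair with a single sift-down from the root:)

    #include <stdio.h>

    static unsigned v[7] = { 10, 30, 50, 90, 20, 70, 40 };

    static void swap(int a, int b) { unsigned t = v[a]; v[a] = v[b]; v[b] = t; }

    static void siftdown(int i, int n) {
        for (;;) {
            int l = 2*i + 1, r = 2*i + 2, big = i;
            if (l < n && v[l] > v[big]) big = l;
            if (r < n && v[r] > v[big]) big = r;
            if (big == i) return;
            swap(i, big);
            i = big;
        }
    }

    /* bottom-up heapify: sift down each internal node, deepest first */
    static void heapify(int n) {
        int i;
        for (i = n/2 - 1; i >= 0; i--) siftdown(i, n);
    }

    int main(void) {
        int i;
        heapify(7);
        for (i = 0; i < 7; i++) printf("%u ", v[i]);
        printf("\n");  /* v[0] is now the maximum (90) */
        return 0;
    }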

apoelstra (Member) commented Jun 20, 2017

Very tired. Gained a fair bit by replacing the scalar subtraction with simpler code that doesn't do a modular negation, and another fair bit by hashing less stuff (pre-hashing all the constant stuff). Tried switching to native int math in a couple of ways when the scalars got small enough; that had no effect I could observe.

The endomorphism helped, but not as much as I wanted it to. Had some frustration with lambda_scalar_split giving me negative numbers; these result in putting a mix of 256-bit and 128-bit numbers into the Bos-Coster ladder, which poisons it (it gets into an effectively infinite loop repeatedly subtracting a small number from a big one). You have to detect this and negate so that everything is the same bit-size, which slows you down.

Agreed profiling would be good at this point to see what's actually happening. I'm doubtful at this point that the heap is costing us a lot, but we'll see. Will give the gej+ge thing a shot; that's quick and easy to do.

Spent some time thinking about effective affine; I didn't get anywhere, but maybe I'm just missing something obvious.

gmaxwell (Member) commented Jun 21, 2017

Re: where my arms were waving about effective-affine-like tricks: on my desktop, gej+ge is 0.181us faster than gej+gej. Subtracting off 0.0644us per pubkey for the additional fe multiplies, it would only take a batch of 29 points that would later gej+ge to pay for the cost of a feinv to bring them back to affine.

So, e.g. as points are changed to no longer be affine they could be removed from the heap, and once half the points are removed (the ratio will depend on the tradeoff in the asymptotic behavior of the algorithm), the whole thing batch-inverted so everything is back at z=1. At least that is the general direction my hands were waving in -- to get really interesting it would be useful to get rid of the inverse.
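(The batch inversion alluded to is Montgomery's trick: one field inversion plus about 3n multiplications inverts n values at once. A self-contained sketch over a toy prime field -- the modulus and names are illustrative, not libsecp256k1's field code:)

    #include <stdio.h>
    #include <stdint.h>

    #define P 1000003ULL  /* toy prime modulus, illustrative only */

    static uint64_t mulmod(uint64_t a, uint64_t b) { return (a * b) % P; }

    /* x^(P-2) mod P by square-and-multiply (Fermat inversion) */
    static uint64_t invmod(uint64_t x) {
        uint64_t e = P - 2, r = 1;
        while (e) {
            if (e & 1) r = mulmod(r, x);
            x = mulmod(x, x);
            e >>= 1;
        }
        return r;
    }

    /* Montgomery's trick: invert z[0..n-1] with a single invmod call */
    static void batch_inverse(uint64_t *z, uint64_t *scratch, int n) {
        uint64_t acc = 1, inv;
        int i;
        for (i = 0; i < n; i++) {        /* scratch[i] = z[0]*...*z[i-1] */
            scratch[i] = acc;
            acc = mulmod(acc, z[i]);
        }
        inv = invmod(acc);               /* the only inversion */
        for (i = n - 1; i >= 0; i--) {
            uint64_t zi = z[i];
            z[i] = mulmod(inv, scratch[i]);  /* = 1/z[i] */
            inv = mulmod(inv, zi);           /* strip z[i] from the running inverse */
        }
    }

    int main(void) {
        uint64_t z[4] = { 2, 3, 5, 7 }, scratch[4];
        int i;
        batch_inverse(z, scratch, 4);
        for (i = 0; i < 4; i++) printf("%llu ", (unsigned long long)z[i]);
        printf("\n");  /* each printed v satisfies v * original == 1 mod P */
        return 0;
    }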

Hm. I thought we already handled the negating for the endo... but I guess the wnaf code is doing that, and manages to do it without negating the point directly but instead by permuting the wnaf table entries. Cute trick. (Obviously negations could be deferred here by sign tracking, but I don't see how to avoid most of them.)

Re: degenerate performance when the magnitudes are too different: when I first pointed out these techniques, sipa remarked:

19:59 < gmaxwell> Anyone looked at using bos-coster (or other similar algorithms) for batch verification?
20:00 < gmaxwell> it's kind of like extgcd... the way it works is we want to compute aP1 + bP2 + cP3 ... we put these scalar,point tuples in a max-heap, and pop off the two largest items (x,y) and set x = (x-y),Px y = y,(Px+Py) then push them back into the heap (unless x becomes 0, in which case it falls out). This keeps going until at the end you have a single remaining value, which should be small, and you then use
20:00 < gmaxwell> a conventional variable time scalar multiply for it.
20:03 < gmaxwell> It would couple well with the endomorphism split, which the existing batch does not.
20:22 < gmaxwell> a little toy implementation in sage, with a batch size of 512, if I split first (meaning a batch of 1024 half sized entries) I end up needing an average of 16097 point adds total. I think this compares pretty favorably with what we currently do (which IIRC does ~32 adds per pubkey, plus about 16 or so to build the precomp)
20:28 < gmaxwell> A really gigantic batch of 4096 gets it down to 24 adds per pubkey.
22:53 < gmaxwell> (I was looking for how to compute an efficient addition chain for polysig when I ran into that)
02:03 <@sipa> i guess, when x>2y, you can replace with Px+2Py
02:04 <@sipa> why does ed25519 not use this?
02:04 <@sipa> or does it?
02:04 < gmaxwell> It doesn't. So, there is an ed25519 paper that talks about it.
02:05 <@sipa> it probably only helps with very large batches?
02:07 < gmaxwell> They claim in that paper that it's a win at all batch sizes, though I think because of our other optimizations that it kills it probably isn't. They also have a newer paper that goes into more detail http://cr.yp.to/badbatch.html .. but it looks as close as they have to an implementation is in python.
02:08 < gmaxwell> Paper also mentions an algorithm that should be asymptotically 2x faster, due to Pippenger -- I got the paper but my patience ran out before I managed to extract the algorithm from it. :) (the bos-coster is super trivial)
02:10 < gmaxwell> The gain is greater with bigger batches (goes up with the log of the batch size).
02:11 < gmaxwell> I think how well it composes with the endomorphism is pretty much required to make it a win for us, but I didn't compare that much.

peterdettman (Collaborator) commented Jun 21, 2017

Next I'll try catching scalar 1's and adding their points to a running return value rather than spamming the heap with them

You could generalise this by diverting "small" multiples (probably 2-4 bits or so) to an entry in a small-multiples table. Say it's 3 bits, so 7 (non-zero) values. This can be finished off with 3 doubles and 9 adds to get the result in the "1" entry, less if there are empty entries.
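(One standard way to finish off such a table -- with slightly different operation counts than the 3-doubles-9-adds scheme described above -- is the running-sum trick, sketched here in a toy additive group:)

    /* Given buckets T[1..7], where T[c] holds the sum of all points diverted
     * with scalar c, compute sum(c * T[c]) with 2*(7-1) additions:
     * after step c, running = T[7]+...+T[c], and acc accumulates every
     * running value, so acc ends up equal to sum over c of c*T[c]. */
    #include <stdio.h>
    #include <stdint.h>

    #define M 1000003ULL  /* toy group: addition mod M, illustrative only */

    int main(void) {
        uint64_t T[8] = { 0, 11, 0, 29, 0, 0, 31, 0 };  /* buckets 1, 3, 6 used */
        uint64_t running = 0, acc = 0;
        int c;
        for (c = 7; c >= 1; c--) {
            running = (running + T[c]) % M;  /* empty buckets could be skipped */
            acc = (acc + running) % M;
        }
        /* acc == 1*T[1] + 3*T[3] + 6*T[6] mod M */
        printf("%llu\n", (unsigned long long)acc);  /* prints 284 */
        return 0;
    }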

@gmaxwell That 'replace' operation was the first thing I noticed missing from Java's PriorityQueue. In that class, the heap replaces a removed head with the last (array) element and sifts down. Then a subsequent add is placed at the end and sifted up (these operations are careful to keep a dense representation in the underlying array). A replace should be able to just remove the head, bubble up the largest children, then put the new entry in the "hole" and sift up (I'm not sure whether Floyd's trick refers to this specifically or to any case of sifting new entries up from the end).

I'm somewhat skeptical that any sort of z->1 "fix-ups" mid-algorithm can work out; my sense is the cost is likely higher than the savings, but it obviously depends on the average number of additions you're performing on all the intermediate points. Removing points (or rather scalars) from the heap could hurt though. Assuming there is any advantage to be gained, a periodic stop-and-fix should be enough; maybe keep a counter of how many non-affine points have crept in.

Looking at n1.P1 + n2.P2 + ..., with n1 >= 2^k.n2: as noted above, https://ed25519.cr.yp.to/ed25519-20110926.pdf mentions using (n1 - 2^k.n2).P1 + n2.(2^k.P1 + P2). They don't use it there, because the scalars are all randomised (by 128-bit values), and it virtually never occurs. For a general multi-exp, you couldn't assume anything, of course. A few thoughts:

  • The above formula dodges the truly worst case, but you can still end up having to (chain-)double the same P more than once.
  • If n1 is even, we can first double P1 until it's not (up to k times).
  • Maybe decompose n1 as (u.2^k + v) and (conceptually) put (u, 2^k.P1) and (v, P1) into the heap. Downside is post-construction insertions (probably bounded by the maximum scalar bits), but if the small-multiples table is used, k could be limited so that we just slice "windows" from the bottom of n1 to shorten it, and add P1 to the appropriate entry, keeping (u, 2^k.P1) at (super-)root.
  • Note that the windowing decomposition plus small-multiples table gives a reasonably efficient behaviour when you are left with (or start with!) a single scalar multiple.

I'd expect the endomorphism to have a large impact for small batches, but the advantage should taper off for larger batches, and memory use is a constant factor higher (<2). I'll try and get a rough "scale" from my Java implementation.

storing the number of non-zero words in the scalars

Or just track the "longest length" (hey, it's a heap), and do all comparisons to that length.

apoelstra (Member) commented Jun 23, 2017

Needs rebase now, but added several commits:

  • change heap implementation to work only on indices rather than copying whole scalars and points around
  • replace heap insertions/deletions with in-place heapify and update-root
  • optimize some scalar operations to avoid interacting with the modulus
  • hash all the non-index data separately so that the per-pubkey hash has less data (hash the nonce too so the sigs are no longer forgeable ;) this was going to be a review canary once everyone stopped focusing on perf, but Jonas caught it early)
  • endomorphism support

Some approximate perf numbers on my laptop with 30-signature aggregates:

  • just verifying 30 signatures w/o endomorphism: 1960us
  • just verifying 30 signatures with endomorphism: 1400us
  • first version of this PR: 1150us
  • current PR w/o endomorphism: 730us
  • current PR with endomorphism: 650us

The numbers are much less impressive with small aggregates (in fact for one or two signatures they're significantly worse) and much more impressive for very large aggregates.

We do actually need to deal with the big-number/small-number performance issues; there is a real DoS vector in the current code where, if the user passes an "aggregate signature" with a 16-bit s value, say, the endomorphism-enabled code will spend 112 bits' worth of iterations grinding that s off of some 128-bit scalar. It's even worse without the endomorphism.

Edit: callgrind tells me we're spending 87% of our time in gej_add_var and 7% in heap_siftdown, so it seems like any more gains at this point are to be had by somehow reducing the number of additions we do (or reducing the cost of additions by effective affine or some other black magic). Comparisons etc. are less than 0.1%.

I don't see any obvious ways forward here; for uniformly random input this algorithm does a very good job of quickly reducing its input sizes and not repeating any work.

peterdettman (Collaborator) commented Jun 24, 2017

I see test failures every time after building (with --enable-endomorphism=yes, if that matters):

test count = 10
random seed = b26c06e44c9a45c5749433488fadbad7
./src/field_5x52_impl.h:50: test condition failed: r == 1
Abort trap: 6

Edit: callgrind tells me we're spending 87% of our time in gej_add_var and 7% of our time in heap_siftdown, so it seems like any more gains at this point are to be had by somehow reducing the number of additions we do (or reducing the cost of additions by effective-affine or some other black magic). Comparisons etc are less than 0.1%.

Actually there are still some gains to be had in the heap implementation. I re-implemented it here: peterdettman@aa01f4b. That's at least 5% faster on bench_aggsig numbers for me. It's mostly due to reducing the number of scalar comparisons (branch mispredictions), I believe, by 1) managing the max element outside of the heap, and 2) using Floyd's trick of "replacing" that element to the bottom of the heap when it's reinserted.

peterdettman (Collaborator) commented Jun 24, 2017

For the pathological inputs (the big-small problem), just adding a binary ladder step when a single subtraction isn't enough seems to at least avoid disaster: peterdettman@cea5d3a.
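(A rough sketch of the idea in a toy additive group; the actual commit differs in detail. When the top scalar is more than twice the runner-up, shift one bit off it and double its point, using n*P == (n>>1)*(2P) + (n&1)*P, which guarantees a bit of progress per iteration:)

    #include <stdio.h>
    #include <stdint.h>

    #define M 1000003ULL  /* toy group: addition mod M */

    int main(void) {
        uint64_t s1 = 1ULL << 40, s2 = 3;      /* pathological big-small pair */
        uint64_t p1 = 11, p2 = 29, acc = 0, k; /* acc collects diverted points */
        while (s2 > 0) {
            if (s1 < s2) {                     /* keep (s1,p1) as the larger */
                uint64_t t;
                t = s1; s1 = s2; s2 = t;
                t = p1; p1 = p2; p2 = t;
            }
            if (s1 - s2 > s2) {
                /* binary ladder step: n*P == (n>>1)*(2P) + (n&1)*P */
                if (s1 & 1) acc = (acc + p1) % M;
                s1 >>= 1;
                p1 = (2 * p1) % M;
            } else {
                /* ordinary Bos-Coster step */
                s1 -= s2;
                p2 = (p2 + p1) % M;
            }
        }
        for (k = 0; k < s1; k++) acc = (acc + p1) % M;  /* leftover term */
        printf("%llu\n", (unsigned long long)acc);
        /* acc == (2^40 * 11 + 3 * 29) mod M, reached without ~2^40 subtractions */
        return 0;
    }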

apoelstra (Member) commented Jun 24, 2017

Very cool, my numbers also show your code performing 5% or so faster than mine. I'll maybe clean up your commit (just renaming MY_ to secp256k1_ and removing the #ifdef is probably sufficient) and cherry-pick it onto my branch. Can also confirm that the hit from doing the binary ladder step is small, if anything.

We want to use the existing ecmult anyway for 1- or 2-point multiexponentiations so maybe we should replace the binary ladders with that?

I'm not sure what to make of your test failures, I don't see this on my code or yours, and Travis seems happy with my code at least.

gmaxwell (Member) commented Jun 24, 2017

I may have reproduced dettman's crash; looking into it. As an aside:

memset(&sc[0], 0, sizeof(sc[0]));
secp256k1_ecmult_multi(&r, sc, pt, 20);

you only zero the first one, but add 20 of them. If the tests are run with ncount 0, some of the entries are uninitialized by earlier code. (It looks like this error is common in the new tests, not just at this one point.)
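(The fix is presumably to zero the whole array rather than its first element, assuming sc is declared as an array of 20 scalars:)

    memset(sc, 0, sizeof(sc));  /* zero all entries, not just sc[0] */
    secp256k1_ecmult_multi(&r, sc, pt, 20);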

gmaxwell (Member) commented Jun 25, 2017

We obviously have to handle the cases where scalars differ by large amounts, to avoid attackers making our validation slow with effort linear in the slowdown -- but it's obvious how to handle that.

While thinking about generally adversarial scalars (a stronger threat model than we need for aggregate signatures, perhaps, but it's conservative), how could we defend against this pattern?

 import random

 n_points = 32
 randa = random.randrange(2**127, 2**128)
 randb = random.randrange(2**127, 2**128 - n_points)
 scalars = [randa * (randb + x) for x in range(n_points)]

This one doesn't result in gratuitously different magnitudes but it makes the algorithm exceptionally slow to converge.

Should we just have some sanity maximum for the main Bos-Coster iteration that detects if it is not converging and bails out to a simple Straus' algorithm without precomp? Even for cases where an attacker cannot directly control the scalars except via grinding, this might give us peace of mind in not having to prove that hard cases are cryptographically hard to reach, since even that attack could only cause a 5 times slowdown or whatever.

peterdettman (Collaborator) commented Jun 25, 2017

@gmaxwell Take a look at peterdettman@cea5d3a (again). The handling for the big-small case is perhaps not what you assumed it is. I tried your case in equivalent Java code and it chewed it up (much faster than random cases). If I switch to the big-small handling mentioned in the eddsa paper, then yes, it's much slower. With no special case, it essentially hangs, of course.

I don't yet see a need for a bail-out routine; if the main loop guarantees a minimum 1-bit shortening of the largest scalar in any given iteration (without linear number of point doublings), then AFAICT the overall worst-case is already not much worse than the average. That commit shows a simple way to achieve the 1-bit guarantee, but it can certainly be improved on if we care to.

gmaxwell (Member) commented Jun 25, 2017

@peterdettman I confirm that your "internal ladder step when the result is still bigger than the next best" solves my killer test case.

Contrary to my earlier belief, the simpler corner-case handling ( nX + mY = (n - (2^k)m)X + m((2^k)X + Y) ) is enough to make my 'killer' case not run forever (as you observed), though it's still a fair bit slower than random cases. Your proposal is, as you note, faster than random for that input.

With your code I think the algorithm must converge exponentially; I am doubtful I can construct something that won't, and I think this can be proven.

(As an aside, I think switching to the binary ladder step feels a lot like the escape to bisection in Dekker's root-finding method: https://en.wikipedia.org/wiki/Brent%27s_method#Dekker.27s_method )

apoelstra (Member) commented Jun 28, 2017

Rebased everything and pulled in Peter Dettmann's heap improvements.

At Greg's suggestion I spent some time reading "Pippenger's Algorithm" by djb. Unfortunately the described algorithm appears almost certainly to be slower than Bos-Coster for our use case (one multiexp with many bases vs. many multiexps that share bases); to get the benefit of Pippenger we would need to use the "transpose" of the described algorithm, which seems to require a lot of insight/study to obtain. ...Not to knock djb's writing; he did a phenomenal job translating the original Pippenger paper from a series of lemmas about matrix row counts (seriously) into an algorithm!

Next steps:

  • Add benchmarks for 1- and 2-signature aggregates so we get a picture of small-aggregate performance
  • Special case the multiexp when there are only 1 or 2 bases, to use the existing ecmult code
  • Write a special sign/verify for the 1-signature case to make Schnorr signature creation easy and obviously possible.

Then Greg has suggested we do something smarter about memory management than this "maximum 32 aggregates because we want fixed stack size requirements". That needs more discussion; once we resolve it I think I can take the WIP label off.

(Edit: re-pushed the last commit with a signature/timestamp on it.)

sipa (Contributor) commented Jun 28, 2017

Would it be worthwhile to separate off the multimul (we can't really call it multiexp, right?) into a separate PR? That is generally useful utility functionality (which at least @bbuenz has asked for before), while the actual signature scheme may warrant more discussion.

apoelstra (Member) commented Jun 28, 2017

@sipa I think so, though the aggregate signatures are (a) the only API exposure of the multimul (that feels weird :)) and (b) also the only benchmark exposure.

Though I think I might add an ecmult benchmark anyway, since we have so many different versions of that function now.

gmaxwell (Member) commented Jun 28, 2017

@sipa So, this multiexp is not constant time. I am doubtful that we want to offer a raw multiexp that isn't even constant time -- our public interface isn't intended to be a build-your-own-cryptographic-misadventure kit. :)

bbuenz commented Jun 29, 2017

I would still appreciate multiexp, but don't let that be a determining factor. I guess if I understand @gmaxwell correctly, then libsecp256k1 isn't even designed for building new crypto protocols; it's only supposed to be used for a few specific protocols that it has been tested and optimized for.

apoelstra (Member) commented Jul 13, 2017

  • fix secp256k1_aggsig_generate_nonce doc return value
  • Add check for set_xquad failing in aggsig/main_impl.h
  • Change verification equation to R == sG - sum(e_i * x_i)
  • Remove infinity check before doing the small multiply at the end of ecmult_multi
  • Add VERIFY_CHECK to scalar_numsub checking that there is no final carry/underflow
  • Fix compilation warnings when endomorphism is disabled

Will add "dispatch function" for ecmult_multi that encapsulates the endomorphism code, does a copy (so that the ecmult API does not require mutable inputs), and is designed so Pieter can hook in the Strauss code there for small input sizes. Will also take the G scalar separately, and when calling my code it will just put it in the pile with the rest of them. But Pieter says Strauss gets a nontrivial advantage from having it be separate, so make that easy.

apoelstra (Member) commented Jul 13, 2017

Added a WIP/draft version of a dispatch function; @sipa, what do you think about this API? I need to decide what to do about memory allocation: it's annoying that aggsig needs an arbitrary-sized scratch space and then ecmult_multi needs the same amount of scratch space just to make an unnecessary copy of the first data.

apoelstra (Member) commented Jul 14, 2017

Replaced the API with a callback-based one. Now all the looping logic is moved from aggsig_verify into the dispatch function, which gets rid of a bunch of duplicated logic and also means that all of the variable-size-allocation stuff can stay within ecmult_multi_impl.h.

(Outdated review comments on include/secp256k1.h, src/bench_aggsig.c, src/ecmult_multi_impl.h, and src/scalar_4x64_impl.h.)
apoelstra (Member) commented Jul 19, 2017

@sipa Re: using manual carry bits for addition: my belief as to why subtraction is slower using int128_t is that there are 63 bits of sign-extended 1's that get repeatedly added when you do things the scalar_add way. So the int128_t code does way more work than my code. The same wouldn't be true for scalar_add.
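(For illustration, a 4x64-limb subtraction with a manually tracked one-bit borrow, next to an __int128 version that sign-extends the intermediate; a sketch of the style being discussed, not the PR's scalar_numsub:)

    #include <stdio.h>
    #include <stdint.h>

    /* r = a - b over 4x64-bit limbs, tracking the borrow as a single bit */
    static void sub_manual(uint64_t *r, const uint64_t *a, const uint64_t *b) {
        uint64_t borrow = 0;
        int i;
        for (i = 0; i < 4; i++) {
            uint64_t d = a[i] - b[i];
            uint64_t b1 = d > a[i];   /* borrow out of a[i] - b[i] */
            uint64_t d2 = d - borrow;
            uint64_t b2 = d2 > d;     /* borrow out of the -borrow step */
            r[i] = d2;
            borrow = b1 | b2;         /* one bit, no sign extension */
        }
    }

    /* the __int128 version sign-extends the intermediate across limbs */
    static void sub_int128(uint64_t *r, const uint64_t *a, const uint64_t *b) {
        __int128 t = 0;
        int i;
        for (i = 0; i < 4; i++) {
            t += (__int128)a[i] - b[i];
            r[i] = (uint64_t)t;
            t >>= 64;
        }
    }

    int main(void) {
        uint64_t a[4] = { 5, 0, 0, 1 }, b[4] = { 9, 0, 0, 0 }, r1[4], r2[4];
        int i;
        sub_manual(r1, a, b);
        sub_int128(r2, a, b);
        for (i = 0; i < 4; i++) printf("%d", r1[i] == r2[i]);  /* 1111 */
        printf("\n");
        return 0;
    }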

sipa (Contributor) commented Jul 19, 2017

@apoelstra But (a + b) can be rewritten as (a - (~b + 1)). If there is a means to speed up subtraction, it seems that it can be used to speed up addition too.
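(The identity holds because ~b + 1 is the two's-complement negation of b; a quick self-check:)

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* ~b + 1 == -b mod 2^64, so a + b == a - (~b + 1) mod 2^64 */
        uint64_t a = 0x123456789abcdef0ULL, b = 0x0fedcba987654321ULL;
        printf("%d\n", (a + b) == (a - (~b + 1)));  /* prints 1 */
        return 0;
    }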

apoelstra (Member) commented Jul 19, 2017

I'm saying that my code and the gcc addition code are pretty much identical for the case a + b, because in both cases you have only one bit of carry to track and no sign extension. I expect that rewriting it as a - (~b + 1) (three primitive operations to do one) will be slower than both of these.

I am not claiming any way to "speed up subtraction", only that adding unnecessary sign extensions to my existing code will slow it down.

sipa (Contributor) commented Jul 21, 2017

Benchmarks with Strauss (#464) and Bos-Coster (this PR):

apoelstra (Member) commented Jul 27, 2017

add support for using heap-based allocation

apoelstra (Member) commented Aug 10, 2017

Rebase: separate out aggsig stuff from ecmult stuff, move ecmult stuff into ecmult_impl.h rather than having its own file, collapse some of the design iterations into single commits.

(Outdated review comments on src/ecmult.h, src/ecmult_multi_impl.h, include/secp256k1.h, src/ecmult_impl.h, and src/modules/aggsig/aggsig.h.)
apoelstra (Member) commented Aug 11, 2017

Address Pieter's nits.

sipa (Contributor) commented Aug 14, 2017

@apoelstra This will make it easier to integrate Strauss: sipa@67aa632

sipa (Contributor) commented Aug 14, 2017

@apoelstra New version: sipa@d22829e

apoelstra (Member) commented Aug 14, 2017

ok, cherry-picked

sipa (Contributor) commented Aug 15, 2017

@apoelstra This adds Strauss-wNAF secp256k1_ecmult_multi (used for n < 200, much scientific, wow). sipa@23602bd

apoelstra (Member) commented Aug 15, 2017

Cherry-picked and rebased. Need to refactor to be sure we're not putting multiple types in the scratch space (alignment) and to have better dispatch logic.

sipa (Contributor) commented Aug 16, 2017

@apoelstra Squashed, restructured, and with Bos-Coster removed: https://github.com/sipa/secp256k1/commits/20170816_aggsig

apoelstra (Member) commented Aug 17, 2017

Superseded by #473

apoelstra closed this Aug 17, 2017

sipa (Contributor) commented Aug 17, 2017

@apoelstra No it isn't; #473 doesn't include aggsig, only ecmult_multi.

apoelstra reopened this Aug 17, 2017

apoelstra (Member) commented Sep 6, 2017

Updated to compare only the x-coordinate of R (and to check that its y-coordinate is a quadratic residue).

yeastplume added a commit to mimblewimble/secp256k1-zkp that referenced this pull request Dec 6, 2017

apoelstra (Member) commented Dec 10, 2017

Rebased on latest master. One mild surprise is that I can't call ecmult_multi_var with a scratch space smaller than 4.5KB, so I had to update the aggsig tests to use a bigger one.

jonasnick (Contributor) commented Dec 11, 2017

One mild surprise is that I can't call ecmult_multi_var with a scratch space smaller than 4.5Kb

Yes, if I recall correctly aggsig previously always used Bos-Coster and now uses strauss_wnaf, which uses a fair amount of scratch space due to the multiplication table:

  • without endo: 96 + n_points*3656 bytes
  • with endo: 96 + n_points*4568 bytes
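(Those figures as a convenience expression -- a hypothetical helper, not an actual library function:)

    #include <stddef.h>

    /* scratch bytes needed for strauss_wnaf per the figures above */
    static size_t aggsig_scratch_size(size_t n_points, int endo) {
        return 96 + n_points * (endo ? 4568 : 3656);
    }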
jonasnick (Contributor) commented Jan 26, 2018

I have a few minor suggestions in my branch https://github.com/jonasnick/secp256k1/commits/aggsig-module

2e75a09 Stress that seed in aggsig_context_create must be secret
ffeba25 Allow choosing number of signatures in bench_aggsig
20ad03b Remove n_sigs argument from aggsig API
1cada78 Add aggsig state machine tests
jonasnick (Contributor) commented Jan 29, 2018

It would be helpful if users could compute the optimal scratch space for verification. I suggest jonasnick@e81fc06
