
Implement lazy carries and reductions #15

Closed
mratsim opened this issue Feb 23, 2020 · 4 comments

mratsim commented Feb 23, 2020

Context

Currently, after each addition or subtraction step, a reduction is done if the result exceeds the field modulus.

Due to constant-time constraints there is no shortcut when the reduction is unnecessary: the memory accesses are always performed.

Instead, at the cost of a couple bits, we could use lazy carries/reductions.

Instead of using 31 bits of each 32-bit word or 63 bits of each 64-bit word, we use fewer bits per word. For example, using 26 bits of a 32-bit word or 52 bits of a 64-bit word leaves 6 or 12 excess bits, so a reduction is only needed after roughly 2^6 = 64 or 2^12 = 4096 accumulated additions respectively.
This is desirable in particular for addition chains.
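As a sketch of the idea (in Python, with hypothetical helper names, not the library's API), limbs holding only 26 bits of a 32-bit word can absorb dozens of worst-case carryless additions before a single normalization:

```python
# Hypothetical sketch: 26-bit logical limbs stored in 32-bit words,
# leaving 6 excess bits per word for lazy carries.
LIMB_BITS = 26
MASK = (1 << LIMB_BITS) - 1

def to_limbs(x, n):
    """Split x into n limbs of LIMB_BITS bits (little-endian)."""
    return [(x >> (LIMB_BITS * i)) & MASK for i in range(n)]

def from_limbs(limbs):
    """Recombine limbs; any accumulated excess is absorbed by the shifts."""
    return sum(l << (LIMB_BITS * i) for i, l in enumerate(limbs))

def lazy_add(a, b):
    """Limb-wise addition with no carry propagation (carryless)."""
    return [x + y for x, y in zip(a, b)]

n = 4
X = (1 << (LIMB_BITS * n)) - 1   # all limbs maximal: the worst case
acc = to_limbs(0, n)
# With 6 excess bits per word, up to 2^6 - 1 = 63 worst-case operands
# can be accumulated before any limb overflows its 32-bit word.
for _ in range(63):
    acc = lazy_add(acc, to_limbs(X, n))
assert all(l < 1 << 32 for l in acc)   # still fits the physical words
assert from_limbs(acc) == 63 * X       # the value is preserved
```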

Choosing a default: Maximizing the usage of the word overhead

256-bit curves are quite common:

  • secp256k1 for blockchain ECDSA / transaction signing
  • P256/secp256r1 or Curve25519 for Diffie-Hellman

Or, closely related, the 254-bit Barreto-Naehrig curve for zkSNARKs.

Representing a 256-bit or 254-bit curve in the most compact way on a 64-bit architecture requires 4 words. Assuming

  • we want at most a 1 word overhead per 256-bit
  • we want to maximize the laziness of reduction

52-bit logical words with 12 lazy bits, or 51-bit logical words with 13 lazy bits, would be the best logical word bitsizes. Both can represent all integers below 2^255 in 5 words (and a radix 2^52 representation can additionally represent the numbers 2^255 and 2^256 themselves in 5 words).
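For illustration (a hedged Python sketch, not the library's actual representation), a radix-2^52 layout in 5 limbs round-trips values up to and including 2^256, since 5 * 52 = 260 bits:

```python
# Sketch: radix-2^52 representation in 5 limbs (260 bits of capacity).
RADIX = 52
MASK52 = (1 << RADIX) - 1

def to_radix52(x):
    """Split x into 5 limbs of 52 bits (little-endian)."""
    return [(x >> (RADIX * i)) & MASK52 for i in range(5)]

def from_radix52(limbs):
    """Recombine 52-bit limbs into an integer."""
    return sum(l << (RADIX * i) for i, l in enumerate(limbs))

# 2^255 - 19, 2^255 and 2^256 all fit in 5 limbs and round-trip exactly.
for x in (2**255 - 19, 2**255, 2**256):
    assert from_radix52(to_radix52(x)) == x
```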

Side-note on SIMD

This may also enable opportunities for SIMD vectorization using either integer or floating-point math.

  • 2^24+1 is the first integer that cannot be exactly represented in a float32, though leaving 8 bits on the table is arguably too much.
  • 2^53+1 is the first integer that cannot be exactly represented in a float64.
  • The new AVX512_IFMA instructions (Integer Fused Multiply-Add) support multiply-add of 52-bit integers: VPMADD52LUQ and VPMADD52HUQ.
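A scalar model of the IFMA semantics (assumptions: this models a single 64-bit lane in plain Python, helper names are hypothetical, not the real intrinsics) shows how the low and high 52-bit halves of a product recombine:

```python
# Hedged sketch of AVX512_IFMA semantics on scalars: VPMADD52LUQ adds the
# low 52 bits of a 52x52-bit product to a 64-bit accumulator, VPMADD52HUQ
# adds the high 52 bits.
M52 = (1 << 52) - 1
M64 = (1 << 64) - 1

def madd52lo(acc, a, b):
    """acc + low 52 bits of (a*b), wrapped to 64 bits."""
    return (acc + ((a & M52) * (b & M52) & M52)) & M64

def madd52hi(acc, a, b):
    """acc + high 52 bits of (a*b), wrapped to 64 bits."""
    return (acc + ((a & M52) * (b & M52) >> 52)) & M64

a = b = (1 << 52) - 1            # two maximal 52-bit operands
lo = madd52lo(0, a, b)
hi = madd52hi(0, a, b)
assert (hi << 52) + lo == a * b  # halves recombine to the full product
```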

Using floating point for pairings is covered in this paper:

Side-note on "final subtraction"-less Montgomery Multiplication/Exponentiation

With well-chosen word sizes that allow redundant representations, we can avoid the final subtraction in Montgomery multiplication and exponentiation:

Implementation strategy

The implementation steps would be:

  1. Change the WordBitSize at

     type Word* = Ct[uint32]
       ## Logical BigInt word
       ## A logical BigInt word is of size physical MachineWord-1
     type DoubleWord* = Ct[uint64]

     type BaseType* = uint32
       ## Physical BigInt for conversion in "normal integers"

     const
       WordPhysBitSize* = sizeof(Word) * 8
       WordBitSize* = WordPhysBitSize - 1

     This can be made a {.intdefine.} for compile-time configuration
  2. Ensure the encoding/decoding routines in io_bigints.nim properly deal with more than 1 unused bit.
  3. Field elements should now carry a countdown that tracks how many potential carries are left given the free bits. When it reaches 0, a normalization/reduction proc should be called.
  4. To be researched: add and sub may have to return a carry Word instead of a CtBool, as the carry is no longer 0 or 1 (but we never add the carry; it is used as an input for an optional reduction, so this might be unneeded).

References


mratsim commented Feb 26, 2020

Some clarifications

As mentioned in the Milagro docs, lazy carry and lazy reduction are separate, independent concepts.

Lazy reductions

Most prime modulus bitsizes do not match a multiple of the library's logical word size (31-bit or 63-bit) exactly. For example, the BN254 and BLS12-381 curves leave respectively 63*5-254 = 61 and 63*7-381 = 60 unused bits in the last word. This can be used to accumulate up to 60~61 additions without worrying about overflowing the physical representation.
We would then need to subtract up to 60~61 * P to return to a fully reduced representation.

Not fully reducing

As an optimization, we can stay in an unreduced representation by conditionally subtracting half that range after that many additions, taking advantage of the fact that operands are unlikely to be of the same order of magnitude as the prime (2^254 or 2^381).

Fully reducing in logarithmic time

Or we can use a logarithmic approach by conditionally subtracting half, then a quarter, then an eighth, then ... of the excess range.
Given a word bitsize of 31 or 63, the excess bits are at most 30 or 62, so we would need 4~5 conditional subtractions every 30~62 additions.
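The logarithmic approach can be sketched as follows (Python; the constant-time conditional is modeled here with an arithmetic select, whereas a real implementation would derive a mask from the subtraction's borrow):

```python
# Hedged sketch of logarithmic full reduction: conditionally subtract
# halving multiples of p until the value is fully reduced below p.
def log_reduce(a, p, max_excess):
    """Reduce a < max_excess * p to a < p in ~log2(max_excess) steps."""
    k = 1
    while k * 2 <= max_excess:   # largest power of two <= max_excess
        k *= 2
    while k >= 1:
        t = a - k * p
        # Conditional keep: stand-in for a constant-time, borrow-derived mask.
        keep = 1 if t >= 0 else 0
        a = keep * t + (1 - keep) * a
        k //= 2
    return a

p = 2**255 - 19
a = 57 * p + 12345           # heavily unreduced accumulator
assert log_reduce(a, p, 64) == 12345
```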

As far as I know this is a novel approach that is not mentioned in the literature and has never been used in software.

Dealing with subtraction

Subtraction of a reduced/unreduced operand by an unreduced operand must be handled specially:

  • either we follow the Milagro path and use negate + addition
  • or we follow the Yanik or Schwabe path by using signed integers (or an emulation thereof)
  • or we follow the paper
    High-Speed Elliptic Curve Cryptography on the NVIDIA GT200 Graphics Processing Unit
    Shujie Cui, Johann Großschadl, Zhe Liu and Qiuliang Xu
    https://www.doc.ic.ac.uk/~scui2/papers/Cui2014_GPU.pdf
    which relies on the fact that a subtraction implemented as 4p + a - b will always be in the correct range when used in curve point addition and point doubling. Note that this may not hold for extension fields.
    edit: This is also mentioned in Michael Scott's paper Slothful reduction, p6 and p8.

Removing the need for a final subtraction

With lazy reduction we have a redundant representation that can represent 2p, 3p, 4p, ..., 8p if there are enough excess bits. This may help Montgomery multiplication and exponentiation by avoiding the last conditional subtraction (Walter, 1999; Hachez & Quisquater, 2000; Bos & Montgomery, 2017; and many hardware papers like Ors-Batina-Preneel-Vandewalle or Walter, 2017).

Note that the Almost Montgomery Multiplication of Gueron, which checks the MSB to know if a reduction is needed, would leak information, and we would still need to estimate whether we need to remove p, 2p, ..., MaxExcess * p after each subtraction.

Lazy Carry

Lazy carries are (almost) independent from lazy reductions: here the excess bits are within each machine word.
This allows replacing

func add*(a: BigIntViewMut, b: BigIntViewAny): CTBool[Word] =
  ## Constant-time in-place addition
  ## Returns the carry
  ##
  ## a and b MAY be the same buffer
  ## a and b MUST have the same announced bitlength (i.e. `bits` static parameters)
  checkMatchingBitlengths(a, b)
  for i in 0 ..< a.numLimbs():
    a[i] = a[i] + b[i] + Word(result)
    result = a[i].isMsbSet()
    a[i] = a[i].mask()

by

func add*(a: BigIntViewMut, b: BigIntViewAny) =
  checkMatchingBitlengths(a, b)

  for i in 0 ..< a.numLimbs():
    a[i] += b[i]
    a[i] = a[i].mask()

func modular_add(a: BigIntViewMut, b: BigIntViewAny, m: BigIntViewConst) =
  ## This is pseudocode and would be implemented at the Field-level and not BigInt level
  a.add(b)
  if a.potential_carries == MaxPotentialCarries:
    a.normalizeCarries()

This removes loop-carried dependencies and enables easy vectorization by the compiler.
The only potential limitation is that the compiler cannot infer the loop unrolling, but the function becomes small enough that monomorphization + inlining would probably not increase code size. Separate functions require parameter and stack bookkeeping before the call, after the call, and within the function itself, which can easily take 10 to 20 instructions before doing anything useful; an inlined multi-limb addition + mask on fewer than 20 limbs wouldn't increase code size.

They also enable multiplication by constants smaller than the current word excess without converting them to Montgomery form and using full-blown Montgomery multiplication. This is useful for extension fields built on irreducible polynomials with roots such as sqrt(-2) or sqrt(-5).

For normalization of lazy carries there is an optional coupling with lazy reduction (hence the "almost" at the beginning of this section): all the lazy carries must be accumulated somewhere. Either they fit in the extra space of the last word (WordBitSize - ModulusBitSize - WordExcess) and we can handle the unreduced representation as in the lazy reduction case, or we need an extra temporary word.
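A minimal sketch of such a carry normalization, assuming 26-bit logical limbs in 32-bit words (helper names are hypothetical):

```python
# Sketch: propagate each limb's accumulated excess bits into the next
# limb; the top carry must then be absorbed by the last word's spare
# bits or an extra temporary word, as discussed above.
LIMB_BITS = 26
MASK = (1 << LIMB_BITS) - 1

def normalize_carries(limbs):
    """Propagate excess bits limb to limb; returns (limbs, top_carry)."""
    carry = 0
    out = []
    for l in limbs:
        v = l + carry
        out.append(v & MASK)
        carry = v >> LIMB_BITS
    return out, carry

def value(ls):
    return sum(l << (LIMB_BITS * i) for i, l in enumerate(ls))

limbs = [MASK + 3, MASK + 3, 5, 0]     # limbs holding lazy carries
norm, top = normalize_carries(limbs)
assert all(l <= MASK for l in norm)    # all limbs back in logical range
# The represented value is unchanged.
assert value(norm) + (top << (LIMB_BITS * len(limbs))) == value(limbs)
```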


mratsim commented Mar 7, 2020

Beginning of a proof-of-concept here: mratsim/finite-fields@2673a04

Lazy carry

This is very easy to build for addition/subtraction/negation, as we can ensure that, with u = modulus.limbs[i], each operand limb stays below 2^ExcessBits * u — which requires special care when u == 0. With lazy carries, all additions/subtractions/negations are carryless thanks to the extra space per word.

Full reduction is also straightforward

However, with multiplication, limb products carry over into the next limb; can the result also exceed 2^ExcessBits * modulus.limbs[i]?
Dealing with those cases may introduce a lot of complexity.
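The carryless negation mentioned above can be checked numerically (a Python sketch with a toy modulus; limb counts, bit sizes and helper names are illustrative assumptions):

```python
# Hedged sketch of carryless negation under lazy carries: with
# c = 2^ExcessBits, computing c*p[i] - a[i] limb-wise yields c*p - a,
# which is congruent to -a (mod p), with no borrow propagation,
# provided every a[i] <= c*p[i] (this fails if a modulus limb is 0).
LIMB_BITS = 26
MASK = (1 << LIMB_BITS) - 1
EXCESS = 6

def to_limbs(x, n):
    return [(x >> (LIMB_BITS * i)) & MASK for i in range(n)]

def value(ls):
    return sum(l << (LIMB_BITS * i) for i, l in enumerate(ls))

p = 2**100 - 15              # toy modulus (4 limbs, every limb nonzero)
n = 4
p_limbs = to_limbs(p, n)
a = 123456789                # a field element, a < p
a_limbs = to_limbs(a, n)

c = 1 << EXCESS
neg = [c * pi - ai for pi, ai in zip(p_limbs, a_limbs)]  # no borrows
assert all(x >= 0 for x in neg)      # every limb subtraction is safe
assert value(neg) == c * p - a       # limb-wise sums to c*p - a
assert value(neg) % p == (-a) % p    # i.e. -a modulo p
```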

Lazy reduction

Lazy reduction, by contrast, is probably much less error-prone to implement, since only the high word holds the excess carries. It is however quite restricted, as we only have 2 excess bits for BN254 (4*64 = 256) or 3 for BLS12-381 (6*64 = 384).


mratsim commented Mar 21, 2020

None of the pairing curves have a special form that makes lazy reduction with logical words smaller than the physical word worth it.
Scott's paper Slothful reduction, like the logarithmic reduction I mentioned, makes debugging more complex and just delays the inevitable.

However, in some select cases we can use the technique in

  • High-Speed Software Implementation of the Optimal Ate Pairing over Barreto-Naehrig Curves
    Jean-Luc Beuchat and Jorge Enrique González Díaz and Shigeo Mitsunari and Eiji Okamoto and Francisco Rodríguez-Henríquez and Tadanori Teruya, 2010
    https://eprint.iacr.org/2010/354

to delay reduction, but making that constant-time would still require a table lookup that scans all the potential n*p multiples of the field modulus p to select the appropriate one for reduction.

Furthermore, #17 adds no-carry Montgomery multiplication and squaring usable for most pairing-friendly curves (up to 15% perf improvement). Optimizing multiplications and squarings is probably more important than additions, and forcing a reduction check before each of them is costly. Even if we use a redundant representation with 2 extra bits to store up to 4p and allow "final subtraction"-less modular multiplication, that may mean paying for the subtraction upfront.


mratsim commented Jun 6, 2020

For posterity, Patrick Longa's PhD thesis dedicates 30 pages to lazy reduction and scheduling techniques to reduce or erase data dependencies.

Followed by a dedicated section for pairings.
Notably, we have these subtraction formulas to canonicalize a lazily-reduced field element in the case of double-precision arithmetic (i.e. using only 32 bits out of a 64-bit limb and leaving the extra 32 for carries).


This is followed by multiplication formulas in Fp2, Fp6 and Fp12, and squarings in Fp12, that use the lazy reduction.

https://uwspace.uwaterloo.ca/handle/10012/5857

However the space cost is high, especially in constant-time settings where you have to iterate over all the space all the time. Using more space also doesn't play well with CPU caches, and a cache miss is significantly more expensive than a data dependency. Lastly, since that time, the ADC instruction's data-dependency cost has dropped from 6 cycles to only 2 cycles on recent Intel CPUs (though with only 1 execution port AFAIK), provided you update a register and not a memory location; on AMD it is even cheaper.

mratsim mentioned this issue Jan 31, 2021
mratsim mentioned this issue Feb 8, 2021