k256: Add non-biased/non-zero constructors to Scalar #432
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master     #432      +/-   ##
==========================================
- Coverage   64.70%   64.25%   -0.45%
==========================================
  Files          28       28
  Lines        3598     3654      +56
==========================================
+ Hits         2328     2348      +20
- Misses       1270     1306      +36
```

Continue to review full report at Codecov.
Lots to respond to here, but I can perhaps start here:
The current `FromDigest` trait is limited to a single output size. It'd be great if we could define a better one. Perhaps we could make the input size a generic parameter, which would permit impls for multiple sizes.
I looked at trying to add a new trait along these lines:

```rust
pub trait FromDigest<OutputSize: ArrayLength<u8>> {
    fn from_digest<D>(digest: D) -> Self
    where
        D: Digest<OutputSize = OutputSize>;
}
```

Given that, perhaps the trait could be added to the […] crate.
Shouldn't this use case already be covered by the […]?
In this particular case, we would need to provide several impls on the same type for different digest sizes. For example, a secp256k1 scalar could be initialized from a digest with a 256-bit output using a narrow reduction, or a 512-bit output using a wide reduction, and those two cases take separate codepaths.
Is this initialization process defined for a range of output sizes, or only for those two values (e.g. would it work for SHA-1 or for SHA-384)? Do you track which hash function was used on the type level (i.e. do you get the same type when 256-bit and 512-bit hash functions were used)?
I can't speak specifically to the hash-to-curve-point case, but in the case of scalars it's effectively two values: the size of the field modulus (narrow reduction) and twice that size (wide reduction).

I think if you were to use something "in the middle" for a wide reduction, it would bias the output (it's already minutely biased even with a uniformly random input), but perhaps @fjarri can check me on that.
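A quick way to see the bias point above (my own illustration, not code from this repo): reduce a uniform narrow input and a uniform wide input modulo a toy modulus and compare how often each residue is hit. The modulus 191 and the 8/16-bit input widths are arbitrary stand-ins for the field size and the digest sizes.

```rust
// Demonstrates why a "wide" reduction shrinks modulo bias: reducing a
// uniform 8-bit value mod 191 is visibly biased, while reducing a uniform
// 16-bit value mod 191 is nearly flat.

fn residue_counts(input_bits: u32, modulus: u64) -> (u64, u64) {
    // Count how many inputs map to each residue; return (min, max) counts.
    let mut counts = vec![0u64; modulus as usize];
    for x in 0..(1u64 << input_bits) {
        counts[(x % modulus) as usize] += 1;
    }
    let max = *counts.iter().max().unwrap();
    let min = *counts.iter().min().unwrap();
    (min, max)
}

fn main() {
    // Narrow: 2^8 = 256 inputs over 191 residues. Residues 0..=64 are hit
    // twice, the rest once -- a huge relative bias.
    assert_eq!(residue_counts(8, 191), (1, 2));

    // Wide: 2^16 = 65536 = 191 * 343 + 23, so counts differ by at most
    // 1 part in ~343 -- and the gap shrinks exponentially with width.
    assert_eq!(residue_counts(16, 191), (343, 344));

    println!("ok");
}
```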
I am not sure why something like this wouldn't work:

```rust
struct Foo<D>
where
    D: Digest, // also constrain D::OutputSize % Self::ScalarSize == 0
{ /* ... */ }

impl<D> InnerUser for Foo<D>
where // same bounds
{
    type Inner = D;
}

impl<D> InnerInit for Foo<D>
where // same bounds
{
    fn inner_init(digest: D) -> Self { /* ... */ }
}
```

Am I missing something?
Can you write a full translation of something like this?

```rust
pub trait FromDigest<OutputSize: ArrayLength<u8>> {
    fn from_digest<D>(digest: D) -> Self
    where
        D: Digest<OutputSize = OutputSize>;
}

pub struct Scalar([u64; 4]);

impl Scalar {
    pub fn from_bytes_mod_order(bytes: &[u8; 32]) -> Self {
        ...
    }

    pub fn from_bytes_mod_order_wide(bytes: &[u8; 64]) -> Self {
        ...
    }
}

impl FromDigest<U32> for Scalar {
    fn from_digest<D>(digest: D) -> Self
    where
        D: Digest<OutputSize = U32>,
    {
        Self::from_bytes_mod_order(&digest.finalize().into())
    }
}

impl FromDigest<U64> for Scalar {
    fn from_digest<D>(digest: D) -> Self
    where
        D: Digest<OutputSize = U64>,
    {
        Self::from_bytes_mod_order_wide(&digest.finalize().into())
    }
}
```

The real issue is the overlapping impls, which you can't have without a generic parameter.
Here is my take on it: https://play.rust-lang.org/?gist=6547bf4c636d51196326fd26aa4bbdee

The idea is that the trait should be implemented by a type which tracks the used hash function. It doesn't necessarily have to be the scalar itself; it could be a wrapper around it as well. The trait bounds are somewhat annoying, but it should get better eventually with panicking […].
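As a minimal sketch of the "wrapper that tracks the used hash function" idea (my toy reconstruction, not the playground code; `ReducedScalar`, `Sha256`, and `Sha512` are all hypothetical names): a `PhantomData` type parameter makes scalars reduced from different digests into distinct types, so the compiler keeps them apart.

```rust
use std::marker::PhantomData;

// Hypothetical marker types standing in for concrete digest algorithms.
struct Sha256;
struct Sha512;

// A toy wrapper that remembers, at the type level, which hash function
// produced the scalar. `ReducedScalar<Sha256>` and `ReducedScalar<Sha512>`
// are distinct types, so mixing them up is a compile-time error.
struct ReducedScalar<D> {
    value: u64, // stand-in for the real field element
    _digest: PhantomData<D>,
}

impl<D> ReducedScalar<D> {
    fn new(value: u64) -> Self {
        Self { value, _digest: PhantomData }
    }
}

fn main() {
    let narrow: ReducedScalar<Sha256> = ReducedScalar::new(42);
    let wide: ReducedScalar<Sha512> = ReducedScalar::new(42);
    // Same inner value, but carried by two different types.
    assert_eq!(narrow.value, wide.value);
    println!("ok");
}
```

The zero-sized `PhantomData` field costs nothing at runtime; it only changes the type identity.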
It might be good to split this off into a separate issue, possibly RustCrypto/traits#481. But I don't get how that works @newpavlov, especially this: […] It looks like that can only be satisfied by a number that is simultaneously equal to […].

But then the problem further becomes: in the context of ECDSA I need to bound on support for a specific digest size:

```rust
pub struct SigningKey<C>
where
    C: PrimeCurve + ProjectiveArithmetic,
    Scalar<C>: FromDigest<FieldSize<C>> + Invert<Output = Scalar<C>> + SignPrimitive<C>,
    SignatureSize<C>: ArrayLength<u8>,
{
    inner: NonZeroScalar<C>,
}
```

ECDSA specifically relies on a narrow reduction, so I only want to use digests with an output size equal to that of the scalar field's modulus.
Yeah, I've answered you here: RustCrypto/traits#481 (comment)
Sidestepping the issue of digests for the moment, I've opened a PR which adds a `Reduce` trait. It's generic around a `Uint` type, and there are also provided methods for reducing from a big-endian or little-endian byte array.

The bounds for the ECDSA use case would look something like:

```rust
Scalar<C>: Reduce<C::Uint>
```

...to select a narrow reduction. (Edit: actually, in that PR I bounded […].)
#436 adds "narrow" impls of the `Reduce` trait.
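A cut-down, runnable sketch of how a `Reduce` trait generic over the integer type sidesteps the overlapping-impl problem (the toy `Scalar`, the `u32`/`u64` widths, and the modulus are my stand-ins, not the actual PR code):

```rust
// The trait is generic over the integer being reduced, so one scalar type
// can carry both a "narrow" impl (u32 here) and a "wide" impl (u64)
// without the impls overlapping. 0xFFFF_FFFB = 2^32 - 5 is a toy prime
// standing in for the curve order.

const MODULUS: u64 = 0xFFFF_FFFB;

trait Reduce<U>: Sized {
    fn from_uint_reduced(n: U) -> Self;
}

#[derive(Debug, PartialEq)]
struct Scalar(u32);

// Narrow reduction: input is the same width as the modulus.
impl Reduce<u32> for Scalar {
    fn from_uint_reduced(n: u32) -> Self {
        Scalar((n as u64 % MODULUS) as u32)
    }
}

// Wide reduction: input is twice the width, as in hash-to-scalar.
impl Reduce<u64> for Scalar {
    fn from_uint_reduced(n: u64) -> Self {
        Scalar((n % MODULUS) as u32)
    }
}

fn main() {
    // Narrow: 2^32 - 4 is one above the modulus 2^32 - 5.
    assert_eq!(Scalar::from_uint_reduced(0xFFFF_FFFCu32), Scalar(1));
    // Wide: 2^32 ≡ 5, so 2^64 ≡ 25 and 2^64 - 1 reduces to 24.
    assert_eq!(Scalar::from_uint_reduced(u64::MAX), Scalar(24));
    println!("ok");
}
```

The caller (or a trait bound like `Scalar<C>: Reduce<C::Uint>`) picks which impl applies, which is exactly what the narrow ECDSA bound needs.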
Circling back on this, I think this is the current state of things:

- an ability to generate random `NonZeroScalar`s (without unwrapping)
- an ability to generate a `Scalar`/`NonZeroScalar` from a digest safely, according to the hash-to-curve standard

A couple things are still missing though. It seems like supporting this might require changes to the […]; if that existed, then it would be possible to impl […].
Adds a trait similar to `Reduce`, but where the output of the reduction is ensured to be non-zero. Also impls `Reduce` and `ReduceNonZero` for `NonZeroScalar`. This means that end users need only concern themselves with `Reduce` as they can use `NonZeroScalar::<C>::from_uint_reduced` instead of the more cumbersome `Scalar::<C>::from_uint_reduced_non_zero`. Related: RustCrypto/elliptic-curves#432
Proposed traits for non-zero reductions here: RustCrypto/traits#827
Provides an impl of the `ReduceNonZero` trait added in #827, which provides a reduction from a 512-bit (64-byte) input, i.e. a "wide" reduction from an integer twice the size of the curve's order, to a `Scalar` which is guaranteed to be non-zero. Based on @fjarri's work in #432 Co-authored-by: Bogdan Opanchuk <bogdan@opanchuk.net>
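The non-zero reduction discussed in this thread reduces modulo the order decreased by one and then adds one, which can never yield zero. A toy sketch of that trick (the order 97 is an arbitrary stand-in, and `reduce_nonzero` here is my simplified helper, not the crate's API):

```rust
// Instead of x mod n (which can be zero), compute x mod (n - 1) and add 1.
// The result always lies in 1..=n-1, so it is non-zero by construction.

const ORDER: u64 = 97; // toy stand-in for the curve order

fn reduce_nonzero(x: u64) -> u64 {
    x % (ORDER - 1) + 1
}

fn main() {
    // An input that would reduce to zero under a plain `x % ORDER`:
    assert_eq!(reduce_nonzero(0), 1);

    // Exhaustively check a range of inputs: the output is never 0 and is
    // always below the order.
    for x in 0..10_000u64 {
        let r = reduce_nonzero(x);
        assert!(r >= 1 && r < ORDER);
    }
    println!("ok");
}
```

This is why the correctness of the wide reduction for "the modulus decreased by one" matters: the non-zero variant leans on exactly that reduction.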
Went ahead and addressed what I believe are the remaining concerns, adapting this PR into #474, which is now merged.
This is a "spitballing" PR, intended primarily for illustration of the required API and discussion.

There are two things I'm missing from the current `Scalar` API:

- an ability to generate random `NonZeroScalar`s (without unwrapping)
- an ability to generate a `Scalar`/`NonZeroScalar` from a digest safely, according to the hash-to-curve standard

The latter needs the scalar to be reduced from `L = ceil((ceil(log2(p)) + k) / 8)` bytes (Section 5.3), where `p` is the order and `k` is the security parameter. For `k = 256` this gives 64 bytes; for `k = 128` it's 48 bytes. I went with 64 for simplicity.

The public methods added to `Scalar`:

```rust
pub fn random_nonzero(rng: impl RngCore) -> NonZeroScalar
pub fn from_digest_safe<D>(digest: D) -> Self where D: Digest<OutputSize = U64>
pub fn from_digest_safe_nonzero<D>(digest: D) -> NonZeroScalar where D: Digest<OutputSize = U64>
```

Now for the problems:

- `random()` is a part of the `Field` trait. If `NonZeroScalar` implemented `Field`, the code from `random_nonzero()` could go there.
- `FromDigest` requires `OutputSize = FieldSize`, so it can't be changed to a 64-byte output. The ways to deal with it include:
  - a `SafeExpansionSize` (or whatever) type, and locking `FromDigest` to it
  - a separate `FromDigestSafe` trait
- `from_digest_safe_nonzero()` can be an impl of `FromDigest` for `NonZeroScalar` (but again there's the size problem).
- `Scalar::from_wide_bytes_reduced()` assumes that `WideScalar::reduce()` can reduce any 512-bit value, and not just anything below `order^2`. I'm 99.9% certain that's true, but I can't present a formal proof right now. I did test it for `0xfff...fff`, and it works.
- `WideScalar::reduce_nonzero()` assumes that `reduce()` works just as well for the modulus decreased by one. See the comment above about being 99.9% sure.
- `WideScalar::reduce_nonzero()` contains an `unwrap()`. Ideally I'd prefer to have something like `NonZeroScalar::from_bytes_unchecked()` to avoid it.
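The byte-length formula `L = ceil((ceil(log2(p)) + k) / 8)` can be checked in a few lines (my sketch; `expand_len` is a hypothetical helper, with `ceil(log2(p)) = 256` for a 256-bit order like secp256k1's):

```rust
// Worked check of the hash-to-curve expansion length (Section 5.3):
// L = ceil((ceil(log2(p)) + k) / 8).

fn expand_len(log2_p: u64, k: u64) -> u64 {
    (log2_p + k + 7) / 8 // ceiling division by 8
}

fn main() {
    assert_eq!(expand_len(256, 128), 48); // k = 128 -> 48 bytes
    assert_eq!(expand_len(256, 256), 64); // k = 256 -> 64 bytes
    println!("ok");
}
```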