Integrating ed25519 components #57
Conversation
Squashed commit messages:
- …coq 8.6git)
- WIP
- factor out some goals
- cleaner spec
- change all big numbers in EdDSA spec to Z
- compat eddsa spec
- omega
- no more Require inside sections
- In Coq 8.4, notations cannot be passed to [cbv] lists
- In Coq 8.4, scope binding does not impact already-defined constants, so we bind the scope before the constants are defined.
- refactor EdD25519 spec
- build on 8.4 (slowly)
If
Global Unset Shrink Abstract.
Global Unset Shrink Obligations.
does not allow the code to build in both 8.5 and 8.6, that's a bug in Coq. I'll look into it tomorrow. GitHub claims there are no merge conflicts.
```
@@ -618,6 +627,7 @@ Module IntegralDomain.
 End IntegralDomain.

 Module Field.
+  Require Coq.setoid_ring.Field_theory.
```
This is unsupported. Coq only fully supports requires at top level. (Symptoms of unsupportedness include anomalies, inconsistent states, and sad bug minimizers.)
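A minimal sketch of the supported arrangement (module contents are illustrative): hoist the Require to top level, before the module that uses it.

```
(* Top-level Require: the fully supported form. *)
Require Coq.setoid_ring.Field_theory.

Module Field.
  (* ... definitions that refer to Field_theory ... *)
End Field.
```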
done.
I added
You seem to have broken the 8.4 (and 8.5) builds with the latest commit.
Bug #5044

Your bash' tactic:

```
Local Ltac bash' :=
  repeat match goal with
         | |- Proper _ _ => intro
         | _ => progress intros
         | [ H: _ /\ _ |- _ ] => destruct H
         | [ p:E.point |- _ ] => destruct p as [ [??] ? ]
         | [ p:point |- _ ] => destruct p as [ [ [ [??] ? ] ? ] ? ]
         | _ => progress autounfold with bash in *
         | |- _ /\ _ => split
         | _ => solve [neq01]
         | _ => solve [eauto]
         | _ => solve [intuition eauto]
         | _ => solve [etransitivity; eauto]
         | |- _*_ <> 0 => apply Ring.zero_product_iff_zero_factor
         | [H: _ |- _ ] => solve [intro; apply H; super_nsatz]
         | _ => progress uninv
         | |- Feq _ _ => super_nsatz
         end.
Local Ltac bash := bash'; pose proof E.char_gt_2; bash'.
```
What's up with your changes to
```
  : (exists a, P a /\ RA a ref) <-> P ref.
Proof.
  logic; try match goal with [H:_|-_] => (rewrite H || rewrite <-H); assumption end.
  repeat (assumption||reflexivity||econstructor); assumption. (* WHY the last [assumption]?*)
```
You could instead do repeat repeat (assumption||reflexivity||econstructor).
The answer is goal ordering and evar information.
You're not going to get
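The evar point can be seen in a tiny standalone example (illustrative, not from this PR): econstructor introduces an evar that only a later tactic instantiates, so whether an earlier goal is closable can depend on information filled in afterwards, and rerunning the same tactic can make progress a first pass could not.

```
(* Illustrative only: [econstructor] introduces an evar ?n; the
   subsequent tactic instantiates it. *)
Goal exists n : nat, n = 0.
Proof.
  econstructor.  (* goal becomes [?n = 0] *)
  reflexivity.   (* instantiates ?n := 0 *)
Qed.
```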
```
Program Definition from_twisted (P:Epoint) : point := exist _
  (let (x,y) := E.coordinates P in (x, y, 1, x*y)) _.

Program Definition to_twisted (P:point) : Epoint := exist _
-  (let '(X,Y,Z,T) := coordinates P in ((X/Z), (Y/Z))) _.
+  (let '(X,Y,Z,T) := coordinates P in let iZ := Finv Z in ((X*iZ), (Y*iZ))) _.
```
@JasonGross I think this change and the tactics to undo it in proofs made this file slower. I did this to make the generated code not duplicate the inversion. Not sure how to best fix it.
It's possible that moving uninv to the top of bash will be enough to fix it. You can also do something like:

```
Let to_twisted_final (P:point) := (let '(X,Y,Z,T) := coordinates P in let iZ := Finv Z in ((X*iZ), (Y*iZ))).
Let to_twisted_fast (P:point) := (let '(X,Y,Z,T) := coordinates P in ((X/Z), (Y/Z))).
Lemma to_twisted_speed (P : point) : to_twisted_final P = to_twisted_fast P. ... Qed.
```

and then rewrite with to_twisted_speed wherever you need to.
Probably if the lists are the same length, then we want to compare them element-wise rather than all at once. It's way too verbose to keep expanding them, so we only do that when the lists are not the same length. We now get error messages such as

```
Unable to unify: [inr [378, 381, 384]] == [inr [101, 106, 108]]
Could not unify the values at index 0: [#378, #381, #384] != [#101, #106, #108]
index 0: #378 != #101
(slice 0 44, [#377]) != (slice 0 44, [#98])
index 0: #377 != #98
(add 64, [#345, #375]) != (add 64, [#57, #96])
index 0: #345 != #57
(slice 0 44, [#337]) != (slice 0 44, [#44])
index 0: #337 != #44
(add 64, [#41, #334]) != (add 64, [#25, #41])
index 1: #334 != #25
(mul 64, [#1, #331]) != (mul 64, [#0, #1, #22])
[(add 64, [#329, #329])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [#7, #328]), (mul 64, [#7, #328])])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [(const 2, []), (add 64, [#0, #327])]), (mul 64, [(const 2, []), (add 64, [#0, #327])])])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [(const 2, []), (add 64, [#0, (mul 64, [#0, #326])])]), (mul 64, [(const 2, []), (add 64, [#0, (mul 64, [#0, #326])])])])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [(const 2, []), (add 64, [#0, (mul 64, [#0, (const 4, [])])])]), (mul 64, [(const 2, []), (add 64, [#0, (mul 64, [#0, (const 4, [])])])])])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [(const 2, []), (add 64, [(old 64 0, []), (mul 64, [(old 64 0, []), (const 4, [])])])]), (mul 64, [(const 2, []), (add 64, [(old 64 0, []), (mul 64, [(old 64 0, []), (const 4, [])])])])])] != [(old 64 0, []), (const 20, [])]
```

The second-to-last line is generally the one to look at; the last line adds a bit more detail to it.

Perhaps we should instead list out the values of indices rather than expanding one additional level?
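A minimal Gallina sketch of the element-wise comparison described above (the function name and the flat nat-list representation are illustrative; the real nodes are operator/argument pairs):

```
Require Import Coq.Lists.List Coq.Arith.PeanoNat.
Import ListNotations.

(* When two lists have the same length, report the first mismatching
   index instead of printing both lists wholesale. *)
Fixpoint first_mismatch (xs ys : list nat) (i : nat) : option nat :=
  match xs, ys with
  | x :: xs', y :: ys' =>
      if Nat.eqb x y then first_mismatch xs' ys' (S i) else Some i
  | _, _ => None  (* lists exhausted: no mismatching index to report *)
  end.

Example ex378 : first_mismatch [378; 381; 384] [101; 106; 108] 0 = Some 0.
Proof. reflexivity. Qed.
```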
This puts together several key components of ed25519, with proofs wherever easy and admits (for now) where not. The main achievement is an elliptic curve scalar multiplication routine that uses the extended representation of the underlying Edwards curve and a non-uniform base system with lazy carrying for the field elements. I have extracted this code and executed it using GHC, passing the test that multiplication by the group order returns the identity. A lot of work is still to be done -- the correctness proof of the described implementation has several non-trivial admits, and the limbs of the field elements are stored as arbitrary-precision integers, not machine words.

The main hurdle in this patch is Shrink Obligations, which seems to have different behavior in 8.6 and older versions regardless of the state of the flag. @JasonGross please investigate and help -- I think this PR as-is does not build in 8.6. The merge is also likely to be somewhat nasty, but I haven't looked into it yet.
@jadephilipoom please review the EdDSA and Ed25519 specs, cross-referencing with "eddsa for more curves" and the ed25519 RFC wherever relevant.
@JasonGross please see whether the setoid_rewrites at https://github.com/mit-plv/fiat-crypto/compare/fiat-crypto-integration?expand=1#diff-5366caedd04016399124d528353b4a5cR134 and the next derivation in that file can be made less fragile.

Other crap that leaked in:
- F.of_nat and F.to_nat
- Require-s moved out of sections