
Integrating ed25519 components #57

Closed
wants to merge 4 commits

Conversation

andres-erbsen
Contributor

This puts together several key components of ed25519, with proofs wherever easy and admits (for now) where not. The main achievement is an elliptic curve scalar multiplication routine that uses the extended representation of the underlying Edwards curve and a non-uniform base system with lazy carrying for the field elements. I have extracted this code and executed it using GHC, passing the test that multiplication by the group order returns the identity. A lot of work still remains: the correctness proof of the described implementation has several non-trivial admits, and the limbs of the field elements are stored as arbitrary-precision integers, not machine words.
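For concreteness, the smoke test described above can be modelled in a few dozen lines of Python: scalar multiplication in extended coordinates over GF(2^255-19), checking that multiplying the base point by the group order yields the identity. This is a hedged sketch, not the Coq development's code; the constants are the standard Ed25519 ones, while the real implementation stores field elements in a lazy-carry limb representation rather than as bignums.

```python
# Sketch of the extracted smoke test: [group order] * B = identity,
# computed with extended-coordinate twisted Edwards arithmetic.

P = 2**255 - 19                                       # field prime
D = (-121665 * pow(121666, -1, P)) % P                # curve constant d
L = 2**252 + 27742317777372353535851937790883648493   # group order

# Base point (affine), lifted to extended coordinates (X, Y, Z, T)
# with x = X/Z, y = Y/Z, T = X*Y/Z.
Bx = 15112221349535400772501151409588531511454012693041857206046113283949847762202
By = 46316835694926478169428394003475163141307993866256225615783033603165251855960
B = (Bx, By, 1, Bx * By % P)
IDENT = (0, 1, 1, 0)                                  # neutral element

def add(p, q):
    """Unified a = -1 twisted Edwards addition (add-2008-hwcd-3)."""
    X1, Y1, Z1, T1 = p
    X2, Y2, Z2, T2 = q
    A  = (Y1 - X1) * (Y2 - X2) % P
    Bv = (Y1 + X1) * (Y2 + X2) % P
    C  = 2 * D * T1 * T2 % P
    Dv = 2 * Z1 * Z2 % P
    E, F, G, H = Bv - A, Dv - C, Dv + C, Bv + A
    return (E * F % P, G * H % P, F * G % P, E * H % P)

def scalar_mult(k, pt):
    """Left-to-right double-and-add; safe because add is unified."""
    acc = IDENT
    for bit in bin(k)[2:]:
        acc = add(acc, acc)
        if bit == '1':
            acc = add(acc, pt)
    return acc

def to_affine(pt):
    X, Y, Z, _T = pt
    iZ = pow(Z, -1, P)
    return (X * iZ % P, Y * iZ % P)

assert to_affine(scalar_mult(L, B)) == (0, 1)         # [L]B = identity
```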

The main hurdle in this patch is Shrink Obligations, which seems to behave differently in 8.6 than in older versions regardless of the state of the flag. @JasonGross please investigate and help -- I think this PR as-is does not build in 8.6.

The merge is also likely to be somewhat nasty, but I haven't looked into it yet.

@jadephilipoom please review the EdDSA and Ed25519 specs, cross-referencing with "eddsa for more curves" and the ed25519 RFC wherever relevant.

@JasonGross please see whether the setoid_rewrites at https://github.com/mit-plv/fiat-crypto/compare/fiat-crypto-integration?expand=1#diff-5366caedd04016399124d528353b4a5cR134 and the next derivation in that file can be made less fragile.

Other crap that leaked in:

  • F.of_nat and F.to_nat
  • Require-s moved out of sections
  • fix reversed carry chain in GF25519
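To make the carry-chain bullet concrete, here is a hedged Python illustration of carrying in a ref10-style ten-limb representation of GF(2^255-19), with alternating 26- and 25-bit limbs; the exact chain and limb layout in the Coq GF25519 code may differ.

```python
# Carries propagate limb 0 -> limb 9 in order; the carry out of limb 9
# wraps back into limb 0 times 19, since 2^255 = 19 (mod 2^255 - 19).

P = 2**255 - 19
WIDTHS = [26, 25] * 5            # bit width of each limb

def eval_limbs(limbs):
    """Recombine limbs into the integer they represent, mod p."""
    acc, shift = 0, 0
    for w, l in zip(WIDTHS, limbs):
        acc += l << shift
        shift += w
    return acc % P

def carry(limbs):
    """One full carry pass, in increasing limb order."""
    limbs = list(limbs)
    for i in range(10):
        w = WIDTHS[i]
        c = limbs[i] >> w
        limbs[i] -= c << w
        if i == 9:
            limbs[0] += 19 * c   # wrap the top carry back down
        else:
            limbs[i + 1] += c
    return limbs

# Carrying must preserve the represented value and shrink the limbs:
x = [2**30 + 7] * 10             # deliberately over-width limbs
assert eval_limbs(carry(x)) == eval_limbs(x)
assert max(carry(x)) < 2**26
```

The "reversed" direction matters: carrying limb 9 before limb 8 would leave limb 9's fresh carry-in unpropagated, so a single pass would no longer bound all limbs.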

…coq 8.6git)

WIP

factor out some goals

cleaner spec

change all big numbers in EdDSA spec to Z

compate

eddsa spec omega

no more Require inside sections

In Coq 8.4, notations cannot be passed to [cbv] lists

In Coq 8.4, scope binding does not impact already-defined constants

So we bind the scope before the constants are defined.

refactor Ed25519 spec

build on 8.4 (slowly)
@JasonGross
Collaborator

If

```
Global Unset Shrink Abstract.
Global Unset Shrink Obligations.
```

does not allow the code to build in both 8.5 and 8.6, that's a bug in Coq. I'll look into it tomorrow.

GitHub claims there are no merge conflicts.

@@ -618,6 +627,7 @@ Module IntegralDomain.
End IntegralDomain.

Module Field.
Require Coq.setoid_ring.Field_theory.
Collaborator

This is unsupported. Coq only fully supports requires at top level. (Symptoms of unsupportedness include anomalies, inconsistent states, and sad bug minimizers.)

Contributor Author

done.

@andres-erbsen
Contributor Author

I added Global Unset Shrink Abstract and moved the Require out of modules, but the build still fails in my slightly dated 8.6.

@JasonGross
Collaborator

JasonGross commented Aug 25, 2016

You seem to have broken the 8.4 (and 8.5) build(s) with the latest commit.

@JasonGross
Collaborator

the build still fails in my slightly dated 8.6

Bug #5044. Your bash tactic is the culprit. Try this instead (and don't pose proof E.char_gt_2 if you don't need to):

    Local Ltac bash' :=
      repeat match goal with
             | |- Proper _ _ => intro
             | _ => progress intros
             | [ H: _ /\ _ |- _ ] => destruct H
             | [ p:E.point |- _ ] => destruct p as [ [??] ? ]
             | [ p:point |- _ ] => destruct p as [ [ [ [??] ? ] ? ] ? ]
             | _ => progress autounfold with bash in *
             | |- _ /\ _ => split
             | _ => solve [neq01]
             | _ => solve [eauto]
             | _ => solve [intuition eauto]
             | _ => solve [etransitivity; eauto]
             | |- _*_ <> 0 => apply Ring.zero_product_iff_zero_factor
             | [H: _ |- _ ] => solve [intro; apply H; super_nsatz]
             | _ => progress uninv
             | |- Feq _ _ => super_nsatz
             end.
    Local Ltac bash := bash'; pose proof E.char_gt_2; bash'.

@JasonGross
Collaborator

What's up with your changes to ExtendedCoordinates? It's 2x slower now than it was (2m on travis rather than 1m). Is this expected?

: (exists a, P a /\ RA a ref) <-> P ref.
Proof.
logic; try match goal with [H:_|-_] => (rewrite H || rewrite <-H); assumption end.
repeat (assumption||reflexivity||econstructor); assumption. (* WHY the last [assumption]?*)
Collaborator

You could instead do repeat (assumption||reflexivity||econstructor). The answer to the "WHY" is goal ordering and evar information.

@JasonGross
Collaborator

see whether the setoid_rewrites at https://github.com/mit-plv/fiat-crypto/compare/fiat-crypto-integration?expand=1#diff-5366caedd04016399124d528353b4a5cR134 and the next derivation in that file can be made less fragile.

You're not going to get setoid_rewrite to do this transformation in a nice way. You can't rely on higher-order unification to find non-unique solutions for you in the way you want.

I can probably get an approach working where first you "tag" a term with the proof that you want to rewrite with, and then you push the tags in until either you hit bottom (whence you drop them), or until you find the pattern you're supposed to be rewriting. Except setoid_rewrite is modulo delta, so it might unfold too many things. If Ltac supported manipulation of open terms, you could just use pattern, but, alas, it does not (yet).

If you can figure out a way to break apart your transformation into multiple steps, none of which require higher-order unification to infer non-unique solutions (for example, if you have an inverse function to Eenc, that is probably sufficient), then you might be able to make things work nicely.


Program Definition from_twisted (P:Epoint) : point := exist _
(let (x,y) := E.coordinates P in (x, y, 1, x*y)) _.

Program Definition to_twisted (P:point) : Epoint := exist _
-  (let '(X,Y,Z,T) := coordinates P in ((X/Z), (Y/Z))) _.
+  (let '(X,Y,Z,T) := coordinates P in let iZ := Finv Z in ((X*iZ), (Y*iZ))) _.
Contributor Author

@JasonGross I think this change and the tactics to undo it in proofs made this file slower. I did this to make the generated code not duplicate the inversion. Not sure how to best fix it.

Collaborator

It's possible that moving uninv to the top of bash will be enough to fix it. You can also do something like:

Let to_twisted_final (P:point) := (let '(X,Y,Z,T) := coordinates P in let iZ := Finv Z in ((X*iZ), (Y*iZ))).
Let to_twisted_fast (P:point) := (let '(X,Y,Z,T) := coordinates P in ((X/Z), (Y/Z))).
Lemma to_twisted_speed (P : point) : to_twisted_final P = to_twisted_fast P. ... Qed.

and then rewrite with to_twisted_speed wherever you need to.
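The content of that suggested lemma can be modelled concretely. A minimal sketch, assuming a prime field and borrowing the illustrative names from the comment above: over a field, dividing each coordinate by Z agrees with computing one shared inverse of Z, so rewriting one form into the other is sound.

```python
# Two extensionally equal forms of the coordinate map, mirroring the
# hypothetical to_twisted_fast / to_twisted_final pair discussed above.

P = 2**255 - 19

def to_twisted_fast(pt):
    """Original form: one field division (inversion) per coordinate."""
    X, Y, Z, _T = pt
    return (X * pow(Z, -1, P) % P, Y * pow(Z, -1, P) % P)

def to_twisted_final(pt):
    """Refactored form: invert Z once and reuse it."""
    X, Y, Z, _T = pt
    iZ = pow(Z, -1, P)
    return (X * iZ % P, Y * iZ % P)

assert to_twisted_fast((12345, 67890, 31337, 0)) == to_twisted_final((12345, 67890, 31337, 0))
```

The point of the refactor is that generated code should pay for one inversion, not two, while proofs can keep working with the division form via the equality lemma.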

@andres-erbsen
Contributor Author

The parts of this that did not have open issues were merged as commits
1ea69cd through e51fb9e

@andres-erbsen andres-erbsen deleted the fiat-crypto-integration branch September 22, 2016 00:48
JasonGross added a commit that referenced this pull request Oct 17, 2016
This makes progress towards #75, #57, and compatibility between versions of Coq
JasonGross added a commit to JasonGross/fiat-crypto that referenced this pull request Feb 21, 2022
Probably, if the lists are the same length, we want to compare them
element-wise rather than all at once. It's way too verbose to keep
expanding them, so we only do that when the lists are not the same length.

We now get error messages such as
```
Unable to unify: [inr [378, 381, 384]] == [inr [101, 106, 108]]
Could not unify the values at index 0: [#378, #381, #384] != [#101, #106, #108]
index 0: #378 != #101
(slice 0 44, [#377]) != (slice 0 44, [#98])
index 0: #377 != #98
(add 64, [#345, #375]) != (add 64, [#57, #96])
index 0: #345 != #57
(slice 0 44, [#337]) != (slice 0 44, [#44])
index 0: #337 != #44
(add 64, [#41, #334]) != (add 64, [#25, #41])
index 1: #334 != #25
(mul 64, [#1, #331]) != (mul 64, [#0, #1, #22])
[(add 64, [#329, #329])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [#7, #328]), (mul 64, [#7, #328])])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [(const 2, []), (add 64, [#0, #327])]), (mul 64, [(const 2, []), (add 64, [#0, #327])])])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [(const 2, []), (add 64, [#0, (mul 64, [#0, #326])])]), (mul 64, [(const 2, []), (add 64, [#0, (mul 64, [#0, #326])])])])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [(const 2, []), (add 64, [#0, (mul 64, [#0, (const 4, [])])])]), (mul 64, [(const 2, []), (add 64, [#0, (mul 64, [#0, (const 4, [])])])])])] != [#0, (const 20, [])]
[(add 64, [(mul 64, [(const 2, []), (add 64, [(old 64 0, []), (mul 64, [(old 64 0, []), (const 4, [])])])]), (mul 64, [(const 2, []), (add 64, [(old 64 0, []), (mul 64, [(old 64 0, []), (const 4, [])])])])])] != [(old 64 0, []), (const 20, [])]
```

The second to last line is generally the one to look at; the last line
adds a bit more detail to it.  Perhaps we should instead list out the
values of indices rather than expanding one additional level?
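The comparison strategy from that commit message can be sketched in Python: descend element-wise into equal-length lists, and dump both sides whole when lengths differ or a leaf is reached. All names here are illustrative; the real code operates on the unifier's dag nodes rather than plain lists.

```python
# Hedged sketch of the error-reporting strategy: drill into the first
# mismatched index of equal-length lists; otherwise print both sides.

def explain_mismatch(a, b, lines=None):
    """Trace the first point of disagreement between two nested lists."""
    if lines is None:
        lines = []
    if a == b:
        return lines
    if isinstance(a, list) and isinstance(b, list) and len(a) == len(b):
        for i, (x, y) in enumerate(zip(a, b)):
            if x != y:
                lines.append(f"index {i}: {x!r} != {y!r}")
                return explain_mismatch(x, y, lines)
    lines.append(f"{a!r} != {b!r}")  # different lengths or leaves: dump whole
    return lines

# ["add", [1, 2]] plays the role of a node like (add 64, [#1, #2]):
trace = explain_mismatch([["add", [1, 2]], ["mul", [3]]],
                         [["add", [1, 5]], ["mul", [3]]])
# trace narrows: index 0 -> index 1 -> index 1 -> leaf "2 != 5"
```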
JasonGross added a commit to JasonGross/fiat-crypto that referenced this pull request Feb 21, 2022
JasonGross added a commit that referenced this pull request Feb 22, 2022