Unary integers #1964
Conversation
Some of the proofs are still pretty tricky with this approach. Here is how I would consider structuring these results:
Signed-off-by: Ali Caglayan <alizter@gmail.com>
I've only rebased and removed the unintentional changes.
Signed-off-by: Ali Caglayan <alizter@gmail.com>
Signed-off-by: Ali Caglayan <alizter@gmail.com>
This wasn't possible because negation is part of the induction principle. I can't just stick to
I still needed to introduce lemmas. I don't see a way to make them hold by definition due to the way we had to define induction and its interaction with negation. The basic problem is that
Also I did some archaeology and here is the definition of the integers we had before I introduced the binary ones: https://github.com/Alizter/HoTT/blob/02fe3325deb76bb8b4bd5aa0079c9b32b51a7325/theories/Spaces/Int.v. This current one appears to be a slight improvement on that by reducing the number of constructors from 3 to 2.
Ok, I see that my proposed plan was too optimistic. I was thinking of a slightly incorrect induction principle. Your latest commit actually added slightly more lines than it removed, so it's not clear if it is better this way. But what I realized is that the

There's also the approach of The Integers as a Higher Inductive Type, Thorsten Altenkirch, Luis Scoccola, https://arxiv.org/abs/2007.00167, Definition 1.3. They show that to prove a family of propositions (such as identities between integers) with their definition, you just need to prove them for 0, successors, and predecessors, so I think it would end up working out more like I described in my plan. It would be a big change from what is in this PR, but it could have some advantages.

What do you think about either using the old definition with three constructors or the Altenkirch-Scoccola definition? In the latter case, there's an implementation in cubical Agda that could be roughly followed.
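For reference, the proposition-only induction principle they describe would have roughly the following shape. This is only a statement sketch: IntAS and the constructor names zero, succ and pred are hypothetical placeholders for the Altenkirch-Scoccola higher inductive type, not anything defined in this PR.

(* Sketch of the statement only: when the target family consists of
   propositions, the higher coherences of the AS integers impose no extra
   obligations, so three hypotheses suffice. *)
Definition IntAS_ind_hprop (P : IntAS -> Type) `{forall z, IsHProp (P z)}
  (H0 : P zero)
  (HS : forall z : IntAS, P z -> P (succ z))
  (HP : forall z : IntAS, P z -> P (pred z))
  : forall z : IntAS, P z.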
This is somewhat subjective, as I could be far more compact with the rewrites. The previous attempt had far more complex proofs since I had to reason about inequalities often.
That's a good point. I'll try and see if it works out.
I've also thought about this before; however, I am a little worried that introducing a HIT for the integers might be a little heavy. We could also define the integers the usual classical way with a pair of natural numbers. (IIRC this comes from medieval Italian accountants tracking credit and debit.) I'm not sure how well these compute intensionally. We actually already have these with
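For comparison, here is a minimal sketch of that classical pair-of-naturals construction as a set quotient. The names diff_rel and IntPair are made up, and I'm assuming the library's set-quotient Quotient can be applied to a relation like this; treat it as an outline rather than working code.

(* Pairs (a, b) of naturals stand for a - b; (a, b) and (c, d) are
   identified exactly when a + d = c + b. *)
Definition diff_rel (p q : nat * nat) : Type
  := (fst p + snd q = fst q + snd p)%nat.

Definition IntPair : Type := Quotient diff_rel.

The usual drawback is that a quotient like this only computes on chosen representatives, which is presumably the worry about how well it computes.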
I think switching to the AS definition of the integers would be too radical at this point. It's not entirely clear to me if it will work out in a practical way. The main aim of this PR was to get the integers into a workable and practical state so that we can start to develop some more algebra on top of them. I'm of course open to some experimentation with the AS definition, but I wouldn't recommend we make it the main definition. I will therefore experiment with the 3-constructor integers, bringing us full circle, which is ironic since we are talking about integers. However, this time at least we have a lot more experience and lemmas about them.
It sounds like you think the current approach is better, which is great. Length is of course not the most important factor, but I didn't do a careful comparison, which is why I mentioned the length. If you think it was cleaner, then I agree with continuing this way. And switching to the three-constructor version of the integers will trim a bit of the code, probably making the length comparable to or even shorter than the previous version.
I had a go at:

Inductive Int : Type0 :=
| negS : nat -> Int
| zero : Int
| posS : nat -> Int
.

However, it quickly becomes apparent that this definition is inferior. I'll list all the problems here:
Definition int_of_nat (n : nat) :=
match n with
| O => zero
| S n => posS n
end.
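(* Reading of the encoding (my assumption, not stated explicitly above):
   [posS n] stands for n+1 and [negS n] for -(n+1), and the [0], [1] and
   [-1] appearing below are notations for [zero], [posS 0] and [negS 0]. *)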
Definition int_succ (n : Int) : Int :=
match n with
| posS n => posS (S n)
| 0 => 1
| -1 => 0
| negS (S n) => negS n
end.
Definition int_pred (n : Int) : Int :=
match n with
| posS (S n) => posS n
| 1 => 0
| 0 => -1
| negS n => negS (S n)
end.
Definition Int_ind@{i} (P : Int -> Type@{i})
(H0 : P 0)
(HP : forall n : nat, P n -> P (int_succ n))
(HN : forall n : nat, P (- n) -> P (int_pred (-n)))
: forall x, P x.
Proof.
intros [x | | x].
- induction x as [|x IHx] using Core.nat_ind.
+ apply (HN 0%nat), H0.
+ apply (HN x.+1%nat), IHx.
- exact H0.
- induction x as [|x IHx] using Core.nat_ind.
+ apply (HP 0%nat), H0.
+ apply (HP x.+1%nat), IHx.
Defined.

This means I often have to split the induction into 5 cases rather than the 3 (maybe 4) from before. For example:

(** Integer addition with zero on the right is the identity. *)
Definition int_add_0_r@{} (x : Int) : x + 0 = x.
Proof.
induction x as [|[|x] IHx|[|x] IHx].
- reflexivity.
- reflexivity.
- change ((x.+1%nat + 0).+1 = x.+1%nat.+1).
by rewrite IHx.
- reflexivity.
- change (?x.-1 + ?y) with (x + y).-1.
by rewrite IHx.
Defined.

Therefore it looks like the current approach is the best.
When I did some formalization work on the integers in a different proof assistant, I felt that the conceptual clarity of the three-constructor version outweighed the disadvantages of a few more cases. With the two-constructor version I kept being confused about whether or not to add or subtract one from my natural numbers to get the definitions correct. Not saying you have to do it that way, just contributing my experience.
I'm not completely convinced by those examples. While there are more cases, they look pretty trivial and follow an easy-to-see pattern. And I would guess that there are situations where things got simpler as well. I'd be more convinced by a more difficult thing blowing up with one approach vs the other. I also wonder if there's an induction principle that will reduce the number of cases when you are proving a proposition, e.g. having hypotheses

(All that said, I think it's fine however you choose to do this.)
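One guess at the shape of such a principle for the three-constructor experiment above; the name Int_ind_hprop and the exact phrasing are mine, and this particular repackaging of Int_ind doesn't actually use the IsHProp assumption, it only lets the successor and predecessor hypotheses range over all of Int.

(* Sketch, assuming the Int_ind, int_succ and int_pred from the experiment
   above, together with the nat-to-Int coercion used there. *)
Definition Int_ind_hprop (P : Int -> Type) `{forall x, IsHProp (P x)}
  (H0 : P 0)
  (HS : forall x : Int, P x -> P (int_succ x))
  (HP : forall x : Int, P x -> P (int_pred x))
  : forall x : Int, P x.
Proof.
  apply Int_ind.
  - exact H0.
  - intros n; apply HS.
  - intros n; apply HP.
Defined.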
Also a side note, in case anyone is interested or has suggestions. One of the features I've considered for Narya is "nominal subtyping that requires structural compatibility". Specifically, that means that if A and B are inductive types, and for every constructor of A, B has a constructor of the same name, and the inputs of the constructor of A are subtypes of those of the constructor of B (assuming recursively that A is a subtype of B), then we can declare A to be a subtype of B. This is like an implicit coercion except that no actual coercion function needs to be inserted even internally, since a term of type A can literally be used like a term of type B in computations. With this sort of subtyping, if we define
then we can declare

Of course, this doesn't reduce the number of cases for defining operations on
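The definitions referred to just above were elided; purely to illustrate the idea, here is one guess at the kind of pair of types meant, written in Coq syntax (Narya's actual syntax differs, nat' and int' are made-up names, and the module wrappers are only there so the repeated constructor names can coexist in Coq).

(* Every constructor of nat' has a same-named constructor of int' whose
   inputs are (recursively) subtypes, so under the proposed rule nat'
   could be declared a subtype of int', and a term of nat' could be used
   directly as a term of int' with no coercion function inserted. *)
Module N.
  Inductive nat' : Type :=
  | zero : nat'
  | suc  : nat' -> nat'.
End N.

Module Z3.
  Inductive int' : Type :=
  | zero   : int'
  | suc    : N.nat' -> int'
  | negsuc : N.nat' -> int'.
End Z3.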
Signed-off-by: Ali Caglayan <alizter@gmail.com>
@Alizter That last commit looks pretty good to me. It's a small diff, and it did in fact reduce the number of lines of code, while also simplifying a few things.
Let me know if you think this is ready for another review.
I want to do another pass to try to simplify a few proofs, and there are things that need some comments.
Signed-off-by: Ali Caglayan <alizter@gmail.com>
@jdchristensen I've added some more comments. It was a bit tricky putting lemmas like
Signed-off-by: Ali Caglayan <alizter@gmail.com>
I made two comments, and I'm about to push a few minor clean-ups.
(** Negation distributes over addition. *)
Definition int_neg_add@{} (x y : Int) : - (x + y) = - x - y.
Just before this point, we have proved everything that shows that Int is an abelian group. So int_neg_add follows from general facts about abelian groups (e.g. negate_sg_op and Int_add_comm). Is there a way to restructure to do it this way? (It can also be locally proved without induction on x, using algebraic rules we also have.) But it's also fine this way.
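Spelled out, the induction-free route is the usual abelian-group argument, sketched here without committing to particular lemma names:

(x + y) + (- x - y)
  = (x + - x) + (y + - y)   (associativity and commutativity)
  = 0 + 0
  = 0

so - x - y is an inverse of x + y, and since inverses in a group are unique, - (x + y) = - x - y.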
I don't see a way to do this without having to pull in mathclasses which I would prefer not to do at this point. I would like to keep the group theory restricted to Z.v so that this file can be used without pulling in too much.
That makes sense. So this PR LGTM.
Signed-off-by: Ali Caglayan <alizter@gmail.com>
We define the unary integers as discussed in #1913. We show that we have addition and multiplication and that these operations are associative.
This should be enough for the ring structure on Int. We can think about replacing BinInt after this PR. The proofs here might need some cleanup, but they work.
The (initial) design is based on mathcomp's integers, but the proofs are only similar. These appear to be more pleasant to work with than the binary integers so far.
Edit: We now define the integer operations by integer induction, essentially 6.10.12 in the HoTT book.
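As a rough illustration of what defining an operation by integer induction can look like, here is addition by recursion on the second argument. The recursor name Int_rec and its interface (a motive, a value at 0, and step functions for successors and predecessors) are guesses made for the sake of the sketch, not the code in this PR.

(* Starting from x, apply int_succ once for each positive step of y and
   int_pred once for each negative step. *)
Definition int_add' (x : Int) : Int -> Int
  := Int_rec Int x (fun _ s => int_succ s) (fun _ s => int_pred s).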