
How should we show square roots exist in iset.mm? #2101

Closed
jkingdon opened this issue Jul 19, 2021 · 23 comments · Fixed by #2142

Comments

@jkingdon
Contributor

There's background at http://us.metamath.org/ileuni/df-rsqrt.html and the square root section (df-sqrt to sqrtm1) of http://us.metamath.org/ileuni/mmil.html but the short summary is that we need to find a way to prove http://us.metamath.org/mpeuni/resqrex.html and it will be in terms of the real number completeness axiom in #2100 one way or another.

What is a convenient sequence which converges to the square root of a nonnegative real? Searching the internet turns up sequences aimed at computing square roots efficiently, but for our purposes we are much more interested in a sequence which is easy to work with (for example, easy to show that its limit is the square root, and easy to prove the Cauchy condition - either as stated in axcaucvg in #2100 or some other modulus-of-convergence style condition which can be proved from that).

As for the larger motivation, square roots of nonnegative reals are used to define complex absolute value (some of this is already in iset.mm and is fully developed in set.mm), so this is presumably the next step in unlocking a lot of theorems related to sequences and/or series (and from there to trigonometry and other topics which rely on convergence in set.mm).

@digama0
Member

digama0 commented Jul 19, 2021

I don't think you will find a better method than the Babylonian one, x_(n+1) = f(x_n) := (x_n + a/x_n) / 2. That converges to the positive square root starting from x_0 = max(1, a) (any positive x_0 will work but it is convenient to start with an overestimate), so it can be used as the Cauchy sequence, but proving that it converges is somewhat challenging without the monotone convergence theorem (every monotonic and bounded sequence converges). There is a proof on this MSE post that I think will work for your situation. That is:

  • x_n^2 >= a and x_n > 0 and x_(n+1) < x_n and x_n >= 1/2^n for all n
  • f(x)^2 - a = (x^2 - a)^2 / 4x^2 <= (x^2 - a) / 4
  • Thus x_n^2 - a <= x_0^2 / 4^n
  • Thus x_n^2 - x_m^2 <= 2 x_0^2 / 4^n (for m >= n)
  • Thus x_n - x_m <= 4 x_0^2 / 2^n (for m >= n)

Thus x_n is a Cauchy sequence for all a >= 0.
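As an informal sanity check (a quick Python sketch with exact rational arithmetic, not anything to be formalized; the helper name `babylonian` and the sample values of `a` are just for illustration), the bounds above can be spot-checked numerically:

```python
# Spot-check of the bounds above with x_0 = max(1, a); exact rationals, so the
# comparisons are not affected by rounding.  (Strict decrease can fail when
# x_0^2 = a, e.g. a = 1, so only <= is asserted here; strictness is discussed
# later in this thread.)
from fractions import Fraction

def babylonian(a, steps):
    x = max(Fraction(1), Fraction(a))        # x_0 = max(1, a)
    xs = [x]
    for _ in range(steps):
        x = (x + a / x) / 2                  # x_(n+1) = (x_n + a/x_n) / 2
        xs.append(x)
    return xs

for a in [Fraction(0), Fraction(1, 2), Fraction(2), Fraction(10)]:
    xs = babylonian(a, 12)
    x0 = xs[0]
    for n, x in enumerate(xs):
        assert x > 0 and x * x >= a                       # x_n > 0 and x_n^2 >= a
        assert x >= Fraction(1, 2 ** n)                   # x_n >= 1/2^n
        assert x * x - a <= x0 * x0 / 4 ** n              # x_n^2 - a <= x_0^2 / 4^n
        if n + 1 < len(xs):
            assert xs[n + 1] <= x                         # monotone decreasing
        for m in range(n, len(xs)):
            assert xs[n] - xs[m] <= 4 * x0 * x0 / 2 ** n  # the Cauchy bound
print("all bounds hold on these samples")
```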

@jkingdon
Contributor Author

I don't think you will find a better method than the Babylonian one

Thanks! At first glance that would appear to be well suited to this purpose.

As for showing it converges, I'll need to keep chewing on that proof before I understand it well enough to formalize, but it would appear to be a good approach. I don't know if Stack Exchange ever deletes answers, but in case it does, here is the part of the post which was linked to:

[Screenshot of the quoted MSE answer, captured 2021-07-19]

@benjub
Contributor

benjub commented Jul 19, 2021

@jkingdon : I think @digama0's proof suffices and there is no need for the MSE answer. It seems to go through with the following corrections:

* `x_n >= a` and `x_n > 0` and `x_(n+1) < x_n` and `x_n >= 1/2^n` for all n

Probable typo: x_n^2 >= a.

* Thus `x_n^2 - a <= 1/4^n`
* Thus `x_n^2 - x_m^2 <= 2/4^n` (for m >= n)

Missing constant, something like (x_0^2 - a).

* Thus `x_n - x_m <= 4 / 2^n` (for m >= n)

I think the power term can still be 4^n, and using x_n + x_m >= 2 \sqrt(a) (Item 1 with typo corrected), you obtain your upper bound with constant (x_0^2 - a) / \sqrt(a) or something like that. Of course you cannot use \sqrt(a), so I think you can get away with finding nonzero naturals p, q such that p^2 < a q^2, so p/q < x_n. On the other hand, Mario uses the above bound x_n >= 1/2^n, which is easily obtained, so maybe this is faster.

Finally, you do set x_0 = max(1,a), but it's probably more convenient to distinguish a<=1 and a>=1 from the start of the proof (maybe even two separate proofs with lemmas for the common parts)?

@jkingdon
Contributor Author

jkingdon commented Jul 19, 2021

I think @digama0's proof suffices

Thanks for taking a look. At least some of those corrections I had also noticed (part of what I meant by "keep chewing on"). If you want to start in on any of the square root stuff (either formalizing, or just carefully writing up the informal proof), just speak up. I have at least a few days before I'm ready to start formalizing anything in this issue, as there are some (non-square-root-specific) notational cleanups which I'm working on (described in #2103 ).

Finally, you do set x_0 = max(1,a), but it's probably more convenient to distinguish a<=1 and a>=1 from the start of the proof

Wouldn't we need excluded middle if we wanted a<=1 \/ a>=1 (a la http://us.metamath.org/mpeuni/letric.html as opposed to, say, http://us.metamath.org/ileuni/zletric.html )? Unless there is some trick I'm not seeing, we'd only be able to divide into cases where the cases overlap, for example, a<=2 \/ a>=1 ( http://us.metamath.org/ileuni/axltwlin.html )

We haven't yet formalized max but if we want it, we should be able to have it (this is mentioned in Definition 11.2.7 of the HoTT book for example).

@digama0
Member

digama0 commented Jul 19, 2021

* `x_n >= a` and `x_n > 0` and `x_(n+1) < x_n` and `x_n >= 1/2^n` for all n

Probable typo: x_n^2 >= a.

Yes, that was a typo.

* Thus `x_n^2 - a <= 1/4^n`
* Thus `x_n^2 - x_m^2 <= 2/4^n` (for m >= n)

Missing constant, something like (x_0^2 - a).

The constant was supposed to be 1, but the other typo confused me, and it is not 1. max(1, a^2) will do.

* Thus `x_n - x_m <= 4 / 2^n` (for m >= n)

I think the power term can still be 4^n, and using x_n + x_m >= 2 \sqrt(a) (Item 1 with typo corrected), you obtain your upper bound with constant (x_0^2 - a) / \sqrt(a) or something like that. Of course you cannot use \sqrt(a), so I think you can get away with finding nonzero naturals p, q such that p^2 < a q^2, so p/q < x_n. On the other hand, Mario uses the above bound x_n >= 1/2^n, which is easily obtained, so maybe this is faster.

Indeed, I started by using the x_n^2 >= a bound here, but that proof breaks down when a = 0 and we can't do a disjunctive proof, so this version uses a lower bound that decreases along with x_n, hence the slower rate of convergence.

Finally, you do set x_0 = max(1,a), but it's probably more convenient to distinguish a<=1 and a>=1 from the start of the proof (maybe even two separate proofs with lemmas for the common parts)?

No disjunctive proof, remember? This is intuitionistic logic. Using max is okay because max is a continuous function. I don't know if @jkingdon has proved the existence of max yet, but it's a very useful function. You can take the pointwise max of the two Cauchy sequences to get a Cauchy sequence for the max.

If max is too much work, you can also use 1 + a, since the actual value of x_0 doesn't matter too much; it just adds some constant factors in a few places.
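(A small Python aside, purely illustrative and not tied to any iset.mm statement: the reason the pointwise max of two Cauchy sequences is again Cauchy is that max is 1-Lipschitz, i.e. |max(a,b) - max(c,d)| <= max(|a-c|, |b-d|); the snippet below just spot-checks that inequality on random rationals.)

```python
# Spot-check: |max(a, b) - max(c, d)| <= max(|a - c|, |b - d|), which is what
# makes the pointwise max of two Cauchy sequences Cauchy with the same modulus.
import random
from fractions import Fraction

random.seed(0)
for _ in range(10_000):
    a, b, c, d = (Fraction(random.randint(-100, 100), random.randint(1, 100))
                  for _ in range(4))
    assert abs(max(a, b) - max(c, d)) <= max(abs(a - c), abs(b - d))
print("Lipschitz inequality holds on all samples")
```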

@benjub
Contributor

benjub commented Jul 19, 2021

@digama0 and @jkingdon : I had assumed that (a<=1 or a>=1) for $a$ real was intuitionistic because there was enough overlap, so thanks for clearing that up: you need overlap with nonempty interior, and now I clearly see why: if every approximation to $a$ you have computed so far is exactly 1, you cannot yet tell whether $a$ is at least 1 or at most 1. (And also, you cannot deal with a=0 separately, so my above proposal breaks down.)
This makes the existence of max all the more interesting.

@jkingdon
Contributor Author

So I have kind of a dumb technical question. How are we even going to define a recursive sequence like this one? http://us.metamath.org/ileuni/df-iseq.html seems awkward (if it can work at all) because x_(n+1) depends only on x_n; it doesn't accumulate the way that seq is built for. http://us.metamath.org/ileuni/df-frec.html is closer but is indexed by _om rather than NN (or NN0). The situation isn't much better in set.mm (unless there is something lying around that I didn't notice, which is certainly possible).

I mean, I can define something along the lines of http://us.metamath.org/ileuni/df-frec.html but designed to give us a sequence of the form F : N --> RR suitable for sending to caucvgre (from #2105 ). But is new notation really necessary here?

Theorems like http://us.metamath.org/ileuni/frec2uz0d.html seem to be on the right track - and I suppose the mapping there could be inverted based on http://us.metamath.org/ileuni/f1ocnvb.html and the like.

Is there an example from set.mm of how to make this sort of recursively defined sequence, which we can adapt? Or is it more a matter of paving (somewhat) new ground?

@digama0
Member

digama0 commented Jul 21, 2021

So I have kind of a dumb technical question. How are we even going to define a recursive sequence like this one?

You can use seq: you just ignore the F argument (set it to _I) and put the smarts in the + operation.
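To make the trick concrete, here is an informal Python picture (not Metamath; the names `seq`, `step`, and `seed` are just illustrative). If seq's recursion is u_(n+1) = u_n "+" F(n+1), then choosing a "+" that ignores its right argument turns it into the pure recursion x_(n+1) = f(x_n), and the input sequence only matters through the seed; a constant input sequence is used below, while with _I the seed would be the index itself.

```python
# Informal model of "seq M ( + , F )": u_M = F(M), u_(n+1) = u_n "+" F(n+1).
# Putting the Babylonian step into "+" (and ignoring the right argument) gives
# the recursive sequence directly; F is only used for the seed u_M.
def seq(M, plus, F, steps):
    u = F(M)
    out = [u]
    for n in range(M + 1, M + 1 + steps):
        u = plus(u, F(n))
        out.append(u)
    return out

A = 2.0
step = lambda x, _ignored: (x + A / x) / 2   # the "smarts" live in the operator
seed = lambda _n: 1 + A                      # constant input sequence: seed = 1 + A
print(seq(1, step, seed, 6))                 # converges toward sqrt(2) ~ 1.41421
```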

@digama0
Member

digama0 commented Jul 21, 2021

See seq1st or algrp1 and their users for the general idiom.

@jkingdon
Contributor Author

See seq1st or algrp1 and their users for the general idiom.

I don't retract "awkward" but OK this seems like the best option.

And thanks for the pointers to seq1st and algrp1 - that does make it a lot clearer.

@benjub
Contributor

benjub commented Jul 21, 2021

I'm not sure I understand the comment of http://us.metamath.org/ileuni/df-iseq.html. In particular, the second paragraph seems to imply that there is an "input sequence" and an "output sequence"?

Is it the case, using informal language, that:

if M \in ZZ and F : ZZ_{\geq M} \to S and G : S \times S \to S, then seq M (G, F, S) is the sequence u : ZZ_{\geq M} \to S such that u_M = F(M) and u_{n+1} = G(u_n, F(n+1)) for n \geq M.

If this is true, it might be helpful to put it in the comment, to help "informal mathematicians".

Also, why use "+" in place of G? This might be misleading, or maybe I am misled myself?

Finally, why not define more simply a function, say seq2 (to distinguish it from the above), such that:

if M \in ZZ and A \in S and G : S \times ZZ_{\geq M+1} \to S, then seq2 M (G, A, S) is the sequence u : ZZ_{\geq M} \to S such that u_M = A and u_{n+1} = G(u_n, n+1) for n \geq M.
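A Python transcription of the two recursion schemes described above (informal, just to pin down the intended semantics; the partial-sum example at the end is an assumed typical usage, not something quoted from the database):

```python
# seq:  u_M = F(M),  u_(n+1) = G(u_n, F(n+1))   for n >= M
# seq2: u_M = A,     u_(n+1) = G(u_n, n+1)      for n >= M
from fractions import Fraction

def seq(M, G, F, steps):
    u, out = F(M), {}
    out[M] = u
    for n in range(M, M + steps):
        u = G(u, F(n + 1))
        out[n + 1] = u
    return out

def seq2(M, G, A, steps):
    u, out = A, {}
    out[M] = u
    for n in range(M, M + steps):
        u = G(u, n + 1)
        out[n + 1] = u
    return out

# partial sums of 1/2^k, written both ways
sums  = seq(0, lambda x, y: x + y, lambda k: Fraction(1, 2 ** k), 5)
sums2 = seq2(0, lambda x, k: x + Fraction(1, 2 ** k), Fraction(1), 5)
assert sums == sums2   # {0: 1, 1: 3/2, 2: 7/4, 3: 15/8, 4: 31/16, 5: 63/32}
```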

@digama0
Member

digama0 commented Jul 21, 2021

Yes, seq2 would be just as good as seq. seq was clearly written with series in mind; in particular it is common to pin the G argument to + and use F, so that it acts as the "sequence of partial sums of a sequence" operator. The "input sequence" is F, and the "output sequence" is seq1(+, F). I believe it has been generalized from the original, so that might explain why the documentation and design of the function are focused on this case even though you can use it for other things as well.

@digama0
Member

digama0 commented Jul 21, 2021

Is it the case, using informal language, that:

if M \in ZZ and F : ZZ_{\geq M} \to S and G : S \times S \to S, then seq M (G, F, S) is the sequence u : ZZ_{\geq M} \to S such that u_M = F(M) and u_{n+1} = G(u_n, F(n+1)) for n \geq M.

Actually you can generalize it a bit further: G : S \times T \to S, and F : ZZ_{\geq M} \to T. The function G doesn't have to be a binary operator on S.

@benjub
Contributor

benjub commented Jul 21, 2021

Is it the case, using informal language, that:

if M \in ZZ and F : ZZ_{\geq M} \to S and G : S \times S \to S, then seq M (G, F, S) is the sequence u : ZZ_{\geq M} \to S such that u_M = F(M) and u_{n+1} = G(u_n, F(n+1)) for n \geq M.

Actually you can generalize it a bit further: G : S \times T \to S, and F : ZZ_{\geq M} \to T. The function G doesn't have to be a binary operator on S.

I don't think so, since you have u_M = F(M) \in S. This mismatch (the fact that F is used in different ways for the initial value and for the induction step) does not occur for seq2, which is more streamlined, even if not optimized for the "sequence of partial sums".

@digama0
Member

digama0 commented Jul 21, 2021

Oh that's true. You will notice that a few applications of seq actually split F for this reason, using an expression with a dependent type F(0) : S and F(n+1) : T. (That was probably a bad design choice in seq.)

@jkingdon
Contributor Author

Going back to our square root existence sequence, here's my attempt to restate the proof from @digama0 .

  • We are trying to show the existence of the square root of a where 0 <= a
  • Define x_(n+1) to be (x_n + a/x_n) / 2. For the sake of argument we'll take x_0 to be ( 1 + a ) / 2, although other choices will probably work with a little attention to the rate-of-convergence part of http://us.metamath.org/ileuni/caucvgre.html (if necessary, discarding a term or two at the start, for example)
  • a <= x_n^2 (there is something close to a proof of this at https://www.cantorsparadise.com/a-modern-look-at-square-roots-in-the-babylonian-way-ccd48a5e8716 where it says "all other values in the succession will be greater than the root we seek", although the proof we actually want is slightly modified - to work with a itself rather than assuming that √a exists)
  • 0 <= x_n by induction on n
  • x_(n+1) < x_n (I forget why, but this is noted in most descriptions of this sequence)
  • 1/2^n <= x_n (I assume this is the "lower bound that decreases along with x_n" part. where do we use this?)
  • x_(n+1)^2 - a = (x_n^2 - a)^2 / 4x_n^2 (by expanding the definition and rearranging)
  • <= (x_n^2 - a) / 4 (why?)
  • x_n^2 - a <= x_0^2 / 4^n by induction on n, using the previous inequality as the induction step
  • x_n^2 - x_m^2 <= 2 x_0^2 / 4^n (for m >= n) because x_m^2 - a is less than x_0^2 / 4^n (by the previous step)
  • x_n - x_m <= 4 x_0^2 / 2^n (for m >= n) (why?)

This isn't exactly in the form needed to show Cauchy-ness in http://us.metamath.org/ileuni/caucvgre.html but seems pretty close, so a modified version of caucvgre should work and not be that hard.

Tentative conclusions:

  • We don't need max (I've moved this to #2115, "Add maximum of two real numbers to iset.mm", to elaborate more on what it entails, and why it might be more work than we want to do right now).
  • We need one or more theorems on sequence convergence, although I'm not completely sure exactly how to state them yet.
  • The following definition for the sequence should work:
    resqrexlemex.seq $e |- F = seq 1 (
      ( x e. RR , y e. RR |-> ( ( x + A / x ) / 2 ) ) ,
      ( NN X. { ( ( 1 + A ) / 1 ) } ) ,
      RR ) $.

@digama0
Member

digama0 commented Jul 27, 2021

  • Define x_(n+1) to be (x_n + a/x_n) / 2. For sake of argument we'll take x_0 to be ( 1 + a ) / 2 although other choices will probably work with a little attention to the rate of convergence part of http://us.metamath.org/ileuni/caucvgre.html (if necessary, discarding a term or two at the start for example)

Why (1 + a) / 2 instead of 1 + a? (1+a)/2 is actually problematic because we want a strict upper bound on sqrt(a) so that the convergents are strictly monotonic, and this choice of x_0 is actually equal to sqrt(a) at a = 1.

The proof for this that I was imagining involves the AM-GM inequality (for n=2, which is easy: it boils down to squares being nonnegative). More explicitly, x_(n+1)^2 = (x_n + a/x_n)^2 / 4 = (x_n - a/x_n)^2 / 4 + a >= a.

  • x_(n+1) < x_n (I forget why, but this is noted in most descriptions of this sequence)

(x_n + a/x_n) / 2 < x_n iff a/x_n < x_n iff x_n^2 > a. Note that we need strict inequality in the first point to get strict inequality here, and you can do that by strengthening the previous argument to say that (x_n - a/x_n)^2 / 4 + a > a as long as x_n - a/x_n != 0, that is, a != x_n^2, which thus goes by induction (provided that x_0^2 != a, which is why we don't want the choice x_0 = (1+a)/2).

  • 1/2^n <= x_n (I assume this is the "lower bound that decreases along with x_n" part. where do we use this?)

It comes up in the last (why?) step. This one is an easy proof by induction assuming x_0 >= 1 (so again, 1+a works), but we only need it up to a constant factor anyway, so even if x_0 is smaller it's not a big problem.

  • x_(n+1)^2 - a = (x_n^2 - a)^2 / 4x_n^2 (by expanding the definition and rearranging)

  • <= (x_n^2 - a) / 4 (why?)

This is using that (x_n^2 - a) / x_n^2 <= 1 because a >= 0 and x_n > 0.

  • x_n^2 - a <= x_0^2 / 4^n by induction on n, using the previous inequality as the induction step

  • x_n^2 - x_m^2 <= 2 x_0^2 / 4^n (for m >= n) because x_m^2 - a is less than x_0^2 / 4^n (by the previous step)

  • x_n - x_m <= 4 x_0^2 / 2^n (for m >= n) (why?)

x_n^2 - x_m^2 = (x_n - x_m) (x_n + x_m), so we want to lower bound (x_n + x_m) so we can divide by it. The larger one is x_n, so we can just use x_m >= 0 and x_n >= 1/2^n to get x_n + x_m >= 1/2^n, so x_n - x_m <= (x_n^2 - x_m^2) * 2^n. The factor of 2 is a mistake (I added an unnecessary additional term for x_m); this actually proves a bound of x_n - x_m <= 2 x_0^2 / 2^n.

  • The following definition for the sequence should work:
    resqrexlemex.seq $e |- F = seq 1 (
      ( x e. RR , y e. RR |-> ( ( x + A / x ) / 2 ) ) ,
      ( NN X. { ( ( 1 + A ) / 1 ) } ) ,
      RR ) $.

You've written (1 + A) / 1 here, which is a typo somewhere between your version and my version.
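The algebraic identities used in this comment (the AM-GM rewrite and the error recursion) are easy to spot-check symbolically; here is a sympy sketch, assuming sympy is available (this is only a check of the identities, not part of the formalization):

```python
# Symbolic check of the key identities, with x standing for x_n and f for x_(n+1).
import sympy as sp

x, a = sp.symbols('x a', positive=True)
f = (x + a / x) / 2

# AM-GM rewrite: f^2 = (x - a/x)^2 / 4 + a, hence f^2 >= a (and > a when x^2 != a)
assert sp.simplify(f**2 - ((x - a / x)**2 / 4 + a)) == 0

# error recursion: f^2 - a = (x^2 - a)^2 / (4 x^2)
assert sp.simplify((f**2 - a) - (x**2 - a)**2 / (4 * x**2)) == 0

# and (x^2 - a)^2 / (4 x^2) <= (x^2 - a) / 4 when 0 <= a <= x^2,
# since then (x^2 - a) / x^2 <= 1 (numeric spot-check at x = 3, a = 2)
lhs = ((x**2 - a)**2 / (4 * x**2)).subs({x: 3, a: 2})
rhs = ((x**2 - a) / 4).subs({x: 3, a: 2})
assert lhs <= rhs
```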

@jkingdon
Contributor Author

jkingdon commented Jul 27, 2021

Why (1 + a) / 2 instead of 1 + a?

1 + a it is. I was worried about whether the first and second terms are within 1 / 1 of each other (in terms of rate of convergence), but I'm not sure the factor of two is even enough, and the rate of convergence can be addressed in a lot of ways.

You've written (1 + A) / 1 here, which is a typo somewhere between your version and my version.

Freudian slip perhaps? In any event, I've corrected it below.

Here is the proof as it now stands with @digama0 's edits. My plan is to start proving the lemmas shown below which should flesh out any remaining confusions or questions (we are a formal math project, after all). I'll push my changes to the https://github.com/jkingdon/set.mm/tree/iset-resqrex branch as I go.

  • We are trying to show the existence of the square root of a where 0 <= a
  • Define x_(n+1) to be (x_n + a/x_n) / 2 and x_0 to be 1 + a
  • a < x_n^2 (note strict inequality which we will use later. (x_n - a/x_n)^2 / 4 + a > a as long as 0 < x_n - a/x_n, that is, a < x_n^2, which thus goes by induction with a < x_0^2 as the base case) (update: this is resqrexlemover on the branch)
  • 0 <= x_n by induction on n (update: 0 < x_n is resqrexlemf on the branch)
  • x_(n+1) < x_n ((x_n + a/x_n) / 2 < x_n iff a/x_n < x_n iff x_n^2 > a. Here's where we use a < x_n^2) (update: resqrexlemdec on the branch)
  • 1/2^n <= x_n (This one is an easy proof by induction assuming x_0 >= 1. We'll use this at the end, where we want to divide by (x_n + x_m) and will need to know that it is apart from zero) (update: resqrexlemlo on the branch)
  • x_(n+1)^2 - a = (x_n^2 - a)^2 / 4x_n^2 (by expanding the definition and rearranging) (update: resqrexlemcalc1 on the branch)
  • <= (x_n^2 - a) / 4 (using that (x_n^2 - a) / x_n^2 <= 1 because a >= 0 and x_n > 0.) (update: resqrexlemcalc2 on the branch)
  • x_n^2 - a <= x_0^2 / 4^n by induction on n, using the previous inequality as the induction step (update: resqrexlemcalc3 on the branch. Note that the branch uses 1 rather than 0 as the first index in the sequence, so 4^n becomes 4^(n-1) because of that )
  • x_n^2 - x_m^2 <= 2 x_0^2 / 4^n (for m >= n) because x_m^2 - a is less than x_0^2 / 4^n (by the previous step) (update: resqrexlemnmsq on the branch. Again the exponent is n-1. The branch proves < rather than <= because it was easy )
  • x_n - x_m <= 2 x_0^2 / 2^n (for m >= n) (x_n^2 - x_m^2 = (x_n - x_m) (x_n + x_m), so we want to lower bound (x_n + x_m) so we can divide by it. The larger one is x_n, so we can just use x_m >= 0 and x_n >= 1/2^n to get x_n + x_m >= 1/2^n, so x_n - x_m <= (x_n^2 - x_m^2) * 2^n.) (update: resqrexlemnm on the branch. Again the exponent is n-1. The branch proves < rather than <= because it was easy. Also the factor of two is back, although it is for reasons related to the n - 1, so I'm not sure it is there for the same reason as it was here previously )

This isn't exactly in the form needed to show Cauchy-ness in http://us.metamath.org/ileuni/caucvgre.html but seems pretty close, so we'll need to prove a modified version of caucvgre (where the Cauchy condition looks more like the above in terms of rate of convergence).

That proves that the limit exists. We'll also need to prove that the limit is the square root.

The sequence definition will be (update: see resqrexlemex.seq on the branch for the current one. The one which had been here in github had errors)

@jkingdon
Contributor Author

jkingdon commented Aug 7, 2021

Status report: the branch now contains a proof that the sequence converges:

MM> show statement resqrexlemcvg
$d y A z $.  $d i F j y x $.  $d ph i j y $.  $d ph z $.  $d ph x $.
30945 resqrexlemex.seq $e |- F = seq 1 ( ( y e. RR+ , z e. RR+ |-> ( ( y + ( A
      / y ) ) / 2 ) ) , ( NN X. { ( 1 + A ) } ) , RR+ ) $.
30946 resqrexlemex.a $e |- ( ph -> A e. RR ) $.
30947 resqrexlemex.agt0 $e |- ( ph -> 0 <_ A ) $.
31045 resqrexlemcvg $p |- ( ph -> E. y e. RR A. x e. RR+ E. j e. NN A. i e. (
      ZZ>= ` j ) ( ( F ` i ) < ( y + x ) /\ y < ( ( F ` i ) + x ) ) ) $= ... $.
MM> 

So what is left is to show:

  1. 0 <_ y
  2. ( y ^ 2 ) = A

The first one seems straightforward (the first hit I found was https://math.stackexchange.com/questions/1424273/let-a-n-be-a-convergent-sequence-of-positive-real-numbers-why-is-the-limit and it seems like some version of the proofs there would work in iset.mm).

The second one is, I guess, a bit trickier given the recursive definition of the sequence. Again, without really knowing much about this, a small amount of internet searching turns up arguments along the following lines: since y is the limit, it satisfies the recursion rule when substituted for both x_n and x_(n+1):

( y + ( A / y ) ) / 2 = y

which simplifies to A = y ^ 2 as desired. What is the formal justification for this procedure? At face value it would seem to be... well, dividing by zero (in the A=0 case) for one thing... but perhaps this problem goes away when this is stated in a better way? Is there a set.mm theorem which shows it or some other source which goes into how this can be formally stated (and proved)?

@digama0
Member

digama0 commented Aug 7, 2021

  • x_n >= 0 for all n, so y >= 0. This is easy to prove by contradiction, but without contradiction we can still do it if we have the lemma (A. e > 0, x >= -e) -> x >= 0. Assuming that, we want to show y >= -e for e > 0. Pick some n such that |x_n - y| < e; then x_n >= 0 implies y >= -e.

  • x_n^2 converges to y^2. This is true in general for limits since x |-> x^2 is a continuous function. More explicitly, given e > 0, |x_n^2 - y^2| = |x_n - y| |x_n + y|, and x_n is absolutely bounded by some (positive) M so |x_n + y| <= M + |y|. (The lemma about x_n being absolutely bounded is independently useful but you can also get by with eventually bounded, which follows more directly from the definition of limit - just take N such that |x_n - y| <= 1 for n >= N and then |x_n + y| <= 2|y| + 1.)

  • x_n^2 converges to a. This is easy from the previous proof, since 0 <= x_n^2 - a <= x_0^2 / 4^n.

  • Limits are unique because RR is a Hausdorff space. More explicitly, if x_n limits to a and to b, then for any e > 0, |a - b| <= e, so |a - b| <= 0 and hence a = b.
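(A purely numeric illustration of the shape of this argument, in floating point; the helper name `x_seq` and the choice a = 2 are just for illustration.)

```python
# x_n^2 gets close both to a and to (the limit y)^2, and limits are unique,
# so y^2 = a.  Floating point only; the formal version goes through the
# epsilon arguments sketched above.
def x_seq(a, n):
    x = 1.0 + a                  # x_1 = 1 + a
    for _ in range(n):
        x = (x + a / x) / 2
    return x

a = 2.0
y = x_seq(a, 60)                 # numerical stand-in for the limit y
for n in (3, 6, 10):
    xn = x_seq(a, n)
    print(n, abs(xn * xn - a), abs(xn * xn - y * y))   # both tend to 0
print(abs(y * y - a))            # ~0 up to rounding
```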

@jkingdon
Contributor Author

jkingdon commented Aug 8, 2021

* `y >= 0` . . . easy to prove by contradiction, but . . .

Contradiction works fine given http://us.metamath.org/ileuni/lenlt.html . The https://github.com/jkingdon/set.mm/tree/iset-resqrex branch now contains a proof that y >= 0 (lemma resqrexlemgt0).

* `x_n^2` converges to `y^2`. . . .
* `x_n^2` converges to `a`. . . .
* Limits are unique . . .

Hmm, there's a lot to chew on here and there might still be a significant gap between what iset.mm has to start with and what we need to formalize these arguments. But I'll see what I can do, I guess starting with the http://us.metamath.org/mpeuni/climuni.html equivalent (seems like something a bit like the proof of climuni might work given http://us.metamath.org/ileuni/apti.html ).

@david-a-wheeler
Member

@jkingdon - I think one problem is that the term "contradiction" has multiple meanings. One works in intuitionistic logic, while another doesn't.

@jkingdon
Contributor Author

@jkingdon - I think one problem is that the term "contradiction" has multiple meanings. One works in intuitionistic logic, while another doesn't.

Terminology aside, the two proofs under discussion were (1) show that y < 0 -> F., which implies 0 <= y, or (2) we can still do it if we have the lemma (A. e > 0, x >= -e) -> x >= 0. Assuming that, we want to show y >= -e for e > 0. Pick some n such that |x_n - y| < e; then x_n >= 0 implies y >= -e . The #2142 pull request formalizes (1) because it seemed a lot simpler.

As for the status report, #2142 completes the proof of the existence of square roots and there is more detail there. Thanks for all of the comments on this issue; I would have really struggled to get this one done alone.
