
Stuck on Ex. 2.5 - Prove that the sum and product of algebraic numbers is algebraic #1

Open
ArthurAllshire opened this issue Dec 7, 2018 · 20 comments
Labels
Polynomials (Exercises for Chapter 2, Polynomials) · question (Further information is requested)

Comments

@ArthurAllshire

Not really sure how to approach this proof. Help would be greatly appreciated :)

@ArthurAllshire
Author

(Also, maybe I'm missing something here but I don't think that I can tag this issue as requested because only contributors can add labels)

@j2kun added the question and Polynomials labels Dec 7, 2018
@j2kun
Contributor

j2kun commented Dec 7, 2018

This one requires more elbow grease than it appears to at first glance. I think the best vague hint I can give is to pose the question: how do you know that square-root(2) + cube-root(3) is algebraic (or, even simpler, sqrt(2) + sqrt(3))? If you can find a polynomial that proves this is algebraic, I think the steps you took would generalize to a neat proof.

Also, be warned, that a lot of the discussion on the internet about this topic is couched in flowery language about field extensions and Galois groups. Those tools provide a tidy analysis, but I think one shouldn't need any special tools to prove this.

@ArthurAllshire
Author

So obviously, to show that, say, sqrt(2) + sqrt(3) is algebraic, you have to find a polynomial with it as a root. To do this, I fiddled about by setting a generic quartic to zero, so I had
0 = a0 + a1(sqrt(2) + sqrt(3)) + a2(sqrt(2) + sqrt(3))^2 + a3(sqrt(2) + sqrt(3))^3 + a4(sqrt(2) + sqrt(3))^4
I then realized that I could choose the coefficients a2 and a4 so that the sqrt(6) terms they produce cancel, set a1 and a3 to zero, and choose a0 to cancel the rational portion, making the whole expression zero. This gives
p(x) = x^4 - 10x^2 + 1
I've tried generalizing this, but I'm not sure I'm on the right track, because this method doesn't directly depend on the numbers being algebraic (i.e. use the fact that they are algebraic to find the solution)...
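This quartic is easy to sanity-check numerically; a quick Python sketch (the tolerance just absorbs floating-point error):

```python
import math

x = math.sqrt(2) + math.sqrt(3)

# p(x) = x^4 - 10x^2 + 1 should vanish at x = sqrt(2) + sqrt(3)
print(abs(x**4 - 10 * x**2 + 1) < 1e-9)  # True
```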

@j2kun
Contributor

j2kun commented Dec 8, 2018

I think you have the right idea, but being a little less direct will help keep the reasoning focused on the general proof. When I think about sqrt(2) + sqrt(3), I notice that each part is algebraic, and both have polynomials of degree 2. So my first thought is to square that sucker.

(\sqrt 2 + \sqrt 3)^2 = (\sqrt 2)^2 + (\sqrt 3)^2 + 2 \sqrt2 \sqrt 3

The first two terms on the right are where I use that sqrt(2) and sqrt(3) are algebraic, and in this simplified example they already turn out to be integers. So I can ignore them and henceforth reduce any power of (sqrt(2) + sqrt(3)) into a power of (sqrt(2)*sqrt(3)) plus some rationals. Moreover, since a power of sqrt(2)*sqrt(3) can be reduced in each factor (for instance, (sqrt(2)sqrt(3))^3 is an integer times (sqrt(2)sqrt(3))), I really just have to worry about (sqrt(2)sqrt(3)). As you noticed, the fourth power also helps.

\begin{aligned}(\sqrt 2 + \sqrt 3)^4 &= (\sqrt 2)^4 + 4(\sqrt 2)^3(\sqrt 3) + 6 (\sqrt 2)^2 (\sqrt 3)^2 + 4(\sqrt 2)(\sqrt 3)^3 + (\sqrt 3)^4 \\ &= k + m (\sqrt 2)(\sqrt 3) \end{aligned}

Here k and m are some rational numbers that one can work out, but it doesn't matter what they are.

Using these two equations together, you can rearrange to get all the rationals and powers of (sqrt(2) + sqrt(3)) on one side, and zero on the other side, because the sqrt(2)sqrt(3) terms can be made to cancel by a suitable choice of rational coefficient, something like -m/2 if I haven't screwed up :)

As you may have noticed, we're taking advantage of my note on page 8 that a polynomial is more general than the raw definition; it can be any combination of multiplication and addition and constants (in this case the constants have to be rational numbers). While I still haven't quite revealed the proof, we're definitely taking advantage of this to make our calculations more compact.

In the general case, what can you say about similarly large powers of (a+b)?
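For the record, k and m can be worked out numerically (an illustrative sketch: here k = 49 and m = 20, so the coefficient that cancels the sqrt(6) terms is -m/2 = -10):

```python
import math

x = math.sqrt(2) + math.sqrt(3)
s6 = math.sqrt(6)

# the two key equations: x^2 = 5 + 2*sqrt(6) and x^4 = 49 + 20*sqrt(6)
print(abs(x**2 - (5 + 2 * s6)) < 1e-9)    # True
print(abs(x**4 - (49 + 20 * s6)) < 1e-9)  # True

# multiplying the x^2 equation by -10 and adding cancels the sqrt(6) terms:
# x^4 - 10*x^2 = 49 - 50 = -1, i.e. x^4 - 10*x^2 + 1 = 0
print(abs(x**4 - 10 * x**2 + 1) < 1e-9)   # True
```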

@j2kun
Contributor

j2kun commented Dec 8, 2018

I misspoke above about "reducing any power", but hopefully you still get the idea? I am trying to convey the idea that you can get from large powers of (sqrt(a) + sqrt(b)) to polynomials involving only sqrt(a)sqrt(b)

@ArthurAllshire
Author

ArthurAllshire commented Dec 9, 2018

OK, I think I sort of get how to prove this now; not sure if it's sufficiently rigorous, and it may just be wrong... (also, apologies for the horrific typography - maybe learning LaTeX is the next project)
Let a be an algebraic number which is the root of a polynomial of degree q, and b an algebraic number which is the root of a polynomial of degree r.
Now, let n = q * r. Then a^n, b^n are rational.

Now, take all of the powers (a+b)^n, …, (a+b)^0, ie
(a+b)^n = a^n + k1 * a^(n-1) + … + b^n
(a+b)^(n-1) = a^(n-1) + j1 * a^(n-2) + … + b^(n-1)
…
(a+b)^1 = a + b
(a+b)^0 = 1

Now, it is clear that through a “linear combination” using rational numbers c1...c(n-1) of the equations with powers n-1 through 1 we may eliminate all but a^n and b^n (since coefficients of each term on RHS of each equation are rational apart from a and b to their various powers). Since a^n + b^n is rational, set cn = -c0. c0, c1, … cn now form the coefficients of a polynomial p of degree n with p(a+b) = 0.

@j2kun
Contributor

j2kun commented Dec 10, 2018

You're very close, and I think what matters more than rigor is that you understand the core idea. However, in what you wrote a^n and b^n are not necessarily rational. For example, if a = 1 + cuberoot(3), then a^3 still has terms involving powers of cuberoot(3). But a^3 can be written entirely in terms of smaller powers of a.

What I think you're saying is that, if a^q can be written in terms of smaller powers, and likewise for b^r, then (a+b)^{qr} can be written in terms of {a^i b^j : i=0...q-1, j=0...r-1}.

The core idea of the proof is that these terms are "enough." I.e., as you said, the powers of (a+b) admit a linear combination that's zero.

And since you said "linear combination", I don't feel so bad looking ahead to the linear algebra chapter: for an irrational number a, the set which is the union of rational numbers and "all possible fractions involving a" (i.e., a, 1/a, (a^2 - 1)/(a+1), etc.) is a vector space with rational scalars. Call it Q(a). The dimension of Q(a) is the degree of the smallest-degree polynomial p for which p(a) = 0. Indeed, this exercise is essentially showing that the basis for Q(a+b) can be built from the bases of Q(a) and Q(b) by taking all possible products.

@j2kun
Contributor

j2kun commented Dec 12, 2018

Thanks for the discussion. I think I will close this, and if any readers have additional questions on this exercise, feel free to reopen.

@j2kun closed this as completed Dec 12, 2018
@sritchie

This discussion was really helpful in making progress, BUT I'm still stuck, so I'll drop a comment here.

First, the state of where I am after a few hours of noodling over this (love the book so far, by the way):

  1. @j2kun, I'm with you on what seems like an assumption that algebraic numbers are nth roots of rational numbers. (Of course these can be roots of polynomials with rational coefficients... but are these the ONLY things that can be algebraic numbers? I get nervous wading into the math waters that, without a compiler, I've forgotten to shore up some critical piece of the raft I'm floating on.)
  2. Given that assumption, a cube root, a^(1/3), is an algebraic number that is the root of a cubic polynomial. This is always true, correct?
  3. Okay, now the steps I've taken in the proof.

If

  • a is an algebraic number that's the root of a polynomial of degree q, and
  • b is an algebraic number that's the root of a polynomial of degree r,
    then the goal is to prove that we can construct a polynomial of degree n = (q * r) with a root equal to (a + b).

That polynomial will have the form

f(x) = 
  c0
+ c1 * (x + y)
+ c2 * (x + y)^2
+ ...
+ cn * (x + y)^n

NOW I also follow your argument here:

What I think you're saying is that, if a^q can be written in terms of smaller powers, and likewise for b^r, then (a+b)^{qr} can be written in terms of {a^i b^j : i=0...q-1, j=0...r-1}.

Because a^q can be written in terms of smaller powers, we can always expand out any a with an exponent >= q, and the same with b and r.

These new terms that we've decomposed from, say, cn * (x + y)^n can always find a place to fit down into the expansion of the polynomial terms with exponent at most max(q, r). So I get that we can write the entire polynomial f(x) as the sum of the terms in

 {a^i b^j : i=0...q-1, j=0...r-1}

each with its own rational coefficient. ( :) I don't know how to write this as math!)

OKAY! Here's where I'm stuck. How can we claim that there exist rational numbers c0, c1, ..., cn that will cause all of these reduced terms to sum to zero? I don't know how to think about this - I've been looking for a way to show what the constants should be by construction, but can't manage it.

I also don't understand this note from @ArthurAllshire:

Now, take all of the powers (a+b)^n…(a+b)^0, ie
(a+b)^n = a^n + k1 * a^(n-1) + … + b^n
(a+b)^(n-1) = a^(n-1) + j1 * a^(n-2) + … + b^(n-1)
…
(a+b)^1 = a + b
(a+b)^0 = 1

It seems like he's lost all powers of b except the final power in each line and added in constants k, j etc instead?

This final comment may be the clue that I'm not following:

Now, it is clear that through a “linear combination” using rational numbers c1...c(n-1) of the equations with powers n-1 through 1 we may eliminate all but a^n and b^n (since coefficients of each term on RHS of each equation are rational apart from a and b to their various powers). Since a^n + b^n is rational, set cn = -c0. c0, c1, … cn now form the coefficients of a polynomial p of degree n with p(a+b) = 0.

It's not clear!

Thanks again for the fantastic book. Hope this is enough information for a clue to help me along where I'm stuck.

@sritchie

(I can't re-open the issue, but I think the subscription should work anyway for notifications. Thanks!)

@ArthurAllshire
Author

ArthurAllshire commented Dec 22, 2018

Addressing your first point - the roots of polynomials with rational coefficients are the only things that can be algebraic numbers. This is not an assumption, it is the definition of an algebraic number. Now, these numbers aren't only nth roots of rational numbers - e.g. phi (the golden ratio) is a root of the equation x^2 - x - 1 = 0 and is thus algebraic.

You are right about my losing the powers of b in that snippet - I think it was a typo on my part (otherwise I'm not sure what I was thinking). The numbers k, j etc that I added are integers which represent the binomial coefficients in the expansion of those powers.

The logic in that comment sort of goes like this:
We don't actually need to construct the coefficients c0,...,cn; we only need to show that coefficients exist that make the equations sum to zero. Saying that we can express a^q in terms of smaller powers of a (and likewise b^r in terms of smaller powers of b) is the same as saying that each term in the expansion of (a+b)^k appears in more than one equation. Since the coefficients of these terms are all rational (because they are all just terms of a binomial expansion), we know that by choosing rational coefficients c0,...,cn to multiply each equation by, we can make all the terms in the different equations cancel out when we add the equations together. We don't need to know what these terms are per se to know that they actually exist.
I hope this helps!

@j2kun
Contributor

j2kun commented Dec 22, 2018

I suppose there's no point in ever closing these issues, and when they're open they're easier to discover.

@j2kun reopened this Dec 22, 2018
@sritchie

@ArthurAllshire, this plus your previous notes are really helpful! I think I've got it; the key is realizing (as you both sort of pointed out) that there's no need to actually wrangle everything into the canonical form. If you remember that you can tag constants onto any of the expanded terms, then you can choose your constants such that

  • the irrational terms from each term in the summation, like (sqrt(2)*sqrt(3)) in the example, cancel out,
  • c0 takes care of cancelling out the sum of the terms that have evaluated to rational numbers, and
  • cn isn't zero, as required by the definition of a polynomial,

and then you've shown that the polynomial exists, which shows that (a + b) is algebraic.

Thank you both!

@sritchie

And the product proof is just the same idea with fewer terms to worry about.

@j2kun
Contributor

j2kun commented Dec 22, 2018

If you want a lasting understanding of this exercise, it will help to read the first chapter on linear algebra, because it allows you to "ignore the actual coefficients" in a way that still lets you talk precisely about existence. In that language, the set of all linear combinations of powers of a is a vector space over the rational numbers, and similarly for b. These vector spaces are finite dimensional because a and b are algebraic. This is a general fact: a number is algebraic if and only if its powers span a finite-dimensional vector space over the rationals.

Now the set of all powers of (a+b) is contained in a finite-dimensional vector space: the vector space of all linear combinations of the products a^i b^j (the products a^i b^j for (i,j) = (0,0) up to (q-1, r-1) form a basis). We know this containing vector space is finite dimensional because of the algebraic-ness of a and b separately: any a^n b^m can be written in terms of smaller a^i b^j.

Since the powers of (a+b) are all contained in a (qr)-dimensional vector space, the set {1, (a+b), ... (a+b)^qr} is linearly dependent, i.e., provides a polynomial.
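To see the qr bound concretely, sympy can compute minimal polynomials of algebraic numbers directly (a sketch assuming sympy is installed; here q = 2 and r = 3, and the degree comes out to exactly qr = 6):

```python
import sympy as sp

x = sp.Symbol('x')
a = sp.sqrt(2)   # root of t^2 - 2, so q = 2
b = sp.cbrt(3)   # root of t^3 - 3, so r = 3

p = sp.minimal_polynomial(a + b, x)
print(sp.degree(p, x))  # 6

# sanity check: p vanishes at a + b (numerically, to avoid heavy simplification)
print(abs(sp.N(p.subs(x, a + b))) < 1e-8)  # True
```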

@tabidots

tabidots commented Jan 20, 2019

How about the next assertion about pi and e? Most of what has been written above is beyond my understanding, but I'm wondering if the first proof is supposed to forge a path to approach the second proof.

My answer to the first part was not very rigorous. This is what I wrote:

A polynomial encompasses the operations of addition, multiplication, and exponentiation. To find the root of a polynomial is to perform the “opposite” of these operations.

Addition and multiplication are commutative, so their opposite is themselves; the opposite of exponentiation is to take a root.

It would seem from the above work (i.e., my answers to 2.4) that any number made from combinations of these “opposite” operations performed on rational numbers is the root of some polynomial and is therefore algebraic.

Therefore, for any two algebraic numbers, their sum and their product are both algebraic as well.

The “above work” refers to the method I used to find the polynomial whose root was the golden ratio: writing the operations to get the golden ratio from 0 in Clojure and working through them backwards, like this:

(/ 2              ; 2x
  (+ 1            ; - 1
    (sqrt         ; all of that ^2
      (+ 5 0))))  ; - 5
; therefore phi is the root of (2x - 1)^2 - 5
; this simplifies to 4x^2 - 4x - 4

I don't even know if this is in the ballpark, but the problem is that the number itself has to be “decomposable” to use this method, and pi and e are not decomposable through addition/multiplication/exponentiation as far as I know.
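The backwards construction can at least be sanity-checked numerically (a quick sketch; note that (2x - 1)^2 - 5 expands to 4x^2 - 4x - 4, i.e. 4(x^2 - x - 1)):

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio

# phi should be a root of (2x - 1)^2 - 5 and of its expansion 4x^2 - 4x - 4
print(abs((2 * phi - 1) ** 2 - 5) < 1e-9)    # True
print(abs(4 * phi**2 - 4 * phi - 4) < 1e-9)  # True
```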

@j2kun
Contributor

j2kun commented Jan 21, 2019

For the pi+e and pi*e part, there are two steps: (1) prove that a number which is the root of a polynomial whose coefficients are algebraic is also algebraic, and (2) construct a polynomial whose roots are pi and e, and whose coefficients can be expressed in terms of pi+e and pi*e.

(2) is easy, I think, and (1) is some more work.
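A sketch of step (2): the quadratic (x - pi)(x - e) = x^2 - (pi + e)x + pi*e has pi and e as its roots, and its coefficients involve only pi + e and pi * e. Checked numerically:

```python
import math

s = math.pi + math.e  # coefficient of -x
p = math.pi * math.e  # constant term

# both pi and e are roots of x^2 - s*x + p
for root in (math.pi, math.e):
    print(abs(root**2 - s * root + p) < 1e-9)  # True
```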

In retrospect I may have misjudged the difficulty of this problem for a first chapter exercise :)

I think I may revise it in the second edition.

@ciniglio

ciniglio commented Aug 6, 2021

Hi there,

I've spent some hours on trying to prove that a+b is algebraic and trying to understand the comments above, but I'm still having trouble making something click.

I think where I'm getting lost is in this section where @sritchie is quoting @j2kun, perhaps because I'm not sure what it means concretely:

NOW I also follow your argument here:

What I think you're saying is that, if a^q can be written in terms of smaller powers, and likewise for b^r, then (a+b)^{qr} can be written in terms of {a^i b^j : i=0...q-1, j=0...r-1}.

Because a^q can be written in terms of smaller powers, we can always expand out any a with an exponent >= q, and the same with b and r.

These new terms that we've decomposed from, say, cn * (x + y)^n can always find a place to fit down into the expansion of the polynomial terms with exponent at most max(q, r). So I get that we can write the entire polynomial f(x) as the sum of the terms in

Given an example like a = sqrt(2) and b = cuberoot(3), I'm not sure what it means to say that e.g. b^3 (= 3) can be written in terms of smaller powers and therefore, I'm not sure why that helps with e.g. (a+b)^6.

I think I understand how to mechanically solve this particular instance (i.e. compute (a+b)^i for i in 0...6, then combine those until you land at 0). But I'm not sure why or how that generalizes to a proof. Particularly when looking at an a that is not just a root (e.g. phi).

Any help would be greatly appreciated!

@j2kun
Contributor

j2kun commented Aug 8, 2021

So with phi, even though it's not just a root, the ideas are similar: e.g., (phi + b)^2 = phi^2 + 2*b*phi + b^2 = phi + 1 + 2*b*phi + b^2

and so you can write the parts involving phi in terms of smaller powers of phi.
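That reduction can be iterated mechanically; a small Python sketch that rewrites phi^n as c1*phi + c0 using only the relation phi^2 = phi + 1 (the coefficients happen to be Fibonacci numbers):

```python
import math

phi = (1 + math.sqrt(5)) / 2

def reduced_power(n):
    # phi^n = c1*phi + c0; multiplying by phi and using phi^2 = phi + 1
    # gives phi^(n+1) = (c1 + c0)*phi + c1
    c1, c0 = 0, 1  # phi^0 = 0*phi + 1
    for _ in range(n):
        c1, c0 = c1 + c0, c1
    return c1, c0

c1, c0 = reduced_power(5)
print((c1, c0))                              # (5, 3), i.e. phi^5 = 5*phi + 3
print(abs(phi**5 - (c1 * phi + c0)) < 1e-9)  # True
```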

The difficulty in this example, as you surely realize, is how do you get the crossing-terms (b * phi) to go away or be expressed in terms of (phi + b)? This is the part that makes this too hard for a first chapter, and again I apologize for the distress it's caused.

But to continue down this thread: if you take a large power of (a+b)^(n*m), and you know that a has a degree-n polynomial and b has a degree-m polynomial, then when you multiply that out, expand the terms, you'll get a polynomial in terms of the products (a^i b^j) for i=0..n-1 and j=0..m-1.

Thinking about these a^i b^j products as entries of a vector space, we can ask what happens when you multiply each one by (a+b). E.g., a will map to a^2 + ab, etc. If the resulting power of a or b gets larger than n or m (resp.), then you can apply the polynomial to bring the powers back down. If you do this for all of the products a^i b^j, and you write the resulting coefficients in the columns of a matrix, then you get a matrix that represents the mapping "multiply by (a+b)" as applied to any linear combination of the products a^i b^j with rational coefficients.

This matrix is square, and hence it has a characteristic polynomial (a polynomial in which the matrix is substituted for the variable), which has the matrix itself as a root. You can compute it as a determinant, det(lambda*I - A). That this holds for every square matrix is known as the Cayley-Hamilton theorem.

In other words, the operation "multiply by (a+b)" can be applied repeatedly (and scaled at each step), and if you do that according to the characteristic equation of the matrix, you will get zero no matter what input you start with (including, say, 1).

Perhaps another way to think about it is as follows. Let the rows of a matrix be indexed by the products a^i b^j as before, and the columns indexed by powers of (a+b)^k, k=0, ..., n*m. Then in each column write down the coefficients of (a+b)^k as they are expanded in terms of a^i b^j. Think of this as a mapping from the vector space with basis {(a+b)^k} to basis {a^i b^j}. You're looking for an input vector to this mapping that evaluates to zero (fairly sure this will work simply because there are more columns than rows). That vector will provide the coefficients of your (a+b)^k terms to show it's algebraic. A proof would require proving this can always be done. I would recommend trying this with phi and sqrt(2), say, and then use a software package to find a vector that maps to zero.
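For a = sqrt(2), b = sqrt(3), that linear-dependence search can be carried out exactly with standard-library rationals. A minimal sketch (the field arithmetic is hand-coded for this one case): elements of Q(sqrt2, sqrt3) are coefficient 4-tuples over the basis (1, sqrt2, sqrt3, sqrt6), the columns of the matrix are the coordinates of (a+b)^k for k = 0..4, and Gauss-Jordan elimination finds the vector that maps to zero:

```python
from fractions import Fraction

def mul(u, v):
    # multiply two elements given as coefficients over (1, sqrt2, sqrt3, sqrt6);
    # uses sqrt2*sqrt6 = 2*sqrt3, sqrt3*sqrt6 = 3*sqrt2, sqrt6*sqrt6 = 6, etc.
    a0, a1, a2, a3 = u
    b0, b1, b2, b3 = v
    return (
        a0*b0 + 2*a1*b1 + 3*a2*b2 + 6*a3*b3,  # rational part
        a0*b1 + a1*b0 + 3*(a2*b3 + a3*b2),    # sqrt2 part
        a0*b2 + a2*b0 + 2*(a1*b3 + a3*b1),    # sqrt3 part
        a0*b3 + a3*b0 + a1*b2 + a2*b1,        # sqrt6 part
    )

one = tuple(Fraction(n) for n in (1, 0, 0, 0))
x = tuple(Fraction(n) for n in (0, 1, 1, 0))  # sqrt2 + sqrt3

# columns of the matrix: coordinates of x^k for k = 0..4
cols, p = [], one
for _ in range(5):
    cols.append(p)
    p = mul(p, x)

# five vectors in a 4-dimensional space are linearly dependent;
# Gauss-Jordan elimination over the rationals finds the dependency exactly
rows = [[cols[k][i] for k in range(5)] for i in range(4)]
pivots, r = [], 0
for c in range(5):
    piv = next((i for i in range(r, 4) if rows[i][c] != 0), None)
    if piv is None:
        continue
    rows[r], rows[piv] = rows[piv], rows[r]
    rows[r] = [v / rows[r][c] for v in rows[r]]
    for i in range(4):
        if i != r and rows[i][c] != 0:
            f = rows[i][c]
            rows[i] = [vi - f * vr for vi, vr in zip(rows[i], rows[r])]
    pivots.append(c)
    r += 1

# set the free variable to 1 and read off the rest of the nullspace vector
free = next(c for c in range(5) if c not in pivots)
coeffs = [Fraction(0)] * 5
coeffs[free] = Fraction(1)
for row, c in zip(rows, pivots):
    coeffs[c] = -row[free]

print([int(v) for v in coeffs])  # [1, 0, -10, 0, 1], i.e. x^4 - 10x^2 + 1 = 0
```

The same recipe works for any pair of algebraic numbers once you can do arithmetic in the span of the products a^i b^j; only the multiplication table changes.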

@forketyfork

Stuck on this exercise too 😄 At some point in the discussion here, there appears an idea that if a and b are algebraic, and they are roots of polynomials of degree q and r respectively, then for (a + b) we try to construct a polynomial of degree q * r. I lack the intuition for this step; where does this multiplication of polynomial degrees come from?

Also, I still have the same question that appeared several times throughout the discussion regarding the "reduction of power":

So with phi, even though it's not just a root, the ideas are similar: e.g., (phi + b)^2 = phi^2 + 2*b*phi + b^2 = phi + 1 + 2*b*phi + b^2

and so you can write the parts involving phi in terms of smaller powers of phi.

But it looks like you're using the fact that phi^2=phi+1 to reduce its power, which is a pretty unique property. It's not clear to me that such "reduction of power" can be done with any algebraic root, like, how are you sure you could reduce the power for (cubic_root(13) + power_59_root(81))^92? And this intuition seems to be crucial for the proof.

As a side note, using the same word "root" to mean two completely different things (root of a polynomial and an algebraic operation with a number) really complicates the reasoning, I wish mathematicians would have come up with different words for those 😄
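On the reduction-of-power worry: it only uses the defining polynomial, nothing special about phi. A quick numeric sketch with cuberoot(13), a root of t^3 - 13, so any occurrence of c^3 can be replaced by 13:

```python
c = 13 ** (1 / 3)  # cuberoot(13)

# c^3 = 13, so higher powers reduce: c^5 = 13*c^2, c^7 = 13^2 * c, and so on
print(abs(c**5 - 13 * c**2) < 1e-9)  # True
print(abs(c**7 - 169 * c) < 1e-9)    # True
```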
