
# jimmysong / programmingbitcoin


## Finite Fields

One of the most difficult things about learning how to program Bitcoin is knowing where to start. There are so many components that depend on each other that learning one thing may lead you to have to learn another, which in turn may lead you to need to learn something else before you can understand the original thing.

This chapter is going to get you off to a more manageable start. It may seem strange, but we’ll start with the basic math that you need to understand elliptic curve cryptography. Elliptic curve cryptography, in turn, gives us the signing and verification algorithms. These are at the heart of how transactions work, and transactions are the atomic unit of value transfer in Bitcoin. By learning about finite fields and elliptic curves first, you’ll get a firm grasp of concepts that you’ll need to progress logically.

Be aware that this chapter and the next two chapters may feel a bit like you’re eating vegetables, especially if you haven’t done formal math in a long time. I would encourage you to get through them, though, as the concepts and code presented here will be used throughout the book.

### Learning Higher-Level Math

Learning about new mathematical structures can be a bit intimidating, and in this chapter, I hope to dispel the myth that high-level math is difficult. Finite fields, in particular, don’t require all that much more in terms of prior mathematical knowledge than, say, algebra.

Think of finite fields as something that you could have learned instead of trigonometry, except that the education system you’re a part of decided that trigonometry was more important for you to learn. This is my way of telling you that finite fields are not that hard to learn and require no more background than algebra.

This chapter is required if you want to understand elliptic curve cryptography. Elliptic curve cryptography is required for understanding signing and verification, which is at the heart of Bitcoin itself. As I’ve said, this chapter and the next two may feel a bit unrelated, but I encourage you to endure. The fundamentals here will not only make understanding Bitcoin a lot easier, but also make understanding Schnorr signatures, confidential transactions, and other leading-edge Bitcoin technologies easier.

### Finite Field Definition

Mathematically, a finite field is defined as a finite set of numbers and two operations + (addition) and ⋅ (multiplication) that satisfy the following:

1. If a and b are in the set, a + b and ab are in the set. We call this property closed.

2. 0 exists and has the property a + 0 = a. We call this the additive identity.

3. 1 exists and has the property a ⋅ 1 = a. We call this the multiplicative identity.

4. If a is in the set, –a is in the set, which is defined as the value that makes a + (–a) = 0. This is what we call the additive inverse.

5. If a is in the set and is not 0, a^(–1) is in the set, which is defined as the value that makes a ⋅ a^(–1) = 1. This is what we call the multiplicative inverse.

Let’s unpack each of these criteria.

We have a set of numbers that’s finite. Because the set is finite, we can designate a number p, which is how big the set is. This is what we call the order of the set.

#1 says we are closed under addition and multiplication. This means that we have to define addition and multiplication in a way that ensures the results stay in the set. For example, a set containing {0,1,2} is not closed under addition, since 1 + 2 = 3 and 3 is not in the set; neither is 2 + 2 = 4. Of course we can define addition a little differently to make this work, but using "normal" addition, this set is not closed. On the other hand, the set {–1,0,1} is closed under normal multiplication. Any two numbers can be multiplied (there are nine such combinations), and the result is always in the set.

The other option we have in mathematics is to define addition and multiplication in a particular way to make these sets closed. We’ll get to how exactly we define addition and multiplication later in this chapter, but the key concept here is that we can define addition and multiplication differently than the addition and multiplication you are familiar with.
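To make the closure idea concrete, here is a small Python check (the `is_closed` helper is just for illustration, not the book’s code) showing that {0, 1, 2} is not closed under normal addition but becomes closed once we redefine addition with a modulo:

```python
# Check whether a set is closed under a binary operation.
def is_closed(s, op):
    return all(op(a, b) in s for a in s for b in s)

print(is_closed({0, 1, 2}, lambda a, b: a + b))        # False: 1 + 2 = 3 is outside
print(is_closed({-1, 0, 1}, lambda a, b: a * b))       # True: all 9 products stay inside
print(is_closed({0, 1, 2}, lambda a, b: (a + b) % 3))  # True once addition is redefined
```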

#2 and #3 mean that we have the additive and multiplicative identities. That means 0 and 1 are in the set.

#4 means that we have the additive inverse. That is, if a is in the set, –a is in the set. Using the additive inverse, we can define subtraction.

#5 means that multiplication has the same property. If a is in the set, a^(–1) is in the set; that is, a ⋅ a^(–1) = 1. Using the multiplicative inverse, we can define division. This will be the trickiest to define in a finite field.

### Defining Finite Sets

If the order (or size) of the set is p, we can call the elements of the set 0, 1, 2, …, p–1. These numbers are what we call the elements of the set, not necessarily the traditional numbers 0, 1, 2, 3, etc. They behave in many ways like traditional numbers, but have some differences in how we add, subtract, multiply, and so forth.

In math notation the finite field set looks like this:

• Fp = {0, 1, 2, ... p–1}

What’s in the finite field set are the elements. Fp is a specific finite field called "field of p" or "field of 29" or whatever the size of it is (again, the size is what mathematicians call order). The numbers between the {}s represent what elements are in the field. We name the elements 0, 1, 2, etc. because these names are convenient for our purposes.

A finite field of order 11 looks like this:

• F11 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

A finite field of order 17 looks like this:

• F17 = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}

A finite field of order 983 looks like this:

• F983 = {0, 1, 2, ... 982}

Notice the order of the field is always 1 more than the largest element. You might have noticed that the field has a prime order every time. For a variety of reasons that will become clear later, it turns out that fields must have an order that is a power of a prime, and that the finite fields whose order is prime are the ones we’re interested in.

#### Constructing a Finite Field in Python

We want to represent each finite field element, so in Python, we’ll be creating a class that represents a single finite field element. Naturally, we’ll name the class `FieldElement`.

The class represents an element in a field F_prime. The bare bones of the class look like this:

`link:code-ch01/ecc.py[]`
1. We first check that `num` is between `0` and `prime-1` inclusive. If not, we get an invalid `FieldElement` and we raise a `ValueError`, which is what we should raise when we get an inappropriate value.

2. The rest of the `__init__` method assigns the initialization values to the object.

3. The `__eq__` method checks if two objects of class `FieldElement` are equal. This is only true when the `num` and `prime` properties are equal.

What we’ve defined already allows us to do this:

`link:code-ch01/examples.py[]`

Python allows us to override the `==` operator on `FieldElement` with the `__eq__` method, which is something we’ll be taking advantage of going forward.
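For readers without the repository handy, here is a minimal sketch consistent with the three numbered points above (the exact error message is an illustrative choice, not necessarily the book’s):

```python
class FieldElement:

    def __init__(self, num, prime):
        # 1: reject values outside the range 0..prime-1
        if num >= prime or num < 0:
            raise ValueError(
                'Num {} not in field range 0 to {}'.format(num, prime - 1))
        # 2: assign the initialization values to the object
        self.num = num
        self.prime = prime

    def __eq__(self, other):
        # 3: equal only when both num and prime match
        if other is None:
            return False
        return self.num == other.num and self.prime == other.prime


a = FieldElement(7, 13)
b = FieldElement(6, 13)
print(a == b)  # False
print(a == a)  # True
```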

You can see this in action in the code that accompanies this book. Once you’ve set up Jupyter Notebook (see [setting_up]), you can navigate to code-ch01/Chapter1.ipynb and run the code to see the results. For the next exercise, you’ll want to open up ecc.py by clicking the link in the Exercise 1 box. If you get stuck, please remember that the answers to every exercise are in [appendix_solutions].

### Modulo Arithmetic

One of the tools we can use to make a finite field closed under addition, subtraction, multiplication, and division is something called modulo arithmetic.

We can define addition on the finite set using modulo arithmetic, which is something you probably learned when you first learned division. Remember problems like the one in Long division example 1?

Figure 1. Long division example 1

Whenever the division wasn’t even, there was something called the "remainder," which is the amount left over from the actual division. We define modulo in the same way. We use the operator % for modulo:

• 7 % 3 = 1

Long division example 2 shows another example.

Figure 2. Long division example 2

Formally speaking, the modulo operation is the remainder after division of one number by another. Let’s look at another example with larger numbers:

• 1747 % 241 = 60

If it helps, you can think of modulo arithmetic as "wraparound" or "clock" math. Imagine a problem like this:

• It is currently 3 o'clock. What hour will it be 47 hours from now?

The answer is 2 o’clock because (3 + 47) % 12 = 2 (see Clock going forward 47 hours).

Figure 3. Clock going forward 47 hours

We can also see this as "wrapping around" in the sense that we go past 0 every time we move ahead 12 hours.

We can perform modulo on negative numbers. For example, you can ask:

• It is currently 3 o'clock. What hour was it 16 hours ago?

• (3 – 16) % 12 = 11

The minute hand is also a modulo operation. For example, you can ask:

• It is currently 12 minutes past the hour. What minute will it be 843 minutes from now?

It will be 15 minutes past the hour:

• (12 + 843) % 60 = 15

• It is currently 23 minutes past the hour. What minute will it be 97 minutes from now?

In this case, the answer is 0:

• (23 + 97) % 60 = 0

0 is another way of saying there is no remainder.

The result of the modulo (%) operation for minutes is always between 0 and 59, inclusive. This happens to be a very useful property as even very large numbers can be brought down to a relatively small range with modulo:

• 14738495684013 % 60 = 33

We’ll be using modulo as we define field arithmetic. Most operations in finite fields use the modulo operator in some capacity.

#### Modulo Arithmetic in Python

Python uses the `%` operator for modulo arithmetic. Here is how the modulo operator is used:

`link:code-ch01/examples.py[]`

We can also use the modulo operator on negative numbers, like this:

`link:code-ch01/examples.py[]`
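The linked examples are not reproduced here, but the behavior is easy to see directly; note that Python’s `%` always returns a non-negative result for a positive divisor, which is exactly the wraparound behavior we want:

```python
print(7 % 3)          # 1
print(1747 % 241)     # 60
print((3 + 47) % 12)  # 2, the clock going forward 47 hours
print((3 - 16) % 12)  # 11, the clock going back 16 hours
print(-27 % 13)       # 12: negative numbers wrap around too
```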

### Finite Field Addition and Subtraction

Remember that we need to define finite field addition such that we ensure the result is still in the set. That is, we want to make sure that addition in a finite field is closed.

We can use what we just learned, modulo arithmetic, to make addition closed. Let’s say we have a finite field of 19:

• F19 = {0, 1, 2, ... 18}

where a, b ∈ F19. Note that the symbol ∈ means "is an element of." In our case, a and b are elements of F19.

• a +f b ∈ F19

We denote finite field addition with +f to avoid confusion with normal integer addition, +.

If we utilize modulo arithmetic, we can guarantee this to be the case. We can define a +f b this way:

• a +f b = (a + b) % 19

For example:

• 7 +f 8 = (7 + 8) % 19 = 15
• 11 +f 17 = (11 + 17) % 19 = 9

and so on.

We take any two numbers in the set, add, and "wrap around" the end to get the sum. We are creating our own addition operator here and the result is a bit unintuitive. After all, 11 +f 17 = 9 just doesn’t look right because we’re not used to finite field addition.
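A quick sanity check of the examples above, using plain integers (the `field_add` helper is hypothetical, not the book’s code):

```python
def field_add(a, b, p=19):
    # Finite field addition: add, then wrap around with modulo p
    return (a + b) % p

print(field_add(7, 8))    # 15
print(field_add(11, 17))  # 9
```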

More generally, we define field addition this way:

• a +f b = (a + b) % p

where a, b ∈ Fp.

We also define the additive inverse this way. a ∈ Fp implies that –f a ∈ Fp:

• –f a = (–a) % p

Again, for clarity, we use –f to distinguish field subtraction and negation from integer subtraction and negation.

In F19:

• –f 9 = (–9) % 19 = 10

which means that:

• 9 +f 10 = 0

And that turns out to be true.

Similarly, we can do field subtraction:

• a –f b = (a – b) % p

where a, b ∈ Fp.

In F19:

• 11 –f 9 = (11 – 9) % 19 = 2
• 6 –f 13 = (6 – 13) % 19 = 12

and so on.
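These negation and subtraction examples can be verified directly with Python’s `%` operator:

```python
p = 19
print((-9) % p)      # 10, the additive inverse of 9 in F19
print((9 + 10) % p)  # 0, confirming 9 +f 10 = 0
print((11 - 9) % p)  # 2
print((6 - 13) % p)  # 12
```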

#### Coding Addition and Subtraction in Python

In the class `FieldElement` we can now define `__add__` and `__sub__` methods. The idea of these methods is that we want something like this to work:

`link:code-ch01/examples.py[]`

In Python we can define what addition (or the `+` operator) means for our class with the `__add__` method. So how do we do this? We combine what we learned with modulo arithmetic and create a new method of the class `FieldElement` like so:

`link:code-ch01/ecc.py[]`
1. We have to ensure that the elements are from the same finite field, otherwise this calculation doesn’t have any meaning.

2. Addition in a finite field is defined with the modulo operator, as explained earlier.

3. We have to return an instance of the class, which we can conveniently access with `self.__class__`. We pass the two initializing arguments, `num` and `self.prime`, to the `__init__` method we saw earlier.

Note that we could use `FieldElement` instead of `self.__class__`, but this would not make the method easily inheritable. We will be subclassing `FieldElement` later, so making the method inheritable is important here.
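Putting the pieces together, a sketch of `__add__` and `__sub__` consistent with the numbered points might look like this (the `TypeError` message is an illustrative choice):

```python
class FieldElement:

    def __init__(self, num, prime):
        if num >= prime or num < 0:
            raise ValueError(
                'Num {} not in field range 0 to {}'.format(num, prime - 1))
        self.num = num
        self.prime = prime

    def __eq__(self, other):
        if other is None:
            return False
        return self.num == other.num and self.prime == other.prime

    def __add__(self, other):
        # 1: both elements must come from the same finite field
        if self.prime != other.prime:
            raise TypeError('Cannot add two numbers in different Fields')
        # 2: field addition is integer addition followed by modulo
        num = (self.num + other.num) % self.prime
        # 3: return an instance via self.__class__ so subclasses work
        return self.__class__(num, self.prime)

    def __sub__(self, other):
        if self.prime != other.prime:
            raise TypeError('Cannot subtract two numbers in different Fields')
        num = (self.num - other.num) % self.prime
        return self.__class__(num, self.prime)


a = FieldElement(7, 13)
b = FieldElement(12, 13)
print((a + b).num)  # (7 + 12) % 13 = 6
print((a - b).num)  # (7 - 12) % 13 = 8
```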

### Finite Field Multiplication and Exponentiation

Just as we defined a new addition (+f) for finite fields that was closed, we can also define a new multiplication for finite fields that’s closed. By multiplying the same number many times, we can also define exponentiation. In this section, we’ll go through exactly how to define this using modulo arithmetic.

• 5 ⋅ 3 = 5 + 5 + 5 = 15
• 8 ⋅ 17 = 8 + 8 + 8 + ... (17 total 8's) ... + 8 = 136

We can define multiplication on a finite field the same way. Operating in F19 once again:

• 5 ⋅f 3 = 5 +f 5 +f 5
• 8 ⋅f 17 = 8 +f 8 +f 8 +f ... (17 total 8's) ... +f 8

We already know how to do the right side, and that yields a number within the F19 set:

• 5 ⋅f 3 = 5 +f 5 +f 5 = 15 % 19 = 15
• 8 ⋅f 17 = 8 +f 8 +f 8 +f ... (17 total 8's) ... +f 8 = (8⋅17) % 19 = 136 % 19 = 3

Note that the second result is pretty unintuitive. We don’t normally think of 8 ⋅f 17 = 3, but that’s part of what’s necessary in order to define multiplication to be closed. That is, the result of field multiplication is always in the set {0, 1, …​ p–1}.
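We can confirm in Python that repeated field addition and the multiply-then-modulo shortcut agree:

```python
p = 19
total = 0
for _ in range(17):
    # 8 +f 8 +f ... seventeen times
    total = (total + 8) % p
print(total)         # 3
print((8 * 17) % p)  # 3, the shortcut: multiply, then reduce
print((5 * 3) % p)   # 15
```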

Exponentiation is simply multiplying a number many times:

• 7^3 = 7 ⋅ 7 ⋅ 7 = 343

In a finite field, we can do exponentiation using modulo arithmetic.

In F19:

• 7^3 = 343 % 19 = 1
• 9^12 = 7

Exponentiation again gives us counterintuitive results. We don’t normally think 7^3 = 1 or 9^12 = 7. Again, finite fields have to be defined so that the operations always result in a number within the field.

Note: Why Prime Fields Are Useful

The answer to Exercise 5 is why we choose to use finite fields with a prime number of elements. No matter what k you choose, as long as it’s greater than 0, multiplying the entire set by k will result in the same set as you started with. Intuitively, the fact that we have a prime order results in every element of a finite field being equivalent. If the order of the set were a composite number, multiplying the set by one of the divisors would result in a smaller set.
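The claim in the note is easy to check: multiplying every element of F19 by a nonzero k and reducing mod 19 gives back the whole set:

```python
p = 19
for k in range(1, p):
    # Because p is prime, every k permutes the set rather than shrinking it
    s = sorted((k * i) % p for i in range(p))
    assert s == list(range(p))
print('every k from 1 to {} gives back the full set'.format(p - 1))
```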

#### Coding Multiplication in Python

Now that we understand what multiplication should be in `FieldElement`, we want to define the `__mul__` method that overrides the `*` operator. We want this to work:

`link:code-ch01/examples.py[]`

As with addition and subtraction, the next exercise is to make multiplication work for our class by defining the `__mul__` method.
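If you want to check your work afterward, one possible `__mul__` (shown here as a standalone sketch rather than the book’s own solution) mirrors `__add__` exactly:

```python
class FieldElement:

    def __init__(self, num, prime):
        if num >= prime or num < 0:
            raise ValueError(
                'Num {} not in field range 0 to {}'.format(num, prime - 1))
        self.num = num
        self.prime = prime

    def __mul__(self, other):
        # Same pattern as __add__: multiply, then reduce modulo prime
        if self.prime != other.prime:
            raise TypeError('Cannot multiply two numbers in different Fields')
        num = (self.num * other.num) % self.prime
        return self.__class__(num, self.prime)


a = FieldElement(3, 13)
b = FieldElement(12, 13)
print((a * b).num)  # (3 * 12) % 13 = 10
```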

#### Coding Exponentiation in Python

We need to define exponentiation for `FieldElement`, which in Python can be defined with the `__pow__` method, overriding the `**` operator. The difference here is that the exponent is not a `FieldElement`, so it has to be treated a bit differently. We want something like this to work:

`link:code-ch01/examples.py[]`

Note that because the exponent is an integer, instead of another instance of `FieldElement`, the method receives the variable `exponent` as an integer. We can code it this way:

```python
class FieldElement:
    ...

    def __pow__(self, exponent):
        num = (self.num ** exponent) % self.prime  # (1)
        return self.__class__(num, self.prime)  # (2)
```
1. This is a perfectly fine way to do it, but `pow(self.num, exponent, self.prime)` is more efficient.

2. We have to return an instance of the class as before.

Why don’t we force the exponent to be a `FieldElement` object? It turns out that the exponent doesn’t have to be a member of the finite field for the math to work. In fact, if it were, the exponents wouldn’t display the intuitive behavior we expect, like being able to add the exponents when we multiply with the same base.

Some of what we’re doing now may seem slow for large numbers, but we’ll use some clever tricks to improve the performance of these algorithms.

### Finite Field Division

The intuition that helps us with addition, subtraction, multiplication, and perhaps even exponentiation unfortunately doesn’t help us quite as much with division. Because division is the hardest operation to make sense of, we’ll start with something that should make sense.

In normal math, division is the inverse of multiplication:

• 7 ⋅ 8 = 56 implies that 56/8 = 7

• 12 ⋅ 2 = 24 implies that 24/12 = 2

And so on. We can use this as the definition of division to help us. Note that like in normal math, you cannot divide by 0.

In F19, we know that:

• 3 ⋅f 7 = 21 % 19 = 2 implies that 2 /f 7 = 3
• 9 ⋅f 5 = 45 % 19 = 7 implies that 7 /f 5 = 9

This is very unintuitive, as we generally think of 2 /f 7 or 7 /f 5 as fractions, not nice finite field elements. Yet that is one of the remarkable things about finite fields: finite fields are closed under division. That is, dividing any two numbers where the denominator is not 0 will result in another finite field element.

The question you might be asking yourself is, how do I calculate 2 /f 7 if I don’t know beforehand that 3 ⋅f 7 = 2? This is indeed a very good question; to answer it, we’ll have to use the result from Exercise 7.

In case you didn’t get it, the answer is that n^(p–1) is always 1 for every p that is prime and every n > 0. This is a beautiful result from number theory called Fermat’s little theorem. Essentially, the theorem says:

• n^(p–1) % p = 1

where p is prime.

Since we are operating in prime fields, this will always be true.
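You can spot-check Fermat’s little theorem with Python’s three-argument `pow`:

```python
p = 19
for n in (1, 2, 7, 11, 18):
    # n^(p-1) % p should always be 1 when p is prime and n > 0
    print(pow(n, p - 1, p))  # 1 every time
```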

Fermat’s Little Theorem

There are many proofs of this theorem, but perhaps the simplest is using what we saw in Exercise 5—namely, that these sets are equal:

• {1, 2, 3, ..., p–2, p–1} = {n % p, 2n % p, 3n % p, ..., (p–2)n % p, (p–1)n % p}

The resulting numbers might not be in the right order, but the same numbers are in both sets. We can then multiply every element in both sets to get this equality:

• 1 ⋅ 2 ⋅ 3 ⋅ ... ⋅ (p–2) ⋅ (p–1) % p = n ⋅ 2n ⋅ 3n ⋅ ... ⋅ (p–2)n ⋅ (p–1)n % p

The left side is the same as (p–1)! % p where ! is the factorial (e.g., 5! = 5 ⋅ 4 ⋅ 3 ⋅ 2 ⋅ 1). On the right side, we can gather up all the n’s and get:

• (p–1)! ⋅ n^(p–1) % p

Thus:

• (p–1)! % p = (p–1)! ⋅ n^(p–1) % p

The (p–1)! on both sides cancel, giving us:

• 1 = n^(p–1) % p

This proves Fermat’s little theorem.

Because division is the inverse of multiplication, we know:

• a/b = a ⋅f (1/b) = a ⋅f b^(–1)

We can reduce the division problem to a multiplication problem as long as we can figure out what b–1 is. This is where Fermat’s little theorem comes into play. We know:

• b^(p–1) = 1

because p is prime. Thus:

• b^(–1) = b^(–1) ⋅f 1 = b^(–1) ⋅f b^(p–1) = b^(p–2)

or:

• b^(–1) = b^(p–2)

In F19, this means practically that b^18 = 1, which means that b^(–1) = b^17 for all b > 0.

So in other words, we can calculate the inverse using the exponentiation operator. In F19:

• 2/7 = 2 ⋅ 7^(19–2) = 2 ⋅ 7^17 = 465261027974414 % 19 = 3
• 7/5 = 7 ⋅ 5^(19–2) = 7 ⋅ 5^17 = 5340576171875 % 19 = 9

This is a relatively expensive calculation because exponentiation grows very fast; division is the most expensive operation for that reason. To lessen the expense, we can use the `pow` function in Python, which does exponentiation. In Python, `pow(7, 17)` does the same thing as `7**17`. The `pow` function, however, has an optional third argument that makes our calculation more efficient. Specifically, `pow` will modulo by the third argument. Thus, `pow(7, 17, 19)` gives the same result as `7**17 % 19` but does so faster because the modulo operation is done after each round of multiplication.
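Here is the division example above done the efficient way, with three-argument `pow`:

```python
p = 19
# b^(-1) = b^(p-2) mod p by Fermat's little theorem, so a/b = a * b^(p-2) mod p
print(2 * pow(7, p - 2, p) % p)  # 3, i.e. 2 /f 7 = 3
print(7 * pow(5, p - 2, p) % p)  # 9, i.e. 7 /f 5 = 9
```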

### Redefining Exponentiation

One last thing that we need to take care of before we leave this chapter is the `__pow__` method, which needs to handle negative exponents. For example, a^(–3) needs to be a finite field element, but the current code does not take care of this case. We want, for example, something like this to work:

`link:code-ch01/examples.py[]`

Unfortunately, the way we’ve defined `__pow__` simply doesn’t handle negative exponents, because the second parameter of the built-in Python function `pow` is required to be non-negative when a modulus is given.

Thankfully, we can use some math we already know to solve this. We know from Fermat's little theorem that:

• a^(p–1) = 1

This fact means that we can multiply by a^(p–1) as many times as we want. So, for a^(–3), we can do:

• a^(–3) = a^(–3) ⋅ a^(p–1) = a^(p–4)

This is a way we can do negative exponents. A naive implementation would do something like this:

```python
class FieldElement:
    ...

    def __pow__(self, exponent):
        n = exponent
        while n < 0:
            n += self.prime - 1  # (1)
        num = pow(self.num, n, self.prime)  # (2)
        return self.__class__(num, self.prime)
```
1. We add p – 1 to the exponent until it is non-negative.

2. We use the Python built-in `pow` to make this more efficient.
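To see the exponent adjustment in isolation (p = 13 and a = 7 here are arbitrary illustrative values, matching the a^(–3) case in the text):

```python
p = 13
a = 7
n = -3
while n < 0:
    n += p - 1  # a^(p-1) = 1, so adding p-1 to the exponent changes nothing
print(n)             # -3 + 12 = 9, which is p - 4
print(pow(a, n, p))  # 8, the value of 7^(-3) in F13
```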

Thankfully, we can do even better. We already know how to force a number out of being negative, using our familiar friend `%`! As a bonus, we can also reduce very large exponents at the same time, given that a^(p–1) = 1. This will make the `pow` function not work as hard:

```python
class FieldElement:
    ...