update to filter
boazbk committed Jun 10, 2019
1 parent 30d4ed1 commit 0432b01
Showing 22 changed files with 383 additions and 326 deletions.
18 changes: 8 additions & 10 deletions __latexindent_temp.tex
@@ -1,11 +1,9 @@
@unpublished{harvey:hal-02070778,
TITLE = {{Integer multiplication in time O(n log n)}},
AUTHOR = {Harvey, David and Van Der Hoeven, Joris},
URL = {https://hal.archives-ouvertes.fr/hal-02070778},
NOTE = {working paper or preprint},
YEAR = {2019},
MONTH = Mar,
PDF = {https://hal.archives-ouvertes.fr/hal-02070778/file/nlogn.pdf},
HAL_ID = {hal-02070778},
HAL_VERSION = {v1},
}
@article{maass1985combinatorial,
title={Combinatorial lower bound arguments for deterministic and nondeterministic Turing machines},
author={Maass, Wolfgang},
journal={Transactions of the American Mathematical Society},
volume={292},
number={2},
pages={675--693},
year={1985}
}
45 changes: 45 additions & 0 deletions introtcs.bib
@@ -891,4 +891,49 @@ @unpublished{HarveyvdHoeven2019
PDF = {https://hal.archives-ouvertes.fr/hal-02070778/file/nlogn.pdf},
HAL_ID = {hal-02070778},
HAL_VERSION = {v1},
}


@article{schrijver2005history,
title={On the history of combinatorial optimization (till 1960)},
author={Schrijver, Alexander},
journal={Handbooks in operations research and management science},
volume={12},
pages={1--68},
year={2005},
publisher={Elsevier}
}



@book{CLRS,
title={Introduction to algorithms},
author={Cormen, Thomas H and Leiserson, Charles E and Rivest, Ronald L and Stein, Clifford},
year={2009},
publisher={MIT press}
}

@book{TardosKleinberg,
title={Algorithm Design},
author={Kleinberg, Jon and Tardos, Eva},
year={2006},
publisher={Addison-Wesley},
address={Reading, MA}
}

@book{dasgupta2008algorithms,
title={Algorithms},
author={Dasgupta, Sanjoy and Papadimitriou, Christos H and Vazirani, Umesh Virkumar},
year={2008},
publisher={McGraw-Hill Higher Education}
}


@article{maass1985combinatorial,
title={Combinatorial lower bound arguments for deterministic and nondeterministic Turing machines},
author={Maass, Wolfgang},
journal={Transactions of the American Mathematical Society},
volume={292},
number={2},
pages={675--693},
year={1985}
}
22 changes: 17 additions & 5 deletions lec_01_introduction.md
@@ -112,12 +112,22 @@ We ask some questions that were already pondered by the Babylonians, such as "wh



::: {.remark title="Value vs. length of a number." #lengthofinput}
It is important to distinguish between the _value_ of a number, and the _length of its representation_ (i.e., the number of digits it has).
There is a big difference between the two: having 1,000,000,000 dollars is not the same as having 10 dollars!
When talking about the running time of algorithms, "less is more", and so an algorithm that runs in time proportional to the _number of digits_ of an input number (or even the number of digits squared) is much preferred to an algorithm that runs in time proportional to the _value_ of the input number.
:::
::: {.remark title="Specification, implementation and analysis of algorithms." #implspecanarem}
A full description of an algorithm has three components:

* __Specification__: __What__ is the task that the algorithm performs (e.g., multiplication in the case of [naivemultalg](){.ref} and [gradeschoolalg](){.ref}).

* __Implementation__: __How__ is the task accomplished: what is the sequence of instructions to be performed. Even though [naivemultalg](){.ref} and [gradeschoolalg](){.ref} perform the same computational task (i.e., they have the same _specification_), they do it in different ways (i.e., they have different _implementations_).

* __Analysis:__ __Why__ does this sequence of instructions achieve the desired task? A full description of [naivemultalg](){.ref} and [gradeschoolalg](){.ref} will include a _proof_ for each one of these algorithms that on input $x,y$, the algorithm does indeed output $x\cdot y$.

Often as part of the analysis we show that the algorithm is not only __correct__ but also __efficient__. That is, we want to show that not only will the algorithm compute the desired task, but that it will do so in a prescribed number of operations. For example, [gradeschoolalg](){.ref} computes the multiplication function on inputs of $n$ digits using $O(n^2)$ operations, while [karatsubaalg](){.ref} (described below) computes the same function using $O(n^{1.6})$ operations.
:::
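The specification/implementation/analysis distinction can be made concrete with a small sketch (ours, not the book's formal treatment): `gradeschool_mult` below is one _implementation_ of the multiplication _specification_, and counting the iterations of the two nested loops gives the $O(n^2)$ bound from the _analysis_.

```python
def gradeschool_mult(x: int, y: int) -> int:
    # Digit-by-digit multiplication: for n-digit inputs, the two nested
    # loops perform O(n^2) single-digit multiplications.
    xs = [int(d) for d in str(x)][::-1]  # least-significant digit first
    ys = [int(d) for d in str(y)][::-1]
    result = 0
    for i, dx in enumerate(xs):
        for j, dy in enumerate(ys):
            result += dx * dy * 10 ** (i + j)
    return result

print(gradeschool_mult(1234, 5678))  # 7006652
```

The _analysis_ component would be a proof that the accumulated sum always equals $x \cdot y$; the code alone only supplies the _how_.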





## Extended Example: A faster way to multiply (optional) {#karatsubasec }

Once you think of the standard digit-by-digit multiplication algorithm, it seems like the ``obviously best'' way to multiply numbers.
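As a preview of the idea developed in this section, here is a sketch (ours, not the book's formal presentation) of Karatsuba's recursive approach, which replaces the four half-size products of the naive recursion with three:

```python
def karatsuba(x: int, y: int) -> int:
    # Multiply nonnegative integers using three recursive half-size
    # products instead of four, for O(n^{log2 3}) ~ O(n^1.585) digit
    # operations rather than O(n^2).
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    xhi, xlo = divmod(x, 10 ** m)   # x = xhi * 10^m + xlo
    yhi, ylo = divmod(y, 10 ** m)
    a = karatsuba(xhi, yhi)
    b = karatsuba(xlo, ylo)
    # The cross terms xhi*ylo + xlo*yhi are recovered from one product:
    c = karatsuba(xhi + xlo, yhi + ylo) - a - b
    return a * 10 ** (2 * m) + c * 10 ** m + b

print(karatsuba(1234, 5678))  # 7006652
```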
@@ -456,7 +466,9 @@ Aaronson's book [@Aaronson13democritus] is another great read that touches upon
For more on the algorithms the Babylonians used, see [Knuth's paper](http://steiner.math.nthu.edu.tw/disk5/js/computer/1.pdf) and Neugebauer's [classic book](https://www.amazon.com/Exact-Sciences-Antiquity-Neugebauer/dp/0486223329).


Many of the algorithms we mention in this chapter are covered in algorithms textbooks such as those by Cormen, Leiserson, Rivest, and Stein [@CLRS], Kleinberg and Tardos [@KleinbergTardos06], and Dasgupta, Papadimitriou and Vazirani [@DasguptaPV08], as well as [Jeff Erickson's textbook](http://jeffe.cs.illinois.edu/teaching/algorithms/).
Erickson's book is freely available online and contains a great exposition of recursive algorithms in general and Karatsuba's algorithm in particular.



The story of Karatsuba's discovery of his multiplication algorithm is recounted by him in [@Karatsuba95]. As mentioned above, further improvements were made by Toom and Cook [@Toom63, @Cook66], Schönhage and Strassen [@SchonhageStrassen71], Fürer [@Furer07], and recently by Harvey and Van Der Hoeven [@HarveyvdHoeven2019], see [this article](https://www.quantamagazine.org/mathematicians-discover-the-perfect-way-to-multiply-20190411/) for a nice overview.
2 changes: 1 addition & 1 deletion lec_04_code_and_data.md
@@ -69,7 +69,7 @@ See [codedataoverviewfig](){.ref} for an overview of the results of this chapte
## Representing programs as strings {#representprogramsec }


![In the Harvard Mark I computer, a program was represented as a list of triples of numbers, which were then encoded by perforating holes in a control card.](../figure/tapemarkI.png){#markonerep .margin }

We can represent programs or circuits as strings in a myriad of ways.
For example, we can represent the code of a program using the ASCII or UNICODE representations.
7 changes: 5 additions & 2 deletions lec_07_other_models.md
@@ -76,9 +76,12 @@ The NAND-RAM programming language extends NAND-TM by adding the following featur

* As is often the case in programming languages, we will assume that for Boolean operations such as `NAND`, a zero valued integer is considered as _false_, and a nonzero valued integer is considered as _true_.

* In addition to `NAND`, NAND-RAM also includes all the basic arithmetic operations of addition, subtraction, multiplication, and (integer) division, as well as comparisons (equal, greater than, less than, etc.).

* NAND-RAM includes conditional statements `if`/`then` as part of the language.

* As in NAND-TM we encapsulate a NAND-RAM program in one large loop. That is, the last instruction is `JMP(flag)` which goes back to the beginning of the program if `flag` equals $1$ and halts otherwise.
As usual, we can implement other control-flow constructs, such as `goto` and inner `while` or `for` loops, using syntactic sugar.
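The "one large loop" convention can be illustrated with a toy interpreter in Python (a sketch of ours; `run_program`, `step`, and the `flag` variable are illustrative names, not part of the formal NAND-RAM definition):

```python
def run_program(step, state):
    # Execute the program body repeatedly until flag is 0, mirroring a
    # final JMP(flag) instruction that jumps back to the beginning.
    state["flag"] = 1
    while state["flag"] == 1:
        step(state)
    return state

def countdown(state):
    # Example body: decrement n, and clear the flag (i.e., halt) at 0.
    state["n"] -= 1
    state["flag"] = 1 if state["n"] > 0 else 0

print(run_program(countdown, {"n": 3})["n"])  # 0
```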



50 changes: 32 additions & 18 deletions lec_08_uncomputability.md
@@ -58,6 +63,11 @@ That is, if the machine $M$ halts on $x$ and outputs some $y\in \{0,1\}^*$ then
![A _Universal Turing Machine_ is a single Turing Machine $U$ that can evaluate, given input the (description as a string of) arbitrary Turing machine $M$ and input $x$, the output of $M$ on $x$. In contrast to the universal circuit depicted in [universalcircfig](){.ref}, the machine $M$ can be much more complex (e.g., more states or tape alphabet symbols) than $U$. ](../figure/universaltm.png){#universaltmfig .margin }


::: { .bigidea #universaltmidea}
There is a single algorithm that can evaluate arbitrary algorithms on arbitrary inputs.
:::
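To get a feel for this big idea, here is a toy "universal algorithm" in Python for a hypothetical mini-language of register instructions (our illustration, vastly simpler than a universal Turing machine): the key point is that a program is just data that a single fixed algorithm can interpret.

```python
def universal_eval(program, x):
    # Interpret a program given as data: a list of (op, args) tuples
    # acting on named registers. One fixed algorithm evaluates them all.
    regs = {"in": x, "out": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":
            regs[args[0]] += 1
        elif op == "copy":                  # copy src into dst
            regs[args[1]] = regs[args[0]]
        elif op == "jnz":                   # jump if register nonzero
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs["out"]

# A program (as data) that copies its input to the output register:
print(universal_eval([("copy", "in", "out")], 42))  # 42
```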


::: {.proofidea data-ref="universaltmthm"}
Once you understand what the theorem says, it is not that hard to prove. The desired program $U$ is an _interpreter_ for Turing machines. That is, $U$ gets a representation of the machine $M$ (think of it as source code), and some input $x$, and needs to simulate the execution of $M$ on $x$.

@@ -273,6 +278,10 @@ Specifically, the proof will be by contradiction.
That is, we will assume towards a contradiction that $HALT$ is computable, and use that assumption, together with the universal Turing machine of [universaltmthm](){.ref}, to derive that $F^*$ is computable, which will contradict [uncomputable-func](){.ref}.
:::

::: { .bigidea #reductionuncomputeidea}
If a function $F$ is uncomputable we can show that another function $H$ is uncomputable by giving a way to _reduce_ the task of computing $F$ to computing $H$.
:::


::: {.proof data-ref="halt-thm"}
The proof will use the previously established result [uncomputable-func](){.ref}.
@@ -446,7 +455,7 @@ If we now set `(f,x) = CantSolveMe(T)`, then `T(f,x)=False` but `f(x)` does in f



## Reductions
## Reductions {#reductionsuncompsec }

The Halting problem turns out to be a linchpin of uncomputability, in the sense that [halt-thm](){.ref} has been used to show the uncomputability of a great many interesting functions.
We will see several examples in such results in this chapter and the exercises, but there are many more such results (see [haltreductions](){.ref}).
@@ -470,10 +479,21 @@ For starters, since we need $R$ to be computable, we should describe the algorit
The algorithm to compute $R$ is known as a _reduction_ since the transformation $R$ modifies an input to $HALT$ to an input to $BLAH$, and hence _reduces_ the task of computing $HALT$ to the task of computing $BLAH$.
The second component of a reduction-based proof is the _analysis_ of the algorithm $R$: namely a proof that $R$ does indeed satisfy the desired properties.

Reduction-based proofs are just like other proofs by contradiction, but the fact that they involve hypothetical algorithms that don't really exist tends to make reductions quite confusing.
The one silver lining is that at the end of the day the notion of a reduction is mathematically quite simple, so it is not that bad even if you have to go back to first principles every time you need to remember which direction a reduction should go in.


::: {.remark title="Reductions are algorithms" #reductionsaralg}
A reduction is an _algorithm_, which means that, as discussed in [implspecanarem](){.ref}, a reduction has three components:

* __Specification (what):__ In the case of a reduction from $HALT$ to $BLAH$, the specification is that the function $R:\{0,1\}^* \rightarrow \{0,1\}^*$ should satisfy $HALT(M,x)=BLAH(R(M,x))$ for every Turing machine $M$ and input $x$. In general, to reduce a function $F$ to $G$, the reduction should satisfy $F(w)=G(R(w))$ for every input $w$ to $F$.

* __Implementation (how):__ The algorithm's description: the precise instructions for transforming an input $w$ into the output $R(w)$.

* __Analysis (why):__ A _proof_ that the algorithm meets the specification. In particular, in a reduction from $F$ to $G$, this is a proof that for every input $w$, the output $y$ of the algorithm satisfies $F(w)=G(y)$.
:::
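The three components above can be sketched in code (our sketch; `R` and `solve_G` are hypothetical — for an uncomputable $G$ no algorithm `solve_G` actually exists, which is exactly what a reduction-based proof by contradiction exploits):

```python
def reduce_F_to_G(R, solve_G):
    # If R satisfies F(w) == G(R(w)) for every w (the specification),
    # then composing a solver for G with R yields a solver for F.
    def solve_F(w):
        return solve_G(R(w))
    return solve_F

# Toy instance with computable functions: F(w) = "is len(w) even?"
# reduces to G(v) = "is v == 0?" via R(w) = len(w) % 2.
solve_F = reduce_F_to_G(lambda w: len(w) % 2, lambda v: v == 0)
print(solve_F("abcd"))  # True
```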


### Example: Halting on the zero problem

Here is a concrete example for a proof by reduction.
@@ -503,26 +523,21 @@ Since this is our first proof by reduction from the Halting problem, we will spe



``` {.algorithm title="$HALT$ to $HALTONZERO$ reduction" #halttohaltonzerored}
INPUT: Turing machine $M$ and string $x$.
OUTPUT: Turing machine $M'$ such that $M$ halts on $x$ iff $M'$ halts on zero.

Procedure{$N_{M,x}$}{$w$} # Description of the T.M. $N_{M,x}$
Return $EVAL(M,x)$ # Ignore the input $w$, evaluate $M$ on $x$.
Endprocedure

Return $N_{M,x}$ # We do not execute $N_{M,x}$: only return its description
```

Our Algorithm $B$ works as follows: on input $M,x$, it runs [halttohaltonzerored](){.ref} to obtain a Turing machine $M'$, and then returns $A(M')$.
The machine $M'$ ignores its input $z$ and simply runs $M$ on $x$.

In pseudocode, the program $N_{M,x}$ will look something like the following:

Expand All @@ -537,7 +552,6 @@ def N(z):
```

That is, if we think of $N_{M,x}$ as a program, then it is a program that contains $M$ and $x$ as "hardwired constants", and given any input $z$, it simply ignores the input and always returns the result of evaluating $M$ on $x$.

The algorithm $B$ does _not_ actually execute the machine $N_{M,x}$. $B$ merely writes down the description of $N_{M,x}$ as a string (just as we did above) and feeds this string as input to $A$.


Expand Down
3 changes: 0 additions & 3 deletions lec_09_godel.md
@@ -469,9 +469,6 @@ Hence the uncomputability of $QMS$ ([QMS-thm](){.ref}) implies the uncomputabil

## Exercises

::: {.remark title="Disclaimer" #disclaimerrem}
Most of the exercises have been written in the summer of 2018 and haven't yet been fully debugged. While I would prefer people do not post online solutions to the exercises, I would greatly appreciate if you let me know of any bugs. You can do so by posting a [GitHub issue](https://github.com/boazbk/tcs/issues) about the exercise, and optionally complement this with an email to me with more details about the attempted solution.
:::

::: {.exercise title="Gödel's Theorem from uncomputability of $QIS$" #godelfromqisex}
Prove [godelthmqis](){.ref} using [QIS-thm](){.ref}.
Expand Down