From 354d8c7357f18461a6f5464d32f0b802018c11ce Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 10:13:18 +0100
Subject: [PATCH 01/22] First modifications to chapter 3
---
data.Rmd | 1177 ++++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 928 insertions(+), 249 deletions(-)
diff --git a/data.Rmd b/data.Rmd
index b3f122b..127299b 100644
--- a/data.Rmd
+++ b/data.Rmd
@@ -1,256 +1,497 @@
-# Classical data in quantum computers {#chap-classical-data-quantum-computers}
+# Classical data on quantum computers {#chap-classical-data-quantum-computers}
-```{r, echo=FALSE, fig.align = 'center', fig.width=10, fig.cap="This section is heavily work in progress. In this [TODO list](https://github.com/Scinawa/quantumalgorithms.org/issues/70) you can see the improvements of this Chapter in the following months."}
-knitr::include_graphics("images/wip.png")
-```
+
Contributors: Alessandro Luongo, Jun Hao Hue, Francesco Ghisoni, João F. Doriguello
+
-In this chapter we will discuss the problem of manipulating classical information (numbers, vectors, matrices, and functions) into our quantum computer. More precisely, after describing possible ways of storing information into quantum states, we discuss the problem of loading and retrieving data from quantum computer. In other words, we are just studying the I/O interface of our quantum computer.
+ Version: 0.5.1
+
-We size the opportunity of discussing classical data in quantum computers to step back, and show you [all the possible combinations](https://indico.desy.de/event/26672/contributions/60982/attachments/39512/49045/qml_maria.pdf) of quantum and classical data and algorithms. This book is mostly interested in classical and quantum data processed by quantum computer. What is quantum data? ( Actually, no one knows, but it is often something that you hear at conferences. No.. I am kidding!) Quantum data is supposed to be quantum states that is generated by a generic quantum process, which could be another quantum circuit, a quantum channel (i.e. communication from quantum internet) or any density matrix that you receive from experiments.
+
+
+
-```{r, echo=FALSE, fig.width=10, fig.cap="We can have four combinations between classica and quantum data, and classical and quantum computers. As you can imagine, in these pages we will focus on quantum algorithms on classical data, with some detours on quantum algorithms on quantum data. "}
-knitr::include_graphics("algpseudocode/typesofdata-1.png")
-```
+
+
+
+
+
+
+
+
-## Representing data in quantum computers
+
-We begin our journey into quantum algorithms by understanding how we can represent and store data as a quantum state. This problem is of paramount importance, because knowing what is the best way of encoding data in a quantum computer might pave the way for intuitions in solving our problems. On the contrary, using the wrong encoding might prevent you from reasoning about the right algorithm design, and obtaining the desired advantages in the implementation of your algorithm. As it has been well-said: *"In order to use the strengths of quantum mechanics without being confined by classical ideas of data encoding, finding "genuinely quantum" ways of representing and extracting information could become vital for the future of quantum machine learning".* [@schuld2015introduction]. There are two fundamental ways of encoding information in a quantum state: the *amplitude* encoding and the *binary* encoding. In amplitude encoding we store your data in the amplitudes of a quantum state, therefore we can encode $n$ real values (or better, some fixed point precision approximation of a real number) using $O(\lceil \log n\rceil )$ qubits. In the binary (or digital) encoding you store a bit in the state of a qubit. Each encoding allows to process the data in different ways, unlocking different possibilities. As tacit convention that is used in literature - and throughout this book - we often use Greek letters inside kets to represent generically quantum states $\ket{\psi}, \ket{\phi}, \ket{\varphi}$, etc..., and use Latin letters to represent quantum registers holding classical data interpreted as bitstrings. The precision that we can use for specifying the *amplitude* of a quantum state might be limited - in practice - by the precision of our quantum computer in manipulating quantum states (i.e. development in techniques in quantum metrology and sensing). 
Techniques that use a certain precision in the amplitude of a state might suffer of initial technical limitations of the hardware. The precision in the manipulation could be measured, for instance, by the fidelity, but discussing this subject is out of scope for this work.
-### Numbers and quantum arithmetics {#sec:numbers}
+In this chapter we discuss how to represent and load classically available data on a quantum computer.
+First, we describe how to represent data, which reduces to understanding the possible ways of storing information in quantum states. Then, we introduce the quantum memory model of computation, which is the model we use to load data (which we assume to know classically) into a quantum computer. We finally look at the problem of retrieving data from a quantum computer, discussing the complexity of the problem. The main takeaway from this chapter is an understanding of the tools that are often used at the very start and very end of many quantum algorithms, which will set us up for understanding quantum algorithms in future chapters. This chapter can be thought of as the study of the I/O interface of our quantum computer.
-Number can be stored as binary encoding: each bit of a number is encoded in the state of a single qubit. Let's start with the most simple scalar: an integer. Let $x \in \mathbb{N}$. To represent it on a quantum computer, we consider the binary expansion of $x$ as a list of $m$ bits, and we set the state of the $i$-th qubit as the value of the $i$-th bit of $x$:
+## Representing data in quantum computers{#sec:representing-data}
+
+
+We'll begin our journey into quantum algorithms by understanding how we can represent and store data as a quantum state. Data is at the heart of most modern algorithms, and knowing the best way to encode it on a quantum computer might pave the way for intuitions in solving problems, an essential step toward quantum advantage (as noted also in [@schuld2015introduction]).
+
+There are various ways to achieve this task. Some are borrowed from classical computation, such as the *binary* encoding, which consists in encoding Boolean strings of length $n$ using $n$ qubits, while some leverage quantum properties, such as the *amplitude* encoding, which consists in representing vectors as linear combinations of computational basis states. We note that some of the presented schemes depend heavily on the accuracy of the available quantum computers in manipulating quantum states (i.e. developments in metrology and sensing). For example, techniques that rely on precise amplitudes of a state will be hindered by current noisy hardware, or incur a high overhead from quantum error correction. Considerations on the practical feasibility of an encoding technique are out of scope for this book.
+
+### Binary encoding {#sec:binary-encoding}
+
+The first method represents natural numbers on a quantum computer by using the binary expansion of the number to determine the state of a sequence of qubits. Each qubit is set to either the state $\ket{0}$ or $\ket{1}$, corresponding to a bit in the binary representation of the number. To represent a natural number $x \in \mathbb{N}$ on a quantum computer, we consider the binary expansion of $x$ as a list of $m$ bits, and we set the state of the $i^{th}$ qubit to the value of the $i^{th}$ bit of $x$:
\begin{equation}
-\ket{x} = \bigotimes_{i=0}^{m} \ket{x_i}
+\ket{x} = \bigotimes_{i=0}^{m} \ket{x_i}.
+(\#eq:binary-encoding)
\end{equation}
-Eventually, we can use one more qubit for the sign. In most of the cases, we want to work also with non-integer numbers. Real numbers can be approximated with decimal numbers with a certain bits of precision. For this, we need a bit to store the sign, some bits to store the integer part, and some other bits to store the decimal part. This is more precisely stated in the following definition.
-
+When extending this definition to signed integers we can, for example, use an additional qubit to store the sign of $x \in \mathbb{Z}$. Another possibility is to represent signed integers using two's complement, which is in fact the representation of choice for classical and quantum arithmetic [@luongo2024measurement]. For real numbers we observe that, as on classical computers, $x \in \mathbb{R}$ can be approximated in binary representation up to a certain precision. As before, we need a bit to store the sign, some bits to store the integer part, and some bits to store the fractional part. This is stated more precisely in the following definition, which gives one possible way to represent numbers with fixed precision.
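As a small illustration of the binary encoding, here is a minimal numerical sketch in Python (our own, not from the book), assuming only NumPy; the helper name `binary_encode` is ours. It builds the computational basis state $\ket{x}$ of Equation \@ref(eq:binary-encoding) as a tensor product of single-qubit states.

```python
import numpy as np
from functools import reduce

KET = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

def binary_encode(x, m):
    """Basis state |x> on m qubits, most significant bit first."""
    bits = [(x >> (m - 1 - i)) & 1 for i in range(m)]
    return reduce(np.kron, [KET[b] for b in bits])

psi = binary_encode(5, 3)            # the state |101>
assert psi[5] == 1 and psi.sum() == 1
```

A state-vector simulation like this needs $2^m$ real numbers classically, whereas the quantum register uses $m$ qubits.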
(ref:rebentrost2021quantum) [@rebentrost2021quantum]
```{definition, fixed-point-encoding, name="Fixed-point encoding of real numbers (ref:rebentrost2021quantum)"}
-Let $c_1,c_2$ be positive integers, and $a\in\{0,1\}^{c_1}$, $b \in \{0,1\}^{c_2}$, and $s \in \{0,1\}$ be bit strings. Define the rational number as:
+Let $c_1,c_2$ be positive integers, and $a\in\{0,1\}^{c_1}$, $b \in \{0,1\}^{c_2}$, and $s \in \{0,1\}$ be bit strings. Define the rational number as
\begin{equation}
\mathcal{Q}(a,b,s):=
(-1)^s
\left(2^{c_1-1}a_{c_1}+ \dots + 2a_2 + a_1 + \frac{1}{2}b_1 + \dots + \frac{1}{2^{c_2}}b_{c_2} \right) \in [-R,R],
\end{equation}
-where $R := 2^{c_1}-2^{-c_2}$. If $c_1,c_2$ are clear from the context, we can use the shorthand notation for a number $z:=(a,b,s)$ and write $\mathcal{Q}(z)$ instead of $\mathcal{Q}(a,b,s)$. Given an $n$-dimensional vector $v \in (\{0,1\}^{c_1} \times \{0,1\}^{c_2} \times \{0,1\})^n$
-the notation $\mathcal{Q}(v)$ means an $n$-dimensional vector whose $j$-th component is $\mathcal{Q}(v_j)$, for $j \in[n]$.
+where $R= 2^{c_1}-2^{-c_2}$.
```
-It might seem complicated, but it is really the (almost) only thing that a reasonable person might come up with when expressing numbers as (qu)bits with fixed-point precision. In most of the algorithms we implicitly assume this (or equivalent) models. Stating clearly how to express numbers on a quantum computer as fixed point precision is important: we want to work a model where we can represent numbers with enough precision so that numerical errors in the computation are negligible and will not impact the final output of our algorithm. The choice of values for $c_1$ and $c_2$ in the previous definition depends on the problem and algorithm. For the purposes of optimizing the quantum circuit, these constants can be changed dynamically in various steps of the computation (for instance, if at some point we need to work with numbers between $0$ and $1$ we can neglect the $c_1$ bits needed to represent the integer part of a number). While analyzing how error propagates and accumulates throughout the operations in the quantum circuit is essential to ensure a correct final result, this analysis is often done numerically (via simulations, which we will discuss in Chapter \@ref(chap-QML-on-real-data) ), or when implementing the algorithm on real hardware. In principle, we could also think of having [floating point](https://en.wikipedia.org/wiki/IEEE_754) representation of numbers in our quantum computer. However, it is believed that the circuital overhead in the computation is not worth the trouble.
+If $c_1,c_2$ are clear from the context, we use the shorthand notation for a number $z:=(a,b,s)$ and write $\mathcal{Q}(z)$ instead of $\mathcal{Q}(a,b,s)$. Given an $n$-dimensional vector $v \in (\{0,1\}^{c_1} \times \{0,1\}^{c_2} \times \{0,1\})^n$
+the notation $\mathcal{Q}(v)$ means an $n$-dimensional vector whose $j$-th component is $\mathcal{Q}(v_j)$, for $j \in[n]$.
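To make Definition \@ref(def:fixed-point-encoding) concrete, the following short Python sketch (our own illustration; the function name `q_value` is ours) evaluates $\mathcal{Q}(a,b,s)$ from the bit strings $a$, $b$ and the sign bit $s$.

```python
def q_value(a, b, s):
    """Q(a, b, s) of the fixed-point encoding definition.

    a = (a_1, ..., a_{c1}): integer-part bits, a_1 least significant.
    b = (b_1, ..., b_{c2}): fractional bits, b_1 most significant.
    s: sign bit.
    """
    integer = sum(bit * 2 ** i for i, bit in enumerate(a))
    fraction = sum(bit / 2 ** (i + 1) for i, bit in enumerate(b))
    return (-1) ** s * (integer + fraction)

assert q_value((1, 0), (1,), 0) == 1.5      # +(1 + 1/2)
assert q_value((0, 1), (0, 1), 1) == -2.25  # -(2 + 1/4)
```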
-When programming quantum algorithms, it is very common to use subroutines to perform arithmetic on numbers, and we will discuss these procedures in later sections of this work. We avoid the analysis of such details by using the quantum arithmetic model as in Definition \@ref{def:defQArith}. Recall that any Boolean circuit can be made reversible, and any reversible computation can be realized with a circuit involving negation and three-bit Toffoli gates. Such a circuit can be turned into a quantum circuit with single-qubit NOT gates and three-qubit Toffoli gates. Since most of the boolean circuits for arithmetic operations operate with a number of gates of $O(\text{poly}(c_1,c_2))$ this implies a number of quantum gates of $O(\text{poly}(c_1,c_2))$ for the corresponding quantum circuit.
+We note that the choice of $c_1$ and $c_2$ in Definition \@ref(def:fixed-point-encoding) depends both on the problem at hand and the implemented algorithm. For the purposes of optimizing a quantum circuit, these constants can be changed dynamically. For example, if at some point of a computation we are required to work only with numbers between $0$ and $1$, then we can neglect the $c_1$ bits for the integer part.
+
+One of the utilities of having a definition that expresses numbers on a quantum computer to a fixed-point precision is the analysis of numerical errors, which is essential to ensure the validity of the solution. This is often done numerically (via simulations, which we will discuss in Chapter \@ref(chap-QML-on-real-data)), or during the implementation of the algorithm on real hardware. This binary encoding encompasses other kinds of encoding, like two's complement and a possible quantum implementation of the [floating point](https://en.wikipedia.org/wiki/IEEE_754) representation. However, we observe that the floating-point encoding has a relatively high circuital overhead and is therefore not a common choice. A further layer of complexity arises in understanding how to treat arithmetic operations, which is addressed in the section below.
+
+#### Arithmetic model {#sec:arithmetic-model}
-```{definition, defQArith, name="Quantum arithmetic model"}
-Given $c_1, c_2 \in \mathbb{N}$ specifying fixed-point precision numbers as in Definition \@ref(def:fixed-point-encoding), we say we use a quantum arithmetic model of computation if the four arithmetic operations can be performed in constant time in a quantum computer.
-```
-Most often than not, quantum algorithms are not taking into account in the complexity of their algorithm the cost for performing operations described in their arithmetic model. In fact, they somehow don't even define a quantum arithmetic model, leaving that implicit. However, when estimating the resources needed to run an algorithm on a quantum computer, specifying these values become important. For a resource estimation for problems in quantum computational finance that takes into account the cost of arithmetic operations in fixed-point precision we refer to [@chakrabarti2021threshold].
+The advantage of using a binary encoding is that we can use quantum circuits for arithmetic operations.
+As we will discuss more in depth in Section \@ref(sec:implementation-oracle-synthesis), any Boolean circuit can be made reversible, and any reversible circuit can be implemented using single-qubit NOT gates and three-qubit Toffoli gates. Since most of the classical Boolean circuits for arithmetic operations use $O(\text{poly}(c_1,c_2))$ gates, this implies $O(\text{poly}(c_1,c_2))$ quantum gates for the corresponding quantum circuit. Extending the analogy with classical computation allows us to introduce an arithmetic model of computation in which operations on binary-encoded numbers are performed in constant time.
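To see why a Boolean function yields a reversible (hence unitary) operation, here is a toy numerical sketch in Python (our own illustration, assuming NumPy; the name `boolean_oracle` is ours). It builds the permutation matrix for the standard mapping $\ket{x}\ket{y} \mapsto \ket{x}\ket{y \oplus f(x)}$, which is reversible for any $f$.

```python
import numpy as np

def boolean_oracle(f, n_in, n_out):
    """Permutation matrix implementing |x>|y> -> |x>|y XOR f(x)>."""
    dim = 2 ** (n_in + n_out)
    U = np.zeros((dim, dim))
    for x in range(2 ** n_in):
        for y in range(2 ** n_out):
            U[(x << n_out) | (y ^ f(x)), (x << n_out) | y] = 1.0
    return U

U = boolean_oracle(lambda x: (x + 1) % 4, 2, 2)  # increment modulo 4
assert np.allclose(U @ U.T, np.eye(16))           # reversible, hence unitary
```

Since $y \oplus f(x) \oplus f(x) = y$, the oracle is also its own inverse, which is why XOR-ing the output register is the conventional way of making irreversible arithmetic reversible.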
-
+(ref:optimalstoppingtime) [@optimalstoppingtime]
+
+```{definition, defQArith, name="Quantum arithmetic model (ref:optimalstoppingtime)"}
+Given $c_1, c_2 \in \mathbb{N}$ specifying fixed-point precision numbers as in Definition \@ref(def:fixed-point-encoding), we say we use a quantum arithmetic model of computation if the four arithmetic operations can be performed in constant time in a quantum computer.
+```
-### Vectors and matrices {#subsec-stateprep-matrices}
+Beware that using Definition \@ref(def:fixed-point-encoding) is not the only possible choice. For example, most of the non-modular and modular arithmetic circuits are expressed in two's complement. For a comprehensive and optimized list of results on this topic, the interested reader can consult [@luongo2024measurement]. As for its classical counterpart, a quantum algorithm's complexity does not take into account the cost of performing arithmetic operations, as the number of digits of precision used to represent numbers is a constant and does not depend on the input size. However, when estimating the resources needed to run an algorithm on a quantum computer, specifying these values becomes important. For a good example of a complete resource analysis, including arithmetic operations in fixed-point precision, of common algorithms in quantum computational finance we refer to [@chakrabarti2021threshold].
-Representing vectors and matrices in quantum computers is the best way to understand the amplitude encoding. We can represent a vector $x \in \mathbb{R}^{2^n}$ as the following quantum state:
+### Amplitude encoding{#sec:amplitude-encoding}
+Amplitude encoding is a way to represent a vector of size $N$ (where $N$ is a power of $2$) in the amplitudes of a $\log(N)$-qubit pure state. We can map a vector $x \in \mathbb{R}^{N}$ (or even $\in \mathbb{C}^{N}$) to the following quantum state:
\begin{equation}
-\ket{x} = \frac{1}{{\left \lVert x \right \rVert}}\sum_{i=0}^{2^n-1}x_i\ket{i} = \|x\|^{-1}x
+\ket{x} = \frac{1}{{\left \lVert x \right \rVert}}\sum_{i=0}^{N-1}x_i\ket{i} = \|x\|^{-1}x,
+(\#eq:amplitude-encoding)
\end{equation}
-To represent a vector of size $2^n$, for some integer $n$, we just need $n$ qubits: we encode each component of the classical vector in the amplitudes of a pure state. In fact, we are just building an object representing $\ell_2$-normalized version of the vector $x$. Note that, in the quantum state in the previous equation we are somehow "losing" the information on the norm of the vector $x$: however we will see how this is not a problem when we work with more than one vector. This idea can be generalized to matrices: let $X \in \mathbb{R}^{n \times d}$, a matrix of $n$ rows of length $d$. We will encode them using $\lceil log(d) \rceil +\lceil log(n) \rceil$ qubits. Let $x(i)$ be the $i$-th row of $X$.
+Sometimes, amplitude encoding is also known as *quantum sampling access*, as sampling from the quantum state we prepare can be interpreted as sampling from a probability distribution: if the amplitude of a computational basis state $\ket{i}$ is $\alpha_i$, then we sample $\ket{i}$ with probability $|\alpha_i|^2$. Observe that in the state in the above equation we are actually representing an $\ell_2$-normalized version of the vector $x$, so we have "lost" the information on the norm of $x$. However, we will see how this is not a problem when we work with more than one vector. This type of encoding can be generalized to matrices. Let $x(i)$ be the $i$-th row of $X \in \mathbb{R}^{n \times d}$, a matrix with $n$ rows and $d$ columns (here we again take $n$ and $d$ to be powers of $2$). Then we can encode $X$ with $\lceil \log(d) \rceil + \lceil \log(n) \rceil$ qubits as:
\begin{equation}
-\frac{1}{\sqrt{\sum_{i=1}^n {\left \lVert x(i) \right \rVert}^2 }} \sum_{i=1}^n {\left \lVert x(i) \right \rVert}\ket{i}\ket{x(i)}
+\ket{X} = \frac{1}{\sqrt{\sum_{i=1}^n {\left \lVert x(i) \right \rVert}^2 }} \sum_{i=1}^n {\left \lVert x(i) \right \rVert}\ket{i}\ket{x(i)}
(\#eq:matrix-state1)
\end{equation}
-\begin{equation}\frac{1}{\sqrt{\sum_{i,j=1}^{n,d} |X_{ij}|^2}} \sum_{i,j=1}^{n,d} X_{ij}\ket{i}\ket{j}
+```{exercise}
+Check that Equation \@ref(eq:matrix-state1) is equivalent to
+\begin{equation}
+\ket{X} = \frac{1}{\sqrt{\sum_{i,j=1}^{n,d} |X_{ij}|^2}} \sum_{i,j=1}^{n,d} X_{ij}\ket{i}\ket{j},
(\#eq:matrix-state2)
\end{equation}
+```
+
-```{exercise}
-Check that Equation \@ref(eq:matrix-state1) and \@ref(eq:matrix-state2) are in fact equivalent?
+
+
+
+From an algorithmic perspective, amplitude encoding is appealing because it requires a number of qubits logarithmic in the vector size, which might seem to lead to an exponential saving in physical resources compared to classical encoding techniques. A major drawback, however, is that in the worst case (i.e. for the majority of states) preparing the state requires a circuit of size $\Omega(N)$.
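The amplitudes of Equation \@ref(eq:amplitude-encoding) and the associated sampling probabilities can be sketched numerically (our own Python illustration, assuming NumPy; the helper name `amplitude_encode` is ours):

```python
import numpy as np

def amplitude_encode(x):
    """l2-normalized amplitude vector of a length-N array (N a power of 2)."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

x = [3.0, 0.0, 4.0, 0.0]           # N = 4, so 2 qubits suffice
psi = amplitude_encode(x)
probs = np.abs(psi) ** 2           # sampling probabilities |alpha_i|^2
assert np.isclose(probs.sum(), 1.0)
assert np.isclose(probs[0], 9 / 25) and np.isclose(probs[2], 16 / 25)
```

Note that `psi` carries no record of $\|x\| = 5$: the normalization step is exactly where the information on the norm is lost.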
+
+
+
+
+### Block encoding {#sec:block-encoding}
+Block encoding is another type of encoding for working with matrices on a quantum computer. More precisely, we want to encode a matrix into a unitary that has a circuit representation on a quantum computer. As it will become clear in the next chapters, being able to perform such encoding unlocks many possibilities in terms of new quantum algorithms.
+
+```{definition, name="Block encodings"}
+Let $A \in \mathbb{R}^{N \times N}$ be a square matrix with $N = 2^n$ for some $n\in\mathbb{N}$, and let $\alpha \geq 1$. For $\epsilon > 0$, we say that an $(n+a)$-qubit unitary $U_A$ is an $(\alpha, a, \epsilon)$-block encoding of $A$ if
+
+\begin{equation}
+ \| A - \alpha ( \bra{0}^{\otimes a} \otimes I) U_A (\ket{0}^{\otimes a} \otimes I) \| \leq \epsilon.
+\end{equation}
```
-
+It is useful to observe that an $(\alpha, a, \epsilon)$-block encoding of $A$ is just a $(1, a, \epsilon)$-block encoding of $A/\alpha$. Often, we do not want to take into account the number of ancilla qubits $a$ needed to create the block encoding, as these are expected to be negligible. Therefore, some definitions of block encoding in the literature use the notation $(\alpha, \epsilon)$, or the notation $\alpha$-block encoding when the error is $0$. Note that the unitary $U_A \in \mathbb{R}^{2^{n+a} \times 2^{n+a}}$ has (up to the error $\epsilon$) the matrix $A/\alpha$ encoded in its top-left block:
+\begin{equation}
+U_A = \begin{pmatrix}
+A/\alpha & \cdot \\
+\cdot & \cdot
+\end{pmatrix}.
+(\#eq:blockencoding)
+\end{equation}
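As a concrete sketch (our own Python illustration, assuming NumPy; the function name is ours), one well-known construction gives an exact $(1, 1, 0)$-block encoding of a Hermitian matrix $A$ with $\|A\| \leq 1$, namely $U_A = \begin{pmatrix} A & B \\ B & -A \end{pmatrix}$ with $B = \sqrt{I - A^2}$:

```python
import numpy as np

def block_encode_hermitian(A):
    """Exact (1, 1, 0)-block encoding of a Hermitian matrix with ||A|| <= 1."""
    n = A.shape[0]
    # B = sqrt(I - A^2), computed via the eigendecomposition of I - A^2;
    # B commutes with A, which makes the 2x2 block matrix below unitary.
    w, V = np.linalg.eigh(np.eye(n) - A @ A)
    B = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    return np.block([[A, B], [B, -A]])

A = np.array([[0.3, 0.2], [0.2, -0.1]])
U = block_encode_hermitian(A)
assert np.allclose(U @ U.conj().T, np.eye(4))  # U is unitary
assert np.allclose(U[:2, :2], A)               # A sits in the top-left block
```

For a general matrix with $\|A\| > 1$, one would first rescale by $\alpha \geq \|A\|$ and block-encode $A/\alpha$.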
-
+
-
+### Angle encoding {#sec:angle-encoding}
+Another way to encode vectors, as described by [@schuld2021machine], is angle encoding. This technique encodes information in the angles of the Pauli rotations $\sigma_x(\theta)$, $\sigma_y(\theta)$, $\sigma_z(\theta)$. Given a vector $x \in \mathbb{R}^n$ with all elements in the interval $[0,2\pi]$, the technique applies $\sigma_{\alpha}^i(x_i)$, where $\alpha \in \{x,y,z\}$ and $i$ refers to the target qubit. The resulting state is said to be an angle encoding of $x$ and has the form
-
+\begin{equation}
+ \ket{x} = \prod_{i=1}^{n} \sigma_{\alpha}^{i}(x_i)\ket{0}^{\otimes n}
+ (\#eq:angle-encoding)
+\end{equation}
+
+This technique's advantage lies in its efficient resource utilization: the number of qubits scales linearly with the dimension of the vector. One major drawback is that it is difficult to perform arithmetic operations on the resulting state, which limits its use within quantum algorithms.
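Equation \@ref(eq:angle-encoding) can be sketched numerically by choosing $\sigma_\alpha = \sigma_y$, so that all amplitudes stay real (our own Python illustration, assuming NumPy; the helper names are ours):

```python
import numpy as np
from functools import reduce

def ry(theta):
    """Single-qubit y-rotation R_y(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(x):
    """Apply one R_y(x_i) per qubit to |0...0>."""
    single = [ry(xi) @ np.array([1.0, 0.0]) for xi in x]
    return reduce(np.kron, single)

psi = angle_encode([np.pi / 2, 0.0])
# (cos(pi/4)|0> + sin(pi/4)|1>) tensor |0>
assert np.allclose(psi, [np.sqrt(0.5), 0.0, np.sqrt(0.5), 0.0])
```

The product state structure is visible in the code: each entry of $x$ touches exactly one qubit, which is why the qubit count is linear in $n$.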
+
+
-
+### Graph encoding {#sec:graph-encoding}
+A graph is a tuple $G =(V,E)$, where $V$ is the set of vertices and $E \subseteq V \times V$ is the set of edges. For graph encoding we require undirected graphs, in which $( v_i, v_j ) \in E$ implies $( v_j, v_i ) \in E$. Undirected graphs can be either simple graphs or multigraphs. A simple undirected graph has no self-loops and at most a single edge connecting two vertices, whilst an undirected multigraph can have self-loops or multiple edges between two vertices. Graph encoding is possible for undirected multigraphs with self-loops but at most a single edge between two vertices.
-
+A graph $G$ will be represented as an $N=|V|$ qubit pure quantum state $\ket{G}$ such that
+
+\begin{equation}
+ K_G^v\ket{G} = \ket{G}, \;\; \forall v \in V
+ (\#eq:graph-encoding)
+\end{equation}
-
+where $K_G^v = \sigma_x^v\prod_{u \in N(v)}\sigma_z^u$, and $\sigma_x^u$ and $\sigma_z^u$ are the Pauli operators $\sigma_x$ and $\sigma_z$ applied to the $u^{th}$ qubit.
-
+Given a graph $G$ with vertices $V$ and edges $E$, take $N=|V|$ qubits in the $\ket{0}^{\otimes N}$ state and apply $H^{\otimes N}$, producing the $\ket{+}^{\otimes N}$ state, where $\ket{+} = \frac{\ket{0} + \ket{1}}{\sqrt{2}}$. Then apply a controlled-$Z$ gate between each pair of qubits connected by an edge in $E$. It is worth noting that two different graphs can produce the same graph state $\ket{G}$. In particular, if a graph state $\ket{\tilde{G}}$ can be obtained from a graph state $\ket{G}$ by applying only local Clifford group operators, the two graphs are said to be LC-equivalent. The work of [@graph_encoding] presents interesting applications of this type of encoding.
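The construction above, together with the stabilizer condition of Equation \@ref(eq:graph-encoding), can be checked numerically on a tiny example (our own Python sketch, assuming NumPy; function names are ours):

```python
import numpy as np
from functools import reduce

X, Z, I2 = np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.]), np.eye(2)

def graph_state(n, edges):
    """H^n on |0...0>, then a CZ on every edge (qubit 0 = leftmost)."""
    psi = np.ones(2 ** n) / np.sqrt(2 ** n)
    for u, v in edges:
        for idx in range(2 ** n):
            if (idx >> (n - 1 - u)) & 1 and (idx >> (n - 1 - v)) & 1:
                psi[idx] *= -1.0       # CZ flips the sign on |...1...1...>
    return psi

def stabilizer(n, v, neighbors):
    """K_G^v = X on qubit v, Z on each neighbor of v."""
    ops = [X if q == v else (Z if q in neighbors else I2) for q in range(n)]
    return reduce(np.kron, ops)

psi = graph_state(2, [(0, 1)])
assert np.allclose(stabilizer(2, 0, [1]) @ psi, psi)  # K_G^0 |G> = |G>
assert np.allclose(stabilizer(2, 1, [0]) @ psi, psi)  # K_G^1 |G> = |G>
```

Here the two-vertex graph with a single edge yields $\ket{G} = \tfrac{1}{2}(\ket{00}+\ket{01}+\ket{10}-\ket{11})$, which is indeed a $+1$ eigenstate of both stabilizers.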
-## Access models {#sec:quantum-memory-models}
+
-Now we focus on how to input classical data in quantum computers. As discussed in the Section \@ref(measuring-complexity) of the previous chapter, in quantum computing we often work in a **oracle model**, also called **black-box model** of quantum computation. This section is devoted to the formalization and implementation of some of the oracles that are commonly used to load classical data (numbers, vectors, matrices). The word "oracle", (a word referencing concepts in complexity theory), is used to imply that an application of the oracle has $O(1)$ cost, i.e. that at first sight, we do not care about the cost of implementing the oracle in our algorithm. A synonym of quantum oracle model is **quantum query model**, which stress the fact that we can use this oracle to perform queries. A query to an oracle is any unitary that performs the mapping:
+### One-hot encoding{#sec:onehot-encoding}
+Another possible way of encoding vectors as quantum states, introduced by [@mathur2022medical], is **one-hot amplitude encoding**, also known as **unary amplitude encoding**, which encodes a normalized vector $x \in \mathbb{C}^n$ onto $n$ qubits. The vector values $x_i \in \mathbb{C}$ are stored in the amplitudes of the $n$ computational basis states with exactly one $1$ and the rest $0$s. This corresponds to preparing the state:
\begin{equation}
-\ket{i}\ket{0}\mapsto \ket{i}\ket{x_i},
-(\#eq:querytooracle)
+\ket{x} = \frac{1}{||x||} \sum_{i=1}^n x_i \ket{e_i},
+(\#eq:one-hot-encoding)
+\end{equation}
+
+where, for $i \in [n]$, $\ket{e_i}$ denotes the computational basis state whose bitstring is $e_i = 0^{i-1}10^{n-i}$.
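The amplitudes of Equation \@ref(eq:one-hot-encoding) can be written out in a short numerical sketch (our own Python illustration, assuming NumPy; the helper name is ours):

```python
import numpy as np

def one_hot_encode(x):
    """Unary amplitude encoding: x_i on the basis state 0^{i-1} 1 0^{n-i}."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    psi = np.zeros(2 ** n, dtype=complex)
    for i, xi in enumerate(x):
        psi[1 << (n - 1 - i)] = xi   # bitstring with a single 1 at position i
    return psi / np.linalg.norm(x)

psi = one_hot_encode([3.0, 4.0])
assert np.isclose(psi[2], 0.6)   # |10> carries x_1 / ||x||
assert np.isclose(psi[1], 0.8)   # |01> carries x_2 / ||x||
```

Compared to amplitude encoding, this uses $n$ qubits instead of $\lceil \log n \rceil$, but only $n$ of the $2^n$ amplitudes are ever populated.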
+
+
+
+## Quantum memory{#sec:quantum-memory}
+
+Having seen possible ways to represent data on a quantum computer, we now take the first step toward understanding how to create quantum states that encode data with one of these encodings. The first step involves understanding quantum memory, which plays a key role in many quantum algorithms and problems, such as Grover's search, solving the dihedral hidden subgroup problem, collision finding, phase estimation for quantum chemistry, pattern recognition, machine learning algorithms, cryptanalysis, and state preparation.
+
+To work with quantum memory we need to define a quantum memory model of computation, which enables us to accurately calculate the complexity of quantum algorithms. In this framework we divide a quantum computation into a data pre-processing step and a computational step. Quantum memory allows us to assume that the pre-processed data can be easily accessed (as in classical computers). In this model, since the pre-processing is negligible and has to be performed only once, the complexity of a quantum algorithm is fully characterized by the computational step. This formalizes a quantum computation in two distinct components: a quantum processing unit and a quantum memory device. Two notable examples of quantum memory devices are the quantum random access memory ($\mathsf{QRAM}$) and the quantum random access gate ($\mathsf{QRAG}$). It is important to note that access to a quantum memory device is usually associated with fault-tolerant quantum computers.
+
+This section first introduces the quantum memory model of computation (Section \@ref(sec:quantum-memory-model)). This is followed by the formalization of a quantum computation in the memory model via a quantum processing unit ($\mathsf{QPU}$) and a quantum memory device ($\mathsf{QMD}$) (Section \@ref(sec:QPU-QMD)), where the $\mathsf{QRAM}$ and the $\mathsf{QRAG}$ are presented as possible implementations of the $\mathsf{QMD}$.
+
+
+
+
+
+
+### The quantum memory model of computation{#sec:quantum-memory-model}
+
+As discussed in Section \@ref(measuring-complexity) of the previous chapter, in quantum computing we often work in an oracle model, also called the black-box model of quantum computation. This section is devoted to the formalization of this model of computation. The word "oracle" (a word referencing concepts in complexity theory) is used to imply that an application of the oracle has $O(1)$ cost, i.e. we do not care about the cost of implementing the oracle in our algorithm. A synonym of quantum oracle model is quantum query model, which stresses the fact that we can only use the oracle to perform queries.
+
+To appreciate the potential of quantum algorithms, it is important to understand the quantum memory model, because we want to compare quantum algorithms with classical algorithms. Understanding the quantum memory model ensures that we do not favor either approach, allowing a fair evaluation of their performance across different computational models and memory architectures. Understanding classical memory is just as important for classical algorithms: memory limitations make the analysis of big datasets challenging. This limitation is exacerbated when the random-access memory is smaller than the dataset to analyze, as the bottleneck of the computation switches from the number of operations to the time needed to move data from disk to memory. Hence, algorithms with superlinear runtime (such as those based on linear algebra) become impractical for large input sizes.
+
+As we will formalize later, the runtime for analyzing a dataset represented by a matrix $A \in \mathbb{R}^{n \times d}$ using a quantum computer is given by the time to pre-process the data (i.e., creating quantum-accessible data structures) plus the runtime of the quantum algorithm. Importantly, the pre-processing step needs to be done only once, allowing one to run (different) quantum algorithms on the same matrix. We can see this pre-processing step as a way of encoding and/or storing the data: once the matrix is pre-processed, we can always retrieve the matrix in the original representation (i.e. it is a lossless encoding). This step bears some similarities with the process of loading data from disk into RAM. Therefore, because the pre-processing step is analyzed separately from the runtime, when we work with a quantum algorithm that has quantum access to some classical data, we have the following model in mind.
+
+```{definition, costing-of-quantum-memory-model, name="Costing in the quantum memory model"}
+An algorithm in the quantum memory model that processes a data-set of size $m$ has two steps:
+
+ * A pre-processing step with complexity $\widetilde{O}(m)$ that constructs an efficient quantum access to the data
+ * A computational step where the algorithm has quantum access to the data structures constructed in step 1.
+
+The complexity of the algorithm in this model is measured by the cost for step 2.
+```
+
+Let's consider an example. We will see that many of the quantum algorithms considered in this book have a computational complexity expressed (in number of operations of a certain kind) as some function of the matrix and the problem. Consider a classical algorithm with a runtime of $\widetilde{O} \left( \frac{\norm{A}_0\kappa(A)}{\epsilon^2}\log(1/\delta) \right)$ calls to the classical memory (and, coincidentally, CPU operations). Here $\epsilon$ is some approximation error in the quantity we are computing, $\kappa(A)$ is the condition number of the matrix, and $\delta$ is the failure probability. The quantum counterpart of this algorithm has a runtime of $O(\norm{A}_0)$ classical operations for the pre-processing and
+
+\begin{equation}
+\widetilde{O}\left(\operatorname{poly}(f(A)) \cdot \operatorname{poly}(\kappa(A)) \cdot \operatorname{poly}(1/\epsilon) \cdot \operatorname{poly}(\log(nd)) \cdot \operatorname{poly}(\log(1/\delta))\right)
+\end{equation}
+
+queries to the quantum memory (and, coincidentally, number of operations). Here, $f(A)$ represents some size-independent function of the matrix that depends on the properties of $A$, which can for instance be chosen to be $\|A\|_F$, the Frobenius norm of the matrix. Importantly, note that in the runtime of the quantum algorithm there is no dependence on $\|A\|_0$.
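As a back-of-the-envelope illustration of this comparison, the sketch below plugs concrete numbers into the two runtimes. It is only a toy cost model: the choice $f(A)=\|A\|_F$ and the assumption that all the polynomials are linear are ours, not a statement about any specific algorithm.

```python
import math

def classical_cost(nnz, kappa, eps, delta):
    # Illustrative classical cost: ||A||_0 * kappa / eps^2 * log(1/delta)
    return nnz * kappa / eps**2 * math.log(1 / delta)

def quantum_cost(frob, kappa, eps, n, d, delta):
    # Illustrative quantum cost, taking f(A) = ||A||_F and all the
    # poly() factors to be linear (an assumption for this sketch):
    # ||A||_F * kappa / eps * log(nd) * log(1/delta)
    return frob * kappa / eps * math.log(n * d) * math.log(1 / delta)

# A dense 10^4 x 10^4 matrix with entries of order 1:
n = d = 10**4
nnz = n * d              # ||A||_0 = 10^8
frob = math.sqrt(nnz)    # ||A||_F ~ 10^4
print(classical_cost(nnz, kappa=10, eps=0.01, delta=0.01))
print(quantum_cost(frob, kappa=10, eps=0.01, n=n, d=d, delta=0.01))
```

For these (arbitrary) parameters the quantum query count is several orders of magnitude below the classical one, precisely because it has no $\|A\|_0$ dependence; the pre-processing cost $\widetilde{O}(\norm{A}_0)$, discussed next, is not included here.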
+
+The first step, i.e., loading data (for example a matrix $A$) onto a quantum memory, has an additive price of $\widetilde{O}(\norm{A}_0)$ and is computationally easy to implement. In some cases this can be done on the fly, with only a single pass over the dataset, for example while receiving each of the rows of the matrix. For more complex choices of $f(A)$, the construction of the data structure needs only a few (constant) number of passes over the dataset. As the cost of pre-processing the data is negligible, we expect quantum data analysis to be faster than classical data analysis. However, there is no need to employ quantum data analysis for small datasets if classical data analysis is sufficient.
+
+In the quantum memory model, we assume the pre-processing step to be negligible in cost, and thereby claim that there is a significant practical speedup when using quantum algorithms compared to classical algorithms. However, a proper comparison for practical applications needs to include the computational cost of the loading process, which may or may not remove the exponential gap between the classical and the quantum runtime. Nevertheless, even when the pre-processing step is included, we expect the overall computational cost to largely favor the quantum procedure. This analysis can be done only with a proper understanding of the quantum memory model.
+
+Having a clear and deep understanding of the quantum memory model can help us understand the power and limitations of classical computers as well. The past few years saw a trend of works proposing "dequantizations" of quantum machine learning algorithms. These algorithms explore and sharpen the ideas of [@tang2018quantum], leveraging a classical data structure to perform importance sampling on the input data, so as to obtain classical algorithms with runtime polylogarithmic in the size of the input. This data structure is very similar to the one used in many quantum machine learning algorithms (see Section \@ref(sec:implementation-KPtrees)). As a result, many quantum algorithms which had an exponential separation from their classical counterparts now have at most a polynomial speedup over the classical algorithm. However, these classical algorithms have a worse dependence on other parameters (like the condition number, Frobenius norm, rank, and so on) that makes them disadvantageous in practice (i.e., they are slower than the fastest classical randomized algorithms [@arrazola2020quantum]). With that said, having a small polynomial speedup is not something to be critical about: even constant speedups matter a lot in practice! Overall, dequantizations and polynomial speedups highlight the importance of clearly understanding the techniques behind loading classical data in quantum computers.
+
+### The quantum processing unit and quantum memory device {#sec:QPU-QMD}
+
+In this section, we formally define a model of a quantum computer with quantum access to memory. We can intuitively understand this model by separating the available Hilbert space into two parts: a part dedicated to computing, the Quantum Processing Unit ($\mathsf{QPU}$), and a part dedicated to storing, the Quantum Memory Device ($\mathsf{QMD}$).
+
+The qubits which comprise the $\mathsf{QPU}$ are assigned to either an input register or a workspace register, whilst the qubits which comprise the $\mathsf{QMD}$ are assigned to either an ancillary register or a memory register. Two other registers, the address register and the target register, are shared by the $\mathsf{QPU}$ and $\mathsf{QMD}$ and allow for communication between the two Hilbert spaces. A depiction of the architecture of a $\mathsf{QPU}$ with access to a $\mathsf{QMD}$ can be seen in Figure \@ref(fig:quantum-architecture). Before defining a model of a quantum computer with quantum access to memory, we will first formally define a computation with only the quantum processing unit $\mathsf{QPU}$.
+
+(ref:allcock2023quantum) [@allcock2023quantum]
+
+```{definition, QPU, name="Quantum Processing Unit (ref:allcock2023quantum)"}
+A Quantum Processing Unit ($\mathsf{QPU}$) of size $m$ is defined as a tuple $(\mathtt{I}, \mathtt{W},\mathcal{G})$ consisting of
+
+- an $m_{\mathtt{I}}$-qubit Hilbert space called \emph{input register} $\mathtt{I}$;
+- an $(m-m_{\mathtt{I}})$-qubit Hilbert space called \emph{workspace} $\mathtt{W}$;
+- a constant-size universal gate set $\mathcal{G}\subset\mathcal{U}(\mathbb{C}^{4\times 4})$.
+
+The qubits in the workspace $\mathtt{W}$ are called ancillary qubits or simply ancillae. An input to the $\mathsf{QPU}$, or quantum circuit, is a tuple $(T,|\psi_{\mathtt{I}}\rangle,C_1,\dots,C_T)$ where $T\in\mathbb{N}$, $|\psi_{\mathtt{I}}\rangle\in\mathtt{I}$, and, for each $t\in\{1,\dots,T\}$, $C_t\in\mathcal{I}(\mathcal{G})$ is an instruction from a set $\mathcal{I}(\mathcal{G})$ of possible instructions. Starting from the state $|\psi_0\rangle := |\psi_\mathtt{I}\rangle|0\rangle_{\mathtt{W}}^{\otimes (m-m_\mathtt{I})}$, at each time step $t\in\{1,\dots, T\}$ we obtain the state $|\psi_t\rangle = C_t|\psi_{t-1}\rangle\in\mathtt{I}\otimes\mathtt{W}$. The instruction set $\mathcal{I}(\mathcal{G})\subset\mathcal{U}(\mathbb{C}^{2^m\times 2^m})$ consists of all $m$-qubit unitaries on $\mathtt{I}\otimes\mathtt{W}$ of the form
+
+\begin{equation}
+\prod_{i=1}^k (\mathsf{U}_i)_{\to I_i}
\end{equation}
-where the $x_i$ could be a binary encoding or amplitude encoding of something. In the following image we have schematized two different kinds of access model that are commonly used in literature. In the first case we use a binary encoding (for numbers), in the second one we use an amplitude encoding (for vectors and matrices).
+for some $k\in\mathbb{N}$, $\mathsf{U}_1,\dots,\mathsf{U}_k\in\mathcal{G}$ and pair-wise disjoint non-repeating sequences $I_1,\dots,I_k\in[m]^{\leq 2}$ of at most $2$ elements. We say that $\sum_{i=1}^k |I_i|$ is the \emph{size} of the corresponding instruction. We say that $T$ is the \emph{depth} of the input to the $\mathsf{QPU}$, while its \emph{size} is the sum of the sizes of the instructions $C_1,\dots,C_T$.
+```
+
+Note that in this definition the circuit size differs from the standard notion of circuit size, which is the number of selected gates from $\mathcal{G}$, up to a factor of at most $2$.
+
-```{r, echo=FALSE, fig.width=6, fig.cap="This table describes different types of oracles. An oracle for numbers gives you quantum access to elements in a list of numbers. This oracle can be implemented in at least two ways: either with a QRAM, or with particular circuits. An oracle for getting amplitude encoding is usually called quantum sampling access, needs a quantum oracle for numbers to be implemented.", }
-knitr::include_graphics("images/oracle-models.png")
+```{exercise, standardzied}
+Can you explain why the circuit size defined above differs from the standard notion of circuit size by up to a factor of at most $2$?
```
-An oracle for numbers gives you quantum access to elements in a list of numbers, as we describe in Section \@ref(sec:numbers). This oracle can be implemented in at least two ways: either with a QRAM (see Section \@ref(subsec:qram-model), or with particular circuits (see next Section \@ref(sec:accessmodel-circuits) ). An oracle for getting amplitude encoding, which is more and more often called quantum sampling access for reasons that will be evident later (see Section \@ref(subsec-stateprep-matrices) ) needs a quantum oracle for numbers to be implemented."
-## Implementations
+Moreover, in this framework, the locations of the address and target registers are fixed. One could imagine a more general setting where the address and target registers are freely chosen from the workspace. This case can be handled by this model with minimal overhead, e.g. by performing $\ell$-$\mathsf{SWAP}$ gates to move the desired workspace qubits into the address or target register locations.
-### Quantum memory models {#subsec:qram-model}
+Adding access to a $\mathsf{QMD}$ changes how we define the model of computation. In practice, a call to the $\mathsf{QMD}$ sees the address register selecting a unitary from a set of unitaries $\mathcal{V}$ and applying it to both the target and memory register. It is important to stress that even though a call to the $\mathsf{QMD}$ might require gates from a universal gate set, the underlying quantum circuit implementing such a call is \emph{fixed}, i.e., does not change throughout the execution of a quantum algorithm by the $\mathsf{QPU}$, or even between different quantum algorithms. Below we find the full definition of a quantum computation of a $\mathsf{QPU}$ with access to a $\mathsf{QMD}$.
-#### The QRAM
-Along with a fully fledged quantum computer, it is often common to assume that we access to a quantum memory, i.e. a classical data structure that store classical information, but that is able to answer queries in quantum superposition. This model is commonly called the **QRAM model** (and is a kind of query model). There is a catch. As we will see in greater details soon, the task of building the data structure classically requires time that is linear (up to polylogarithmic factors) in the dimension of the data (this observation is better detailed in definition \@ref(def:QRAM-model) ). If we want to have quantum access to a dense matrix $M \in \mathbb{R}^{n \times d}$ the preprocessing time *mush* be at least $O(nd \log (nd))$, as we need to do some computation to create this data structure. To stress more the fact that we are linear in the effective number of elements contained in the matrix (which can often be sparse) can write that the runtime for the preprocessing is $O(\norm{A}_0\log(nd))$. The name QRAM is meant to evoke the way classical RAM works, by addressing the data in memory using a tree structure. Note that sometimes, QRAM goes under the name of QROM, as actually it is not something that can be written during the runtime of the quantum algorithm, but just queried, i.e. read. Furthermore, a QRAM is said to be *efficient* if can be updated by adding, deleting, or modifying an entry in polylogarithmic time w.r.t the size of the data it is storing. Using the following definition, we can better define the computational model we are working with. Remember that assuming to have access to a large QRAM in your algorithms is something that is often associated with more long-term quantum algorithms, so it is a good idea to limit as much as possible the dependence on QRAM on your quantum algorithms.
+```{definition, QPUQMD, name="Quantum Processing Unit and Quantum Memory Device (ref:allcock2023quantum)"}
+We consider a model of computation comprising a Quantum Processing Unit ($\mathsf{QPU}$) of size $\poly\log(n)$ and a Quantum Memory Device ($\mathsf{QMD}$) of $n$ memory registers, each of $\ell$ qubits (for $n$ a power of $2$). A $\mathsf{QPU}$ and a $\mathsf{QMD}$ are collectively defined by a tuple $(\mathtt{I}, \mathtt{W}, \mathtt{A}, \mathtt{T}, \mathtt{Aux}, \mathtt{M}, \mathcal{G}, \mathsf{V})$ consisting of
-
+- two $(\operatorname{poly}\log{n})$-qubit Hilbert spaces called \emph{input register} $\mathtt{I}$ and \emph{workspace} $\mathtt{W}$ owned solely by the $\mathsf{QPU}$;
+- a $(\log{n})$-qubit Hilbert space called \emph{address register} $\mathtt{A}$ shared by both $\mathsf{QPU}$ and $\mathsf{QMD}$;
+- an $\ell$-qubit Hilbert space called \emph{target register} $\mathtt{T}$ shared by both $\mathsf{QPU}$ and $\mathsf{QMD}$;
+- a $(\poly{n})$-qubit Hilbert space called \emph{auxiliary register} $\mathtt{Aux}$ owned solely by the $\mathsf{QMD}$;
+- an $n\ell$-qubit Hilbert space called \emph{memory} $\mathtt{M}$ comprising $n$ registers $\mathtt{M}_0, \ldots, \mathtt{M}_{n-1}$, each containing $\ell$ qubits, owned solely by the $\mathsf{QMD}$;
+- a constant-size universal gate set $\mathcal{G}\subset\mathcal{U}(\mathbb{C}^{4\times 4})$;
+- a function $\mathsf{V} : [n] \to \mathcal{V}$, where $\mathcal{V}\subset \mathcal{U}(\mathbb{C}^{2^{2\ell}\times 2^{2\ell}})$ is a $O(1)$-size subset of $2\ell$-qubit gates.
-(ref:giovannetti2008quantum) [@giovannetti2008quantum]
+The qubits in $\mathtt{W}$, $\mathtt{A}$, $\mathtt{T}$, and $\mathtt{Aux}$ are called ancillary qubits or simply ancillae. An input to the $\mathsf{QPU}$ with a $\mathsf{QMD}$, or quantum circuit, is a tuple $(T,|\psi_\mathtt{I}\rangle,|\psi_{\mathtt{M}}\rangle,C_1,\dots,C_T)$ where $T\in\mathbb{N}$, $|\psi_{\mathtt{I}}\rangle\in\mathtt{I}$, $|\psi_{\mathtt{M}}\rangle\in\mathtt{M}$, and, for each $t\in\{1,\dots,T\}$, $C_t\in\mathcal{I}(\mathcal{G},\mathsf{V})$ is an instruction from a set $\mathcal{I}(\mathcal{G},\mathsf{V})$ of possible instructions. The instruction set $\mathcal{I}(\mathcal{G},\mathsf{V})$ is the set $\mathcal{I}(\mathcal{G})$ acting on $\mathtt{I}\otimes\mathtt{W}\otimes\mathtt{A}\otimes\mathtt{T}$ augmented with the call-to-the-$\mathsf{QMD}$ instruction that implements the unitary
+
+\begin{equation}
+|i\rangle_{\mathtt{A}}|b\rangle_{\mathtt{T}}|x_i\rangle_{\mathtt{M}_i}|0\rangle^{\otimes \poly{n}}_{\mathtt{Aux}} \mapsto|i\rangle_{\mathtt{A}}\big(\mathsf{V}(i)|b\rangle_{\mathtt{T}}|x_i\rangle_{\mathtt{M}_i}\big)|0\rangle^{\otimes \poly{n}}_{\mathtt{Aux}}, \qquad \forall i\in[n],b,x_i\in\{0,1\}^\ell.
+\end{equation}
+
+Starting from $|\psi_0\rangle|0\rangle^{\otimes \poly{n}}_{\mathtt{Aux}}$, where $|\psi_0\rangle := |\psi_\mathtt{I}\rangle|0\rangle^{\otimes\poly\log{n}}_{\mathtt{W}}|0\rangle_{\mathtt{A}}^{\otimes \log{n}}|0\rangle_{\mathtt{T}}^{\otimes \ell}|\psi_\mathtt{M}\rangle$, at each time step $t\in\{1,\dots, T\}$ we obtain the state $|\psi_t\rangle|0\rangle^{\otimes \poly{n}}_{\mathtt{Aux}} = C_t(|\psi_{t-1}\rangle|0\rangle^{\otimes \poly{n}}_{\mathtt{Aux}})$, where $|\psi_t\rangle\in \mathtt{I}\otimes \mathtt{W}\otimes \mathtt{A}\otimes \mathtt{T}\otimes \mathtt{M}$.
-```{definition, qram, name="Quantum Random Access Memory (ref:giovannetti2008quantum)"}
-A quantum random access memory is a device that stores indexed data $(i,x_i)$ for $i \in [n]$ and $x_i \in \R$ (eventually truncated with some bits of precision). It allows query in the form $\ket{i}\ket{0}\mapsto \ket{i}\ket{x_i}$, and has circuit depth $O(polylog(n))$.
```
-We say that a dataset is efficiently loaded in the QRAM, if the size of the data structure is linear in the dimension and number of data points and the time to enter/update/delete an element is polylogarithmic in the dimension and number of data points. More formally, we have the following definition (the formalization is taken from [@kerenidis2017quantumsquares] but that's folklore knowledge in quantum algorithms).
-```{definition, QRAM-model, name="QRAM model"}
-An algorithm in the QRAM data structure model that processes a data-set of size $m$ has two steps:
+```{r, quantum-architecture, echo=FALSE, out.width="50%", fig.cap="The architecture of a Quantum Processing Unit ($\\mathsf{QPU}$) with access to a quantum memory device ($\\mathsf{QMD}$). The $\\mathsf{QPU}$ is composed of a $\\poly \\log(n)$-qubit input register $\\mathtt{I}$ and workspace $\\mathtt{W}$. The $\\mathsf{QMD}$ is composed of an $n\\ell$-qubit memory array $\\mathtt{M}$, composed of $n$ memory cells each of $\\ell$ qubits, and a $\\poly(n)$-qubit auxiliary register $\\mathtt{Aux}$. Two registers, the $\\log(n)$-qubit address register $\\mathtt{A}$ and an $\\ell$-qubit target register $\\mathtt{T}$, are shared between the $\\mathsf{QPU}$ and the $\\mathsf{QMD}$."}
+knitr::include_graphics("algpseudocode/quantum_architecture.png")
+```
- * A pre-processing step with complexity $\widetilde{O}(m)$ that constructs efficient QRAM data structures for storing the data.
- * A computational step where the quantum algorithm has access to the QRAM data structures constructed in step 1.
+This model can be seen as a refined version of the one described in [@buhrman2022memory], where the authors divide the qubits of a quantum computer into work and memory qubits. Given $M$ memory qubits, their workspace consists of $O(\log M)$ qubits, of which the address and target qubits are always the first $\lceil\log M\rceil + 1$ qubits. However, address and target qubits are not considered to be shared by the $\mathsf{QMD}$, and there is no mention of ancillary qubits mediating a call to the $\mathsf{QMD}$. The inner structure of the $\mathsf{QMD}$ is abstracted away by assuming access to the unitary of a $\mathsf{QRAG}$ (see Definition \@ref(def:qrag) later). This model, in contrast, "opens" the quantum memory device, and allows for general fixed unitaries, including $\mathsf{QRAM}$ and $\mathsf{QRAG}$.
-The complexity of the algorithm in this model is measured by the cost for step 2.
+In addition, this model does not include measurements. These can easily be performed on the output state $\ket{\psi_T}$ if need be. Furthermore, the position of the qubits is not fixed within the architecture, allowing for long-range interactions through, for example, multi-qubit entangling gates. This feature is not always available in the real world, since some quantum devices, such as superconducting quantum computers, don't allow for long-range interactions between qubits. For a model of computation which takes into consideration physically realistic device interactions we refer to the work of [@Beals_2013].
+
+We stress that a call to the $\mathsf{QMD}$ is defined by the function $\mathsf{V}$, and that a quantum memory device is defined by the unitary that it implements. In many applications, one is interested in some form of reading a specific entry from the memory, which corresponds to the special case where the $\mathsf{V}(i)$ unitaries are made of controlled single-qubit gates, and to which the traditional $\mathsf{QRAM}$ belongs.
+
+#### The QRAM{#sec:qram}
+
+
+A type of $\mathsf{QMD}$ of particular interest is the $\mathsf{QRAM}$, the quantum equivalent of a classical Random Access Memory (RAM), which stores classical or quantum data and allows for superposition-based queries. More specifically, a $\mathsf{QRAM}$ is a device comprising a memory register that stores the data, an address register that points to the memory cells to be addressed, and a target register into which the content of the addressed memory cells is copied. If necessary, it also includes an auxiliary register supporting the overall operation, which is reset to its initial state at the end of the computation. Formally, we define it as:
+
+```{definition, qram, name="Quantum Random Access Memory"}
+Let $n\in\mathbb{N}$ be a power of $2$ and $f(i) = \mathsf{X}$ for all $i\in[n]$. A \emph{quantum random access memory} $\mathsf{QRAM}$ of memory size $n$ is a $\mathsf{QMD}$ with $\mathsf{V}(i) = \mathsf{C}_{\mathtt{M}_i}$-$\mathsf{X}_{\to\mathtt{T}}$. Equivalently, it is a $\mathsf{QMD}$ that maps
+
+\begin{equation}
+\ket{i}_{\mathtt{A}}\ket{b}_{\mathtt{T}}\ket{x_0,\dots,x_{n-1}}_{\mathtt{M}} \mapsto \ket{i}_{\mathtt{A}}(f(i)^{x_i}\ket{b}_{\mathtt{T}}) \ket{x_0,\dots,x_{n-1}}_{\mathtt{M}} \quad\quad \forall i\in[n], b,x_0,\dots,x_{n-1}\in\{0,1\}.
+\end{equation}
```
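On computational basis states, the $\mathsf{QRAM}$ mapping above is easy to simulate classically. The following sketch (in Python, with single-bit memory cells, i.e., $\ell = 1$) only illustrates the action of the map on basis states; it is not an implementation of a quantum memory.

```python
def qram_query(i, b, mem):
    """Action of a QRAM call on the computational basis state
    |i>_A |b>_T |x_0,...,x_{n-1}>_M: the target picks up x_i via XOR,
    while the memory register is left unchanged."""
    return i, b ^ mem[i], list(mem)
```

For example, `qram_query(2, 1, [0, 1, 1, 0])` returns `(2, 0, [0, 1, 1, 0])`, since $b \oplus x_2 = 1 \oplus 1 = 0$; note that two identical queries compose to the identity, as the map is an involution.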
-Equipped with this definition we will see how we can load all sorts of data in the quantum computer. For example, we can formalize what it means to have quantum query access to a vector $x \in \mathbb{R}^N$ stored in the QRAM.
+A unitary performing a similar mapping often goes under the name of quantum read-only memory ($\mathsf{QROM}$). The difference with a $\mathsf{QRAM}$ is that this term stresses that the data cannot be added or modified. Oftentimes, the authors using this term are considering a circuit, as described in Section \@ref(sec:multiplexer).
+
+
+Instead, assuming access to a $\mathsf{QRAM}$ requires a protocol for pre-processing the data and creating a data structure in time which is asymptotically linear in the data size (as indicated by Definition \@ref(def:costing-of-quantum-memory-model)).
+
+Equipped with Definition \@ref(def:qram), we can formalize what it means to have quantum query access, which is also referred to as $\mathsf{QRAM}$ access, or simply as having "$x$ in the $\mathsf{QRAM}$". We formalize the case of a vector $x \in (\{0,1\}^m)^N$ stored in the $\mathsf{QRAM}$.
```{definition, quantum-query-access-vector, name="Quantum query access to a vector stored in the QRAM"}
-Given $x \in (\{0,1\}^m)^N$, we say that we have quantum query access to $x$ stored in the QRAM if we have access to a unitary operator $U_x$ such that $U_x\ket{i}\ket{b} = \ket{i}\ket{b \oplus x_i}$ for any bit string $b\in\{0,1\}^m$. One application of $U_x$ costs $O(1)$ operations.
+Given $x \in (\{0,1\}^m)^N$, we say that we have quantum query access to $x$ stored in the $\mathsf{QRAM}$ if we have access to a unitary operator $U_x$ such that $U_x\ket{i}\ket{b} = \ket{i}\ket{b \oplus x_i}$ for any bit string $b\in\{0,1\}^m$.
```
-Other common names for this oralce is "QRAM access", or we simply say that "$x$ is in the QRAM". Note that this definition is very similar to Definition \@ref(def:quantum-oracle-access). The difference is that in the case of most boolean functions we know how to build an efficient classical (and thus quantum) boolean circuit for calculating the function's value. If we have just a list of numbers, we need to resort to a particular hardware device, akin to a classical memory, which further allows query in superposition. Most importantly, when using this oracle in our algorithm, we consider the cost of a query to a data structure of size $N$ to be $O(polylog(N))$. We will see in Section \@ref(sec:qramarchitectures) how, even if the number of quantum gates is $N$, they can be arranged and managed in a way such that the depth and the execution time sill remains polylogarithmic.
+In practical terms, when analyzing the complexity of a quantum algorithm with a $\mathsf{QRAM}$ we need to take into consideration three factors: the circuit size of the quantum algorithm as introduced in Definition \@ref(def:QPU), the number of queries to the $\mathsf{QRAM}$, and the complexity of each $\mathsf{QRAM}$ query. We emphasize that the complexity of a query to the $\mathsf{QRAM}$ is still an open question. Details of some possible implementations are discussed in Section \@ref(sec:implementations).
+
+
-Another gate that is standard (but less frequent) in literature is the Quantum Random Access Gate. This gate was introduced in the paper of [@ambainis2007quantumdistinctness]. Given a quantum state that holds a string $z$ of bits (or word of $m$ bits), this gate swaps an $m$-bit target register $\ket{b}$ with the $i$-th position of the string $z_i$.
+
+
+#### The QRAG{#sec:qrag}
+
+Another type of quantum memory device is the quantum random access gate ($\mathsf{QRAG}$). This quantum memory device was introduced in [@ambainis2007quantumdistinctness] and performs a $\mathsf{SWAP}$ gate between the target register and the portion of the memory register specified by the address register. The $\mathsf{QRAG}$ finds applications in quantum algorithms for element distinctness, collision finding, and random walks on graphs. The formal definition is:
```{definition, qrag, name="Quantum Random Access Gate"}
-Given $x \in (\{0,1\}^m)^N = x_0,x_1, \dots x_N$ we say that we have access to a quantum random access gate if we have a unitary operator $U_x$ such that $U_x \ket{i}\ket{b}\ket{x}= \ket{i}\ket{z_i}\ket{z_0,x_1,\dots,x_{i-1},b,x_{i+1}, \dots x_M}$.
+Let $n\in\mathbb{N}$ be a power of $2$. A \emph{quantum random access gate} $\mathsf{QRAG}$ of memory size $n$ is a $\mathsf{QMD}$ with $\mathsf{V}(i) = \mathsf{SWAP}_{\mathtt{M}_i\leftrightarrow \mathtt{T}}$, $\forall i\in[n]$. Equivalently, it is a $\mathsf{QMD}$ that maps
+
+\begin{equation}
+ |i\rangle_{\mathtt{A}}|b\rangle_{\mathtt{T}}|x_0,\dots,x_{n-1}\rangle_{\mathtt{M}} \mapsto |i\rangle_{\mathtt{A}}|x_i\rangle_{\mathtt{T}} |x_0,\dots,x_{i-1},b,x_{i+1},\dots,x_{n-1}\rangle_{\mathtt{M}} \quad \forall i\in[n], b,x_0,\dots,x_{n-1}\in\{0,1\}.
+\end{equation}
```
-It is natural to ask if this model is more or less powerful than a QRAM model. It turns out that with a QRAG you can have a QRAM. This is the sketch of the proof. Set $b=0$ in the second quantum register, and adding another ancillary register, we have:
+It turns out that a $\mathsf{QRAM}$ can be simulated with a $\mathsf{QRAG}$, while simulating a $\mathsf{QRAG}$ with a $\mathsf{QRAM}$ additionally requires single-qubit operations on the memory register (which are not present in the model of computation of Definition \@ref(def:QPUQMD)). We present the proof for the simulation of a $\mathsf{QRAM}$ with a $\mathsf{QRAG}$ and leave the opposite direction as an exercise.
-$$U_x \ket{i}\ket{0}\ket{b}\ket{x}= \ket{i}\ket{0}\ket{x_i}\ket{x_0,x_1,\dots,x_{i-1},b,x_{i+1}, \dots x_M}$$ now we copy with a CNOT the register $x_i$ in an ancilla register, i.e. we perform this mapping: $$\ket{i}\ket{0}\ket{x_i}\ket{x_0,x_1,\dots,x_{i-1},b,x_{i+1}, \dots x_M} \mapsto \ket{i}\ket{x_i}\ket{ x_i}\ket{x_0,x_1,\dots,x_{i-1},b,x_{i+1}, \dots x_M}$$
+```{theorem, sim-qram-with-qrag, name="Simulating QRAM with QRAG."}
+A query to a $\mathsf{QRAM}$ of memory size $n$ can be simulated using $2$ queries to a $\mathsf{QRAG}$ of memory size $n$, $3$ two-qubit gates, and $1$ workspace qubit.
+```
-and lastly we undo the query to the QRAG gate to obtain $\ket{i}\ket{x_i}\ket{ b}\ket{x}$ This shows that with access to a gate that performs QRAG queries, we can symulate a QRAM query. We will see in Section \@ref(sec:qramarchitectures) how the hardware architectures for performing a QRAG gate do not differ much from the architectures required to implement a QRAM gate (Thanks to Patrick Rebentrosts and our group meetings at CQT for useful discussions).
+```{proof}
+Start from the state $\ket{i}_{\mathtt{A}}\ket{0}_{\mathtt{Tmp}}\ket{b}_{\mathtt{T}}\ket{x_0,\dots,x_{n-1}}_{\mathtt{M}}$, where $\mathtt{Tmp}$ is an ancillary workspace qubit. Apply a $\mathsf{SWAP}_{\mathtt{T} \leftrightarrow \mathtt{Tmp}}$ gate to obtain $\ket{i}_{\mathtt{A}}\ket{b}_{\mathtt{Tmp}}\ket{0}_{\mathtt{T}}\ket{x_0,\dots,x_{n-1}}_{\mathtt{M}}$. A query to the $\mathsf{QRAG}$ then leads to $\ket{i}_{\mathtt{A}}\ket{b}_{\mathtt{Tmp}}\ket{x_i}_{\mathtt{T}}\ket{x_0,\dots,x_{i-1},0,x_{i+1},\dots,x_{n-1}}_{\mathtt{M}}$. Apply a $\mathsf{C}_{\mathtt{T}}$-$\mathsf{X}_{\rightarrow \mathtt{Tmp}}$ gate from register $\mathtt{T}$ to register $\mathtt{Tmp}$, which maps $\ket{b}_{\mathtt{Tmp}}$ to $\ket{b \oplus x_i}_{\mathtt{Tmp}}$, and query the $\mathsf{QRAG}$ again to restore the memory register. A final $\mathsf{SWAP}_{\mathtt{T} \leftrightarrow \mathtt{Tmp}}$ gate yields the desired state $\ket{i}_{\mathtt{A}}\ket{b \oplus x_i}_{\mathtt{T}}\ket{x_0,\dots,x_{n-1}}_{\mathtt{M}}$ after discarding the ancillary qubit, which is back in the state $\ket{0}$.
+```
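The proof can be checked mechanically on computational basis states. Under the same toy convention as before ($\ell = 1$, basis states only), the sketch below simulates a $\mathsf{QRAM}$ query using exactly two $\mathsf{QRAG}$ calls, two $\mathsf{SWAP}$s, and one CNOT:

```python
def qrag(i, b, mem):
    """QRAG call on a basis state: swap the target with memory cell i."""
    mem = list(mem)
    mem[i], b = b, mem[i]
    return b, mem

def qram_via_qrag(i, b, mem):
    """Simulate |i>|b>|x> -> |i>|b XOR x_i>|x> following the proof:
    SWAP(T, Tmp), QRAG, CNOT(T -> Tmp), QRAG, SWAP(T, Tmp)."""
    t, tmp = b, 0              # target qubit and ancillary workspace qubit
    t, tmp = tmp, t            # SWAP: target now holds 0, tmp holds b
    t, mem = qrag(i, t, mem)   # 1st QRAG: t = x_i, memory cell i becomes 0
    tmp ^= t                   # CNOT from T to Tmp: tmp = b XOR x_i
    t, mem = qrag(i, t, mem)   # 2nd QRAG: memory restored, t = 0
    t, tmp = tmp, t            # SWAP back: t = b XOR x_i, tmp = 0 again
    return i, t, mem
```

Running `qram_via_qrag` over all addresses and target values reproduces the $\mathsf{QRAM}$ map of Definition \@ref(def:qram), with the memory register unchanged and the ancilla returned to $0$.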
-##### Memory compression in sparse QRAG models
+```{exercise}
+Assuming that single-qubit gates can be freely applied onto the memory register $\mathtt{M}$ of any $\mathsf{QRAM}$, then show that a $\mathsf{QRAG}$ of memory size $n$ can be simulated using $3$ queries to a $\mathsf{QRAM}$ of memory size $n$ and $2(n+1)$ Hadamard gates.
+```
-It has been observed that the memory dependence of a few algorithms - which are sparse in certain sense which we will make more precise later - can be compressed. The sparsity of these algorithm consist in a memory of size $M$ which is used only in quantum states whose amplitude is non-zero only in computational basis of Hamming weight bounded by $m \lll M$. This idea was historically first proposed by Ambainis in [@ambainis2007quantumdistinctness], elaborated further in [@jeffery2014frameworks], [@bernstein2013quantum], and finally formalized as we present it here in [@buhrman2022memory].
+
+
-We split the qubits of our quantum computer as organized in two parts: $M$ memory qubits and $W=O(\log M)$ working qubits. We are only allowed to apply quantum gates in the working qubits only, and we apply the QRAG gate using always in a fixed position (for example, the first $\log M$ qubits), and the target register used for the swap is the $\log M+1$ qubit\footnote{Confusingly, the authros of [@buhrman2022memory] decided to call a machine that works under this model as QRAM: quantum random-access machine.}.
+
-The formal definition of an $m$-sparse algorithm is the following.
+#### Memory compression in sparse QRAG models{#sec:memory-compression}
+Sparsity is a common assumption when developing quantum algorithms, since it significantly simplifies computations. Applying it to compress quantum algorithms with access to a $\mathsf{QRAG}$ was first proposed by [@ambainis2007quantumdistinctness], elaborated further in [@jeffery2014frameworks] and [@bernstein2013quantum], and finally formalized in [@buhrman2022memory].
+Informally, a quantum algorithm is sparse if its queries to a $\mathsf{QRAG}$ of size $M$ are made with quantum states of low Hamming weight. More formally, we start by recalling that in the computational model of Definition \@ref(def:QPUQMD) we split the qubits into several registers, including an $M$-qubit memory register and a $W$-qubit working register. If throughout the computation the queries to the $\mathsf{QRAG}$ are made using only quantum states supported on basis vectors of [Hamming weight](https://en.wikipedia.org/wiki/Hamming_weight) (the number of $1$'s in the bit string) at most $m \ll M$, then the algorithm is said to be $m$-sparse. The trick to making this definition useful is realizing that the $\mathsf{SWAP}$ gate can be used to exchange states between the working register and the low-Hamming-weight states of the memory.
+
+Confusingly, the authors of [@buhrman2022memory] call a machine working under this model a $\mathsf{QRAM}$: a quantum random-access machine. The formal definition of an $m$-sparse quantum algorithm with a $\mathsf{QRAG}$ is the following:
(ref:buhrman2022memory) [@buhrman2022memory]
```{definition, sparseQRAGalgorithm, name="Sparse QRAG algorithm (ref:buhrman2022memory)"}
-Let $\mathcal{C} = (n,T, W, M, C_1, \ldots, C_T)$ be a QRAG algorithm using time $T$, $W$ work qubits, and $M$ memory qubits. Then, we say that $C$ is $m$-sparse, for some $m \le M$, if at every time-step $t \in \{0, \ldots, T\}$ of the algorithm, the state of the memory qubits is supported on computational basis vectors of Hamming weight $\le m$. I.e., we always have
-\[
-\ket{\psi_t} \in \text{spam} \left( \ket{u}\ket{v} \;\middle|\; u \in \{0,1\}^W, v \in \binom{[M]}{\le m} \right)
-\]
+Let $\mathcal{C} = (n,T, W, M, C_1, \ldots, C_T)$ be a $\mathsf{QRAG}$ algorithm using time $T$, $W$ work qubits, and $M$ memory qubits. Then, we say that $\mathcal{C}$ is $m$-sparse, for some $m \le M$, if at every time-step $t \in \{0, \ldots, T\}$ of the algorithm, the state of the memory qubits is supported on computational basis vectors of Hamming weight $\le m$. That is, we always have
+
+\begin{equation}
+\ket{\psi_t} \in \text{span} \left( \ket{u}\ket{v} \;\middle|\; u \in \{0,1\}^W, v \in \binom{[M]}{\le m} \right)
+\end{equation}
+
In other words, if $\ket{\psi_t}$ is written in the computational basis:
-\[
+\begin{equation}
\ket{\psi_t}=\sum_{u \in \{0,1\}^W} \sum_{v \in \{0,1\}^M} \alpha^{(t)}_{u,v} \cdot \underbrace{\ket{u}}_{\text{Work qubits}}\otimes \underbrace{\ket{v}}_{\text{Memory qubits}},
-\]
+\end{equation}
+
then $\alpha^{(t)}_{u,v} = 0$ whenever $|v| > m$, where $|v|$ is the Hamming weight of $v$.
```
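The support condition of this definition can be checked numerically for a given state vector. The sketch below adopts the (arbitrary) convention that the work qubits are the most significant bits of the basis-state index, so the memory content is the $M$ least significant bits; this indexing convention is ours, not part of the definition.

```python
def is_m_sparse(amplitudes, W, M, m, tol=1e-12):
    """Check that a state on W work + M memory qubits is supported only on
    computational basis vectors whose memory part has Hamming weight <= m.
    Convention (an assumption): index = u * 2**M + v, with u the work bits
    and v the memory bits."""
    assert len(amplitudes) == 2 ** (W + M)
    for idx, amp in enumerate(amplitudes):
        if abs(amp) > tol:
            v = idx & ((1 << M) - 1)          # memory part of the basis state
            if bin(v).count("1") > m:
                return False
    return True
```

For instance, with $W = 1$ and $M = 2$, a state supported on $\ket{0}\ket{00}$ and $\ket{1}\ket{01}$ is $1$-sparse, while adding any amplitude on a memory string of Hamming weight $2$ (such as $11$) breaks $1$-sparsity.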
+Now that we have seen sparse $\mathsf{QRAG}$ algorithms, we can look at how memory compression is performed. In particular, any $m$-sparse quantum algorithm running in time $T$ and using $M$ memory qubits can be simulated, up to an additional error $\epsilon$, by a quantum algorithm running in time $O(T \log (\frac{T}{\epsilon})\log(M))$ using $O(m\log(M))$ qubits.
-The proof of the following theorem (which the reader can use withouth looking at the details of the proof) goes as follow:
+```{theorem, name="Memory compression for m-sparse QRAG algorithms (ref:buhrman2022memory)"}
+Let $T$, $W$, $m < M = 2^\ell$ be natural numbers, with $M$ and $m$ both powers of $2$, and let $\epsilon \in [0, 1/2)$. Suppose we are given an $m$-sparse $\mathsf{QRAG}$ algorithm using time $T$, $W$ work qubits and $M$ memory qubits, that computes a Boolean relation $F$ with error $\epsilon$.
-- first we need to present a data structure, accessible through a QRAG gate, which
-- then we need to show that through this data structure we can implement all the operations we need from a QRAG gate.
+Then we can construct a $\mathsf{QRAG}$ algorithm which computes $F$ with error $\epsilon' > \epsilon$, and runs in time $O(T \cdot \log(\frac{ T}{\epsilon' - \epsilon}) \cdot \gamma)$, using $W + O(\log M)$ work qubits and $O(m \log M)$ memory qubits.
+```
+
-
+with a maximum Hamming weight (number of $1$s) of $m \ll M$.
-
-
+an algorithm is considered $m$-sparse -->
+## Implementations{#sec:implementations}
+In this section we build oracles that can perform the encodings presented in Section \@ref(sec:representing-data). Of the presented oracles, only three make use of the quantum memory device introduced in Definition \@ref(def:QPUQMD): the bucket brigade, KP-trees, and the block-encoding from data structures. It is interesting to note that these oracles actually aid each other: KP-trees rely on the existence of a $\mathsf{QMD}$ that can perform binary encoding, and similarly the block-encoding from data structures requires the existence of a $\mathsf{QMD}$ that can perform amplitude encoding. The other oracles either make use of specific properties of the input data, such as sparsity, or encode a probability distribution as a quantum state. The key insight is that all the oracles, with or without a $\mathsf{QMD}$, have constant complexity, which allows us to work in the quantum memory model of computation of Definition \@ref(def:costing-of-quantum-memory-model). All the presented oracles, with their interconnections, can be seen in Figure \@ref(fig:oracle-models), where the oracles which require a $\mathsf{QMD}$ are indicated with a *.
+```{r, oracle-models, echo=FALSE, fig.width=4, fig.cap="This figure shows the different types of data encoding techniques with the corresponding oracles. The vertical lines on the right hand side indicate (possible) dependencies between oracles."}
+knitr::include_graphics("algpseudocode/oracle_models.png")
+```
-
-
-
-
+### Binary encoding{#sec:implementation-binary}
+
+
+
+In this section we discuss implementations of a unitary giving query access to a list of $m$-bit values. A possible way of reading this section is through the lens of finding the "best" gate decomposition of that unitary, which has the following form:
+\begin{align*}
+ U = \sum_{i=0}^{N-1} \ket{i}\bra{i} \otimes U_i =
+ \begin{bmatrix}
+ U_0 & & & \\
+ & U_1 & & \\
+ & & \ddots & \\
+ & & & U_{N-1}
+\end{bmatrix},
+\end{align*}
+where $U_i\ket{0}=\ket{x_i}$ and $x_i \in \{0,1\}^m$.
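As an illustration of this unitary, the following sketch (our own, under the assumption that each $U_i$ acts as the bit-flip $\ket{z}\mapsto\ket{z \oplus x_i}$, so that $U_i\ket{0}=\ket{x_i}$) builds the block-diagonal matrix explicitly and checks its action:

```python
import numpy as np


def data_loading_unitary(x, m):
    """Block-diagonal U = sum_i |i><i| (x) U_i with U_i|z> = |z XOR x_i>,
    so in particular U_i|0> = |x_i>."""
    N = len(x)
    U = np.zeros((N * 2 ** m, N * 2 ** m))
    for i, xi in enumerate(x):
        Ui = np.zeros((2 ** m, 2 ** m))
        for z in range(2 ** m):
            Ui[z ^ xi, z] = 1.0          # column |z> maps to row |z XOR x_i>
        U[i * 2 ** m:(i + 1) * 2 ** m, i * 2 ** m:(i + 1) * 2 ** m] = Ui
    return U


x = [1, 1, 0, 1]                          # m = 1 bit per entry
U = data_loading_unitary(x, m=1)
assert np.allclose(U @ U.T, np.eye(U.shape[0]))   # U is unitary (a permutation)
for i, xi in enumerate(x):
    ket = np.zeros(U.shape[0]); ket[i * 2] = 1.0  # |i>|0>
    assert (U @ ket)[i * 2 + xi] == 1.0           # U|i>|0> = |i>|x_i>
print("ok")
```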
-### Circuits {#sec:accessmodel-circuits}
-There are two cases when we can ease our requirements on having quantum access to a particular hardware device (the QRAM). If we have knowledge about the structure of the mapping, we can just build a circuit to perform $\ket{i}\ket{0}\mapsto \ket{i}\ket{x_i}$. We will see two cases. The first one is when we have an analytic formula for $x_i$, i.e. $x_i = f(i)$ for a function $f$ that we know. The second is when most of the $x_i$ are $0$, so we can leverage the sparsity to keep track of a limited amount of entries.
-#### Functions
+#### Circuits: oracle synthesis{#sec:implementation-oracle-synthesis}
-If we have a function that maps $x_i = f(i)$ we can create a circuit for getting query access to the list of $x_i$ on a quantum computer, as we briefly anticipated in Section \@ref(measuring-complexity). Before discussing how to use this idea for data, we will recap a few concepts in quantum computing, which are useful to put things into perspective. The idea of creating a quantum circuit from a classical boolean function is relatively simple and can be found in standard texts in quantum computation ([@NC02] or the this section on the Lecture notes of [Dave Bacon](https://courses.cs.washington.edu/courses/cse599d/06wi/lecturenotes6.pdf)). There is a simple theoretical trick that we can use to see that for any (potentially irreversible) boolean circuit there is a reversible version for it. This observation is used to show that non-reversible circuits are *not* more powerful than reversible circuits. To recall, a reversible boolean circuit is just bijection between domain and image of the function. Let $f : \{0,1\}^m \mapsto \{0,1\}^n$ be a boolean function (which we assume is surjective, i.e. the range of $f$ is the whole $\{0,1\}^n$). We can build a circuit $f' : \{0,1\}^{m+n} \mapsto \{0,1\}^{m+n}$ by adding some ancilla qubits, as it is a necessary condition for reversibility that the dimension of the domain matches the dimension of the range of the function. We define $f'$ as the function performing the mapping $(x, y) \mapsto (x, y \oplus f(x))$. It is simple to see by applying twice $f'$ that the function is reversible (check it!).
+As we briefly anticipated in Section \@ref(measuring-complexity), if we know a function $f$ such that $x_i = f(i)$, we can create a circuit for getting query access to $x_i$. If our data is represented by the output of a function, we can consider these techniques for data loading.
+
-Now that we have shown that it is possible to obtain a reversible circuit from any classical circuit, we can ask: what is an (rather inefficient) way of getting a quantum circuit? Porting some code (or circuit) from two similar level of abstraction is often called *transpiling*. Again, this is quite straightforward (Section 1.4.1 [@NC02]). Every boolean circuit can be rewritten in any set of universal gates, and as we know, the NAND port is universal for classical computation. It is simple to see (check the exercise) that we can use a Toffoli gate to simulate a NAND gate, so this gives us a way to obtain a quantm circuit out of a boolean circuit made of NAND gates. With these two steps we described a way of obtaining a quantum circuit from any boolean function $f$.
+The idea of creating a quantum circuit from a classical Boolean function is relatively simple and can be found in standard texts in quantum computation ([@NC02], or this section of the lecture notes of [Dave Bacon](https://courses.cs.washington.edu/courses/cse599d/06wi/lecturenotes6.pdf)). There is a simple theoretical trick that we can use to see that for any (potentially irreversible) Boolean circuit there is a reversible version of it. This observation is used to show that non-reversible circuits are *not* more powerful than reversible circuits. To recall, a reversible Boolean circuit is just a bijection between the domain and the image of the function. Let $f : \{0,1\}^m \mapsto \{0,1\}^n$ be a Boolean function (which we assume is surjective, i.e. the range of $f$ is the whole $\{0,1\}^n$). We can build a circuit $f' : \{0,1\}^{m+n} \mapsto \{0,1\}^{m+n}$ by adding some ancilla qubits, as it is a necessary condition for reversibility that the dimension of the domain matches the dimension of the range of the function. We define $f'$ as the function performing the mapping $(x, y) \mapsto (x, y \oplus f(x))$. It is simple to see, by applying $f'$ twice, that the function is reversible (check it!).
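The involution property of $f'$ can be checked directly; a minimal sketch (our own, with a hypothetical two-bit AND as $f$):

```python
def make_reversible(f):
    """Turn an arbitrary Boolean function f into the reversible map
    f'(x, y) = (x, y XOR f(x))."""
    return lambda x, y: (x, y ^ f(x))


f = lambda x: (x & 1) & ((x >> 1) & 1)   # AND of the two bits of x (irreversible)
fp = make_reversible(f)

for x in range(4):
    for y in range(2):
        assert fp(*fp(x, y)) == (x, y)   # applying f' twice is the identity
        assert fp(x, 0) == (x, f(x))     # with y = 0 we read out f(x)
print("ok")
```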
+
+Now that we have shown that it is possible to obtain a reversible circuit from any classical circuit, we can ask: what is a (rather inefficient) way of getting a quantum circuit? Porting some code (or circuit) between two similar levels of abstraction is often called *transpiling*. Again, this is quite straightforward (Section 1.4.1 of [@NC02]). Every Boolean circuit can be rewritten in any set of universal gates, and as we know, the NAND gate is universal for classical computation. It is simple to see (check the exercise) that we can use a Toffoli gate to simulate a NAND gate, so this gives us a way to obtain a quantum circuit out of a Boolean circuit made of NAND gates. With these two steps we described a way of obtaining a quantum circuit from any Boolean function $f$.
```{exercise, name="Toffoli as NAND"}
-Can you prove that a Toffoli gate, along with an ancilla qubit, can be used to obtain a quantum version of the NAND gate?
+Prove that a Toffoli gate, along with an ancilla qubit, can be used to obtain a quantum version of the NAND gate.
```
However, an application of the quantum circuit for $f$ will result in a garbage register of unwanted qubits. To get rid of them we can use the following trick:
-
\begin{equation}
-\ket{x}\ket{0}\ket{0}\ket{0} \mapsto \ket{x}\ket{f(x)}\ket{k(f, x)}\ket{0}\mapsto \ket{x}\ket{f(x)}\ket{k(f, x)}\ket{f(x)} \mapsto \ket{x}\ket{f(x)}
+\ket{x}\ket{0}\ket{0}\ket{0} \mapsto \ket{x}\ket{f(x)}\ket{k(f, x)}\ket{0}\mapsto \ket{x}\ket{f(x)}\ket{k(f, x)}\ket{f(x)} \mapsto \ket{x}\ket{f(x)}.
+(\#eq:bennetstrick)
\end{equation}
-Let's explain what we did here. In the first step we apply the circuit that computes $f'$. In the second step we perform a controlled NOT operation (controlled on the third and targetting the fourth register), and in the last step we undo the application of $f'$, thus obtaining the state $\ket{x}\ket{f(x)}$ with no garbage register.
+Let's explain what we did here. In the first step we apply the circuit that computes $f'$. In the second step we perform a controlled NOT operation (controlled on the third and targeting the fourth register), and in the last step we undo the application of $f'$, thus obtaining the state $\ket{x}\ket{f(x)}$ with no garbage register.
-Importantly, the techniques described in the previous paragraph are far from being practical, and are only relvant didactically. The task of obtaining an efficient quantum circuit from a boolean function is called "oracle synthesis". Oracle synthesis is far from being a problem of only theoretical interest, and it has received a lot of attention in past years [@soeken2018epfl] [@schmitt2021boolean] [@shende2006synthesis]. Today software implementations can be easily found online in most of the quantum programming languages/library. For this problem we can consider different scenarios, as we might have access to the function in form of reversible Boolean functions, non-reversible Boolean function, or the description of a classical circuit. The problem of oracle syntheses is a particular case of quantum circuit synthesis (Table 2.2 of [@de2020methods] ) and is a domain of active ongoing research.
+Importantly, the idea of obtaining a quantum circuit from a classical reversible circuit is not practical, and is only relevant didactically. The task of obtaining an efficient quantum circuit from a Boolean function is called "oracle synthesis". Oracle synthesis is far from being a problem of only theoretical interest, and it has received a lot of attention in past years [@soeken2018epfl] [@schmitt2021boolean] [@shende2006synthesis]. Today software implementations can easily be found online in most quantum programming languages/libraries. For this problem we can consider different scenarios, as we might have access to the function in the form of a reversible Boolean function, a non-reversible Boolean function, or the description of a classical circuit. The problem of oracle synthesis is a particular case of quantum circuit synthesis (Table 2.2 of [@de2020methods]) and is a domain of active ongoing research.
-
+
-Long story short, if we want to prove the runtime of a quantum algorithm in terms of gate complexity (and not only number of queries to an oracle computing $f$) we need to keep track of the gate complexity of the quantum circuits we use. For this we can use the following theorem.
+If we want to prove the runtime of a quantum algorithm in terms of gate complexity (and not only number of queries to an oracle computing $f$) we need to keep track of the gate complexity of the quantum circuits we use. For this we can use the following theorem.
(ref:buhrman2001time) [@buhrman2001time]
@@ -260,17 +501,24 @@ Long story short, if we want to prove the runtime of a quantum algorithm in term
For a probabilistic classical circuit with runtime $T(n)$ and space requirement $S(n)$ on an input of length $n$ there exists a quantum algorithm that runs in time $O(T(n)^{\log_2(3)})$ and requires $O(S(n)\log(T(n)))$ qubits.
```
-What if we want to use a quantum circuit to have quantum access to a vector of data? It turns out that we can do that, but the simplest circuit that we can come up with, has a depth that is linear in the length of the vector. This circuit (which sometimes goes under the name QROM [@hann2021resilience] or multiplexer, is as follow:
+#### Circuits: the multiplexer{#sec:implementation-multiplexer}
+What if we want to use a quantum circuit to have quantum access to a vector of data? It turns out that we can do that, but the simplest circuit that we can come up with has a depth that is linear in the length of the vector. This kind of circuit is often used in the literature, e.g. for computing functions using space-time trade-offs [@krishnakumar2022aq;@gidney2021factor]. This circuit, which sometimes goes under the name QROM, circuit for table lookups [@hann2021resilience], or multiplexer, is as follows:
-```{r, echo=FALSE, fig.width=3, fig.cap="This is the multiplexer circuit for the list of values x=[1,1,0,1]. Indeed, if we initialize the first two qubits with zeros, the output of the previous circuit will be a 1 in the third register, and so on.", }
+```{r, multiplexer, echo=FALSE, fig.width=10, fig.cap="This is the example of a multiplexer circuit for the list of values x=[1,1,0,1]. Indeed, if we initialize the first two qubits with zeros, the output of the previous circuit will be a 1 in the third register, and so on."}
knitr::include_graphics("images/multiplexer.png")
```
-The idea of the circuit is the following: controlled on the index register being in the state $\ket{0}$, we write (using CNOTS) in the output register the value of our vector in position $x_0$, controlled in the index register being $\ket{1}$, we write on the output register the value of our vector in position $x_1$, etc.. This will result in a circuit with a depth that is linear in the length of the vector that we are accessing, however this circuit won't require any ancilla qubit. We will discuss more some hybrid architecture that allows a tradeoff between depth and ancilla qubits in Section \@ref(sec:qramarchitectures).
+```{r, multiplexer-babbush, echo=FALSE, fig.width=10, fig.cap="The implementation of the multiplexer of [@babbush2018encoding]"}
+knitr::include_graphics("images/decomposed-lookup.pdf")
+```
+
+The idea of the circuit is the following: controlled on the index register being in the state $\ket{0}$, we write (using CNOTs) in the output register the value of our vector in position $x_0$; controlled on the index register being $\ket{1}$, we write on the output register the value of our vector in position $x_1$, and so on. This results in a circuit with a depth that is linear in the length of the vector that we are accessing; however, this circuit won't require any ancilla qubit. We will discuss some hybrid architectures that allow a trade-off between depth and ancilla qubits in Section \@ref(sec:qramarchitectures). The Toffoli count of this circuit can be improved in various ways [@babbush2018encoding;@zhu2024unified]. Importantly, the depth of this circuit depends on the number of oracle entries $N$, and in simple implementations it also depends linearly on $m$.
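A sketch of the multiplexer action (our own, on the example list $x=[1,1,0,1]$ with $m=1$): we compose one controlled write per index, which makes the linear dependence of the depth on $N$ explicit.

```python
import numpy as np


def multiplexer(x, m):
    """QROM / multiplexer: for each index i, controlled on the address
    register being |i>, XOR x_i into the m-bit output register.
    One controlled write per entry -> depth linear in N = len(x)."""
    N = len(x)
    dim = N * 2 ** m
    U = np.eye(dim)
    for i, xi in enumerate(x):             # N sequential controlled writes
        Ci = np.eye(dim)
        for z in range(2 ** m):
            Ci[i * 2 ** m + z, i * 2 ** m + z] = 0.0
            Ci[i * 2 ** m + (z ^ xi), i * 2 ** m + z] = 1.0
        U = Ci @ U
    return U


x = [1, 1, 0, 1]                           # the example from the figure
U = multiplexer(x, m=1)
for i, xi in enumerate(x):
    ket = np.zeros(U.shape[0]); ket[2 * i] = 1.0      # |i>|0>
    assert (U @ ket)[2 * i + xi] == 1.0               # -> |i>|x_i>
print("ok")
```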
+
+
-#### Sparse access model {#subsec:sparse-access-model}
-The sparse access model is often used to work with matrices and graphs. Sparse matrices are very common in quantum computing and quantum physics, so it is important to formalize a quantum access for sparse matrices. This model is sometimes called in literature "sparse access" to a matrix, as sparsity is often the key to obtain an efficient circuit for encoding such structures without a QRAM. Of course, with a vector or a matrix stored in a QRAM, we can also have efficient (i.e. in time $O(\log(n))$ if the matrix is of size $n \times n$) query access to a matrix or a vector, even if they are not sparse. It is simple to see how we can generalize query access to a list or a vector to work with matrices by introducing another index register to the input of our oracle. For this reason, this sparse access is also called quite commonly "query access".
+#### Circuits: sparse access{#sec:implementation-sparse-access}
+Sparse matrices are very common in quantum computing and quantum physics, so it is important to formalize quantum access to sparse matrices. This model is sometimes called in the literature "sparse access" to a matrix, as sparsity is often the key to obtaining an efficient circuit for encoding such structures without a $\mathsf{QRAM}$. Of course, with a vector or a matrix stored in a $\mathsf{QRAM}$, we can also have efficient (i.e. in time $O(\log(n))$ if the matrix is of size $n \times n$) query access to a matrix or a vector, even if they are not sparse. It is simple to see how we can generalize query access to a list or a vector to work with matrices by introducing another index register to the input of our oracle. For this reason, this sparse access is also quite commonly called "query access".
```{definition, oracle-access-adjacencymatrix, name="Query access to a matrix"}
Let $V \in \mathbb{R}^{n \times d}$. There is a data structure to store $V$, (where each entry is stored with some finite bits of precision) such that, a quantum algorithm with access to the data structure can perform $\ket{i}\ket{j}\ket{z} \to \ket{i}\ket{j}\ket{z \oplus v_{ij}}$ for $i \in [n], j \in [d]$.
@@ -283,146 +531,227 @@ Let $V \in \mathbb{R}^{n \times d}$, there is an oracle that allows to perform t
- $\ket{i}\mapsto\ket{i}\ket{d(i)}$ where $d(i)$ is the number of non-zero entries in row $i$, for $i \in [n]$, and
- $\ket{i,l}\mapsto\ket{i,l,\nu(i,l)}$, where $\nu(i,l)$ is the $l$-th nonzero entry of the $i$-th row of $V$, for $l \leq d(i)$.
-
```
-The previous definition is also called *adiacency array* model. The emphasis is on the word *array*, contrary to the adjacency list model in classical algorithms (where we usually need to go through all the list of adjacency nodes for a given node, while here we can query the list as an array, and thus use superposition) [@Durr2004].
+The previous definition is also called the *adjacency array* model. The emphasis is on the word *array*, contrary to the adjacency list model in classical algorithms (where we usually need to go through the whole list of adjacent nodes of a given node, while here we can query the list as an array, and thus use superposition) [@Durr2004].
-It's important to recall that for Definition \@ref(def:oracle-access-adjacencymatrix) and \@ref(def:oracle-access-adjacencylist) we could use a QRAM, but we also expect **not** to use a QRAM, as there might be other efficient circuit for performing those mapping. For instance, when working with graphs (remember that a generic weighted and directed graph $G=(V,E)$ can be seen as its adjacency matrix $A\in \mathbb{R}^{|E| \times |E|}$), many algorithms call Definition \@ref(def:oracle-access-adjacencymatrix) **vertex-pair-query**, and the two mappings in Definition \@ref(def:oracle-access-adjacencylist) as **degree query** and **neighbor query**. When we have access to both queries, we call that **quantum general graph model** [@hamoudi2018quantum]. This is usually the case in all the literature for quantum algorithms for Hamiltonian simulation, graphs, or algorithms on sparse matrices.
+It's important to recall that for Definitions \@ref(def:oracle-access-adjacencymatrix) and \@ref(def:oracle-access-adjacencylist) we could use a $\mathsf{QRAM}$, but we also expect **not** to use a $\mathsf{QRAM}$, as there might be other efficient circuits for performing those mappings. For instance, when working with graphs (remember that a generic weighted and directed graph $G=(V,E)$ can be seen as its adjacency matrix $A\in \mathbb{R}^{|V| \times |V|}$), many algorithms call Definition \@ref(def:oracle-access-adjacencymatrix) **vertex-pair-query**, and the two mappings in Definition \@ref(def:oracle-access-adjacencylist) **degree query** and **neighbor query**. When we have access to both queries, we call that the **quantum general graph model** [@hamoudi2018quantum]. This is usually the case in all the literature on quantum algorithms for Hamiltonian simulation, graphs, or algorithms on sparse matrices.
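As a classical illustration (our own, with hypothetical helper names), the three queries of the adjacency-array model correspond to simple lookups in a compressed-sparse-row (CSR) representation of the matrix:

```python
# CSR representation of V = [[0, 2, 0],
#                           [1, 0, 3],
#                           [0, 0, 0]]
indptr = [0, 1, 3, 3]    # row i occupies data[indptr[i]:indptr[i+1]]
indices = [1, 0, 2]      # column index of each nonzero entry
data = [2.0, 1.0, 3.0]   # the nonzero values themselves


def degree(i):
    """d(i): number of nonzero entries in row i."""
    return indptr[i + 1] - indptr[i]


def neighbor(i, l):
    """nu(i, l): column index of the l-th nonzero entry of row i (0-indexed)."""
    return indices[indptr[i] + l]


def entry(i, j):
    """v_ij: the matrix entry itself (vertex-pair query)."""
    for k in range(indptr[i], indptr[i + 1]):
        if indices[k] == j:
            return data[k]
    return 0.0


assert degree(1) == 2
assert neighbor(1, 1) == 2     # second nonzero of row 1 sits in column 2
assert entry(1, 2) == 3.0
print("ok")
```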
-The interested reader can watch [here](http://www.ipam.ucla.edu/abstract/?tid=17251&pcode=QL2022) how to create block-encodings from sparse access.
-### Quantum sampling access {#q-sampling-access}
+#### Bucket brigade circuits {#sec:implementation-bbrigade}
+The bucket-brigade (BB) architecture (Fig. 10 of [@hann2021resilience]) is another possible implementation of binary encoding which, unlike the other methods presented, requires the $\mathsf{QMD}$ model of computation of Definition \@ref(def:QPUQMD). This protocol was originally developed to be implemented with qutrits [@giovannetti2008quantum], but recent work has shown it can be implemented with qubits as well [@hann2021resilience]. We'll focus on the explanation with qutrits, since it is more intuitive to understand.
-Let's suppose now that we want to have an oracle that can be used to create quantum states proportional to a set of vectors that we have. In other words, we are considering an amplitude encoding of vectors, as discussed in Section \@ref(subsec-stateprep-matrices). We can have two similar models, that we call both *quantum sampling access*. This name comes from the fact that measuring the output state can be interpreted as sampling from a probability distribution. Historically, it was first discussed the procedure to create quantum state proportional to (discretized) probability distribution, and then this idea was reused in the context of creating quantum states proportional to vectors (the generalization to matrices follows very simply). We treat first the case where we want to create quantum sampling access to rows of a matrix, as it is much simpler to understand.
+Using the terminology introduced in definition \@ref(def:QPUQMD), we'll have: an address register, a target register (which will be referred to as a bus register) and a memory register. The input will be a vector of binary numbers $X \in (\{0,1\}^m)^n$ and the aim is to load a specific entry $X_i \in \{0,1\}^m$ on the bus register.
-#### Sampling access to vectors and matrices
+The BB protocol uses the memory register in such a way that it has access to a tree-like structure. The tree stores the entries of $X$ in the leaves, which are referred to as memory cells. Each memory cell is connected to a parent node; these parent nodes form a set of intermediate nodes up to the root of the tree. Each intermediate node (up to the root) is called a quantum router (Figure 1b of [@hann2021resilience]) and is a qutrit (i.e., a three-level quantum system), which can be in state $\ket{0}$ (route left), $\ket{1}$ (route right), and $\ket{W}$ (wait).
-Let's recall that for a matrix $X \in \mathbb{R}^{n \times d}$ (where we assume that $n$ and $d$ are powers of $2$, otherwise we can just consider the matrix padded with zeros) with rows $x_i$, want to create the state
+When we want to perform a query, we prepare the address register with the index of the memory cell that we want to reach and we set all the router registers to the $\ket{W}$ state. Conditioned on the first qubit of the address register, the root of the tree changes from $\ket{W}$ to either $\ket{0}$ (left) or $\ket{1}$ (right). This is followed by a similar operation which uses as control the second qubit of the address register to change the state of the next node in the tree to either $\ket{0}$ or $\ket{1}$. The process of changing the state of the routers is repeated until the last layer of the tree (i.e., the memory cells) is reached. Now, the memory register will be in the state of the binary number $X_i$. This can be copied to the bus register by simply applying a series of CNOT gates (and thus we do not violate the no-cloning theorem).
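The routing logic can be sketched classically (our own simulation, ignoring superposition and the $\ket{W}$ bookkeeping): each address bit sets one router per level, and following the routes reaches the memory cell whose content is copied to the bus.

```python
def bb_query(address_bits, X):
    """Classically simulate bucket-brigade addressing: set one router per
    tree level according to the address bits, then follow the routes down
    to a leaf (memory cell) and copy its content to the bus."""
    routers = {}                  # node id (path from root) -> '0' or '1'
    node = ""                     # start at the root
    for bit in address_bits:      # one level per address bit
        routers[node] = bit       # router leaves the 'wait' state
        node = node + bit         # route to the chosen child
    leaf_index = int(node, 2)
    return X[leaf_index]          # CNOT-copy of the memory cell to the bus


X = [0b10, 0b01, 0b11, 0b00]      # n = 4 cells of m = 2 bits each
assert bb_query("10", X) == 0b11  # address |10> reaches cell X_2
print("ok")
```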
+
+Studying an error model of the BB architecture is hard. An attempt was first made in [@arunachalam2015robustness], which gave initial, but rather pessimistic, results. More recently, a series of developments in [@hann2021resilience] and [@hann2021practicality] (accessible [here](https://www.proquest.com/openview/c5caf76bb490e4d3abbeca2cea16b450/1?pq-origsite=gscholar&cbl=18750&diss=y)) have shed light on the noise resilience of the BB $\mathsf{QRAM}$. The results presented in these more recent works are much more positive. Some resource estimations can be found in [@di2020fault], which do not take into account the new developments in the study of the error.
+
+The metric of choice to test whether a quantum procedure has faithfully recreated a desired state is the fidelity $F$, with the infidelity defined as $1-F$. Given an addressable memory of size $N$ (i.e., $\log N$ layers in the binary tree) and a bucket brigade which requires $T$ time-steps with a probability of error per time step of $\epsilon$, the infidelity of the bucket brigade scales as:
\begin{equation}
-\frac{1}{\sqrt{\sum_{i=1}^n {\left \lVert x_i \right \rVert}^2 }} \sum_{i=1}^n {\left \lVert x_i \right \rVert}\ket{i}\ket{x_i}
- (\#eq:matrix-state)
+1-F \approx \sum_{l=1}^{\log N} (2^{-l}) \epsilon T2^{l} = \epsilon T \log N,
+(\#eq:qramfidelity)
\end{equation}
-We will do it using two mappings:
+```{exercise}
+Calculate $\sum_{l=1}^{\log N} l$
+```
+The time required to perform a query naively scales as $T \approx \sum_{l=1}^{\log N} l = \frac{1}{2}(\log N)(\log N +1) = O(\log^2 N)$, owing to the tree structure of the BB, but it can be decreased to $T=O(\log N)$ (Appendix A of [@hann2021resilience]). This leaves us with the sought-after scaling of the infidelity of $\widetilde{O}(\epsilon)$, where we are hiding in the asymptotic notation the terms that are polylogarithmic in $N$. The errors that happen with probability $\epsilon$ can be modeled with Kraus operators, which makes this error analysis general and realistic (Appendix C of [@hann2021resilience]), and it is confirmed by simulations. For a proof of Equation \@ref(eq:qramfidelity) see Section 3 and Appendix D of [@hann2021resilience].
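Numerically, each of the $\log N$ levels of the sum in Equation \@ref(eq:qramfidelity) contributes exactly $\epsilon T$, which a short check confirms (the values of $N$, $\epsilon$, and $T$ below are chosen arbitrarily):

```python
import math

# Each of the log N levels contributes 2^{-l} * (eps * T * 2^l) = eps * T,
# so the total infidelity is eps * T * log2(N).
N, eps, T = 1024, 1e-4, 50
levels = int(math.log2(N))
infidelity = sum(2 ** (-l) * eps * T * 2 ** l for l in range(1, levels + 1))
assert math.isclose(infidelity, eps * T * levels)
print(round(infidelity, 6))   # 0.05
```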
-\begin{equation}
-\ket{i}\mapsto \ket{i}\ket{x_i}
-\end{equation}
-\begin{equation}
-\ket{0}\mapsto \ket{N_X}
-\end{equation}
-where $N_X$ is the vector of $\ell_2$ norms of the rows of the matrix $X$, i.e. $\ket{N_X}=\frac{1}{\|X\|_F} \sum_{i=0}^n \|x_i\| \ket{i}$. Note that these two quantum states are just amplitude encodings of vectors of size $d$ and a vector of size $n$. It is very simple to see that if we are given two unitaries performing these two mappings, we can obtain equation \@ref(eq:matrix-state) by applying the two unitaries sequentially:
+```{r, bb-qram-image, echo=FALSE, fig.width=5, fig.cap="A possible implementation of the bucket-brigade QRAM in the circuit model."}
+knitr::include_graphics("algpseudocode/circuit_bb.png")
+```
-\begin{equation}
-\ket{0}\ket{0}\mapsto\ket{N_X}\ket{0}\mapsto \frac{1}{\|X\|_F} \sum_{i=1}^n {\left \lVert x_i \right \rVert}\ket{i}\ket{x_i}
-\end{equation}
+(ref:doriguello2024practicality) [@doriguello2024practicality]
-This reduces our problem to create an amplitude encoding of a given vector. In the PhD thesis of Prakash [@PrakashPhD] we can find the first procedure to efficiently create superpositions corresponding to vectors, and the generalization on how to do this for the rows of the matrices, i.e. encoding the values of the components of a matrix' row in the amplitudes of a quantum state. This this data structure, which sometimes could go under the name KP-trees [@rebentrost2018quantum], but is more and more often called **quantum sampling access**, assumes and extends definition \@ref(def:qram). Confusingly, in some papers both are called QRAM, and both rely on two (different) tree data structure for their construction. One is a hardware circuit arranged as a tree that allows to perform the mapping in \@ref(def:qram), the other is a classical data structure arranged as a tree that stores the partial norms of the rows of the matrix, which we will discuss now. We will use the following as definition, but this is actually a theorem. For the original proof, we refer to [@PrakashPhD], the appendix A of [@KP16], and the proof of Theorem 1 of [@CGJ18].
-(ref:KP16) [@KP16]
+
+
+
-```{definition, KP-trees, name="Quantum access to matrices using KP-trees - Quantum sampling access (ref:KP16)"}
-Let $V \in \mathbb{R}^{n \times d}$, there is a data structure (sometimes called KP-trees) to store the rows of $V$ such that,
+
-- The size of the data structure is $O(\|V\|_0 \log^(nd))$.
-- The time to insert, update or delete a single entry $v_{ij}$ is $O(\log^{2}(n))$.
-- A quantum algorithm with access to the data structure can perform the following unitaries in time $O(\log^{2}n)$.
- - $\ket{i}\ket{0} \to \ket{i}\ket{v_{i}} \forall i \in [n].$
- - $\ket{0} \to \sum_{i \in [n]} \norm{v_{i}}\ket{i}.$
+The following statement gives a resource estimation for a $\mathsf{QRAM}$ of logarithmic depth using the quantum architecture proposed in [@litinski2022active].
+```{lemma, bb-qram-resources, name="Complexity of QRAM using (ref:doriguello2024practicality)"}
+One bucket-brigade $\mathsf{QRAM}$ call of size $2^n$ and precision $\kappa$ requires (already including its uncomputation) $2^n - 2$ $\mathsf{Toffoli}$ gates, $2^{n+1} - n - 1$ dirty ancillae (plus $n+\kappa$ input/output qubits), and has $\mathsf{Toffoli}$-width of $2^{n-1}$, reaction depth of $2(n-1)$, and active volume of $(25 + 1.5\kappa + C_{|CCZ\rangle})2^n$.
```
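The counts in the lemma can be packaged into a small resource-estimation helper (our own sketch; $C_{|CCZ\rangle}$ is left as a parameter):

```python
def bb_qram_resources(n, kappa, C_ccz):
    """Resource counts for one bucket-brigade QRAM call of size 2^n and
    precision kappa, as stated in the lemma (C_ccz is the |CCZ> constant)."""
    return {
        "toffoli_gates": 2 ** n - 2,
        "dirty_ancillae": 2 ** (n + 1) - n - 1,
        "io_qubits": n + kappa,
        "toffoli_width": 2 ** (n - 1),
        "reaction_depth": 2 * (n - 1),
        "active_volume": (25 + 1.5 * kappa + C_ccz) * 2 ** n,
    }


r = bb_qram_resources(n=10, kappa=32, C_ccz=0)
assert r["toffoli_gates"] == 1022
assert r["dirty_ancillae"] == 2 ** 11 - 11
print(r["reaction_depth"])   # 18
```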
-```{proof}
-We start our proof by assuming that we have already given the whole matrix $V$, and at the end we comment on the second point of the theorem (i.e. the time needed to modify the data structure).
-The data structure is composed of $n+1$ binary trees, one for each row of the matrix, and an additional one for the vectors of norms of the rows. Each tree is initialyl empty, and is constructed in a way so to store the so-called partial norms (squared) of a row of $V$. For the $i$-th row $v_i$ we build the binary tree $B_i$. Assume, w.l.o.g. that $d$ is a power of $2$, so that there are $\log(d)$ layers of a tree. In the leaves we store the tuple $(v_{ij}^2, \text{sign}(v_{ij}))$, and in each of the internal nodes we store the sum of the two childrens. It is simple to see that the root of the tree stores $\|v_i\|_2^2$. The layer of a tree is stored as a\footnote{ordered? typo in original paper?} list
-We show how to perform the first unitary. For a tree $B_i$, the value stored in a node $k \in \{0,1\}^k$ at level $k$ is $\sum_{j \in [d], j_{1:t}=k} A_{ij}^2$.
+### Amplitude encoding{#sec:implementation-amplitude}
-The rotation can be performed by querying the values $B_{ik}$ from the QRAM as follows:
-
- \begin{align}
-\ket{i}\ket{k}\mapsto\ket{i}\ket{k}\ket{\theta_{ik}}\mapsto \\
-\ket{i}\ket{j}\ket{\theta_{ik}}\left(\cos(\theta_{ik})\ket{0} + \sin(\theta_{ik})\ket{1} \right) \mapsto \\
-\ket{i}\ket{j}\left(\cos(\theta_{ik})\ket{0} + \sin(\theta_{ik})\ket{1} \right)
-\end{align}
+We now move our attention to amplitude encoding, which was first introduced in Section \@ref(sec:amplitude-encoding). In amplitude encoding, we encode a vector of numbers in the amplitudes of a quantum state. Implementing a quantum circuit for amplitude encoding can be seen as preparing a specific quantum state for which we know the amplitudes. In other words, this is actually a *state preparation problem* in disguise, and we can use standard state preparation methods to perform amplitude encoding. However, note that amplitude encoding is a specific instance of state preparation, where the amplitudes of the state are known classically or via an oracle. There are other state preparation problems that are not amplitude encoding, like ground state preparation, where the amplitudes of the quantum state are not known and only the Hamiltonian of the system is given. In the following, we briefly discuss the main techniques developed in the past decades for amplitude encoding.
+
+
+
+
+
+What are the lower bounds on the size and depth of circuits performing amplitude encoding? Since amplitude encoding can be seen as quantum state preparation, without assuming any kind of oracle access we have a size lower bound of $\Omega\left(2^n\right)$ [@plesch2011quantum;@shende2004minimal]. For the depth, there is a long history of results. For example, a lower bound of $\Omega(\log n)$ holds for some states (and hence for any algorithm performing generic state preparation), obtained using techniques from algebraic topology [@aharonov2018quantum]. Without ancilla qubits, [@plesch2011quantum] proved a bound of $\Omega(\frac{2^n}{n})$. The depth bound has been refined to $\Omega(n)$, but only when arbitrarily many ancilla qubits are available [@zhang2021lowdepth]. The most accurate bound is $\Omega\left( \max \{n ,\frac{4^n}{n+m} \} \right )$ (Theorem 3 of [@STY-asymptotically]), where $m$ is the number of ancilla qubits. The algorithm of [@yuan2023optimal], which we discuss later, saturates this bound.
+
+
+We can also study the complexity of the problem in the oracle model. For example, if we assume oracle access to $f : \{0,1\}^n \mapsto [0,1]$, using amplitude amplification on the state $\sum_x \ket{x}\left(f(x)\ket{0} + \sqrt{1-f(x)}\ket{1} \right)$ gives a quadratic improvement in the number of queries to the oracle, yielding $\widetilde{O}(\sqrt{N})$ complexity [@grover2000synthesis], where $N = 2^n$. This can be seen by imagining a vector with only one entry equal to $1$: the number of queries needed to amplify the subspace associated with the rightmost qubit scales as $\sqrt{N}$. A few years later, [@Grover2002] improved, under mildly stronger assumptions, the complexity of the algorithm for a very broad class of states. This algorithm is discussed in more detail in Section \@ref(sec:implementation-grover-rudolph).
+
+
+Alternatively, we can assume direct oracle access to the amplitudes [@sanders2019black]. Under this assumption, we have access to an oracle storing the $i$th amplitude $\alpha_i$ with $n$ bits of precision (more precisely, they use a slightly different model, where the oracle for the amplitude $\alpha_i$ is $\ket{i}\ket{z}\mapsto \ket{i}\ket{z \oplus \alpha_i^{(n)}}$ with $\alpha_i^{(n)}=\lfloor 2^n\alpha_i \rfloor$). Ordinarily, the circuit would involve the mapping $\ket{i}\ket{\alpha_i^{(n)}}\ket{0} \mapsto \ket{i}\ket{\alpha_i^{(n)}} \left(\sin(\theta_i)\ket{0} + \cos(\theta_i)\ket{1}\right)$, which requires controlled rotations and arithmetic circuits to compute the angles $\theta_i = \arcsin(\alpha_i^{(n)}/2^n)$. However, by substituting the arithmetic circuit with a comparator operator [@gidney2018halving;@cuccaro2004new;@luongo2024measurement], the circuit can be implemented either with $2n$ non-Clifford gates, or with $n$ non-Clifford gates and $n$ ancilla qubits. This scheme can even be extended to encode complex amplitudes in both Cartesian and polar form, or applied to the root coefficient problem for real amplitudes, where we have oracle access to the square of the amplitude $\alpha_i^2$ instead of $\alpha_i$. For positive or complex amplitudes, this algorithm involves $\frac{\pi}{4}\frac{\sqrt{N}}{\|\alpha\|_2}$ rounds of exact amplitude amplification, so it has a runtime of $\frac{\pi}{4}\frac{t\sqrt{N}}{\|\alpha\|_2} + O(1)$ non-Clifford gates, where $t$ is the number of bits of precision used to specify an amplitude (the authors preferred to count the number of non-Clifford gates, as they are the most expensive ones to implement in most error-corrected architectures, and their count serves as a lower bound on the size of a circuit). For the root coefficient problem, the runtime becomes $\frac{\pi}{4} \frac{n\sqrt{N}}{\|\alpha\|_1} + O\left(n \log \left(\frac{1}{\epsilon}\right)\right)$ non-Clifford gates.
For certain sets of coefficients, this model can be further improved, reducing the number of ancilla qubits needed per bit of precision from a linear dependence [@sanders2019black] to a logarithmic one (Table 2 of [@bausch2022fast]). The work of [@mcardle2022quantum] also avoids arithmetic: for an encoded function $f$, it uses $O(\frac{n d_\epsilon}{\mathcal{F}_{\widetilde{f}^{[N]}} })$ gates, where $\widetilde{f}^{[N]}$ is the "discretized $\ell_2$-norm filling-fraction" of $f$, and $d_\epsilon$ is the degree of a polynomial approximation that depends on $\epsilon$, the approximation error in the quantum state, while using only $4$ ancilla qubits.
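To build intuition for the $\frac{\pi}{4}\frac{\sqrt{N}}{\|\alpha\|_2}$ amplification count quoted above, here is a small classical simulation (our own illustrative sketch with NumPy; the array `alpha` and the seed are arbitrary, not from the cited papers). It computes the probability of the flag qubit being $\ket{0}$ after the rotation step, and the corresponding number of amplitude amplification rounds.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
N = 2 ** n
alpha = rng.uniform(0.0, 1.0, N)     # target (unnormalized) amplitudes in [0, 1]

# After the controlled rotation |i>|0> -> |i>(alpha_i |0> + sqrt(1 - alpha_i^2) |1>)
# on the uniform superposition, the flag qubit is |0> with probability ||alpha||^2 / N:
p_good = np.sum(alpha ** 2) / N

# Amplitude amplification boosts this with ~ (pi/4) / sqrt(p_good) rounds,
# i.e. (pi/4) * sqrt(N) / ||alpha||_2, matching the scaling quoted above.
rounds = (np.pi / 4) / np.sqrt(p_good)

# Post-selecting the flag on |0> leaves the normalized target state:
prepared = alpha / np.linalg.norm(alpha)
assert np.isclose(np.linalg.norm(prepared), 1.0)
```

Note how the round count grows as the filling-fraction $\|\alpha\|_2/\sqrt{N}$ shrinks, which is why these runtimes are stated relative to $\|\alpha\|_2$.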
+
+
+
+
+
+
+
+Instead of treating the problem purely as state preparation, we can also perform amplitude encoding using multivariable quantum signal processing (M-QSP) (see Chapter \@ref(chap:5)). The principle behind this method is to interpret the amplitudes of a quantum state as the values of a multivariate function [@mcardle2022quantum;@mori2024efficient;@rosenkranz2024quantum]. In particular, using Linear Combination of Unitaries techniques, we can approximate the multivariate function defining the state by a truncated Fourier or Chebyshev series [@rosenkranz2024quantum]. The truncated Fourier series approximation requires $O(d^D + Dn \log d)$ two-qubit gates, while the truncated Chebyshev series approximation requires $O(d^D + Ddn \log n)$ two-qubit gates, where $D$ is the number of dimensions and $d$ is the degree of the polynomial used in the approximation. The number of qubits in both techniques scales as $O(Dn + D \log d)$. The exponential dependence on $D$ can be further improved by the following theorems.
-The second unitary can be obtained exactly as the first one, observing that we are just doing an amplitude encoing of a vector of norms. Hence, we just need to build the $i+1$-th binary tree storing the partial norms of the vector of the amplitudes, where the number of leaves is $n$.
+
+
-We now focus on the modification of the data structure.
-# TODO
+(ref:mori2024efficient) [@mori2024efficient]
+```{theorem, bivariate-sp, name="Bivariate state preparation (ref:mori2024efficient)"}
+Given a Fourier series $f$ of degree $(d_1, d_2)$ that can be constructed with M-QSP, we can prepare a quantum state $\ket{\psi_f}$ using $O((n_1d_1 + n_2d_2)/\mathcal{F}_f)$ gates, where $\mathcal{F}_f = \mathcal{N}_f/( \sqrt{N_1N_2}|f|_{max})$ and $\mathcal{N}_f = \sqrt{\sum_{i,j} |f(x_1^{(i)}, x_2^{(j)})|^2}$, while $n_1$ and $n_2$ are the number of bits used to specify the value of the variables $x_1$ and $x_2$, respectively.
+```
+
+```{theorem, multivariate-sp, name="Multivariate state preparation (ref:mori2024efficient)"}
+Given a Fourier series $f$ of degree $(d_1, \dots, d_D)$ that can be constructed with multivariate quantum signal processing, we can prepare a quantum state $\ket{\psi_f}$ using $O(n d D/\mathcal{F}_f)$ gates, where $n=\max (n_1, \dots, n_D)$ and $d=\max (d_1, \dots, d_D)$.
```
+
-The following exercise might be helpful to clarify the relation between quantum query access to a vector and quantum sampling access.
+Meanwhile, if trade-offs are allowed, we can further improve the complexity of state preparation. For example, we can build a state over $n$ qubits with depth $\widetilde{O}\left(\frac{2^n}{m+n} +n\right)$ and size $\widetilde{O}\left(2^n \right)$ if we have $m$ available ancillas [@STY-asymptotically]. On the other hand, we can reduce the number of $T$-gates to $O(\frac{N}{\lambda} + \lambda \log \frac{N}{\epsilon}\log \frac{\log N}{\epsilon})$ if we allow a tunable number $\lambda \log \frac{N}{\epsilon}$ of dirty qubits [@low2018trading]. A dirty qubit is an auxiliary qubit that is left entangled with another register at the end of the computation, and so cannot be reused by subsequent computations without being disentangled.
-```{exercise}
-Suppose you have quantum access to a vector $x = [x_1, \dots, x_N]$, where each $x_i \in [0,1]$. What is the cost of creating quantum sampling access to $x$, i.e. the cost of preparing the state $\frac{1}{Z}\sum_{i=1}^N x_i \ket{i}$. Hint: query the state in superposition and perform a controlled rotation. Can you improve the cost using amplitude amplification? What if $x_i \in [0, B]$ for a $B > 1$?
+
+
+
+
+
+
+
+
+
+In addition to the algorithms of [@STY-asymptotically;@rosenthal2021query], trade-offs yield further circuits that achieve the lower bound on depth. For example, using $O \left(2^n\right)$ ancilla qubits, we can perform amplitude encoding with circuit depth $\Theta \left( n \right)$, which also relaxes the connectivity requirements for M-QSP [@zhang2022quantum]. This technique also improves sparse state preparation, with a circuit depth of $\Theta \left( \log (k N) \right)$, where $k$ is the sparsity: an exponential improvement in circuit depth over previous works [@gleinig2021efficient;@de2022double]. Moreover, there is a deterministic algorithm that achieves the lower bounds on circuit depth when $m$ ancilla qubits are allowed, summarized in the following theorems.
+
+
+
+(ref:yuan2023optimal) [@yuan2023optimal]
+
+
+```{theorem, circuit-csp, name="Circuit for controlled state preparation (Theorem 1 of (ref:yuan2023optimal)) "}
+For any $k \in \mathbb{N}$ and any set of $n$-qubit quantum states $\{ \ket{\psi_i} : i \in \{0,1\}^k \}$, there is a circuit performing the mapping
+$$ \ket{i}\ket{0} \mapsto \ket{i}\ket{\psi_i}, \forall i \in \{0,1\}^k, $$
+ which can be implemented by a circuit of depth $O(n + k + \frac{2^{n+k}}{n+k+m})$ and size $O(2^{n+k})$ with $m$ ancillary qubits. These bounds are optimal for any $m,k \geq 0$.
```
-Lower bounds in query complexity can be used to prove that the worst case for performing state preparation with the technique used in the exercise (i.e. without KP-trees/quantum sampling access) are $O(\sqrt{N})$.
+```{theorem, circuit-sp, name="Circuit for state preparation (Theorem 2 of (ref:yuan2023optimal)) "}
+For any $m > 0$, any $n$-qubit quantum state $\ket{\psi_v}$ can be generated by a quantum circuit using single-qubit gates and CNOT gates, of depth $O(n+ \frac{2^n}{n+m})$ and size $O(2^n)$ with $m$ ancillary qubits. These bounds are optimal for any $m \geq 0$.
+```
-In [@PrakashPhD] Section 2.2.1, Prakash shows subroutines for generating $\ket{x}$ for a sparse $x$ in time $O(\sqrt{\|x\|_0})$.
+
+There are also other trade-off techniques, like probabilistic state preparation via measurements [@zhang2021lowdepth] or approximate state preparation [@zhang2024parallel]. These techniques are beyond the scope of this chapter, and we refer the interested reader to the respective articles.
-#### Sampling access to a probability distribution
+In summary, there are many methods to perform amplitude encoding, each with a different complexity obtained through various trade-offs. In general, the data sets that can be amplitude-encoded fall into two main categories: (i) discrete data coming from a vector or a matrix, and (ii) data coming from a discretized probability distribution. In the literature, amplitude encoding of vectors or matrices is performed via a so-called KP-tree, while amplitude encoding of a discretized probability distribution is performed via Grover-Rudolph (GR) state preparation [@Grover2002]. The main difference between the two is that the KP-tree method requires a quantum memory to store some precomputed values in a tree data structure, while GR state preparation does not: it is designed so that the oracle it queries admits efficient circuits, which can be implemented using the techniques of Section \@ref(sec:implementation-oracle-synthesis).
-We start with a very simple idea of state preparation, that can be traced back to two pages paper by Lov Grover and Terry Rudolph [@Grover2002]. There, the authors discussed how to efficiently create quantum states proportional to functions satisfying certain integrability condition. Let $p$ be a probability distribution. We want to create the state
-\begin{equation}
-\ket{\psi} = \sum_{i\in \{0,1\}^n} \sqrt{p_i}\ket{i}
-\end{equation}
-where the value of $p_i$ is obtained from discretizing the distribution $p_i$. (the case when $p$ is discrete can be solved with the circuits described in the previous section). We discretize the sample space $\Omega$ (check Definition \@ref(def:measure-space) ) in $N$ intervals, so that we can identify the samples $\omega$ of our random variable with the set $[N]$. To create the state $\ket{\psi}$ we proceed in $M$ steps from initial state $\ket{0}$ to a state $\ket{\psi_M}= \ket{\psi}$ that approximates $\psi$ with $2^M = N$ discretizing intervals.
+
+
+
+
-To go from $\ket{\psi_m} = \sum_{i=0}^{2^m-1} \sqrt{p_i^{(m)}}\ket{i}$ to $\ket{\psi_{m+1}} = \sum_{i=0}^{2^{m+1}-1} \sqrt{p_i^{(m+1)}}\ket{i}$
+
-We proceed as in the previous section, i.e. we query an oracle that gives us the angle $\theta_i$, that are used to perform the controlled rotation:
+: Table of different methods to implement amplitude encoding, together with their gate count, ancilla count, and depth, along with the type of function they can load. This table is adapted from [@mori2024efficient]. 1 [@mori2024efficient], 2 [@Grover2002], 3 [@rattew2022preparing-arbitrary], 4 [@sanders2019black;@bausch2022fast], 5 [@moosa2023linear], 6 [@rosenkranz2024quantum], 7 [@shende2006synthesis], 8 [@STY-asymptotically].
+
+
+| | Method | Gate count | Ancilla | Depth | Function type |
+|:-:|:-----------:|:--------------------------------------:|:---------------:|:-------------------:|:---:|
+| 1 | M-QSP | $O(\frac{ndD}{\mathcal{F}})$ | 1 | | |
+| 2 | GR | $O(nT_{oracle})$ | $O(t_{oracle})$ | log | Eff. int. |
+| 3 | Adiabatic | $O(\frac{T_{oracle}}{\mathcal{F}^4})$ | $O(t_{oracle})$ | - | Arb. |
+| 4 | Black-box | $O(\frac{T_{oracle}}{\mathcal{F}})$ | $O(t_{oracle})$ | - | Arb. |
+| 5 | FSL | $O(d^D + Dn^2)$ | 0 | - | Arb. |
+| 6 | LCU-based | $O(d^D + Dn\log d)$ | $O(D \log d)$ | - | Arb. |
+| 7 | Circuit | $N\log\left(\frac{N}{\epsilon}\right)$ | $O(n)$ | - | Arb. |
+| 8 | Circuit | $O\left(...\right)$ | $O(m)$ | - | |
+
+
+By looking at different models of quantum computation, we find that state preparation can be performed in constant depth assuming circuits with unbounded Fan-Out gates (Corollary 4.2 of [@rosenthal2021query]). The key idea behind this work was to link state preparation to the evaluation of a DNF (disjunctive normal form) Boolean formula inside the quantum algorithm. Compiling the Fan-Out gates into CNOT gates leads to a circuit of depth $O(n)$ with $O(n2^n)$ ancillas. State preparation can also be studied through the lens of complexity theory, leading to new interesting insights.
+For example, the complexity of generating quantum states has been studied in [@rosenthal2021interactive;@metger2023stateqip]. The states that can be generated by a (space-uniform) polynomial-space quantum circuit form the class $\mathsf{StatePSPACE}$. This class has been proven equivalent to $\mathsf{StateQIP}$ (the class of states that a polynomial-time quantum verifier can generate through interaction with an all-powerful and untrusted quantum prover), echoing the equivalence between the complexity classes $\mathsf{QIP}$ and $\mathsf{PSPACE}$.
+
+There are many other works on state preparation, and we refer the interested reader to [@bergholm2005quantum;@plesch2011quantum;@araujo2021divide;@bausch2022fast;@rattew2022preparing-arbitrary;@rosenthal2021query;@zhang2022quantum;@bouland2023state;@rosenthal2023efficient;@gleinig2021efficient;@holmes2020efficient;@moosa2023linear;@zhao2021smooth]. We now consider two very didactic and general models of quantum state preparation: Grover-Rudolph state preparation [@Grover2002] and state preparation via a precomputed, quantum-accessible data structure called a KP-tree. One difference between the two is that Grover-Rudolph assumes query access to an oracle that does not necessarily need to be implemented with a quantum memory, while the KP-tree method assumes the precomputation of a data structure (a tree) that is specifically stored in the QRAM. In fact, for the kinds of quantum states that Grover-Rudolph was designed to create, there are efficient circuits implementing the oracle, which can be built using Section \@ref(sec:implementation-oracle-synthesis). For both, the total depth of the circuit (considering the QMD as part of the quantum computer) is $O(\log^2(N))$, while the size of the circuit is $O(N\log N)$.
-\begin{equation}
-\ket{i}\ket{\theta_i}\ket{0}\mapsto \ket{i}\ket{\theta_i}\left(\cos \theta_i\ket{0} + \sin \theta_i \ket{1}\right)
-(\#eq:grover-rudolph)
-\end{equation}
-In this case, the value $\theta_i$ is obtained as $\arccos \sqrt{ f(i)}$, where $f : [2^m] \mapsto [0,1]$ is defined as:
+Finally we note that in [@PrakashPhD] (Section 2.2.1), Prakash shows subroutines for generating $\ket{x}$ for a sparse $x$ in time $O(\sqrt{\|x\|_0})$.
+
+
+
+
+
+
+
+
+#### Grover-Rudolph{#sec:implementation-grover-rudolph}
+In [@Grover2002] the authors discussed how to efficiently create quantum states proportional to functions satisfying a certain integrability condition, i.e. the function considered must be square-integrable. An example of functions with this property are [log-concave probability distributions](https://sites.stat.washington.edu/jaw/RESEARCH/TALKS/Toulouse1-Mar-p1-small.pdf). Let $p(x)$ be a probability distribution over $\mathbb{R}$. We denote by $x_i^{(n)}$ the points of the discretization of the domain, i.e. $x_i^{(n)} = -w + 2w \frac{i}{2^n}$ for $i=0,\dots,2^n$, where $[-w,w]$ is the window of discretization, for a constant $w\in\mathbb{R}_+$. Here, $n$ acts as the parameter controlling how coarse or fine the discretization is (see the appendix for more information about measure theory and probability distributions). We want to create the quantum state
+
+\begin{align}
+ |\psi_n\rangle = \sum_{i=0}^{2^n-1}\sqrt{p^{(n)}_i}|i\rangle
+\end{align}
+with
+
+\begin{align}
+ p_i^{(n)} = \int_{x_i^{(n)}}^{x_{i+1}^{(n)}}p(x)\text{d}x.
+\end{align}
+
+In fact, the probabilities $p_i^{(n)}$ are normalized by $\int_{-w}^w p(x)\text{d}x$. This is equivalent to discretizing the sample space $\Omega$ into $N=2^n$ intervals with $N+1$ points, so that we can identify the samples $\omega$ of our discretized random variable with the elements of the set $[N]$. To create the state $\ket{\psi_n}$ we proceed recursively in $n$, starting from the initial state $\ket{0}$. To go from $\ket{\psi_m} = \sum_{i=0}^{2^m-1} \sqrt{p_i^{(m)}}\ket{i}$ to $\ket{\psi_{m+1}} = \sum_{i=0}^{2^{m+1}-1} \sqrt{p_i^{(m+1)}}\ket{i}$ we query an oracle that gives us an angle $\theta_i$, for $i \in [2^m]$, which is used to perform the following rotation:
\begin{equation}
-f(i) = \frac{\int_{x^i_L}^{\frac{x_L^i+x^i_R}{2} } p(x)dx} {\int_{x^i_L}^{x^i_R} p(x)dx}
+\ket{i}\ket{\theta_i}\ket{0}\mapsto \ket{i}\ket{\theta_i}\left(\cos \theta_i\ket{0} + \sin \theta_i \ket{1}\right),
+(\#eq:grover-rudolph-rotation)
\end{equation}
-After the rotation, we undo the mapping that gaves us the $\theta_i$, i.e. we perform $\ket{i}\ket{\theta_i}\mapsto \ket{i}$. These operations resulted in the following state:
+Here, the value $\theta_i$ is $\arccos \sqrt{f(i)}$, where the function $f : [2^m] \mapsto [0,1]$ is defined as:
\begin{equation}
-\sum_{i=0}^{2^m-1} \sqrt{p_i^{(m)}}\ket{i}\left(\cos \theta_i\ket{0} + \sin \theta_i \ket{1}\right) = \ket{\psi_{m+1}}
+f(i) = \frac{\int_{x^i_L}^{\frac{x_L^i+x^i_R}{2} } p(x)dx} {\int_{x^i_L}^{x^i_R} p(x)dx},
+(\#eq:doubleintegral)
\end{equation}
The value of $f(i)$ is the probability that the $i$-th sample $x^i$ (which lies in the interval $[x^i_L, x^i_R]$) falls in the leftmost part of this interval, i.e. in $[x^i_L, (x^i_L+x^i_R)/2]$.
+After the rotation, we undo the mapping that gives us the $\theta_i$. These operations result in the following state:
+
+\begin{equation}
+\sum_{i=0}^{2^m-1} \sqrt{p_i^{(m)}}\ket{i}\left(\cos \theta_i\ket{0} + \sin \theta_i \ket{1}\right) = \ket{\psi_{m+1}},
+(\#eq:partial-state)
+\end{equation}
-This method works only for efficiently integrable probability distributions, i.e. for probabiliy distribution for which the integral in Equation \@ref(eq:grover-rudolph) can be approximated efficiently. A broad class of probability distributions is the class of [log-concave probability distributions](https://sites.stat.washington.edu/jaw/RESEARCH/TALKS/Toulouse1-Mar-p1-small.pdf).
+Computing the mapping for the angles $\theta_i$ can be done efficiently only for efficiently integrable probability distributions, i.e. probability distributions for which the integral in Equation \@ref(eq:doubleintegral) can be approximated efficiently. Fortunately, this is the case for most of the probability distributions that we care about.
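As a sanity check of the recursion above, here is a small classical simulation (our own illustrative sketch; the truncated standard Gaussian and the values of $w$ and $n$ are arbitrary). It computes the discretized probabilities $p_i^{(m)}$, the angles $\theta_i = \arccos\sqrt{f(i)}$, and verifies that $n$ rounds of the rotation in Equation \@ref(eq:grover-rudolph-rotation) produce exactly the amplitudes $\sqrt{p_i^{(n)}}$.

```python
import math

def Phi(x):                          # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

w, n = 3.0, 6                        # window [-w, w], 2^n final intervals
Z = Phi(w) - Phi(-w)                 # normalization over the window

def prob(m, i):                      # p_i^(m): mass of the i-th of 2^m intervals
    lo = -w + 2 * w * i / 2 ** m
    hi = -w + 2 * w * (i + 1) / 2 ** m
    return (Phi(hi) - Phi(lo)) / Z

amps = [1.0]                         # |psi_0> = |0>
for m in range(n):
    nxt = []
    for i, a in enumerate(amps):
        f = prob(m + 1, 2 * i) / prob(m, i)      # mass of the left half-interval
        theta = math.acos(math.sqrt(f))
        nxt += [a * math.cos(theta), a * math.sin(theta)]
    amps = nxt

# n rounds of rotations reproduce the discretized distribution exactly
assert all(math.isclose(a, math.sqrt(prob(n, i))) for i, a in enumerate(amps))
```

The telescoping of the conditional masses $f(i)$ is exactly why the recursion lands on $\sqrt{p_i^{(n)}}$ after $n$ levels.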
-##### The problem with Grover-Rudolph.
+##### The problem (and solutions) with Grover-Rudolph{#sec:implementation-problem-gr}
Creating quantum sample access to a probability distribution is a task often used to obtain quadratic speedups. A recent work [@herbert2021no] pointed out that in certain cases, the time needed to prepare the oracle used to create $\ket{\psi}$ might cancel the benefits of the speedup. This is the case when we don't have an analytical formulation for integrals of the form $\int_a^b p(x)dx$, and we need to resort to numerical methods.
-Often quantum algorithms we want to estimate expected values of integrals of this form $\mathbb{E}[x] := \int_x x p(x) dx$ (e.g. see Chapter \@ref(chap-montecarlo)), Following a garbage-in-garbage-out argument, [@herbert2021no] was able to show that if we require a precision $\epsilon$ in $\mathbb{E}[x]$, we also need to require the same kind of precision for the state preparation of our quantum computer. In particular, in our quantum Monte Carlo algorithms we have to create a state $\ket{\psi}$ encoding a (discretized) version of $p(x)$ as $\ket{\psi}=\sum_{i=0}^{2^n-1} \sqrt{p(i)}\ket{i}$.
+Often in quantum algorithms we want to estimate expected values of integrals of the form $\mathbb{E}[x] := \int_x x p(x) dx$ (e.g. see Chapter \@ref(chap-montecarlo)). Following a garbage-in-garbage-out argument, [@herbert2021no] showed that if we require a precision $\epsilon$ in $\mathbb{E}[x]$, we also need to require the same kind of precision in the state preparation performed by our quantum computer. In particular, in our quantum Monte Carlo algorithms we have to create a state $\ket{\psi}$ encoding a (discretized) version of $p(x)$ as $\ket{\psi}=\sum_{i=0}^{2^n-1} \sqrt{p(i)}\ket{i}$.
Let's define $\mu$ as the mean of a probability distribution $p(x)$ and $\widehat{\mu}$ as an estimate of $\mu$. The error of choice for this kind of problem (which comes from the applications that we will see in Chapter \@ref(chap-montecarlo)) is the Root Mean Square Error (RMSE), i.e. $\widehat{\epsilon} = \sqrt{\mathbb{E}[(\widehat{\mu}- \mu)^2]}$.
-The proof shows that an error of $\epsilon$ in the first rotation of the GR algorithm, due to an error in the computation of the first $f(i)$, would propagate in the final error of the expected value of $\mu$. To avoid this error, we should compute $f(i)$ with accuracy at least $\epsilon$. The best classical algorithms allows us to perform this step at a cost of $O(\frac{1}{\epsilon^2})$, thus canceling the benefits of a quadratic speedup.
+The proof shows that an error in the first rotation of the GR algorithm, due to an error in the computation of the first $f(i)$, propagates to the final error on the expected value $\mu$. To avoid this, we should compute $f(i)$ with accuracy at least $\epsilon$. The best classical algorithms allow us to perform this step at a cost of $O(\frac{1}{\epsilon^2})$, thus canceling the benefits of a quadratic speedup. Mitigating this problem is currently an active area of research.
@@ -430,138 +759,455 @@ The proof shows that an error of $\epsilon$ in the first rotation of the GR algo
+
-Mitigating this problem is currently active area of research.
-
+If we restrict ourselves to loading probabilities from a Gaussian distribution, then we can use the following approach.
+##### The solution: Pre-computation
+Since the Gaussian integrals depend on $\sigma$ only through a rescaling, they can be precomputed classically once and for all. Concretely, we must compute integrals of the form
+\begin{align*}
+ \int_{x_i^{(m)}}^{x_{i+1}^{(m)}}\frac{1}{\sigma\sqrt{\pi}}e^{-x^2/\sigma^2}\text{d}x = \int_{x_i^{(m)}/\sigma}^{x_{i+1}^{(m)}/\sigma}\frac{1}{\sqrt{\pi}}e^{-x^2}\text{d}x
+\end{align*}
-## Block encodings
+for $x_i^{(m)} = -w\sigma + 2w\sigma\frac{i}{2^m}$ with $i=0,\dots,2^m$ and $m=1,\dots,n$. But this is equivalent to computing $\int_{x_i^{(m)}}^{x_{i+1}^{(m)}}\frac{1}{\sqrt{\pi}}e^{-x^2}\text{d}x$ for $x_i^{(m)} = -w + 2w\frac{i}{2^m}$, i.e., for $\sigma=1$, which can be done beforehand with high precision and classically stored. The above iterative construction is thus efficient.
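This rescaling can be checked numerically (our own sketch; the values of $\sigma$, $w$ and $m$ are arbitrary): the mass of each interval under the $\sigma$-dependent density equals the $\sigma=1$ integral over the rescaled endpoints.

```python
import math

def mass(a, b, sigma=1.0):
    # integral of 1/(sigma*sqrt(pi)) * exp(-x^2 / sigma^2) over [a, b]
    return 0.5 * (math.erf(b / sigma) - math.erf(a / sigma))

sigma, w, m = 2.5, 3.0, 4
for i in range(2 ** m):
    a = -w * sigma + 2 * w * sigma * i / 2 ** m          # grid points x_i^(m)
    b = -w * sigma + 2 * w * sigma * (i + 1) / 2 ** m
    # same mass as the sigma = 1 integral over the rescaled endpoints
    assert math.isclose(mass(a, b, sigma), mass(a / sigma, b / sigma))
```

Only the $\sigma=1$ table needs to be stored; all other standard deviations reuse it through the change of variables $u = x/\sigma$.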
-In this section we discuss another kind of model for working with a matrix in a quantum computer. More precisely, we want to encode a matrix into a unitary (for which we have a quantum circuit). As it will become clear in the next chapters, being able to perform such encoding unlocks many possibilities in terms of new quantum algorithms.
+
+
+
+
+
+
+
-```{definition, name="Block encodings"}
-Let $A \in \mathbb{R}^{N \times N}$ be a square matrix for $N = 2^n$ for $n\in\mathbb{N}$, and let $\alpha \geq 1$. For $\epsilon > 0$, we say that a $(n+a)$-qubit unitary $U_A$ is a $(\alpha, a, \epsilon)$-block-encoding of $A$ if
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+#### KP-Trees{#sec:implementation-KPtrees}
+We now describe how to amplitude-encode vectors and matrices using a precomputed tree data structure stored in a quantum-accessible memory.
+
+Let's recall that for a matrix $X \in \mathbb{R}^{n \times d}$ (where we assume that $n$ and $d$ are powers of $2$, otherwise we can just pad the matrix with zeros) with rows $x_i$, an amplitude encoding is the state:
+
+\begin{equation}
+\ket{X} = \frac{1}{\sqrt{\sum_{i=1}^n {\left \lVert x_i \right \rVert}^2 }} \sum_{i=1}^n {\left \lVert x_i \right \rVert}\ket{i}\ket{x_i},
+ (\#eq:matrix-state)
+\end{equation}
+
+where $\sqrt{\sum_{i=1}^n \left \lVert x_i \right \rVert^2} = \left \lVert X \right \rVert_F$ is the Frobenius norm of $X$.
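Concretely, the amplitudes of the state in Equation \@ref(eq:matrix-state) are just the entries of $X$ divided by its Frobenius norm, as this small NumPy sketch (with an arbitrary random matrix of our choosing) verifies:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                                  # dimensions, powers of two
X = rng.normal(size=(n, d))

row_norms = np.linalg.norm(X, axis=1)
fro = np.linalg.norm(X)                      # Frobenius norm ||X||_F
# amplitude of |i>|j>: (||x_i|| / ||X||_F) * (x_ij / ||x_i||) = x_ij / ||X||_F
amps = np.array([(row_norms[i] / fro) * (X[i] / row_norms[i])
                 for i in range(n)]).ravel()

assert np.allclose(amps, X.ravel() / fro)    # flattened, normalized matrix
assert np.isclose(np.linalg.norm(amps), 1.0)
```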
+
+
+
-$$\| A - \alpha ( \bra{0} \otimes I)U_A (\ket{0} \otimes I) \| \leq \epsilon$$
+
+Note that Equation \@ref(eq:matrix-state) is just an amplitude encoding of $n$ vectors of size $d$ together with one vector of size $n$, which reduces the problem of creating an amplitude encoding of a matrix to creating amplitude encodings of a series of vectors. The PhD thesis of Prakash [@PrakashPhD] introduced the first procedure to efficiently perform the amplitude encoding of a matrix using a tree-like classical data structure. Given a matrix $V \in \mathbb{R}^{n \times d}$, the procedure, named KP-trees by Rebentrost in [@rebentrost2018quantum] after the authors Kerenidis and Prakash, creates a data structure of size $\widetilde{O}(\|V\|_0)$ in which the time to update the tree with a new entry scales as $O(\text{poly} \log(nd))$. In addition, an algorithm to perform the amplitude encoding of a vector is given, which scales as $O(\log(nd))$. The proof will make use of the following lemma on cascades of rotations:
+
+
+
+
+```{lemma, cascade-controlled-rotations, name="Implementing Rotations with Cascades of Controlled Unitary Gates"}
+Given a register $\mathtt{A}$ of $t$ qubits holding the binary fixed-point representation of a number $\theta \in (0,2\pi]$, a target qubit $\mathtt{b}$, and a single-qubit rotation $\mathtt{R}(\theta) \in \mathbb{C}^{2 \times 2}$ parameterized by a single angle, the unitary
+$$\mathtt{C}_{\mathtt{A}}\mathtt{R}_{\mapsto \mathtt{b}}(\theta) = \prod_{i = 0}^{t-1} \mathtt{C}_i \mathtt{R}_{\mapsto \mathtt{b}}\left( 2^{\lfloor\log_2(\theta)\rfloor-i} \right)$$
+is equivalent to applying $\mathtt{R}(\theta)$ on the target qubit, provided the rotation satisfies $R(\theta_1 + \theta_2) = R(\theta_1)R(\theta_2)$.
```
-Note an important (but simple) thing. An $(\alpha, a, \epsilon)$-block encoding of $A$ is just a $(1, a, \epsilon)$-block-encoding of $A/\alpha$.
-
+```{proof}
+For a number $\theta \in (0, 2\pi]$ on $t$ qubits, the fixed-point representation has $c_1 = \lfloor\log_2(\theta)\rfloor + 1$ integer bits and $c_2 = t - c_1$ fractional bits, with bit values $z_0, \dots, z_{t-1}$. \\
+The application of $\mathtt{C}_{\mathtt{A}}\mathtt{R}_{\mapsto \mathtt{b}}(\theta)$ is equivalent to $t$ single-qubit applications of $\mathtt{R}$ on the target qubit $\mathtt{b}$ (with the identity on all other qubits), with suitably adjusted angles.
+In particular:
+\begin{equation*}
+ \prod_{i = 0}^{t-1} \mathtt{C}_i \mathtt{R}_{\mapsto \mathtt{b}}\left( 2^{\lfloor\log_2(\theta)\rfloor-i} \right) = \prod_{i = 0}^{t-1} \mathtt{R}_{\mapsto \mathtt{b}}\left( z_i 2^{\lfloor\log_2(\theta)\rfloor-i} \right)
+\end{equation*}
+where $z_i \in \{ 0,1 \}$ is the state of qubit $i$ and we have omitted the tensor product with the identity on all other qubits of register $\mathtt{A}$ for simplicity. Then:
+\begin{equation*}
+\prod_{i = 0}^{t-1} \mathtt{R}_{\mapsto \mathtt{b}}\left( z_i 2^{\lfloor\log_2(\theta)\rfloor-i} \right) = R_{\mapsto \mathtt{b}}\left( \sum_{i=0}^{t-1} z_i 2^{\lfloor\log_2(\theta)\rfloor-i} \right) = R_{\mapsto \mathtt{b}}(\theta),
+\end{equation*}
+where we have used that $R(\phi_1 + \phi_2) = R(\phi_1)R(\phi_2)$ and
+observing that
+$\sum\limits_{i=0}^{t-1} z_i2^{\lfloor\log(\theta)\rfloor-i}$ is
+the fixed-point encoding of $\theta$.\\
+Since the proof only uses the fact that the parameterized gate
+satisfies the property $R(\phi_1 + \phi_2) = R(\phi_1)R(\phi_2)$,
+it extends directly to the phase gate $P$ and the $y$-rotation gate $R_y$.
+```
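The lemma can also be checked numerically. In this sketch (our own illustration with NumPy; the bit values and the choice of leading bit weight are arbitrary) we verify that the cascade of bit-controlled $R_y$ rotations composes into a single $R_y(\theta)$, using the additivity property from the proof:

```python
import numpy as np

def Ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(3)
t = 8
z = rng.integers(0, 2, t)                    # fixed-point bits of theta (msb weight 2^0)
theta = sum(int(z[i]) * 2.0 ** (-i) for i in range(t))

# Cascade: each set control bit contributes a rotation by its bit weight
U = np.eye(2)
for i in range(t):
    if z[i]:
        U = Ry(2.0 ** (-i)) @ U

assert np.allclose(U, Ry(theta))             # since Ry(a) Ry(b) = Ry(a + b)
```

Because all the rotations are about the same axis they commute, so the order of the controlled gates in the cascade is irrelevant.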
-We report this result from [@gilyen2019quantum].
-(ref:gilyen2019quantum) [@gilyen2019quantum]
+
+
+
+
+
+
+
+
+
+
+
+
+```{exercise, compositionofrotations}
+Prove that $\sigma_y(\theta + \phi) = \sigma_y(\theta)\sigma_y(\phi)$ for the $\sigma_y$ rotation given by
+
+\begin{equation}
+\sigma_y(\theta) =
+\begin{pmatrix}
+\cos(\frac{\theta}{2})& -\sin(\frac{\theta}{2})\\
+\sin(\frac{\theta}{2}) & \cos(\frac{\theta}{2})
+\end{pmatrix}
+\end{equation}
+
+Prove the same property for the phase gate $P(\theta + \phi) = P(\theta)P(\phi)$ defined by
+\begin{equation}
+P(\theta) =
+\begin{pmatrix}
+1 & 0\\
+0 & e^{i \theta}
+\end{pmatrix}
+\end{equation}
+```
-```{definition, name="Block encoding from sparse access (ref:gilyen2019quantum)"}
-Let $A \in \mathbb{C}^{2^w \times 2^w}$ be a matrix that is $s_r$-row-sparse and $s_c$-column-sparse, and each element of $A$ has abolute value at most $1$. Suppose that we have access to the following sparse access oracles acting on two $(w+1)$ qubit registers:
- $$O_r: \ket{i}\ket{k} \mapsto \ket{i}\ket{r_{ik}} \forall i \in [2^w] - 1, k \in [s_r], \text{and}$$
+The original work developed a procedure to perform the amplitude encoding of a matrix $V \in \mathbb{R}^{n \times d}$ with KP-trees in two steps: a pre-processing step and a circuit implementation. In the pre-processing step, for each row $v_i \in \mathbb{R}^{d}$, composed of elements $v_{ij} \in \mathbb{R}$, a binary tree is created whose leaves contain the values $v_{ij}^2$ and the sign of $v_{ij}$. Each intermediate node contains the sum of the leaves of the subtree rooted at that node. The circuit implementation uses the $\mathsf{QMD}$ to access the next layer of the desired tree for $v_i$ until the leaves are reached.
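The pre-processing step can be sketched classically as follows (a minimal illustration; the helper `kp_tree` and the example row are ours): each layer stores partial squared norms, the signs are kept with the leaves, and the root ends up holding $\|v_i\|^2$.

```python
import numpy as np

def kp_tree(v):
    """Build the layers of a KP-tree for a row v (length a power of two).
    Layer 0 holds the squared entries (signs kept separately); each parent
    node stores the sum of its two children, so the root holds ||v||^2."""
    assert len(v) & (len(v) - 1) == 0        # pad with zeros otherwise
    signs = np.sign(v)
    layer = [float(x) ** 2 for x in v]
    layers = [layer]
    while len(layer) > 1:
        layer = [layer[2 * k] + layer[2 * k + 1] for k in range(len(layer) // 2)]
        layers.append(layer)
    return layers, signs

v = np.array([0.5, -1.0, 2.0, 0.25])
layers, signs = kp_tree(v)
assert np.isclose(layers[-1][0], np.linalg.norm(v) ** 2)   # root stores ||v||^2
# Leaf values divided by the root give the sampling probabilities |v_j|^2 / ||v||^2
probs = np.array(layers[0]) / layers[-1][0]
assert np.isclose(probs.sum(), 1.0)
```

Descending the tree and splitting each node's value between its two children is exactly what determines the angles of the controlled rotations in the state preparation circuit.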
- $$O_c: \ket{l}\ket{j} \mapsto \ket{c_lj}\ket{j} \forall l \in [s_c], j \in [2^w]-1, \text{where}$$
+Here we present an optimized version of KP-trees for the state preparation of complex matrices. In this version each tree is pruned so that it only stores the angles required for the state preparation, rather than the partial norms, which halves the size of the memory. Furthermore, we lay out explicitly the circuits for the implementation of the state preparation, which can be seen in Figure \@ref(fig:KP-trees-ry). In addition, complex matrices are handled by storing the phase of each complex number; the circuit implementing the phases is depicted in Figure \@ref(fig:KP-trees-phase).
-$r_{ij}$ is the index of the $j$-th non-zero entry of the $i$-th row of $A$, or if there are less than $i$ non-zero entries, than it is $j+2^w$, and similarly $c_ij$ is the index for the $i$-th non-zero entry of the $j-th$ column of $A$, or if there are less than $j$ non-zero entries, than it is $i+2^w$. Additionally assume that we have access to an oracle $O_A$ that returns the entries of $A$ in binary description:
- $$O_A : \ket{i}\ket{j}\ket{0}^{\otimes b} \mapsto \ket{i}\ket{j}\ket{a_{ij}} \forall i,j \in [2^w]-1 \text{where}$$
+
- $a_{ij}$ is a $b$-bit description of the $ij$-matrix element of $A$. Then we can implement a $(\sqrt{s_rs_c}, w+3, \epsilon)$-block-encoding of $A$ with a single use of $O_r$, $O_c$, two uses of $O_A$ and additionally using $O(w + \log^{2.5(\frac{s_rs_c}{\epsilon})})$ one and two qubit gates while using $O(b,\log^{2.5}\frac{s_rs_c}{\epsilon})$ ancilla qubits.
+```{theorem,KP-tree-state-preparation,name="Optimized KP-trees for complex vectors"}
+Let $V \in \mathbb{C}^{m \times n}$, where we assume that $m$ and $n$ are powers of $2$. Then there is a data structure to store the rows of $V$ such that:
+  * The size of the data structure is $O(\|V\|_0 \log^2(mn))$.
+  * The time to insert, update or delete a single entry $v_{ij}$ is $O(\log^{2}(mn))$.
+  * A quantum algorithm with quantum access to the data structure can perform the mapping $\tilde{U}: \ket{i}\ket{0} \rightarrow \ket{i}\ket{V_i}$ for $i \in [m]$, corresponding to the rows of the matrix currently stored in memory, and the mapping $\tilde{V}: \ket{0}\ket{j} \rightarrow \ket{\tilde{V}}\ket{j}$, for $j \in [n]$, where $\tilde{V} \in \mathbb{R}^{m}$ has entries $\tilde{V}_i = \|V_i\|$, in time $(2+t)\log(mn) + 2t$, where $t$ is the precision of the result.
```
-The previous theorems can be read more simply as: "under reasonable assumptions (**quantum general graph model** for rows and for columns - see previous section), we can build $(\sqrt{s_rs_c}, w+3, \epsilon)$-block-encodings of matrices $A$ with circuit complexity of $O(\log^{2.5(\frac{s_rs_c}{\epsilon})})$ gates and constant queries to the oracles".
+```{proof}
+The data structure is composed of $m+1$ binary trees: one for each row of the matrix, and an additional one for the vector of norms of the rows. For the $i^{th}$ row of $V$ (i.e. $V_i$) we build the binary tree $B_i$. Given that $n$ is a power of $2$, the tree $B_i$ has $\log(n)$ layers.
-Now we turn our attention to another way of creating block-encodings, namely leveraging the quantum data structure that we discussed in Section \@ref(q-sampling-access). In the following, we use the shortcut notation to define the matrix $A^{(p)}$ to be a matrix where each entry $A^{(p)}_{ij} = (A_{ij})^p$.
+For an incoming element $j$ of the vector $V_i$, of the form $v_{ij}e^{i\theta_{ij}}$, the $j^{\text{th}}$ leaf of the $i^{th}$ tree stores the tuple $(v_{ij}^2, \theta_{ij})$. Once a leaf is updated, the internal nodes are updated as well, so that an internal node $l$ stores the sum of the squared moduli $v_{ij}^2$ of the leaves of the subtree rooted at $l$. This procedure continues until the whole matrix is loaded.
-```{definition, mu, name="Possible parameterization of μ for the KP-trees"}
-For $s_{p}(A) = \max_{i \in [n]} \sum_{j \in [d]} A_{ij}^{p}$, we chose $\mu_p(A)$ to be:
-$$\mu_p(A)=\min_{p\in [0,1]} (\norm{A}_{F}, \sqrt{s_{2p}(A)s_{(1-2p)}(A^{T})}).$$
+At the end of the procedure, we represent an internal node $l$ in the $i^{th}$ tree at depth $d$ as $B_{id}^{l}$. If $j_b$ represents the $b^{th}$ bit of $j$ then:
+
+\begin{equation}
+ B_{id}^{l} = \sum\limits_{\substack{j_1,...,j_d=l\\j_{d+1},...,j_{\log(n)} \in \{0,1\}}} v_{ij}^2,
+\end{equation}
+
+This means that the first $d$ bits of $j$, written in binary, are fixed to the binary representation of $l$, indicating that we are at depth $d$. This procedure requires $O(\log^2(mn))$ time.
+
+After the tree has been filled, it is pruned. The pruning replaces the values in the internal nodes with the angles required to perform the state preparation. This allows us to remove $v_{ij}^2$ from the leaves, which now only store the phases $\theta_{ij}$.
+
+Given a node $B_{id}^{l}$, the pruned tree will have nodes given by:
+
+\begin{equation}
+B_{i,d}^{' l} =
+ \begin{cases}
+ 2\arccos \left( \sqrt{\frac{B_{i,d+1}^{2l}}{B_{i,d}^{l}}}\right), & \text{if } d < \log(n)\\
+ \theta_{i,\log(n)}^l, & \text{if } d=\log(n)
+ \end{cases},
+\end{equation}
+
+The size of the data structure scales as $O(\|V\|_0\log^2(mn))$.
+
+Each angle in the tree is stored in its fixed-point representation. Since the first $\log(n)-1$ layers are concerned with preparing the real amplitudes, $B_{i,k}^{' l} \in [0, \pi]$ for $k \in [\log(n)-1]$, so their fixed-point representation $\mathcal{Q}(z)$ has $c_1 = 1$ and $c_2 = t$ for some precision $t$. The last layer contains the phases, which are bounded by $B_{i,\log(n)}^{' l} \in [0, 2\pi]$, so their fixed-point representation $\mathcal{Q}(z)$ has $c_1 = 2$ and $c_2 = t-1$. This means that for a precision $t$ the address register requires $t'=t+1$ qubits.
+
+The circuit requires quantum access to the data structure $B_{i,d}^{'}$ as well as $3$ registers: an index register $\ket{i}$, composed of $\lceil \log(m) \rceil$ qubits, holding the binary representation of $i$; a $t'$-qubit address (angle) register, which loads the fixed-point representation of the angles up to a precision $t$; and the main register, composed of $\log(n)$ qubits.
+
+A quantum query to the data structure loads from $B_{i}^{'}$ the fixed-point representation of an angle on the address register, followed by a cascade of controlled rotations, as seen in Lemma \@ref(lem:Rotations), in order to produce the next layer of the data structure $B_{i}$. If the first $k$ qubits of the main register are in a state $\ket{\Psi_k}$, the action on the $(k+1)^{th}$ qubit is as follows:
+
+\begin{equation}
+\ket{i}\ket{0}^{\otimes t'}\ket{\Psi_k}\ket{0} \xrightarrow{\text{QMD}}
+\ket{i}\ket{B_{i,d}^{'}}\ket{\Psi_k}\ket{0}\xrightarrow{\sigma_y(B_{i,d}^{'})}
+\ket{i}\ket{B_{i,d}^{'}}\ket{\Psi_k}\frac{1}{\sqrt{B_{i,d}^{l}}}\left(\sqrt{B_{i,d+1}^{2l}}\ket{0} + \sqrt{B_{i,d+1}^{2l+1}}\ket{1}\right),
+\end{equation}
+
+where $\xrightarrow{\sigma_y(B_{i,d}^{'})}$ indicates a cascade of $\sigma_y$ rotations controlled on the address register and targeting the $(k+1)^{th}$ qubit of the main register.
+
+Repeating this procedure for the first $\log(n)$ layers of the tree $B_i^{'}$ leaves the main register in the state $\ket{\Psi_{\log(n)}}$, i.e. the amplitude encoding of the moduli of the vector $V_i$. The addition of the phases requires a quantum query to the classical data structure to access the final layer of the binary tree. Following the query, a cascade of phase gates on the $\log(n)^{th}$ qubit is performed, followed by a $\sigma_x$ gate, another cascade of phase rotations, and finally another $\sigma_x$ gate. The $\sigma_x$ gates are required since the phase gates act only on the states where the last bit is $1$. The transformation is then:
+
+\begin{equation}
+\ket{i}\ket{0}^{\otimes t'}\ket{\Psi_{\log(n)}} \xrightarrow{\text{QMD}}
+\ket{i}\ket{B_{i,\log(n)}^{'}}\ket{\Psi_{\log(n)}}\xrightarrow{\text{CP}(B_{i,\log(n)}^{'})}
+\ket{i}\ket{B_{i,\log(n)}^{'}}e^{i\theta_{il}}\ket{\Psi_{\log(n)}},
+\end{equation}
+
+where $\theta_{il}$ is the phase stored in the leaf $l$. From this we can see that performing $\tilde{U}: \ket{i}\ket{0} \rightarrow \ket{i}\ket{V_i}$ for $i \in [m]$ requires $2\log(n) + 2$ queries to the classical data structure. In addition, a precision of $t$ requires at most $t'(\log(n) + 2)$ controlled rotations.
+
+The final tree is used for the implementation of $\tilde{V}: \ket{0}\ket{j} \rightarrow \ket{\tilde{V}}\ket{j}$, for $j \in [n]$. As before, we construct the tree so that the leaves store $\|V_i\|^2$ and an internal node $l$ stores the sum of the leaves of the subtree rooted at $l$. Note that in this case the nodes do not need to store any phases, since we are interested in the moduli of the rows, which halves the size of this tree. After an analogous pruning procedure, a circuit of controlled $\sigma_y$ rotations implements the unitary $\tilde{V}$ on the main register.
```
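The pre-processing and the rotation cascade of the proof can be simulated classically for a single row. The NumPy sketch below (our own; it assumes a normalized vector whose length is a power of $2$ and no zero subtree sums, and the function names are hypothetical) builds the pruned tree of angles and phases and replays the cascade, recovering the amplitude encoding:

```python
import numpy as np

def pruned_kp_tree(v):
    """Angles of a pruned KP-tree for a normalized complex vector v.

    len(v) must be a power of 2 and, for simplicity of this sketch,
    no subtree sum may be zero. Returns log2(n) layers of sigma_y
    angles plus a final layer holding the phases of the entries.
    """
    sums = [np.abs(v) ** 2]
    while len(sums[0]) > 1:
        # parent node = sum of its two children
        sums.insert(0, sums[0][0::2] + sums[0][1::2])
    layers = [2 * np.arccos(np.sqrt(sums[d + 1][0::2] / sums[d]))
              for d in range(len(sums) - 1)]
    layers.append(np.angle(v))  # the leaves keep only the phases theta_ij
    return layers

def replay_circuit(layers):
    """Classically replay the cascade of controlled sigma_y rotations
    followed by the cascades of phase gates."""
    amps = np.array([1.0])
    for thetas in layers[:-1]:
        nxt = np.empty(2 * len(amps))
        nxt[0::2] = amps * np.cos(thetas / 2)  # branch |0>
        nxt[1::2] = amps * np.sin(thetas / 2)  # branch |1>
        amps = nxt
    return amps * np.exp(1j * layers[-1])

v = np.array([0.4 + 0.0j, 0.4j, -0.8, 0.2])
v /= np.linalg.norm(v)
assert np.allclose(replay_circuit(pruned_kp_tree(v)), v)
print("reconstructed state matches v")
```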
-(ref:CGJ18) [@CGJ18]
+```{r, KP-trees-ry, echo=FALSE, fig.width=10, fig.cap="A section of the circuit for state preparation with pruned KP-trees. For a vector $V_i \\in \\mathbb{C}^{n}$ we require 3 registers: an index register $\\ket{i}$; a $t$-qubit address register which will load the angles up to a precision $t$; and the main register composed of $\\log(n)$ qubits. At an intermediate step of the procedure the main register holds a state which represents the $k$-th layer of the KP-tree $\\Psi_k$, and the aim of the circuit is to prepare the $(k+1)$-th layer of the tree, $\\Psi_{k+1}$. This starts with a quantum query to the data structure which loads the angles to the address register. This is followed by a cascade of controlled rotations and finally an inverse call to the data structure. Repeating this circuit $\\log(n)$ times produces the state $\\Psi_{\\log(n)}$, which is a vector that holds the moduli of the components of $V_i$. In total the circuit will have a depth of $t\\log(n)$ and will require $2\\log(n)$ queries to the data structure."}
+knitr::include_graphics("images/KP_Trees_RY.png")
+```
+```{r, KP-trees-phase, echo=FALSE, fig.width=10, fig.cap="The circuit that adds the phase in state preparation with pruned KP-trees. This starts with a quantum query to the data structure which loads the angles to the address register, which is followed by a cascade of phase rotations, a NOT gate, and another cascade. Two cascades are required because the controlled phase rotations act only on the quantum states with a $1$ as the last bit. The application of this circuit produces the desired vector $V_i$."}
+knitr::include_graphics("images/KP_Trees_RPhase.png")
+```
-```{lemma, kp-block-encodings, name="Block encodings from quantum data structures (ref:CGJ18)"}
-Let $A\in\mathbb{C}^{M\times N}$, and $\overline{A}\in\mathbb{C}^{(M+N)\times (M+N)}$ be the symmetrized matrix defined as
-$$\overline{A}=\left[\begin{array}{cc}0 & A\\ A^\dagger & 0 \end{array}\right].$$
+We now show an example of state preparation with pruned KP-trees for a vector of size $4$. Figure \@ref(fig:KP-trees-example) shows the difference between an original KP-tree and its pruned version, whilst Figure \@ref(fig:example-circuit) shows the full circuit implementation to load the vector.
+```{example}
+As a worked example we take a $4$-element vector, for which we construct the data structure and then, using a $\mathsf{QMD}$, load it into a quantum circuit. The vector is:
- - Fix $p\in [0,1]$. If $A\in\mathbb{C}^{M\times N}$, and $A^{(p)}$ and $(A^{(1-p)})^\dagger$ are both stored in quantum-accessible data structures with sufficient precision, then there exist unitaries $U_R$ and $U_L$ that can be implemented in time $O\left(polylog(MN/\epsilon)\right)$ such that
-$U_R^\dagger U_L$ is a $(\mu_p(A),\lceil\log (N+M+1)\rceil,\epsilon)$-block-encoding of $\overline{A}$.
+\begin{equation}
+V^{T} = \left( 0.4e^{i\frac{\pi}{4}}, 0.4e^{i\frac{\pi}{12}} , 0.8e^{i\frac{\pi}{3}}, 0.2e^{i\frac{\pi}{6}} \right),
+\end{equation}
+
+In particular our aim is to produce the state:
+
+\begin{equation}
+ \ket{\phi} = 0.4e^{i\frac{\pi}{4}}\ket{00} + 0.4e^{i\frac{\pi}{12}}\ket{01} + 0.8e^{i\frac{\pi}{3}}\ket{10} + 0.2e^{i\frac{\pi}{6}}\ket{11},
+\end{equation}
+
+The first step is the construction of the data structure. Figure \@ref(fig:KP-trees-example) shows the initial tree and the pruned tree. The difference between the two is that the initial tree stores the partial norms of the entries, whilst the pruned tree stores the angles required to implement the partial norms as $\sigma_y$ rotations.
+
+Since we are working with a single vector, the index register has only $1$ qubit, which is set to the state $\ket{0}$. The angles are stored with a precision $t$, which we assume to be large; this requires the angle (address) register to be composed of $t$ qubits. Finally, since the vector has $4$ elements, we need $2$ qubits in the main register.
+
+An initial call to the $\mathsf{QMD}$ loads the angle on the address register; this is followed by a cascade of $\sigma_y$ rotations on the first qubit and an inverse call to the $\mathsf{QMD}$ to remove the angle from the address register. This produces the state:
+
+\begin{equation}
+ \ket{0}\ket{0}^{\otimes t}\left( \sqrt{0.32}\ket{0} + \sqrt{0.68}\ket{1}\right)\ket{0},
+\end{equation}
+
+A similar process is performed in the second step, where the call to the $\mathsf{QMD}$ is made using the index register and the first qubit, and accesses the second layer of the pruned KP-tree. This is followed by a similar cascade of $\sigma_y$ rotations on the second qubit of the main register and an inverse call to the $\mathsf{QMD}$ to remove the angle from the address register. This produces the state:
+
+\begin{equation}
+  \ket{0}\ket{0}^{\otimes t}\left(\sqrt{0.32}\ket{0} \left( \sqrt{\frac{0.16}{0.32}}\ket{0} + \sqrt{\frac{0.16}{0.32}}\ket{1} \right) + \sqrt{0.68}\ket{1} \left( \sqrt{\frac{0.64}{0.68}}\ket{0} + \sqrt{\frac{0.04}{0.68}}\ket{1} \right)\right),
+\end{equation}
+
+\begin{equation}
+ = \ket{0}\ket{0}^{\otimes t}\left(0.4\ket{00} + 0.4\ket{01} + 0.8\ket{10} + 0.2\ket{11}\right),
+\end{equation}
+
+Now we need to add the phases. As already seen, this is done with a query to the $\mathsf{QMD}$ to access the final layer of the pruned KP-tree, followed by a cascade of phase rotations $P$. This produces the state:
+
+\begin{equation}
+ \ket{0}\ket{\theta}\left(0.4\ket{00} + 0.4e^{i\frac{\pi}{12}}\ket{01} + 0.8\ket{10} + 0.2e^{i\frac{\pi}{6}}\ket{11}\right),
+\end{equation}
+
+where the state $\ket{\theta}$ is some superposition containing the binary representation of the leaves of the pruned KP-tree. Because of the nature of the phase gate, a $\sigma_x$ gate followed by another cascade of phase rotations is required to load the remaining phases. Finally, a $\sigma_x$ gate and an inverse query to the $\mathsf{QMD}$ produce the desired state:
-- On the other hand, if $A$ is stored in a quantum-accessible data structure with sufficient precision, then there exist unitaries $U_R$ and $U_L$ that can be implemented in time $O(polylog(MN)/\epsilon)$ such that $U_R^\dagger U_L$ is a $(\|A\|_F,\lceil\log(M+N)\rceil,\epsilon)$-block-encoding of $\overline{A}$.
+\begin{equation}
+ \ket{0}\ket{0}^{\otimes t} \left(0.4e^{i\frac{\pi}{4}}\ket{00} + 0.4e^{i\frac{\pi}{12}}\ket{01} + 0.8e^{i\frac{\pi}{3}}\ket{10} + 0.2e^{i\frac{\pi}{6}}\ket{11}\right),
+\end{equation}
+The full circuit can be seen in Figure \@ref(fig:example-circuit).
```
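The intermediate states of the worked example can be checked step by step with a few lines of NumPy (our own sketch, not part of the original construction):

```python
import numpy as np

# the example vector and the tree values quoted in the text
v = np.array([0.4 * np.exp(1j * np.pi / 4),
              0.4 * np.exp(1j * np.pi / 12),
              0.8 * np.exp(1j * np.pi / 3),
              0.2 * np.exp(1j * np.pi / 6)])

moduli2 = np.abs(v) ** 2
assert np.isclose(moduli2.sum(), 1.0)           # |phi> is normalized
left, right = moduli2[:2].sum(), moduli2[2:].sum()
assert np.isclose(left, 0.32) and np.isclose(right, 0.68)

# first sigma_y rotation: sqrt(0.32)|0> + sqrt(0.68)|1>
theta0 = 2 * np.arccos(np.sqrt(left))
amps1 = np.array([np.cos(theta0 / 2), np.sin(theta0 / 2)])
assert np.allclose(amps1 ** 2, [0.32, 0.68])

# second layer of rotations yields the moduli (0.4, 0.4, 0.8, 0.2)
state = np.concatenate([
    amps1[0] * np.array([np.sqrt(0.16 / 0.32), np.sqrt(0.16 / 0.32)]),
    amps1[1] * np.array([np.sqrt(0.64 / 0.68), np.sqrt(0.04 / 0.68)])])
assert np.allclose(state, [0.4, 0.4, 0.8, 0.2])

# the two cascades of phase gates attach the phases stored in the leaves
state = state * np.exp(1j * np.angle(v))
assert np.allclose(state, v)
print("worked example verified")
```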
-The second point of the previous theorem is just our good old Definition \@ref(def:KP-trees).
+
+
+```{r, KP-trees-example, echo=FALSE, fig.width=10, fig.cap="The original and pruned tree used for the example. The difference between the two is that the initial tree stores the partial norms of the vector entries, whilst the pruned tree only stores the angles needed to implement the $\\sigma_y$ and phase rotations."}
+knitr::include_graphics("images/KP-trees-example.png")
+```
+```{r, example-circuit, echo=FALSE, fig.width=10, fig.cap="The circuit to perform state preparation of the vector reported in the worked example. An initial call to the $\\mathsf{QMD}$ loads the angle on the address register, which is followed by a cascade of $\\sigma_y$ rotations on the first qubit and an inverse call to the $\\mathsf{QMD}$ to remove the angle from the address register. A similar process is done for the second step, where the call to the $\\mathsf{QMD}$ is made using the index register and the first qubit and accesses the first layer of the pruned KP-tree. This is followed by a similar cascade of $\\sigma_y$ rotations on the second qubit of the main register and an inverse call to the $\\mathsf{QMD}$. The final step requires adding the phases. This is done with a query to the $\\mathsf{QMD}$ to access the final row of the pruned KP-tree, then a cascade of phase rotations $P$. Because of the nature of the phase gate, a $\\sigma_x$ gate followed by another cascade of phase rotations is required to load the remaining phases. Finally, a $\\sigma_x$ gate and an inverse query to the $\\mathsf{QMD}$ produce the amplitude encoding."}
+knitr::include_graphics("images/Example_circuit.png")
+```
-
+The following exercise might be helpful to clarify the relation between quantum query access to a vector and quantum sampling access.
-
+```{exercise}
+Suppose you have quantum query access to a vector $x = [x_1, \dots, x_N]$, where each $x_i \in [0,1]$. What is the cost of creating quantum sampling access to $x$, i.e. the cost of preparing the state $\frac{1}{Z}\sum_{i=1}^N x_i \ket{i}$? Hint: query the state in superposition and perform a controlled rotation. Can you improve the cost using amplitude amplification? What if $x_i \in [0, B]$ for some $B > 1$?
+```
-
+Lower bounds in query complexity can be used to prove that the worst-case cost of performing state preparation with the technique used in the exercise (i.e. without KP-trees/quantum sampling access) is $\Omega(\sqrt{N})$.
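A small classical simulation of the strategy suggested in the hint, for $B=1$ (our own sketch): querying in superposition and rotating an ancilla by $\arcsin(x_i)$ gives success probability $\|x\|^2/N$ when postselecting the ancilla on $\ket{1}$, so amplitude amplification needs $O(\sqrt{N}/\|x\|)$ rounds, which is $O(\sqrt{N})$ in the worst case.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 16
x = rng.uniform(0.0, 1.0, N)       # entries x_i in [0,1], accessed via queries

# Uniform superposition over i, then rotate an ancilla by arcsin(x_i):
# (1/sqrt(N)) sum_i |i> ( sqrt(1 - x_i^2)|0> + x_i|1> )
amp_anc1 = x / np.sqrt(N)           # amplitudes of |i>|1>
p_success = np.sum(amp_anc1 ** 2)   # probability of measuring the ancilla in |1>
assert np.isclose(p_success, np.linalg.norm(x) ** 2 / N)

# postselecting on |1> leaves exactly the sampling-access state (1/Z) sum_i x_i |i>
state = amp_anc1 / np.linalg.norm(amp_anc1)
assert np.allclose(state, x / np.linalg.norm(x))
print(f"success probability: {p_success:.3f}")
```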
+
+
+### Block encoding {#sec:implementation-block-encoding}
+
+We first turn our attention to creating block encodings from oracles doing amplitude encoding. In the following, we use the shortcut notation to define the matrix $A^{(p)}$ to be a matrix where each entry $A^{(p)}_{ij} = (A_{ij})^p$.
+
+```{exercise, creationstatematrix}
+Let $X \in \mathbb{R}^{n \times d}$. Suppose you have access to $U_R$ and $U_L$ defined as:
+
+ - $U_R\ket{i}\ket{0} = \ket{i}\ket{x_i} = \ket{i} \frac{1}{\|x_i\|}\sum_{j} (x_i)_j \ket{j}$;
+ - $U_L\ket{0}\ket{i} = \ket{\widetilde{x}}\ket{i}$, where $\widetilde{x}$ is the vector of the norms of the rows of the matrix $X$.
-## Importance of quantum memory models
+Using $U_R$ and $U_L$ once each, build a unitary $U_X$ that performs the mapping $\ket{0} \mapsto \frac{1}{\|X\|_F}\sum_{i,j}x_{ij} \ket{i,j}$.
+```
-To grasp the importance of this model we have to discuss the bottlenecks of doing data analysis on massive datasets in current classical architectures. When the data that needs to be processed surpass the size of the available memory, the dataset can only be analyzed with algorithms whose runtime is almost linear with respect to the size of the dataset. Super-linear computations (like most algorithms based on linear-algebra, which are qubic in the worst case) are too expensive from a computational point of view, as the size of the data is too big to fit in live memory. Under this settings, quantum computers can offer significant advantages. In fact, the runtime of the whole process of performing a data analysis on a matrix $A$ using quantum computers is given by the time of the preprocessing and constructing quantum access to the data, plus the runtime of the quantum procedure. This means that the runtime of data analysis processing has a total computational complexity of $\tilde{O}(\norm{A}_0) + O(poly(f(A)), poly(1/\epsilon), poly(\log(mn)) )$, where $\epsilon$ the error in the approximation factor in the quantum randomized algorithm, and $f(A)$ represent some other function of the matrix that depends on properties of the data, but not on its size (for instance, $\kappa(A)$, the condition number of the matrix). This runtime represents an improvement compared to the runtime of the best classical algorithms in machine learning, which is $O(poly(\norm{A}_0) \times poly(f(A),1/\epsilon))$. As we saw, preparing quantum access to a matrix $A$ is computationally easy to implement, (it requires a single or few passes over the dataset, that we can do when we receive it, and it is can be made in parallel), and thus a quantum data analysis can be considerably faster than the classical one. 
Needless to say that even if the scaling of the quantum algorithm is sub-linear in the matrix size, if we consider the cost to build quantum access we "lose" the exponential gap between the classical and the quantum runtime. Nevertheless, the overall computational cost can still largely favor the quantum procedure. Moreover, the preprocessing can be made once, when the data is received.
-
+
-The past few years saw a trend of works proposing "dequantizations" of quantum machine learning algorithms. These algorithms explored and sharpened the ingenious idea of Ewin Tang to leverage a classical data structure to perform importance sampling on input data to have classical algorithm with polylogarithmic runtimes in the size of the input. The consequence is that quantum algorithms have now (in the worst scenarios) at most a polynomial speedup compared to the classical dequantizations in the variables of interested (which, in machine larning problems is the size of the input matrix). Hoewver, these classical algorithms have much worsened dependece in other parameters (like condition number, Frobenius norm, rank, and so on) that will make them not advantageous in practice (i.e. they are slower than the fastest classical randomized algorithms [@arrazola2020quantum]). Having said that, even having a good old quadratic speedup is not something to be fussy about.
+```{theorem, creation-be-from, name="Block encoding from state preparation unitaries"}
+Assume we have the unitaries $U_L$ and $U_R$ of the previous exercise for a matrix $X \in \mathbb{R}^{n \times n}$. Then $U_R^\dagger U_L$ is a $(\|X\|_F, 0)$-block encoding of $X$.
+```
-## QRAM architecures and noise resilience {#sec:qramarchitectures}
+```{proof}
+Define the isometry $P \in \mathbb{R}^{n^2 \times n}$ by the column vectors $\ket{i}\ket{x_i}$ for $i \in [n]$, and the isometry $Q \in \mathbb{R}^{n^2 \times n}$ by the column vectors $\ket{\widetilde{x}}\ket{j}$ for $j \in [n]$.
+One can verify that
+$$(P^\dagger Q)_{ij} = \braket{i, x_i|\widetilde{x}, j} = \frac{\|x_i\|}{\|X\|_F}\frac{x_{ij}}{\|x_i\|} = \frac{X_{ij}}{\|X\|_F}. $$
+To conclude the proof, it suffices to recall the definition of block encoding:
+ \[
+ \|X - \|X\|_F(\bra{0} \otimes I)U_R^\dagger U_L (\ket{0} \otimes I) \| = \|X - \|X\|_F P^\dagger Q \| = 0.
+\]
+```
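The proof can be verified numerically by constructing the isometries $P$ and $Q$ explicitly for a random matrix (our own sketch; rows with zero norm are assumed away):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
X = rng.normal(size=(n, n))
fro = np.linalg.norm(X)

# columns of P are |i>|x_i> (row i of X, normalized);
# columns of Q are |x_tilde>|j>, with x_tilde the vector of row norms / ||X||_F
row_norms = np.linalg.norm(X, axis=1)
x_tilde = row_norms / fro
P = np.zeros((n * n, n))
Q = np.zeros((n * n, n))
for i in range(n):
    P[:, i] = np.kron(np.eye(n)[i], X[i] / row_norms[i])
    Q[:, i] = np.kron(x_tilde, np.eye(n)[i])

# P and Q are isometries, and P^dagger Q = X / ||X||_F
assert np.allclose(P.T @ P, np.eye(n))
assert np.allclose(Q.T @ Q, np.eye(n))
assert np.allclose(P.T @ Q, X / fro)
print("P^T Q reproduces X / ||X||_F")
```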
-While building a QRAM is something that has nothing to deal with computer science at first sight, the reason why we discuss this subject here is twofold. First, we think it is a good idea to disseminate the knowledge on the state of the art of QRAM, as in the past few works spread (perhaps rightly) some skepticism on the feasibility of the QRAM.
-Historically, we have two concrete proposed architectures for implementing a QRAM. The bucket brigade (BB), from [@giovannetti2008quantum], [@giovannetti2008architectures] and the Fanout architecture. In this document we will neglect to discuss much about the Fanout architecutre (Fig 1 and Fig 2 of [@giovannetti2008architectures] ), as it does not have many of the nice properties of the BB architecture, and thus will never be used in practice, and we will focus on the BB architecture. Hybrid architecture, interpolating between the multiplexer and the bucket-brigade architecture exists [@di2020fault], [@paler2020parallelizing], [@low2018trading], [@berry2019qubitization]. These architecture allow to move from a quantum circuit of log-depth with no ancilla qubits, to circuits with no ancilla qubits and linear depth (as we saw in Section \@ref(sec:accessmodel-circuits) with the multiplexer).
-This unpretentious section is here for pinpointing a few keywords and give the reader just an intuition on how a BB QRAM works. The BB architecture (Fig 10 [@hann2021resilience], ) could be either implemented with qutrits or (as recently shown in [@hann2021resilience] ) with qubits. For now, we focus on the explanation with qutrits, as it's relatively simpler. In the QRAM terminology, we have an address register (which we previously call index register), and a bus register (which we previously called target register, which was just a empty $\ket{0}$ register).
+```{definition, mu, name="Possible parameterization of μ for the KP-trees"}
+For $s_{p}(A) = \max_{i \in [n]} \sum_{j \in [d]} |A_{ij}|^{p}$, we choose $\mu_p(A)$ to be:
-The BB is a binary tree, where at the leaves we find the content of the memory. Each of the leaves is connected to a parent node. Each internal node (up to the root) is called a quantum router (Figure 1b of [@hann2021resilience]). Each router is a qutrit (i.e. a three level quantum system), which can be in state $\ket{0}$ (route left), $\ket{1}$ (route right), and $\ket{W}$ i.e. (wait). When we want to perform a query, we prepare and index register with the address of the memory cell that we want to query, and we set all the router registers to $\ket{W}$. Then, the first qubit of the address register is used to change the state of the root of the tree from $\ket{W}$ to either $\ket{0}$ or $\ket{1}$. Then, the second address register is routed left or right, conditioned on the state of the quantum router in the root. Then, the state of this router in the first layer is changed according to the value of the second qubit of the address register, from $\ket{W}$ to either $\ket{0}$ or $\ket{1}$. This process of changing the state of the routers and routing the qubits of the address register is repeated until the last layers of the tree. Now, all the qubits of the bus register are routed untill the chosen leaf, and the value of the memory is copied in the bus register. Note that, as the value of the target register is a classical value, to perform a copy is simply necessary to apply a CNOT gate (and thus we do not violate the no-cloning theorem).
+\begin{equation}
+\mu_p(A)=\min_{p\in [0,1]} (\norm{A}_{F}, \sqrt{s_{2p}(A)s_{2(1-p)}(A^{T})}).
+\end{equation}
+```
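The quantity $\mu_p$ is easy to explore numerically. The sketch below (our own; the helper names `s` and `mu` are hypothetical, and we follow the $s_{2p}(A)s_{2(1-p)}(A^T)$ convention of [@CGJ18]) evaluates $\mu_p(A)$ over a grid of $p$ and checks that it lies between the spectral norm (so it is a valid block-encoding normalization) and the Frobenius norm:

```python
import numpy as np

def s(p, A):
    # s_p(A) = max_i sum_j |A_ij|^p
    return np.max(np.sum(np.abs(A) ** p, axis=1))

def mu(A, grid=101):
    # mu(A) = min_p min( ||A||_F, sqrt(s_{2p}(A) * s_{2(1-p)}(A^T)) )
    fro = np.linalg.norm(A)
    best = min(np.sqrt(s(2 * p, A) * s(2 * (1 - p), A.T))
               for p in np.linspace(0, 1, grid))
    return min(fro, best)

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
spec = np.linalg.norm(A, 2)
# spectral norm <= mu(A) <= Frobenius norm
assert spec - 1e-9 <= mu(A) <= np.linalg.norm(A) + 1e-9
print(f"||A|| = {spec:.3f}, mu(A) = {mu(A):.3f}, ||A||_F = {np.linalg.norm(A):.3f}")
```

For instance, $p = 1/2$ recovers the familiar bound $\sqrt{s_1(A)\,s_1(A^T)} = \sqrt{\|A\|_\infty \|A\|_1}$.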
-We are left to study the impact of errors in our QRAM. Studying a realistic error model of the BB architecture has been a topic of research for long time. Among the first works, [@arunachalam2015robustness] gave initial, but rather pessimistic results, under a not-so-realistic error model, with some resource estimations that can be found in [@di2020fault]. More recently, a series of works by which (IMHO) culminated in [@hann2021resilience] and [@hann2021practicality] (accessible [here](https://www.proquest.com/openview/c5caf76bb490e4d3abbeca2cea16b450/1?pq-origsite=gscholar&cbl=18750&diss=y)) shed more light on the noise resilience of QRAM of the bucket brigate architecture. The results presented there are much more postive, and we report some of the conclusions here. As already discussed, the metric of choice to show how much a quantum procedure prepares a state close to another desired state is the fidelity $F$, with the infidelity defined as $1-F$. The infidelity of the bucket brigade, for an addressable memory of size $N$ (and thus with $\log N$ layers in the tree), where $\epsilon$ is the probability of an error per time step, and $T$ is the time required to perform a query, scales as:
+(ref:CGJ18) [@CGJ18]
+```{lemma, kp-block-encodings, name="Block encodings from quantum data structures (ref:CGJ18)"}
+Let $A\in\mathbb{C}^{M\times N}$, and $\overline{A}\in\mathbb{C}^{(M+N)\times (M+N)}$ be the symmetrized matrix defined as:
+
\begin{equation}
-1-F \approx \sum_{l=1}^{\log N} (2^{-l}) \epsilon T2^{l} = \epsilon T \log N
-(\#eq:qramfidelity)
+\overline{A}=\left[\begin{array}{cc}0 & A\\ A^\dagger & 0 \end{array}\right].
\end{equation}
-```{exercise}
-Can you recall from calcolus what is the value of $\sum_{l=1}^{\log N} l$
+ - Fix $p\in [0,1]$. If $A\in\mathbb{C}^{M\times N}$, and $A^{(p)}$ and $(A^{(1-p)})^\dagger$ are both stored in quantum-accessible data structures with sufficient precision, then there exist unitaries $U_R$ and $U_L$ that can be implemented in time $O\left(polylog(MN/\epsilon)\right)$ such that
+$U_R^\dagger U_L$ is a $(\mu_p(A),\lceil\log (N+M+1)\rceil,\epsilon)$-block encoding of $\overline{A}$.
+
+- On the other hand, if $A$ is stored in a quantum-accessible data structure with sufficient precision, then there exist unitaries $U_R$ and $U_L$ that can be implemented in time $O(polylog(MN)/\epsilon)$ such that $U_R^\dagger U_L$ is a $(\|A\|_F,\lceil\log(M+N)\rceil,\epsilon)$-block encoding of $\overline{A}$.
+
```
-The time required to perform a query, owing to the tree structure of the BB is $T=O(\log N)$. This can be seen trivially from the fact that $T \approx \sum_{l=0}^{\log N -1 } l = \frac{1}{2}(\log N)(\log N +1)$, but can be decreased to $O(\log N)$ using simple tricks (Appendix A of [@hann2021resilience]). This leaves us with the sought-after scaling of the infidelity of $\widetilde{O}(\epsilon)$ where we are hiding in the asymptotic notation the terms that are polylogarithmic in $N$. The error that happen with probability $\epsilon$ can be modelled with Kraus operators makes this error analysis general and realistic (Appendix C [@hann2021resilience]), and is confirmed by simulations. For a proof of Equation \@ref(eq:qramfidelity) see Section 3 and Appendix D of [@hann2021resilience].
+The second point of the previous theorem is equivalent to what we saw in Section \@ref(sec:implementation-KPtrees).
-For completeness, we recall that there are proposal for "random access quantum memories", which are memories for quantum data that do not allow to address the memory cells in superposition. For the sake of clarity, we won't discuss these results here.
-## Working with classical probability distributions
+#### Block encoding from sparse access
+Block encodings can also be built directly from sparse access to a matrix. We report this result from [@gilyen2019quantum].
-We have 4 ways of working with classical probability distributions in a quantum computer:
+(ref:gilyen2019quantum) [@gilyen2019quantum]
+
+```{theorem, name="Block-Encoding from sparse access (ref:gilyen2019quantum)"}
+Let $A \in \mathbb{C}^{2^w \times 2^w}$ be a matrix that is $s_r$-row-sparse and $s_c$-column-sparse, and each element of $A$ has absolute value at most $1$. Suppose that we have access to the following sparse access oracles acting on two $(w+1)$ qubit registers:
+
+\begin{equation}
+ O_r: \ket{i}\ket{k} \mapsto \ket{i}\ket{r_{ik}} \forall i \in [2^w] - 1, k \in [s_r], \text{and}
+\end{equation}
+
+\begin{equation}
+ O_c: \ket{l}\ket{j} \mapsto \ket{c_{lj}}\ket{j} \forall l \in [s_c], j \in [2^w]-1, \text{where}
+\end{equation}
+
+$r_{ij}$ is the index of the $j$-th non-zero entry of the $i$-th row of $A$, or, if there are fewer than $j$ non-zero entries, it is $j+2^w$; similarly, $c_{ij}$ is the index of the $i$-th non-zero entry of the $j$-th column of $A$, or, if there are fewer than $i$ non-zero entries, it is $i+2^w$. Additionally, assume that we have access to an oracle $O_A$ that returns the entries of $A$ in binary description:
+
+\begin{equation}
+ O_A : \ket{i}\ket{j}\ket{0}^{\otimes b} \mapsto \ket{i}\ket{j}\ket{a_{ij}} \forall i,j \in [2^w]-1 \text{where}
+\end{equation}
+
+ $a_{ij}$ is a $b$-bit description of the $ij$-matrix element of $A$. Then we can implement a $(\sqrt{s_rs_c}, w+3, \epsilon)$-block encoding of $A$ with a single use of $O_r$ and $O_c$, two uses of $O_A$, and additionally using $O(w + \log^{2.5}(\frac{s_rs_c}{\epsilon}))$ one- and two-qubit gates and $O(b + \log^{2.5}(\frac{s_rs_c}{\epsilon}))$ ancilla qubits.
+
+```
+
+The previous theorem can be read more simply as: "under reasonable assumptions (the **quantum general graph model** for rows and for columns - see the previous section), we can build a $(\sqrt{s_rs_c}, w+3, \epsilon)$-block-encoding of a matrix $A$ with a circuit of $O(\log^{2.5}(\frac{s_rs_c}{\epsilon}))$ gates and a constant number of queries to the oracles".
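To make the notion of a block-encoding concrete, here is a minimal NumPy sketch. It is *not* the circuit construction of the theorem above, just the linear-algebra content of the definition: for any matrix with $\|A\| \leq \alpha$, a unitary dilation places $A/\alpha$ in its top-left block, i.e. it is an $(\alpha, 1, 0)$-block-encoding of $A$.

```python
import numpy as np

def psd_sqrt(M):
    """Hermitian PSD square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def block_encode(A, alpha):
    """Return a unitary dilation U with A/alpha in its top-left block,
    i.e. an (alpha, 1, 0)-block-encoding of A (requires ||A|| <= alpha)."""
    B = A / alpha
    n = B.shape[0]
    return np.block([
        [B, psd_sqrt(np.eye(n) - B @ B.conj().T)],
        [psd_sqrt(np.eye(n) - B.conj().T @ B), -B.conj().T],
    ])

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
alpha = 1.1 * np.linalg.norm(A, 2)   # any alpha >= ||A|| works
U = block_encode(A, alpha)           # 4x4 unitary; alpha * U[:2, :2] == A
```

One extra qubit suffices here because we allow an arbitrary dense unitary; the point of the theorem is to obtain such a $U$ as a *shallow circuit* from the sparse-access oracles.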
+
+The interested reader can consult [@camps2024explicit] for explicit circuit constructions of block-encodings from sparse access.
+
+
+## Use case: working with classical probability distributions
+
+We have four ways of working with classical probability distributions in a quantum computer [@gur2021sublinear]:
- Purified query access
- Sample access
- Query access to a frequency vector of a distribution
- Drawing samples classically and perform some computation on a quantum computer
-Let's start with a formal definition of the frequency vector model.
+Let's start with a formal definition of the frequency vector model. This is a kind of query access, but since it is used only for probability distributions, we include it in this section.
```{definition, query-access-frequency-vector, name="Quantum query access to a probability distribution in the frequency vector model"}
Let $p=(p_1, p_2, \dots p_n)$ be a probability distribution on $\{1, 2, \dots, n\}$, and let $n\geq S \in \mathbb{N}$ be such that there is a set of indices $S_i \subseteq [S]$ for which $p_i = \frac{\left|S_i\right|}{S}.$
We have quantum access to a probability distribution in the frequency vector model if there is a quantum oracle that, for all $s \in S_i$, performs the mapping $O_p \ket{s} \mapsto \ket{s}\ket{i}$.
+```
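As a toy illustration of the frequency vector model, take $p = (1/2, 1/4, 1/4)$ with $S = 4$: a classical emulation of the oracle $O_p$ is just the map from a bin index $s$ to the outcome $i$ with $s \in S_i$ (the sets $S_i$ below are our own illustrative choice). Sampling $s$ uniformly from $[S]$ and reading off $i$ reproduces $p$.

```python
import numpy as np
from collections import Counter

# p = (1/2, 1/4, 1/4) with S = 4 bins: S_1 = {0,1}, S_2 = {2}, S_3 = {3},
# so that p_i = |S_i| / S.  The oracle O_p maps |s> to |s>|i>.
bins = {0: 1, 1: 1, 2: 2, 3: 3}
S = len(bins)

rng = np.random.default_rng(1)
samples = [bins[s] for s in rng.integers(0, S, size=100_000)]
freq = Counter(samples)
est = {i: freq[i] / len(samples) for i in sorted(freq)}
print(est)   # ≈ {1: 0.5, 2: 0.25, 3: 0.25}
```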
+
+
+```{exercise}
+Given a matrix $M := \sum\limits_{j=0}^{n-1} a_{j}U_{j}$, which is a linear combination of unitary matrices $U_j$, consider the unitary transformation
+
+\begin{equation}
+ U_M = (U_{\text{PREP}}^{\dagger} \otimes I) U_{\text{SEL}} ( U_{\text{PREP}} \otimes I)
+\end{equation}
+
+where $U_{\text{PREP}}\ket{0} = \ket{a} = \sum\limits_{j=0}^{n-1} a_{j}\ket{j}$ (with $\sum_j |a_j|^2 = 1$) and $U_{\text{SEL}} = \sum\limits_{j=0}^{n-1} \ket{j}\bra{j} \otimes U_{j}$. Then show that $U_M$ is a block encoding of the matrix $\sum_j |a_j|^2 U_j$:
+
+\begin{equation}
+  \left(\bra{0}\otimes I \right) U_M \left(\ket{0}\otimes I \right) = \sum\limits_{j=0}^{n-1} |a_{j}|^2 U_{j}.
+\end{equation}
+In particular, for non-negative coefficients $a_j$, choosing $U_{\text{PREP}}\ket{0} = \sum_j \sqrt{a_j/\|a\|_1}\ket{j}$ yields a $(\|a\|_1, \log n, 0)$-block-encoding of $M$ itself.
```
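This linear-combination-of-unitaries (LCU) identity can be checked numerically. The following NumPy sketch builds $U_M$ explicitly and verifies its top-left block; the four single-qubit gates and the completion of $U_{\text{PREP}}$ into a full unitary are arbitrary illustrative choices.

```python
import numpy as np

# U_M = (PREP^† ⊗ I) SEL (PREP ⊗ I), with PREP|0> = sum_j a_j |j>
# and SEL = sum_j |j><j| ⊗ U_j, block-encodes sum_j |a_j|^2 U_j.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
unitaries = [I2, X, Z, X @ Z]                    # illustrative choice

a = np.array([0.5, 0.5, 0.5, 0.5])               # sum_j |a_j|^2 = 1
# Complete a into a unitary PREP whose first column is a (via QR).
PREP, _ = np.linalg.qr(np.column_stack([a, np.eye(4)[:, :3]]))
if PREP[0, 0] * a[0] < 0:                        # fix QR's sign ambiguity
    PREP = -PREP

SEL = np.zeros((8, 8))
for j, Uj in enumerate(unitaries):               # SEL is block-diagonal
    SEL[2 * j:2 * j + 2, 2 * j:2 * j + 2] = Uj

U_M = np.kron(PREP.conj().T, I2) @ SEL @ np.kron(PREP, I2)
expected = sum(abs(a[j]) ** 2 * Uj for j, Uj in enumerate(unitaries))
print(np.allclose(U_M[:2, :2], expected))        # True
```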
-## Retrieving data
-In order to retrieve information from a quantum computer, we are going to use some efficient procedures that allow to reconstruct classically the information stored in a quantum state. These procedures can be thought of as ways of sampling from a pure state $\ket{x}$. The idea for an efficient quantum tomography is that we want to minimize the number of times that the sate $\ket{x}$ is created.
+## Retrieving Data
+
+In order to retrieve information from a quantum computer, we use efficient procedures that allow us to reconstruct classically the information stored in a quantum state. These procedures can be thought of as ways of sampling from a pure state $\ket{x}$. The idea of an efficient quantum tomography is to minimize the number of times that the state $\ket{x}$ is created.
Most of the quantum algorithms discussed here will work with pure quantum states. We assume access to the unitary that creates the quantum state we would like to retrieve, and to its controlled version. Under these conditions, the process of performing tomography is greatly simplified. Depending on the error guarantees that we require, we can choose between two procedures.
@@ -569,23 +1215,31 @@ Most of the quantum algorithms discussed here will work with pure quantum states
(ref:KP18) [@KP18]
```{theorem, tomelle2, name="Vector state tomography with L2 guarantees (ref:KP18)"}
-Given access to unitary $U$ such that $U\ket{0} = \ket{x}$ and its controlled version in time $T(U)$, there is a tomography algorithm with time complexity $O(T(U) \frac{ d \log d }{\epsilon^{2}})$ that produces unit vector $\widetilde{x} \in \R^{d}$ such that $\norm{\widetilde{x} - \ket{x} }_{2} \leq \epsilon$ with probability at least $(1-1/poly(d))$.
+Given access to a unitary $U$ such that $U\ket{0} = \ket{x}$ and to its controlled version, there is a tomography algorithm that calls $U$ and its controlled version $O\left(\frac{ d \log d }{\epsilon^{2}}\right)$ times and produces a unit vector $\widetilde{x} \in \R^{d}$ such that $\norm{\widetilde{x} - \ket{x} }_{2} \leq \epsilon$ with probability at least $(1-1/poly(d))$.
```
(ref:jonal2019convolutional) [@jonal2019convolutional]
```{theorem, tomellinfinity, name="Vector state tomography with L∞ guarantees (ref:jonal2019convolutional)"}
-Given access to unitary $U$ such that $U\ket{0} = \ket{x}$ and its controlled version in time $T(U)$, there is a tomography algorithm with time complexity $O(T(U) \frac{ \log d }{\epsilon^{2}})$ that produces unit vector $\widetilde{x} \in \R^{d}$ such that $\norm{\widetilde{x} - \ket{x} }_{\ell_\infty} \leq \epsilon$ with probability at least $(1-1/poly(d))$.
+Given access to a unitary $U$ such that $U\ket{0} = \ket{x}$ and to its controlled version, there is a tomography algorithm that calls $U$ and its controlled version $O\left(\frac{ \log d }{\epsilon^{2}}\right)$ times and produces a unit vector $\widetilde{x} \in \R^{d}$ such that $\norm{\widetilde{x} - \ket{x} }_{\ell_\infty} \leq \epsilon$ with probability at least $(1-1/poly(d))$.
```
-Note that in both kinds of tomography, dependence on the error in the denominator is quadratic, and this is because of the Hoeffding inequality, i.e. lemma \@ref(lem:Hoeffding). Another remark on the hypothesis of the algorithms for tomography is that they require a unitary $U$ such that $U\ket{0}\mapsto\ket{x}$ for the $\ket{x}$ in question. Oftentimes, due to the random error in the quantum subroutines used inside the algorithms, this state $\ket{x}$ might slightly change every time.
+Note that in both kinds of tomography the dependence on the error in the denominator is quadratic; this is a consequence of the Hoeffding inequality. Another remark on the hypotheses of the tomography algorithms is that they require a unitary $U$ such that $U\ket{0}\mapsto\ket{x}$ for the $\ket{x}$ in question. Oftentimes, due to the random error in the quantum subroutines used inside the algorithms, this state $\ket{x}$ might slightly change every time.
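The magnitude-estimation half of this kind of tomography can be sketched classically: measuring $\ket{x}$ in the computational basis and taking empirical frequencies estimates each $|x_i|$, with the $N = O(\log(d)/\epsilon^2)$ scaling coming from Hoeffding's inequality. This sketch assumes a real, non-negative state (recovering signs is exactly what the controlled version of $U$ is needed for), and the constant $10$ in $N$ is a heuristic choice.

```python
import numpy as np

# Estimate the magnitudes of a real non-negative state |x> from
# N = O(log(d)/eps^2) computational-basis measurements.
rng = np.random.default_rng(2)
d, eps = 16, 0.05
x = np.abs(rng.standard_normal(d))
x /= np.linalg.norm(x)                       # a random unit vector

N = int(10 * np.log(d) / eps**2)             # heuristic constant 10
counts = np.bincount(rng.choice(d, size=N, p=x**2), minlength=d)
x_tilde = np.sqrt(counts / N)                # empirical amplitude estimates
print(np.max(np.abs(x_tilde - x)))           # ℓ∞ error of the estimate
```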
```{exercise, fromellinftoell2}
Can you obtain $\ell_2$ tomography with error $\epsilon$ on a $d$-dimensional state if you only have access to an algorithm that performs $\ell_\infty$ tomography with error $\epsilon^\ast$ on the same state? (I.e., to what value should you set $\epsilon^{\ast}$?)
```
-### Denisty matrices
+
+
+```{exercise, tomellep}
+Consider Theorem \@ref(thm:tomelle2) and Theorem \@ref(thm:tomellinfinity). What is the sample complexity of a tomography algorithm with error guarantees in the $\ell_p$ norm? In other words, find the sample complexity needed to return a vector such that $\norm{\widetilde{x} - \ket{x} }_{\ell_p} \leq \epsilon$ with probability at least $(1-1/poly(d))$.
+```
+
+
+
+### Density matrices
Much of the current literature in quantum tomography is directed towards reconstructing a classical description of density matrices. This problem is considerably harder than reconstructing a pure state.
@@ -600,5 +1254,30 @@ Different techniques have been recently developed in [@zhang2020efficient]. Ther
(ref:zhang2020efficient) [@zhang2020efficient]
```{theorem, tomography-trick, name="Improved quantum tomography (ref:zhang2020efficient)"}
-For the state $\ket{v}$ lies in the row space of a matrix $A \in \R^{n \times d}$ with rank $r$ and condition number $\kappa(A)$, the classical form of $\ket{v}$ can be obtained by using $O(r^3\epsilon^2)$ queries to the state $\ket{v}$, $O(r^{11}\kappa^{5r}\epsilon^{-2}\log(1/\delta))$ queries to QRAM oracles of $A$ and $O(r^2)$ additional inner product operations between rows, such that the $\ell_2$ norm error is bounded in $\epsilon$ with probability at least $1-\delta$.
+For a state $\ket{v}$ that lies in the row space of a matrix $A \in \R^{n \times d}$ with rank $r$ and condition number $\kappa(A)$, the classical form of $\ket{v}$ can be obtained by using $O(r^3\epsilon^{-2})$ queries to the state $\ket{v}$, $O(r^{11}\kappa^{5r}\epsilon^{-2}\log(1/\delta))$ queries to $\mathsf{QRAM}$ oracles of $A$, and $O(r^2)$ additional inner product operations between rows, such that the $\ell_2$ norm error is bounded by $\epsilon$ with probability at least $1-\delta$.
```
+
From beda6adc063eaafd4eca0f07e940399bb7f2badc Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 10:51:35 +0100
Subject: [PATCH 02/22] Add quantum_architectures.tex
---
algpseudocode/quantum_architecture.tex | 65 ++++++++++++++++++++++++++
1 file changed, 65 insertions(+)
create mode 100644 algpseudocode/quantum_architecture.tex
diff --git a/algpseudocode/quantum_architecture.tex b/algpseudocode/quantum_architecture.tex
new file mode 100644
index 0000000..ffd65cd
--- /dev/null
+++ b/algpseudocode/quantum_architecture.tex
@@ -0,0 +1,65 @@
+\documentclass{article}
+\usepackage[utf8]{inputenc}
+
+\usepackage{multirow}
+\usepackage{multicol}
+\usepackage{array}
+\usepackage{graphicx}
+
+\usepackage{tikz}
+
+\usetikzlibrary{decorations.pathreplacing}
+\usetikzlibrary{shapes.misc}
+\definecolor{RoyalBlue}{RGB}{65, 105, 225}
+
+\usetikzlibrary{quantikz}
+
+\usepackage[landscape, paperwidth=15cm, paperheight=30cm, margin=0mm]{geometry}
+
+\title{\vspace{-5ex}}
+\date{\vspace{-5ex}}
+
+\begin{document}
+
+\begin{tikzpicture}
+ % Outer box
+ \draw[thick] (0,1) rectangle (6,10);
+
+ % Quantum processing unit
+ \draw[dashed, thick] (0.2,4.45) rectangle (5.8, 9.6);
+ \node at (3,9.2) {\textbf{Quantum processing unit}};
+
+ % Input
+ \draw (0.6,8.0) rectangle (5.4,8.9);
+ \node at (3,8.45) {Input $\left|\psi\right\rangle_\mathtt{I}$};
+
+ % Workspace
+ \draw (0.6,6.9) rectangle (5.4,7.8);
+ \node at (3,7.35) {Workspace $\left|0\right\rangle_\mathtt{W}^{\otimes \text{poly}(\log n)}$};
+
+ % Inner box for address and target
+ % \draw[dashed] (1,5.5) rectangle (5,7.5);
+
+ % Address
+ \draw (0.6,5.7) rectangle (5.4,6.6);
+ \node at (3,6.15) {Address $\left|i\right\rangle_\mathtt{A}$};
+
+ % Target
+ \draw (0.6,4.6) rectangle (5.4,5.5);
+ \node at (3,5.05) {Target $\left|b\right\rangle_\mathtt{T}$};
+
+ % Quantum memory device
+ \draw[dashed, thick] (0.4,1.4) rectangle (5.6, 6.75);
+ \node at (3,1.8) {\textbf{Quantum memory device}};
+
+ % Ancillae
+ \draw (0.6,3.4) rectangle (5.4,4.3);
+ \node at (3,3.85) {Ancillae $\left|0\right\rangle_{\mathtt{Aux}}^{\otimes \text{poly}(n)}$};
+
+ % Memory
+ \draw (0.6, 2.3) rectangle (5.4,3.2);
+ \node at (3, 2.75) {Memory $\left|x_0, x_1, \ldots, x_{n-1}\right\rangle_\mathtt{M}$};
+
+\end{tikzpicture}
+
+\end{document}
From 761d4e8a23713a85365a57118171f3d4c4f07d6e Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 10:53:17 +0100
Subject: [PATCH 03/22] Add oracle-models.tex
---
algpseudocode/oracle-models.tex | 64 ++++++++++++++++-----------------
1 file changed, 32 insertions(+), 32 deletions(-)
diff --git a/algpseudocode/oracle-models.tex b/algpseudocode/oracle-models.tex
index 21ee9e8..579fdd1 100644
--- a/algpseudocode/oracle-models.tex
+++ b/algpseudocode/oracle-models.tex
@@ -1,32 +1,32 @@
-\documentclass{article}
-\usepackage[utf8]{inputenc}
-
-\usepackage{multirow}
-\usepackage{multicol}
-\usepackage{array}
-\usepackage{graphicx}
-
-
-\usepackage[landscape, paperwidth=15cm, paperheight=30cm, margin=0mm]{geometry}
-
-\title{\vspace{-5ex}}
-\date{\vspace{-5ex}}
-
-\begin{document}
-
-\maketitle
-
-
-\begin{table}[]
-\centering
-\begin{tabular}{ccclclcl}
-\cline{2-7}
-\multicolumn{1}{c|}{Oracle} & \multicolumn{3}{c|}{Numbers} & \multicolumn{3}{c|}{Quantum sampling access} & \\ \cline{2-7}
-\multicolumn{1}{c|}{\multirow{2}{*}{Implementation}} & \multicolumn{1}{c|}{\multirow{2}{*}{QRAM}} & \multicolumn{2}{c|}{Circuits} & \multicolumn{1}{c|}{KP-trees} & \multicolumn{1}{c|}{Grover-Rudolph} & \multicolumn{1}{c|}{Other} & \\ \cline{3-6}
-\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Sparse access} & \multicolumn{1}{c|}{Functions} & \multicolumn{2}{c|}{Oracle for numbers} & \multicolumn{1}{c|}{} & \\ \cline{2-7}
-\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} &
-\end{tabular}
-\end{table}
-
-
-\end{document}
+\documentclass{article}
+\usepackage[utf8]{inputenc}
+
+\usepackage{multirow}
+\usepackage{multicol}
+\usepackage{array}
+\usepackage{graphicx}
+
+
+\usepackage[landscape, paperwidth=15cm, paperheight=30cm, margin=0mm]{geometry}
+
+\title{\vspace{-5ex}}
+\date{\vspace{-5ex}}
+
+\begin{document}
+
+\maketitle
+
+
+\begin{table}[]
+\centering
+\begin{tabular}{ccclclcl}
+\cline{2-7}
+\multicolumn{1}{c|}{Oracle} & \multicolumn{3}{c|}{Numbers} & \multicolumn{3}{c|}{Quantum sampling access} & \\ \cline{2-7}
+\multicolumn{1}{c|}{\multirow{2}{*}{Implementation}} & \multicolumn{1}{c|}{\multirow{2}{*}{QRAM}} & \multicolumn{2}{c|}{Circuits} & \multicolumn{1}{c|}{KP-trees} & \multicolumn{1}{c|}{Grover-Rudolph} & \multicolumn{1}{c|}{Other} & \\ \cline{3-6}
+\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Sparse access} & \multicolumn{1}{c|}{Functions} & \multicolumn{2}{c|}{Oracle for numbers} & \multicolumn{1}{c|}{} & \\ \cline{2-7}
+\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} &
+\end{tabular}
+\end{table}
+
+
+\end{document}
From ddb15076a9412144fd4421a0e1b035604a1ea482 Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 10:59:23 +0100
Subject: [PATCH 04/22] Swapped oracle-models for oracle_modles (the right
image)
---
algpseudocode/oracle-models.png | Bin 17825 -> 0 bytes
algpseudocode/oracle-models.tex | 32 -----------
algpseudocode/oracle_models.tex | 98 ++++++++++++++++++++++++++++++++
3 files changed, 98 insertions(+), 32 deletions(-)
delete mode 100644 algpseudocode/oracle-models.png
delete mode 100644 algpseudocode/oracle-models.tex
create mode 100644 algpseudocode/oracle_models.tex
diff --git a/algpseudocode/oracle-models.png b/algpseudocode/oracle-models.png
deleted file mode 100644
index b53acdc8b4e3d81176d95a0acca4f827cc7de38d..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 17825
zcmYhi1z42d^9M>P(kUg)(nxoABPm_d-Q8W%EG&(5cS(cv0*iEqq_lLyUH$(4_ul7u
zH{UpO&cvDd%uKY(CmA$kB4ijC7&JLqNi`Ulx4qEs>F*Gr?|-^>K0`kcEX9<>U|=J@
zvHeang8qgxQIL^@d5ugM?}7e8a+1|?g@J)(`uB2;I)FX{2GLDUNeXcf9tRcy@dD!z
z9r_oh>6cF)&`}nkLmb1vzz0D;0i9O8L@+Rt7;=(g8eYpMU0pRtkcISHrfs8#@IJCc
ze(nS;gahK-AF{|itDxS{E7)xx07HR_pS&^cGs`-R<
zvH$&|D
zWxQr86_1~T2uc2cJYTQT^R_YP+7VPpdfWr%`8b6$(WY`D?
zt-{j;?=T^G@i^{P64CN7UPkhegTzT!DcBvQ@8aTf>0)U2CZe!j3J%{6HaLZ02b@Dl
zgWfwzkDHFyHpIV*;j_|%6;y6N29b%=!G5(7*BeyXKVaXzj^U^0^hM0{N&xFtkgISrMI?Mr{`DW
zbC%BK&^g}&TIg9_;-4w@Cj_4dhv;V`G*;+?A%7qI&dakF!sYwo1Q3;J#NFlMyFd&I
zS|#Z{*3qLnnG%e|jVVBd66fP_o&M4B&((7TDpxP*aQk&nf&Sz3&eLt~Y?*X#ts(v%y(3JJf_k5@xrl9phI
zxd{8sE-9s2v%hlnA^E6K1$Womiypp8Q?xLp*TQy9q7tkI#Fqk5FvPfUZ4_*f9|lRN
zb`!F=VKyRoAO|@5LDrYB;qxW;4HE$5Auc$(_xnSx;wIvoBRC3jqyO
zR1~H>1!(MiiCB%sv;uOz66o{(WJ1&^e2I0I6GS7_rCvj9zEa+BHHSQkbBs3&QvqSM
zFR)0|DF$nkBSmF3V}NYE%QnIVRbl^XZ!`Pfv+n>YBno&t{h94pGB(spi`%F)`=(Sa
z+Q=SE+I7HCGi+JuzTEdn3M}3hXXOuJL`SVYJMAq7O51|k4KdLpf&oA^~B`4K0zS@ODD$}DElNOEOxYHYGQW~ocsp;RUhgWRV=Y2Pp
z(2PV~=6Yg!l3gQPP&Xrv;&=A3mg)*Pl!~xkNY8V2cMs9}ZT>s~=rZ|pBy)F=*GT($
z5`1E`l);2CLMUy${ZsfgV*SM3e6J9gvOZDL6&kR))P9-115e5b&@W+4Uw2{=SVu
zLo@0W$5-PJcBTyP6MQUcfBq6*;pPgy1Xu6XMH~NHZL6#iYP*i)%xa$mciRD`%#HW>!FCNeRtnkagCZP=B6<#0Q|C
zzATuuS^a-fY}0jL_$HmsfeSkz4puY3w%SYFWB+{Vbx6PfRsc0}
zFZ=VYcg>*T7+zzTF1R)r+6{WNqmoj!ox(oM>;mllSf2
z>v!nl^t~L5vL=HpUU4{!?$YOMFo1FXIAn$HDrL`IwUv7K)?dtt
zm{*l5ozEO~D=41?Ycy!`9}gSYF@h6zUPuHt0wlvbPS>}8$E{cilaX+7#W_$P(eEAk
z8VTPM%+t?WfGOZFZ!}~iyqgWS+iL8!BDL6infpJ_`f4=XvzojRx&PMT|8>ku$i;YR
z@e=#JJubo=Tj)SSl|NnRO!Q!VD_6N|?^G=ztoioGf%`^rG?;DVjlgJeE*9rw-qDDb
zs)bqRq$F*?bCl&C*wk6jZPw44vmZ;?T$$PLk97(TI$LIA>EU~D>3=NZ@BGPU?Cv=*
zDO=7kV@|6Lv?P&)lo~XmRsb2?6g^jrYE++ME)3Whb&_DB0ej8T<0h4D7Uho?EN45S
zD)D;ARobsRG>WQk+w6A7nNI!~i)^j1E6E56$Bi3;F-A!&6ttL}K!aa%g(RSB50v@e
z+W$4pU~!DDGzzKe4#6Z97rr|Bgn@1Ut$|ju)1)wE;R&vK2C!#|Zp0nG8mx=HYk@R{ZE-L#W{@ktcH6jNGmIDvRxOIvv$&A-_>W-g}bb%N&=bsnqekoV}$
znCL+iFWya@_#6^zJdB#jm{_U@ZRsH!2?1PE-ol3g5_I{)E?gaphwX#To`*7mY}@>Y
z!zELQZ(zULV5CVF^*%Ng12N>f{$*=zj-2&n!?jG>3|DO`%!=HprWb?#}~{xsPjhK
zLz8I)6>)!Y?I!MxgTl={_6$
zc$Iy<0%8Yaaphqx{ra?xWwyQE#&x#KPM<0VA~s<1#AW+?REAO++At{PDvZI
zpOKAh9aFH7exQ5jyjbI$Kd8LLgr>(n=Hj`mdB}RxV-)}bL{lZ`FXnn=!=ETV%8f_$)pWqyPi#wxgM@C
zCu!$8AdBY_Zs)!%cHnS1LKgp%8;+XNIo%)av&oI03J*e@?@W2T?EBpg3Rt$OpWmzb
zZrqZl%Y+BuX~YvI^5k?txyx=JSLyie4F5`cmEqEgN--xbKuLqM;EE
z1wG&M^IOL*_F^)&q#O3SGCs^}%!!4`p!oJLL~%3Iv)GC7{`T^$zx0eW5$uY1J6uKq
z*J}anfqyNHx`kjRYQ$bx2H6bkqz#v1JECJRp;-{a2E|ozJ)U}aq+d)v_|s$
zt(%-4wI-)q8Og15Vix$J{L$10Y*Q$`TJ&6=4>FASMRe)Ptk;+p7S|$Je{Ij||8xa)
zgiSJE>d%>!u@iH9`f!o&1PCq5&58gxcH6fWJI8bo4WnCYoul<^c*Ha_CcZn(F0H@5h3)T(quBB|)|n67
z#uC$yw&47l!LbIz+(CyZy+DxG7xGi^Ph@}IaBhMTu(Bks-N?uYU!5(F6WS8u;Y)gM
zfw6?>k9cZALc$Lo&(_d+zvb#ZbfbGjGk^XX9et$P$(MdqrYh_+xVey!UJ_{637Zk;
z>Fr>pl;XrZ!IL;d$`7uss>KrrKc5_CN)8|#FU6B>`&h4m
z(yPnUz71^Na_*K=5uym6eV^QocE0++KEGFuWnr2f+5d2uTK@dbaY@2N{N~wrf-L`@
zAKUAoOs`lqL1TAn(9r@78w*QV+*5uiq_!qc2}w>4X_07BBHD3A{TgZ4;XiexF#!k0
zxFF8Ls6(So1e+kVDTge8ndQ!u8ojHFLy(YSM3{L`G&R>kaqrGUALHYI%#<(EA5gm{gUczZ0f9eZZ0^`~8AC-I*-ihw7a$EmgBu4;lZP5tL}p
zUSgffLVT$lr~iN)==&-XSMUE}F{-CwM8<#T_V4f%{~tBb(?Eg$8wPp~oP&93g#;uE
zc_MqdecE_3N=Vy|{Nwqg@RAhh)kFY|>tSC>UeR8|U+=8gfLJf@|J)|viK`0UgPurU
zdqsqGV?jHCn(&HkAJno*BFp|aoCvlMqGB8EYgYW}T6EyxE97-_Ysv(=&InJnPb^OY
zFDHeAD$;bK7-5=*5y~$zCvyg}{{c;g%6>fZTBpjvjBZn?DM1s!S
zAwCQMki218Y}l%Fut>9mf>^Q8ODN&?F&-`U4fxU1z~*w;*yti+AeOZlthkiMR|t`1
z(awpZqo2H}8A(ET)x^RZHpwvVS4!b!nycrn?-vUD=@pyFl5uI7JB&Lbs`
zSJa!UJb%oGjg^!P?$8NkisG{c0F1>(z*VHo_cO8uf?-v|o+p--mxgZQ?_dtnslLU3
z`8F0@DazykK+Me
zUjE*sa#!vAWQ||X;^~fusa2KJ!=LLY>?0}6tVJxU&v0B&BP3cqo(51tGbqyJw~OM9
z(2Ef4<1P(YJV&VjG}}SHdbfrqCs&m72qXsH_J1aV?hVb(R|A3)wedm3{e7ysrsxF7
zA^Sc@8PgEs*}s|$o=D6_)jsbK_-ZEXO;z|ic8dDubvQ4Zhc224kc|ntC`kV}C*D`v
zy(_qf!+SCO#GS)xq0AN2o|-&G+#!v~Vlveh5puZB0)5M+@B)*YZXV$5(X?cPrPd(Vjvq7@^
zmGAuWgnMaA;JOnhgeidclfhFA4w-UuO{mWF=INKLXiw79G?D0EWdmlB6Vf|1ktoty
zNO{IRBIZ@yT_T_(=gMI3Mki&58G#%;cx-Jq5B%m+y{xmZc{ZMR*Ub
zV*QyDaWZZ3gYCQJOrHB|HrrsaSL`>*s6KQFtWmgflI=}$@OBdVfF$1
zNv?ldWg(;Hiyp14waGFZdeIbl9iYdsZQaC_S#7|+!rp3V
z1pixX9BQvPsIDFG$c7R2=yIe8u(9gwJ?yE;L#8Q)?k$IBOa+L#&DTyy6;O{?P(%O#
zx53{)Gd7Q2W?8$AFd0n?7T8fK85;)W7=)cLjx3{f39N1EpS3Nrl8P|LI=3Igi7+nj
z8ibX$H#vEkf3h6$tzM#**4Ap^s}fcVAqFeY0OLA3i#i3V$AD#54U*;$gthQ_Cta>3
z+h%||>z>kqlbs+NE4&R&VlqJQe8<$Eb{_Z-i9IDiLaiFt%gz0%5A=?{-&sw4`oA_)jYN@sIJXCX2RN6`}UH->av@0$Y8mTqa6)Bx}F_RC89(*>Dx6^p~=m
zb4c`bbanWS;4_CeDz|i@8Rw!f6e3H+;j{fEx1`Fb#<^1+6+p2+;`rFjC~PK1Q=fw<
z)UaD5Hy_uzgtPSN_Hg!qRL)4boLNfE#Q2ELbB5ELP*3#Jk2Is}6{f{2H2T=09nAf5
z?2!on)Bx<16oqBa3Ow0?*-_5ZS?B&-?Ow79SZkB5YTO!1cy=ApcoW4V?gwWT`=ORZO6TLe55q
zm@BjFLq~92+VHS}S&d+ZQ=py5KueX~@kW-yspE5+Q?C_W?Oe}s=KXF8;#YnvT;IXGm1s`qN_(l3<47p|CqH4T;{1y4*SIS
z6xzP#I?2}OKpeogIu3-Hgt|C@gFXtqHuj&vvn3J<)LA?^Z5X3!A<{fi{f}E0NSBi4
z{^*_v{SVXtc;V$olW|&G%Czf^$BXCU4pi64ee2;^t%rE|T_wtlE*E;U;rJ+w7j_BD
z5~V4iLcuBQqQz12EDtqeJ%{B)hN8MWW}?8^=$&BRheobbwwl$mec}G2>N9~f;{4@-bk0^2K|CI_={_Qe2yN|q`&@i
z9bDDP{&@F)Uja^dbo2#UPQ-l8ib7e0FE^`>gZSpaKfaZ;yK5kcx5@h={Od)TD((p6
z&5_5$(cOD;QoAa!bwQ|g`sD((pCk49P^dOf$LSCsY3y0;Q%Z4XA@>KxwlT(}5~hRu
zq>kJJ0m?aGEBn#I1E*srPV%S)4K41j16!?rUUD-2FU~;3ovg9^QO=5|&gU7S^@r2q
zse-@0DY*Lkpg4*rV!2z+igS#u=HA3p+mW2oF5Om0&Q1fSfREM9Vc5%u#B=tY#9NYr9qI
zFvfjY(#GVC;H5DEJ`K`BsMj?2^nmA=y%3
z&^SQ^o0u4$ld;U`<2HEiCH$>XMd)Ejl6yKovLw;7io!`hmgvyK$=>^86-v{)nof{LtW=JFoUQ<2hrp
zeTHlGBU;xOXp*;mLHiM^Li9xYaNw(VLO(RJ1-4`ba*hf2*qIrqnQKnG9*ZWLergqc
zI$_-!wY(k&ckg<6SKlng5o)~-OYGYVLo4hwWvJXlDM2UGV^1H4a)yET##Q-^xyR#ujY9Xr>m
z5u*>;ilY*SN4^OP{C`o@b4{SH{+y=%=^`)nE^p^~=cQe3sr%uMJ_M91V5&JU=T6b{
zc%cI4B5X1GOBq{S?SRy-!^-aanZUkf=2?|}Wd|0|mJ)wXRAS`ED-{QbfB;O|#gW6y
zYlOmPdglkY;UB!(f6O;}X;B3~>(p{V)Yg4`l|e+KWOhz>sK6kE#mrN3wcRKx5B|+R
zN*gPx2cws;4Y89x=ea#4PXKD;U#vO$U$;t1&B&a6^YaI8()U45(Wb%xa%$tzEP*hy
z7Vft7eIm4+k
z)i^~5D*a9~;@s;*fHV_u0}#D+jZS=EKIBE{Bufn%y-@2AXpA7dc!R$k@S9;qRe~BJE?XapfaVdf%v7Mp&g^5N_
zQO(9E+kW}{oqcHEcg}1_(pUwqWTMZbu0Z{e_UdX=N1+uxb=|d<0MM-Pm!Z8!wo*M)ZAU>WW>*{{2}`7R-EkfRA^qOR}5!uq}S?arxr6qsOp7^$3Fh4$TZ>D)(`v|2IM((7*x+4iN9AupsM+eR
z@3uU=>%GocjyoOg(nrJ@Ms;j@Ae-k%~*G7AMtZ(!gY4N8B
z2vbL*qSTtUg={O=3vrxB#_lsUq)U95K(7Q;_yuLf${wsCIA0&$u;UTfvIli)SKzKwKMme!r1OLsmqH92ZaJG42idr27`Iq78
z?xTn`4hZ-)`^W_TSc3*V?!tL}B6hNy&BE;w~Ov^hNJFG|}Y%-nHu
z==4_-Ezse|H)nbII@CdVRY{TRR|=56Zb$2Y5_mNP;)n%*)Z*G64I%0m4HxCxR>V%c
zmJQ66FRM3Iz{ry0BBhLV0hbVqlk~LY_}fX{GkhGiML|RKp{c|z#fqFGrA~z<{i96r
zl|o(Jej1AXAm_OeO(JyLplC<`%IV+0HRAyd#*e1RFzdN}o-t-jxl(C_8{qM4>1Q@M
zhqZ2DgJn4}zc>wS6-?N(u8uBB?p~~Y=CFKhX4ov3;`(zzVRdwm_jglIyIA}so1iu7
zb6cyPgs>eyLnD0Z5@Y*y<}RZQvS`_;puc&qDu>qL&nA>`H>kYkY3YfHR7Yy)a(pgk
z)438z(MU^qXKX3y_V=K*ggVR04sWurW=wZ}asCuI8{?{Qv_mTMsUKJR+P6?SJazEl
z_hL-jb?dpfAh{>{mML=&=2!OA=jR9e(EVB*PR#@kiWjvluUcm|EQb#+6N`yscW_v*
z%&?p*j?VjA`kO^Zy(7!FtUa0I+Cr8s^Aok>YdGN(sPW&JBl?E3Z6IGr@Tt-G`4T?i
zd-nnAqMhR`+05-4yk=Y?Y6Sh*#L)eUF_I&T6hRVoA8R{f!Tqt1(9@=x4Er?SVF4uQ
zpcrI0=YB%U+I0$4wx}Uw?drb0CidTHv*-=j3hyiX@X*1Q7rLIGK0f7_F4F3@cuxSR
z=kV<}{r)$N*f5GwHsqvM`;)D5c_S&XJC{t654Z6@X!F8e75+G2+*2H^kloDAQ7p&B
zPHkV&I6YHd8ak(b+xI=62^Pt=@m6@at3a@Z?Km(5aB7bHA1Dg;6ho_fOwuPCJZ9^9
zm%M!0G_*hjx|E@^tTd7n?>#)$$-Vy)e-r=mXEarY^&e1NK&Ko}WbFBal=fDGtqU`C
zl)YVW((dHo8YgbHQK#zMzN&uhw5{UxrkJzVXKx$;(6D^-Z!lC==eUH6*ry4WbfBdz
zXgwh0V)akUK2G$z-`C&F&nmVEPz4y2B=y(NZqD7inp}mEMDc_<*ypoQXdMMWg8(eC
z4K;3P!BX4Vk?;mrx1&1?ppFKzZtHF6&F@1u1u!jPFOZ%T1O?UMGq#O#Kx4IJ3b-a!L*V1kVK)xI)PEvgr-YjBmhv+L*o+#
zJAFmvb(5$IvUvP4(vsUQQG%Qrc$`v}Wa^4VJ|4TIi;fdOZGZy5ui_b#`Bs00nC
zN56BvqMToHklQnXF2;S)Pfj;V8D#M(dJ(3FN*zs)(l=L?6ZfvAU2{e17|gC5Gs$)~+Z92zxZI*Bw^ESV?C@ua8)%((VmO)J#$u?Qg^b1}!}
z!j~)@>HDBBzPGk4$$@X#y!4x0hY%G5s=YJ0m@U^m){DKdJD~N1?~Vn3kSEu{bn8|c
zhy*XzC!Ql)TY67(YMs-=(p#dyO=N!TE>(*wHIRMl*QAAJ9f_GB*Hxt9mQC;u^f+5b(qWf7SL_MGZ=2^xz3%@*8#iGjrLNCTa>dc2Ra7X5D_^@+^~b+_cP;TFs@qc`Z-?P9qX*v
z4rcn_Yp#ENU5;m`IfLV6-q_<_J&$mgwVg#XezK(ap6{UC!An8xFu5$vmNEP?JqwkVq`&)10_CA3O?egtfR
zYsaW%4V7ov+x3Cc=UOGk!EC-FGmQ$QU?GUpAZ<8)^bCi#eOO)H4-|;?3Uwmo|Ev
zyC1ZM-3YpnVCS7t&6LVFEiZT;Z0#2UKj-fB<_hAm?L$hZ0fgLV1$y0csT#@3S%H@U
zWS8{x6j)!U_(sCM`3LnbIp%q(=8>h%PtLrF`ftHS!f080g9>lgDQ`PxZ{8xl**KHK
zL()matTP#-^;OI3&$6UyZ$tXy*(=iZ8^SR7tTUv=+%j0s#@(aCSP
z1HBz>y;u?)C9o0d_ulyAaDGc9nuUm@3T7LZ>>#{;>seE}j8vfO_fZ5M21d0Ci5#50
z+*gKs#7hE~q?(&fZd1M$hE_pn>1)_nH$%DsB73qW
zzA_BKVH~^-|5Z(^8vf%bFPLe5gI%*G2oJy*8Vsy`)#^L
z`RnXkg3pHorLI%|)sWEAs&m)TWNzex4I*M^!|ZGGIiES}?EPVa_&5Kn%Dxv9429M7
z(3RoOx6p$GM-_f$gdMvdXxm7!%XV)2Q*8=2Vexy32RWx^Pa$;gVDcNgt!+q7OA?Dt
zhS6T!2Ge0D7z|&^lC5AzV2s_YcoLkWYhep=dgv&q-S2g1fVYzd0KeX
zCRy_O0TC?)InFO1BMRs+w$^=nw{FBGqE9ILwku~gmN0quQSkf>c(uechyU`Em2NM?A^PH$!_X()}=
zb0DN5Mk)J%0ZYLLR18;25?g+`fV<56O{rP=M6Mmt9#BZWmD3YT@GDM8VN`T0*HZ)N
z%8vZ1uysV{^-r6(05!C_kU^=D&R3`V{Vk>QPY3G
zeo5c3{J5)3Lj|q;VFbHN4h%5y8nY63Fhxa2qn12znMd~3QGD#7C8wcj^)wq8K=98W
z`-&R+r7!;b3$sb+7YS{D_2K$@XTU(55e_;5d&oo9n~p>JOF4nFlw%osIT#JqxwW~;_2hLv8+C6Ck(Zu{MW500FSudx~
za^#qx4!%FgHXYI1cw@I%=Um!oxN-Tzn7O^yPA9pYbAKSmR_se)moCRD^~VOSC93vBSf*6J6st+TCOAk(=*pdVJF-!B6jJiQ*E_42&pvuNtr-_-$$BYGf
z*s&u~RZmX)IibhvW8bZZTJ=^Ahe3C|ulWw`CgRB4W@b!m$IHl#@!KlHQ&!yXM);hJ
z<%W{s$2j0kxUKj7)kusYeu+XF*z*oIC~Y-%&CHI$o-}n}p9SPKoEe?vlVxQQ-$D`v
zcH31hwb$gl3GP3qmgOx1
zkNQoyUCwKhduEuP051PzpW^AyZwXLt*W9C@h79<+biEyzIeHHtWn%GG!~zPi2MZiW
z3g_XJ0ViSlUVP;`xt)!+M0Cw#`(>M&Oe{&L#F>e1LUw3Ry^_a}-z$MI)Zag(rr+aXqoP(&$_
z;o_fso&r`bbAMKyb8|K#6#eOZ{CYN8t)`#Rs&zYqdHF3g-4=fpG+ej$Nk{lBWxP9Z
zIDv1=OEWKt6L&^v<2PG6gwF^YN5M}G0sE7P{0i}XwkOXoTS_F@?*beaEJp2!iWxYK
z$tNT9;gdfQWyCd?e|a&mQ*Mr-dX=ZL;342D5(9lHz(i&WC44ptC3KSzj_>(ez@sHN
zon0?&I2X6$Gqnh%1zLBaf=<@T{Uu6~9rOX>iO74?+=s=eya>Sdz7-ra
zEF{rbcYW&+DY=v7ndq}lgArqRu!63}B0I{rIT74v7kT0WwStCpmw2r0%
z(R=%^+oE7K_>s_fa;BGI1GCYQBbhZJWm^(S?cch=g&TkagDvU_V}ex=X*;3-C=^
zvyzuHXHH;7;nEYRD|*`kjg9(9vYlilMXl!6p=B}0jwc-|_rZSL=z4xVlqGqPm^YXz
zQI{%5H7s06@_|5etLS&e-_!J1a-Gc9_N_`{o87#eUfQr|S~%SQC2Qpt{-lkh#R?rA
zWDd(rb3~hVq}Ti!bMq-<%NX8Rp*1I!|M4;od{qxfBaj(QO01*3RW7b!3PlwdA^7C0
z6FZq8E2$#nN`B*BJ2@=JF5=HDjEiJWTxecm1uH`G)Aj6gCRn}V
zCHWu~yGuD;rDnf&SJ)o%h62&WRIs_}l)-e>^es~%d8@&8I(4{@s`}orop&RO>ii+W
zHPQHdiRr*6cEVL0Dg`#RTs=+9V6z6((U9z%;{lx+>cs4jL@Hl#?uHaIa!pZokD?tH
z-aOi*zj<7}v;4Ll+>>*AIWrlt5mMPjIbm~*k?%#-b%+&8FZ2Qn5rKIsrtAqlt;%4*
zYzyHc;(@4qUPm%O^2Lh5XYAwKwsd|s1Z;Uo1Q1x9-YrNf@B6nM<5`bJ%6*PP?QuNL
z-Df>s2$&6zw2Qr_k3Nx|!+fc*vt?TDmXcSmM)-*v?YU_j_v2dxCK}E!$0#C^H^<~~#kjGC=;izV7un>c4=BQNP
z$85tu{johXy|59!5-lPmHmnzmow_kHfC!trbQ>JK^kuOrL4;4;f(zgPnzL6J^@jo{yOKbNRIb{fRop_MCz=A{2?>aK*u~_h6>(zDk7T}
zVjB-eI7Ze+V)C2}x20u1IP^eCJx03xYxLsWa1BW2!hzTYwWP~m?Ye69B{gRhJ7tM#
zgTyB`48G%1!*UAT8qGeL{dMDle6M}Lt5}}IXK5ly_#=lDiq|2`zS#JJ6@#89n1`N!
z*Q3eop=6+UMriE%qYWS5*r97>$YG+Gz~HkhR5JFeJ0|_CYfodEe4Ai=
zo{JPJ524Gz0AJ+*US62a3B@HqMVuaLg++`lq~}lM$$V}2ntMHv2_z}0_|?)Q1rgx>
zC^6t<#a)m4a=y@4v_kTQBWI(bAfR9Z?W9FdoEN1ERhO}NRBsR5!%(bkh
zkdIO&+%t~m)P>geo3dU#gj4XZ|-VH+7rq@|qPPtv7gTy?x0lGvao&4Q*!
z^!U~bATWp|LZY(L;6+OjiHo^$*f${}T^oH|@$F*755o_SN((S!tJGCbwXJ?d@joXZ
z$y(@y&~54+-}f+W46E910^lP%0?aa{=Pgp-{sI-$(_ignt}Kbf!xI2
z`89K|f&x!y!-+uNPhGo0-}xHgD6RR|GtxfP%ucsg
zdi~3^3Z=7LIs{B{58!QWe5CvEdU-OO*NgQUv9{&}?b}$cxj5>72cQ@@IkNd$%V=XH
zH|C(&5e&^3A0D4hsTi^+bhc_m4cD=g-8`5Eu;q!c-uiM0hz9VvVPU)M6;}`593RY>
z1=;Vx)C8geG+FfL3Xh{bplh5>2;C13Qy3&I|5rU>=W4-oS
zP!(QL6(}~oPGr(^w8wngy+oSme}Q2x9Gl)VMBJYD3th)!6aN?hS|;Z#o!J2IpFD2i
zB+TSK0p9!GiLIQK=w$F*mL5KNg&+!a-Zb)
znR0@CZ`1;0+m%vj%_hEJRoxiVs7nh*U3b&wD9rU&8lTMr`WkqPEji+D)|C
zOTPaYt>;>T%7%nZLi~9iZf7O{Q`oAzGQ7kQsq^&r
zjB-&?QHP=HC;@qImOiaY22jb-vN2rV`T7=(Z|fW^KC-eTeZ?3gHaCw9*{T;KE}wQ1
z`Eu{-y)V?%o0=(~ky`000X6TT5NO53w1t)2qz|WLN?JM18P<*#gU!^gFpM0mwhE@Y*hOrwoP1W{KvgknJBVIBlI>T8TAQ=o!fUDuoF@u*fT
z37odO|3=iMpCi=u7<+!2*;;fHmY!=V2S^QUI#&^9Pv~k@jv|XNLN`#%u!T6(XcPzf
zw0?4(0=d*{_nOcp4sfrn)B1E>P5%|*Bd$58aaYi|W7u`TcafDPE}6e`{!gE{gReX+
zi#Q>Etr4L7%9%C>&xsR@*W^zTm2CQ7lq4EIQsYL_LxYg|5n4Ij_;>*#d!)s?$P1wN
zpDrN@K}HDVJWPH6;ja?awc+uvg%4XQ=Yfy*whY*8ZAptmbG9!wV_7&x*CrAEDtU*d
zdfbp;;Am%>-rU6jr!H;!M;@q?WB9kYSRNF96*oT^QoQfFp2Y=e9guKpgeu2aBK5Hc
zNXs?TPCwPiiNe`U>|3Ea7;@pC#olkC5d0=puuykkmccjpxM}pGeN|e$??isjeYGMD
zW5zzGLc=O8#@pqu3n%EVXB&(}UA6PH7*)ynIZi@PnJEKAb}twWT~o1}llk&b;kSFB^)|L_U0u)j
zWU+=mYr6%#m{0&I&jrhru4be>mRLgi=N$PqjIZ7QIHV6r$HljnwcRRmRrxj~d8uxO8b{7&u-0
zWf2pK=2zWsGgzQ?-$@<`c%rp=Zq}c>Y;BAzgHGO|0*+a6&2AXb7-UL&iHzAO-h5zZ
zPb2VHJFN~L@o;u!XZJ8~lvME~r_|Gg)+w*~JEoE#b0Ywbwdbc#Wcig*{@>U;|Mb;O
z?t?Pe4xg3EQs$Q;eS(U_UvJtG>{S}l_7jE?@hfjPy
zReodB+^TO^LOkD2{
z>-A=C_A1#Nb?^6?jvMx(dZ&ALzJ6!Cc<-*$g2DfSr&uot{W>ouxgqw$7Gq|gpK&&a
z7=oWYTP^|2*LV1VNuW=n^w2`+&VjBXbMPW6$k6CcVencah#X{h!%ngPhbKda$SvxD
z&2Hd|K0P5pfk_M-CNwphF>
zdj~OOA2+a54;_dVdFqGQU(r>hj^8CvO=$Zp&Ma09sN)b446upf0uST^_q(V7-BRuX
zH?7X(zx`1Q=2A|dCeW!QswJ)wB`Jv|saDBFsX&Us$iT=**T7iU&@#lp(8|EX%D`OP
zz`)AD;MCmJ<|rC+^HVa@DsgMreL%Pys6iKGLuPWaRdRkoWl?5&MhSy6jHTdMRFavN
zTA>h}pH@vFqp-phiQO#+20Jzopr00{@9jQ{`u
diff --git a/algpseudocode/oracle-models.tex b/algpseudocode/oracle-models.tex
deleted file mode 100644
index 579fdd1..0000000
--- a/algpseudocode/oracle-models.tex
+++ /dev/null
@@ -1,32 +0,0 @@
-\documentclass{article}
-\usepackage[utf8]{inputenc}
-
-\usepackage{multirow}
-\usepackage{multicol}
-\usepackage{array}
-\usepackage{graphicx}
-
-
-\usepackage[landscape, paperwidth=15cm, paperheight=30cm, margin=0mm]{geometry}
-
-\title{\vspace{-5ex}}
-\date{\vspace{-5ex}}
-
-\begin{document}
-
-\maketitle
-
-
-\begin{table}[]
-\centering
-\begin{tabular}{ccclclcl}
-\cline{2-7}
-\multicolumn{1}{c|}{Oracle} & \multicolumn{3}{c|}{Numbers} & \multicolumn{3}{c|}{Quantum sampling access} & \\ \cline{2-7}
-\multicolumn{1}{c|}{\multirow{2}{*}{Implementation}} & \multicolumn{1}{c|}{\multirow{2}{*}{QRAM}} & \multicolumn{2}{c|}{Circuits} & \multicolumn{1}{c|}{KP-trees} & \multicolumn{1}{c|}{Grover-Rudolph} & \multicolumn{1}{c|}{Other} & \\ \cline{3-6}
-\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Sparse access} & \multicolumn{1}{c|}{Functions} & \multicolumn{2}{c|}{Oracle for numbers} & \multicolumn{1}{c|}{} & \\ \cline{2-7}
-\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} &
-\end{tabular}
-\end{table}
-
-
-\end{document}
diff --git a/algpseudocode/oracle_models.tex b/algpseudocode/oracle_models.tex
new file mode 100644
index 0000000..8970468
--- /dev/null
+++ b/algpseudocode/oracle_models.tex
@@ -0,0 +1,98 @@
+\documentclass{article}
+\usepackage[utf8]{inputenc}
+\usepackage{algorithm}
+\usepackage{algpseudocode}
+\usepackage{amsmath, amsfonts, amssymb}
+\usepackage[braket, qm]{qcircuit}
+\usepackage{tikz}
+\usetikzlibrary{calc}
+
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
+\makeatletter
+\renewcommand{\fnum@algorithm}{\fname@algorithm}
+\makeatother
+
+\begin{document}
+\pagestyle{empty}
+
+\begin{tikzpicture}
+
+% Encoding
+\node[draw, minimum width=3cm, minimum height=2cm] (encbox) at (0,-5) {
+\begin{tabular}{l}
+ \; \\
+ \; \\
+ Binary\\
+ \; \\
+ \; \\
+ \; \\
+ \; \\
+ \; \\
+ Amplitude\\
+ \; \\
+ \; \\
+ \; \\
+ \; \\
+ \; \\
+ Block \\
+ \; \\
+ \; \\
+ \end{tabular}
+};
+\node[above] at (encbox.north) {\textbf{Representation}};
+
+% Binary
+\node[draw, minimum width=3cm, minimum height=2cm] (numbox) at (7,-2.5) {
+\begin{tabular}{l}
+ Oracle synthesis (multiplexer, sparse access, ...) \\
+ QMD (QRAM, QRAG, ...)
+ \end{tabular}
+};
+\node[above] at (numbox.north) {\textbf{Implementation}};
+
+% Vector
+\node[draw, minimum width=3cm, minimum height=2cm] (vecbox) at (7,-5) {
+\begin{tabular}{l}
+ Grover-Rudolph, KP-trees
+ \end{tabular}
+};
+
+% Matrices
+\node[draw, minimum width=3cm, minimum height=2cm] (matbox) at (7,-7.5) {
+\begin{tabular}{l}
+ Applications of amplitude encodings\\
+ \end{tabular}
+};
+
+% Calculate midpoint of the second arrow
+\coordinate (midup) at ($(encbox.east)!0.7!(encbox.north east)$);
+\coordinate (middown) at ($(encbox.east)!0.7!(encbox.south east)$);
+
+% Arrows
+\draw[->, >=stealth] (vecbox.south) -- (matbox.north) node[midway, above] {};
+\draw[->, >=stealth] (numbox.south) -- (vecbox.north) node[midway, above] {};
+
+\draw[->, >=stealth] (midup) -- (numbox.west) node[midway, above] {};
+\draw[->, >=stealth] (encbox.east) -- (vecbox.west) node[midway, above] {};
+\draw[->, >=stealth] (middown) -- (matbox.west) node[midway, above] {};
+
+% Horizontal Line Numbers
+\def\lineheightNumbers{42}
+%\draw[dashed] ([xshift=-1.5cm,yshift=\lineheightNumbers]encbox.west) node[right, above] {\;\;\;\;\;\;\;\;\;\;\;\;\;Numbers}-- ([xshift=0cm,yshift=0]numbox.south west);
+\draw[dashed] ([yshift=\lineheightNumbers]encbox.west) node[right, above] {}-- ([xshift=0cm,yshift=0]numbox.south west);
+
+
+% Horizontal Line Vectors
+\def\lineheightVectors{72}
+%\draw[dashed] ([xshift=-1.5cm,yshift=\lineheightVectors]encbox.south west)node[right, above] {\;\;\;\;\;\;\;\;\;\;\;\;\;Vectors} -- ([xshift=0cm]vecbox.south west);
+\draw[dashed] ([yshift=\lineheightVectors]encbox.south west)node[right, above] {} -- ([xshift=0cm]vecbox.south west);
+
+% Horizontal Line Matrices
+\def\lineheightMatrices{0}
+%\draw[dashed] ([xshift=-1.5cm,yshift=\lineheightMatrices]encbox.south west)node[right, above] {\;\;\;\;\;\;\;\;\;\;\;\;\;Matrices} -- ([xshift=0cm,yshift=\lineheightMatrices]matbox.south west);
+\draw[dashed] ([yshift=\lineheightMatrices]encbox.south east)node[right, above] {} -- ([xshift=0cm,yshift=\lineheightMatrices]matbox.south west);
+\end{tikzpicture}
+
+\end{document}
From e52ad8e2c115e230464bac5de0e08296fbb3d442 Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 11:04:45 +0100
Subject: [PATCH 05/22] Add circuit of bucket brigade
---
algpseudocode/circuit_bb.tex | 334 +++++++++++++++++++++++++++++++++++
1 file changed, 334 insertions(+)
create mode 100644 algpseudocode/circuit_bb.tex
diff --git a/algpseudocode/circuit_bb.tex b/algpseudocode/circuit_bb.tex
new file mode 100644
index 0000000..3675751
--- /dev/null
+++ b/algpseudocode/circuit_bb.tex
@@ -0,0 +1,334 @@
+\documentclass{article}
+\usepackage[utf8]{inputenc}
+
+\usepackage{multirow}
+\usepackage{multicol}
+\usepackage{array}
+\usepackage{graphicx}
+
+\usepackage{tikz}
+
+\usetikzlibrary{decorations.pathreplacing}
+\usetikzlibrary{shapes.misc}
+\definecolor{RoyalBlue}{RGB}{65, 105, 225}
+
+\usetikzlibrary{quantikz}
+
+\usepackage[landscape, paperwidth=15cm, paperheight=30cm, margin=0mm]{geometry}
+
+\title{\vspace{-5ex}}
+\date{\vspace{-5ex}}
+
+\begin{document}
+
+\scalebox{0.7}{
+\begin{tikzpicture}
+ % Define drawing element styles
+ \tikzstyle{conn}=[-]
+ \tikzstyle{connThick}=[-, line width = 2]
+ \tikzstyle{whiteCirc}=[circle, draw, fill = white, inner sep = 5]
+ \tikzstyle{grayCirc}=[circle, draw, fill = gray!50, inner sep = 5]
+ \tikzstyle{every node}=[font=\Large]
+ \tikzstyle{swap}=[-, line width = 2, mark = x, mark size = 5pt]
+ \tikzstyle{connSwap}=[-, line width = 1.5]
+ \tikzstyle{wc}=[circle, draw, fill=white, inner sep = 2]
+ \tikzstyle{bc}=[circle, draw, fill=black, inner sep = 2]
+ \tikzstyle{orangeRec}=[orange, thick]
+ \tikzstyle{grayRec}=[gray!30]
+ \tikzstyle{op}=[draw, fill = white, text width = 2em, align = center]
+ \tikzstyle{blueLine}=[-, line width = 3, RoyalBlue]
+
+ % Define variables
+ \def\xend{16.0} % The end of x-coordinate
+
+ % Thick lines that connect the nodes
+ \draw[connThick] (0,0) -- (0.5,3.5) ;
+ \draw[connThick] (0,0) -- (0.5,-3.5) ;
+ \draw[connThick] (0.5,3.5) -- (1.0, 5.25) ;
+ \draw[connThick] (0.5,3.5) -- (1.0, 1.75) ;
+ \draw[connThick] (0.5,-3.5) -- (1.0, -5.25) ;
+ \draw[connThick] (0.5,-3.5) -- (1.0, -1.75) ;
+ \draw[connThick] (1.0,1.75) -- (1.4, 2.25) ;
+ \draw[connThick] (1.0,1.75) -- (1.4, 1.25) ;
+ \draw[connThick] (1.0,5.25) -- (1.4, 5.75) ;
+ \draw[connThick] (1.0,5.25) -- (1.4, 4.75) ;
+ \draw[connThick] (1.0,-1.75) -- (1.4, -2.25) ;
+ \draw[connThick] (1.0,-1.75) -- (1.4, -1.25) ;
+ \draw[connThick] (1.0,-5.25) -- (1.4, -5.75) ;
+ \draw[connThick] (1.0,-5.25) -- (1.4, -4.75) ;
+
+ % Orange Rectangle 1
+ \fill[grayRec] (4.0, -6.0) rectangle (4.8, 7.3);
+ \draw[orangeRec] (3.2, -6.0) rectangle (4.8, 7.3);
+
+ % Orange Rectangle 2
+ \fill[grayRec] (7.6, -6.0) rectangle (8.4, 7.3);
+ \draw[orangeRec] (6.8, -6.0) rectangle (8.4, 7.3);
+
+ % Orange Rectangle 3
+ \fill[grayRec] (10.4, -6.0) rectangle (11.2, 7.3);
+ \draw[orangeRec] (9.6, -6.0) rectangle (11.2, 7.3);
+
+ % Orange Rectangle 4
+ \fill[grayRec] (13.0, -6.0) rectangle (13.8, 7.3);
+ \draw[orangeRec] (12.2, -6.0) rectangle (13.8, 7.3);
+
+ % Blue Line
+ \draw[blueLine] (1.4, 9.25) -- (9.0, 9.25) -- (9.0, 7.0) -- (10.2, 7.0) -- (10.2, 0.5) -- (10.6, 0.5) -- (10.6, 3.0) -- (12.8, 3.0) -- (12.8, 2.25) -- (\xend, 2.25);
+
+
+ % White node at the centre
+ \draw[conn] (0.05, -0.5) -- (\xend, -0.5);
+ \draw[conn] (0., 0) -- (\xend, 0);
+ \draw[conn] (0.05, 0.5) -- (\xend, 0.5);
+ \node[whiteCirc] at (0,0) {};
+
+ % Gray nodes upper
+ \draw[conn] (0.63, 3.0) -- (\xend, 3.0);
+ \draw[conn] (0.5, 3.5) -- (\xend, 3.5);
+ \draw[conn] (0.63, 4.0) -- (\xend, 4.0);
+ \node[grayCirc] at (0.5,3.5) {};
+
+ % Gray nodes lower
+ \draw[conn] (0.63, -3.0) -- (\xend, -3.0);
+ \draw[conn] (0.5, -3.5) -- (\xend, -3.5);
+ \draw[conn] (0.63, -4.0) -- (\xend, -4.0);
+ \node[grayCirc] at (0.5,-3.5) {};
+
+ % white nodes upper upper
+ \draw[conn] (1.4, 5.75) -- (\xend, 5.75);
+ \draw[conn] (1.0, 5.25) -- (\xend, 5.25);
+ \draw[conn] (1.4, 4.75) -- (\xend, 4.75);
+ \node[whiteCirc] at (1.0,5.25) {};
+
+ % white nodes upper lower
+ \draw[conn] (1.4, 1.25) -- (\xend, 1.25);
+ \draw[conn] (1.0, 1.75) -- (\xend, 1.75);
+ \draw[conn] (1.4, 2.25) -- (\xend, 2.25);
+ \node[whiteCirc] at (1.0,1.75) {};
+
+ % white nodes lower lower
+ \draw[conn] (1.4, -5.75) -- (\xend, -5.75);
+ \draw[conn] (1.0, -5.25) -- (\xend, -5.25);
+ \draw[conn] (1.4, -4.75) -- (\xend, -4.75);
+ \node[whiteCirc] at (1.0,-5.25) {};
+
+ % white nodes lower upper
+ \draw[conn] (1.4, -1.25) -- (\xend, -1.25);
+ \draw[conn] (1.0, -1.75) -- (\xend, -1.75);
+ \draw[conn] (1.4, -2.25) -- (\xend, -2.25);
+ \node[whiteCirc] at (1.0,-1.75) {};
+
+ % Router rectangle
+ \node at (0.45, 6.0) {Routers};
+ \draw[dashed] (-0.5, -5.9) rectangle (1.4, 6.3);
+
+ % Input
+ \draw[conn] (1.4, 7.0) -- (\xend, 7.0);
+ \node at (0.25, 7.0) {Input};
+
+ % Address
+ \draw[conn] (1.4, 7.75) -- (\xend, 7.75);
+ \draw[conn] (1.4, 8.25) -- (\xend, 8.25);
+ \draw[conn] (1.4, 8.75) -- (\xend, 8.75);
+ \draw[decorate, decoration = {brace, amplitude = 5pt, raise = 1ex}] (1.4, 7.75) -- (1.4, 8.75);
+ \node at (0.0, 8.25) {Address};
+
+ % Bus
+ \draw[conn] (1.4, 9.25) -- (\xend, 9.25);
+ \node at (0.45, 9.25) {Bus};
+
+ % Column 1
+ \draw[connSwap] (1.8, 7.0) -- (1.8, 7.75);
+ \draw plot[swap] (1.8, 7.0);
+ \draw plot[swap] (1.8, 7.75);
+
+ % Column 2
+ \draw[connSwap] (2.2, 7.0) -- (2.2, 0.0);
+ \draw plot[swap] (2.2, 7.0);
+ \draw plot[swap] (2.2, 0.0);
+
+ % Column 3
+ \draw[connSwap] (2.6, 7.0) -- (2.6, 8.25);
+ \draw plot[swap] (2.6, 7.0);
+ \draw plot[swap] (2.6, 8.25);
+
+ % Column 4
+ \draw[connSwap] (3.4, 7.0) -- (3.4, -0.5);
+ \draw plot[swap] (3.4, 7.0);
+ \draw plot[swap] (3.4, -0.5);
+ \node[wc] at (3.4, 0.0) {};
+
+ % Column 5
+ \draw[connSwap] (3.8, 7.0) -- (3.8, 0.0);
+ \draw plot[swap] (3.8, 7.0);
+ \draw plot[swap] (3.8, 0.5);
+ \node[bc] at (3.8, 0.0) {};
+
+ % Column 6
+ \draw[connSwap] (5.8, 7.0) -- (5.8, 8.75);
+ \draw plot[swap] (5.8, 8.75);
+ \draw plot[swap] (5.8, 7.0);
+
+ \draw[connSwap] (5.8, 3.5) -- (5.8, 0.5);
+ \draw plot[swap] (5.8, 3.5);
+ \draw plot[swap] (5.8, 0.5);
+
+ \draw[connSwap] (5.8, -0.5) -- (5.8, -3.5);
+ \draw plot[swap] (5.8, -3.5);
+ \draw plot[swap] (5.8, -0.5);
+
+ % Column 7
+ \draw[connSwap] (7.0, 7.0) -- (7.0, -0.5);
+ \draw plot[swap] (7.0, 7.0);
+ \draw plot[swap] (7.0, -0.5);
+ \node[wc] at (7.0, 0.0) {};
+
+ % Column 8
+ \draw[connSwap] (7.4, 7.0) -- (7.4, 0.0);
+ \draw plot[swap] (7.4, 7.0);
+ \draw plot[swap] (7.4, 0.5);
+ \node[bc] at (7.4, 0.0) {};
+
+ % Column 9
+ \draw[connSwap] (7.8, 3.5) -- (7.8, 0.5);
+ \draw plot[swap] (7.8, 3.0);
+ \draw plot[swap] (7.8, 0.5);
+ \node[wc] at (7.8, 3.5) {};
+
+ \draw[connSwap] (7.8, -0.5) -- (7.8, -4.0);
+ \draw plot[swap] (7.8, -4.0);
+ \draw plot[swap] (7.8, -0.5);
+ \node[wc] at (7.8, -3.5) {};
+
+ % Column 10
+ \draw[connSwap] (8.2, 0.5) -- (8.2, 4.0);
+ \draw plot[swap] (8.2, 0.5);
+ \draw plot[swap] (8.2, 4.0);
+ \node[bc] at (8.2, 3.5) {};
+
+ \draw[connSwap] (8.2, -0.5) -- (8.2, -3.5);
+ \draw plot[swap] (8.2, -0.5);
+ \draw plot[swap] (8.2, -3.0);
+ \node[bc] at (8.2, -3.5) {};
+
+ % Column 11
+ \draw[connSwap] (9.0, 7.0) -- (9.0, 9.25);
+ \draw plot[swap] (9.0, 7.0);
+ \draw plot[swap] (9.0, 9.25);
+
+ \draw[connSwap] (9.0, 5.25) -- (9.0, 4.0);
+ \draw plot[swap] (9.0, 5.25);
+ \draw plot[swap] (9.0, 4.0);
+
+ \draw[connSwap] (9.0, 3.0) -- (9.0, 1.75);
+ \draw plot[swap] (9.0, 1.75);
+ \draw plot[swap] (9.0, 3.0);
+
+ \draw[connSwap] (9.0, -5.25) -- (9.0, -4.0);
+ \draw plot[swap] (9.0, -5.25);
+ \draw plot[swap] (9.0, -4.0);
+
+ \draw[connSwap] (9.0, -3.0) -- (9.0, -1.75);
+ \draw plot[swap] (9.0, -1.75);
+ \draw plot[swap] (9.0, -3.0);
+
+ % Column 12
+ \draw[connSwap] (9.8, 7.0) -- (9.8, -0.5);
+ \draw plot[swap] (9.8, 7.0);
+ \draw plot[swap] (9.8, -0.5);
+ \node[wc] at (9.8, 0.0) {};
+
+ % Column 13
+ \draw[connSwap] (10.2, 7.0) -- (10.2, 0.0);
+ \draw plot[swap] (10.2, 7.0);
+ \draw plot[swap] (10.2, 0.5);
+ \node[bc] at (10.2, 0.0) {};
+
+ % Column 14
+ \draw[connSwap] (10.6, 3.5) -- (10.6, 0.5);
+ \draw plot[swap] (10.6, 3.0);
+ \draw plot[swap] (10.6, 0.5);
+ \node[wc] at (10.6, 3.5) {};
+
+ \draw[connSwap] (10.6, -0.5) -- (10.6, -4.0);
+ \draw plot[swap] (10.6, -4.0);
+ \draw plot[swap] (10.6, -0.5);
+ \node[wc] at (10.6, -3.5) {};
+
+ % Column 15
+ \draw[connSwap] (11.0, 0.5) -- (11.0, 4.0);
+ \draw plot[swap] (11.0, 0.5);
+ \draw plot[swap] (11.0, 4.0);
+ \node[bc] at (11.0, 3.5) {};
+
+ % Column 16
+ \draw[connSwap] (12.4, 4.0) -- (12.4, 5.25);
+ \draw plot[swap] (12.4, 4.0);
+ \draw plot[swap] (12.4, 4.75);
+ \node[wc] at (12.4, 5.25) {};
+
+ \draw[connSwap] (12.4, 3.0) -- (12.4, 1.25);
+ \draw plot[swap] (12.4, 3.0);
+ \draw plot[swap] (12.4, 1.25);
+ \node[wc] at (12.4, 1.75) {};
+
+ \draw[connSwap] (12.4, -1.75) -- (12.4, -3.0);
+ \draw plot[swap] (12.4, -2.25);
+ \draw plot[swap] (12.4, -3.0);
+ \node[wc] at (12.4, -1.75) {};
+
+ \draw[connSwap] (12.4, -4.0) -- (12.4, -5.75);
+ \draw plot[swap] (12.4, -4.0);
+ \draw plot[swap] (12.4, -5.75);
+ \node[wc] at (12.4, -5.25) {};
+
+ % Column 17
+ \draw[connSwap] (12.8, 5.75) -- (12.8, 4.0);
+ \draw plot[swap] (12.8, 5.75);
+ \draw plot[swap] (12.8, 4.0);
+ \node[bc] at (12.8, 5.25) {};
+
+ \draw[connSwap] (12.8, 3.0) -- (12.8, 1.75);
+ \draw plot[swap] (12.8, 3.0);
+ \draw plot[swap] (12.8, 2.25);
+ \node[bc] at (12.8, 1.75) {};
+
+ \draw[connSwap] (12.8, -1.25) -- (12.8, -3.0);
+ \draw plot[swap] (12.8, -1.25);
+ \draw plot[swap] (12.8, -3.0);
+ \node[bc] at (12.8, -1.75) {};
+
+ \draw[connSwap] (12.8, -4.0) -- (12.8, -5.25);
+ \draw plot[swap] (12.8, -4.0);
+ \draw plot[swap] (12.8, -4.75);
+ \node[bc] at (12.8, -5.25) {};
+
+ % Text Column
+ \node[op] at (14.8, 5.75) {$x_7$};
+ \node[op] at (14.8, 4.75) {$x_6$};
+ \node[op] at (14.8, 2.25) {$x_5$};
+ \node[op] at (14.8, 1.25) {$x_4$};
+ \node[op] at (14.8, -1.25) {$x_3$};
+ \node[op] at (14.8, -2.25) {$x_2$};
+ \node[op] at (14.8, -4.75) {$x_1$};
+ \node[op] at (14.8, -5.75) {$x_0$};
+
+ % The row of U
+ \draw[decorate, decoration = {brace, amplitude = 5pt, raise = 1ex, mirror}] (1.4, -6.0) -- (3.0, -6.0);
+ \node at (2.2, -6.8) {$U_1$};
+ \draw[decorate, decoration = {brace, amplitude = 5pt, raise = 1ex, mirror}] (3.2, -6.0) -- (6.4, -6.0);
+ \node at (4.8, -6.8) {$U_2$};
+ \draw[decorate, decoration = {brace, amplitude = 5pt, raise = 1ex, mirror}] (6.6, -6.0) -- (9.3, -6.0);
+ \node at (7.95, -6.8) {$U_3$};
+ \draw[decorate, decoration = {brace, amplitude = 5pt, raise = 1ex, mirror}] (9.5, -6.0) -- (11.6, -6.0);
+ \node at (10.55, -6.8) {$U_4$};
+ \draw[decorate, decoration = {brace, amplitude = 5pt, raise = 1ex, mirror}] (11.8, -6.0) -- (14.1, -6.0);
+ \node at (12.95, -6.8) {$U_5$};
+
+
+\end{tikzpicture}
+}
+
+
+\end{document}
From 5bf241c7a6a052870baa8f7e5a981f81a833e910 Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 11:16:06 +0100
Subject: [PATCH 06/22] Add images of Francesco
---
algpseudocode/KP-trees-example.tex | 58 ++++++++++++++++++++++++++++++
algpseudocode/KP-trees_RPhase.tex | 42 ++++++++++++++++++++++
algpseudocode/KP-trees_RY.tex | 44 +++++++++++++++++++++++
3 files changed, 144 insertions(+)
create mode 100644 algpseudocode/KP-trees-example.tex
create mode 100644 algpseudocode/KP-trees_RPhase.tex
create mode 100644 algpseudocode/KP-trees_RY.tex
diff --git a/algpseudocode/KP-trees-example.tex b/algpseudocode/KP-trees-example.tex
new file mode 100644
index 0000000..6b0a793
--- /dev/null
+++ b/algpseudocode/KP-trees-example.tex
@@ -0,0 +1,58 @@
+\documentclass{article}
+\usepackage[utf8]{inputenc}
+\usepackage{algorithm}
+\usepackage{algpseudocode}
+\usepackage{amsmath, amsfonts, amssymb}
+\usepackage[braket, qm]{qcircuit}
+\usepackage{tikz}
+\usepackage[a4paper, total={7in, 10in}]{geometry}
+
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
+\makeatletter
+\renewcommand{\fnum@algorithm}{\fname@algorithm}
+\makeatother
+
+\begin{document}
+\pagestyle{empty}
+
+\begin{figure}
+\begin{tikzpicture}[
+ every node/.style={rectangle, draw},
+ level 1/.style={sibling distance=50mm, level distance=10mm},
+ level 2/.style={sibling distance=30mm, level distance=20mm}
+]
+\node {1}
+ child {node {0.32}
+ child {node {(0.16, $\frac{\pi}{4}$)}}
+ child {node {(0.16, $\frac{\pi}{12}$)}}
+ }
+ child {node {0.68}
+ child {node {(0.64, $\frac{\pi}{3}$)}}
+ child {node {(0.04, $\frac{\pi}{6}$)}}
+ };
+\end{tikzpicture}
+\hspace{1cm} % Adjust horizontal space between trees here
+\begin{tikzpicture}[
+ every node/.style={rectangle, draw},
+ level 1/.style={sibling distance=30mm, level distance=15mm},
+ level 2/.style={sibling distance=15mm, level distance=15mm}
+]
+\node {$2\cos^{-1}\left(\sqrt{\frac{0.32}{1}} \right)$}
+ child {node {$2\cos^{-1}\left(\sqrt{\frac{0.16}{0.32}} \right)$}
+ child {node {$\frac{\pi}{4}$}}
+ child {node {$\frac{\pi}{12}$}}
+ }
+ child {node {$2\cos^{-1}\left(\sqrt{\frac{0.64}{0.68}} \right)$}
+ child {node {$\frac{\pi}{3}$}}
+ child {node {$\frac{\pi}{6}$}}
+ };
+\end{tikzpicture}
+
+\begin{tikzpicture}[overlay, remember picture]
+ \draw[->,thick] (8,2.5) -- (10,2.5);
+\end{tikzpicture}
+\end{figure}
+
+\end{document}
diff --git a/algpseudocode/KP-trees_RPhase.tex b/algpseudocode/KP-trees_RPhase.tex
new file mode 100644
index 0000000..b0bc36d
--- /dev/null
+++ b/algpseudocode/KP-trees_RPhase.tex
@@ -0,0 +1,42 @@
+\documentclass{article}
+\usepackage[utf8]{inputenc}
+\usepackage{algorithm}
+\usepackage{algpseudocode}
+\usepackage{amsmath, amsfonts, amssymb}
+\usepackage[braket, qm]{qcircuit}
+
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
+\makeatletter
+\renewcommand{\fnum@algorithm}{\fname@algorithm}
+\makeatother
+
+\begin{document}
+\pagestyle{empty}
+
+\Qcircuit @C=0.6em @R=1em {
+%Index register
+\lstick{} & \multigate{12}{QMD} & \qw & \qw & \qw & \hdots && \qw & \qw & \qw & \qw & \qw & \hdots && \qw & \qw & \multigate{12}{QMD^{\dagger}} & \qw \\
+\lstick{} & \ghost{QMD} & \qw & \qw & \qw & \hdots && \qw & \qw & \qw & \qw & \qw & \hdots && \qw & \qw & \ghost{QMD^{\dagger}} & \qw \\
+& \nghost{QMD} &&&& \vdots &&&&&&& \vdots \\
+\lstick{} & \ghost{QMD} & \qw & \qw & \qw & \hdots && \qw & \qw & \qw & \qw & \qw & \hdots && \qw & \qw & \ghost{QMD^{\dagger}} & \qw
+ \inputgroupv{1}{4}{0.5em}{2.5em}{\ket{i}} \\
+%Angle register
+\lstick{} & \ghost{QMD} & \ctrl{8} & \qw & \qw & \hdots && \qw & \qw & \ctrl{8} & \qw & \qw & \hdots && \qw & \qw & \ghost{QMD^{\dagger}} & \qw\\
+\lstick{} & \ghost{QMD} & \qw & \ctrl{7} & \qw & \hdots && \qw & \qw & \qw & \ctrl{7} & \qw & \hdots && \qw & \qw & \ghost{QMD^{\dagger}} & \qw\\
+& \nghost{QMD} &&&& \vdots &&&&&&& \vdots\\
+\lstick{} & \ghost{QMD} & \qw & \qw & \qw & \hdots && \ctrl{5} & \qw & \qw & \qw & \qw & \hdots && \ctrl{5} & \qw & \ghost{QMD^{\dagger}} & \qw
+ \inputgroupv{5}{8}{0.5em}{2.5em}{\ket{0}^{\otimes t'}}\\
+%Main register
+\lstick{} & \ghost{QMD} & \qw & \qw & \qw & \hdots && \qw & \qw & \qw & \qw & \qw & \hdots && \qw & \qw & \ghost{QMD^{\dagger}} & \qw \\
+\lstick{} & \ghost{QMD} & \qw & \qw & \qw & \hdots && \qw & \qw & \qw & \qw & \qw & \hdots && \qw & \qw & \ghost{QMD^{\dagger}} & \qw \\
+\lstick{\ket{\Psi_{\log(n)}}} &&&&& \vdots &&&&&&& \vdots &&&&&&& \ket{V_i}\\
+\lstick{} & \ghost{QMD} & \qw & \qw & \qw & \hdots && \qw & \qw & \qw & \qw & \qw & \hdots && \qw & \qw & \ghost{QMD^{\dagger}} & \qw \\
+\lstick{} & \ghost{QMD} & \gate{P(2^2)} & \gate{P(2^1)} & \qw & \hdots && \gate{P(2^{1-t})} & \gate{X} & \gate{P(2^2)} & \gate{P(2^1)} & \qw & \hdots && \gate{P(2^{1-t})} & \gate{X} & \ghost{QMD^{\dagger}} & \qw
+ %\inputgroup{9}{13}{3.5em}{\ket{\Psi_{\log(n)}}}
+ \gategroup{9}{1}{13}{1}{0.5em}{\{}
+ \gategroup{9}{17}{13}{17}{1em}{\}}
+}
+
+\end{document}
diff --git a/algpseudocode/KP-trees_RY.tex b/algpseudocode/KP-trees_RY.tex
new file mode 100644
index 0000000..d10d287
--- /dev/null
+++ b/algpseudocode/KP-trees_RY.tex
@@ -0,0 +1,44 @@
+\documentclass{article}
+\usepackage[utf8]{inputenc}
+\usepackage{algorithm}
+\usepackage{algpseudocode}
+\usepackage{amsmath, amsfonts, amssymb}
+\usepackage[braket, qm]{qcircuit}
+
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
+\makeatletter
+\renewcommand{\fnum@algorithm}{\fname@algorithm}
+\makeatother
+
+\begin{document}
+\pagestyle{empty}
+
+\Qcircuit @C=0.6em @R=1em {
+%Index register
+\lstick{} & \qw & \multigate{11}{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \multigate{11}{QMD^{\dagger}} & \qw \\
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw\\
+&& \nghost{QMD} &&&&&&& \vdots &&& \nghost{QMD^{\dagger}}\\
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw
+ \inputgroupv{1}{4}{0.5em}{2.75em}{\ket{i}} \\
+%Angle register
+\lstick{} & \qw & \ghost{QMD} & \qw & \ctrl{8} & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw\\
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \ctrl{7} & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw\\
+&& \nghost{QMD} &&&&&&& \vdots &&& \nghost{QMD^{\dagger}}\\
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \ctrl{5} & \ghost{QMD^{\dagger}} & \qw
+ \inputgroupv{5}{8}{0.5em}{2.75em}{\ket{0}^{\otimes t'}}\\
+%Main register
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw & \rstick{}\\
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw & \rstick{}\\
+&& \nghost{QMD} &&&&&&& \vdots &&& \nghost{QMD^{\dagger}} && \rstick{\ket{\Psi_{k+1}}} \\
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw & \rstick{}\\
+\lstick{\ket{0}} & \qw & \qw & \qw & \gate{R_y(2^1)} & \qw & \gate{R_y(2^0)} & \qw && \hdots &&& \gate{R_y(2^{-t})} & \qw & \qw & \rstick{}\\
+&&&&&&&&& \vdots \\
+\\
+\lstick{\ket{0}} & \qw & \qw & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \qw & \qw & \qw
+ \inputgroupv{9}{12}{0.5em}{2.75em}{\ket{\Psi_k}}
+ \gategroup{9}{15}{13}{15}{.8em}{\}}
+}
+
+\end{document}
From 858079ef949063656da1c969dfb0d47acaa4416e Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 11:21:26 +0100
Subject: [PATCH 07/22] Add example of Ghisoni
---
algpseudocode/state-prep-example.tex | 35 ++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
create mode 100644 algpseudocode/state-prep-example.tex
diff --git a/algpseudocode/state-prep-example.tex b/algpseudocode/state-prep-example.tex
new file mode 100644
index 0000000..5231c11
--- /dev/null
+++ b/algpseudocode/state-prep-example.tex
@@ -0,0 +1,35 @@
+\documentclass{article}
+\usepackage[utf8]{inputenc}
+\usepackage{algorithm}
+\usepackage{algpseudocode}
+\usepackage{amsmath, amsfonts, amssymb}
+\usepackage[braket, qm]{qcircuit}
+\usepackage{tikz}
+\usepackage[landscape, a2paper]{geometry}
+
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
+\makeatletter
+\renewcommand{\fnum@algorithm}{\fname@algorithm}
+\makeatother
+
+\begin{document}
+\pagestyle{empty}
+
+\Qcircuit @C=0.6em @R=2.5em {
+%Index register
+\lstick{\ket{0}} & \qw & \multigate{4}{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \multigate{4}{QMD^{\dagger}} & \qw & \multigate{5}{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \multigate{5}{QMD^{\dagger}} & \qw & \multigate{6}{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \qw & \multigate{6}{QMD^{\dagger}} & \qw\\
+%Angle register
+\lstick{} & \qw & \ghost{QMD} & \qw & \ctrl{4} & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw & \ghost{QMD} & \qw & \ctrl{5} & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw & \ghost{QMD} & \qw & \ctrl{5} & \qw & \qw & \qw && \hdots &&& \qw & \qw & \ctrl{5} & \qw & \qw & \qw && \hdots &&& \qw & \qw & \ghost{QMD^{\dagger}} & \qw\\
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \ctrl{3} & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw & \ghost{QMD} & \qw & \qw & \qw & \ctrl{4} & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw & \ghost{QMD} & \qw & \qw & \qw & \ctrl{4} & \qw && \hdots &&& \qw & \qw & \qw & \qw & \ctrl{4} & \qw && \hdots &&& \qw & \qw & \ghost{QMD^{\dagger}} & \qw\\
+&& \nghost{QMD} &&&&&&& \vdots &&&& \nghost{QMD^{\dagger}} && \nghost{QMD} &&&&&&& \vdots &&&& \nghost{QMD^{\dagger}} && \nghost{QMD} &&&&&&& \vdots &&&& &&&&&& \vdots &&&& \nghost{QMD^{\dagger}}\\
+\lstick{} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \ctrl{1} & \ghost{QMD^{\dagger}} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \ctrl{2} & \ghost{QMD^{\dagger}} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \ctrl{2} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \ctrl{2} & \qw & \ghost{QMD^{\dagger}} & \qw
+ \inputgroupv{2}{5}{0.5em}{3.75em}{\ket{0}^{\otimes t'}}\\
+%Main register
+\lstick{\ket{0}} & \qw & \qw & \qw & \gate{R_{y}(2^{1})} & \qw & \gate{R_{y}(2^{0})} & \qw && \hdots &&& \gate{R_{y}(2^{-t})} & \qw & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \ghost{QMD^{\dagger}} & \qw & \ghost{QMD} & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \qw & \ghost{QMD^{\dagger}} & \qw\\
+\lstick{\ket{0}} & \qw & \qw & \qw & \qw & \qw & \qw & \qw && \hdots &&& \qw & \qw & \qw & \qw & \qw & \gate{R_{y}(2^{1})} & \qw & \gate{R_{y}(2^{0})} & \qw && \hdots &&& \gate{R_{y}(2^{-t})} & \qw & \qw & \ghost{QMD} & \qw & \gate{P(2^{2})} & \qw & \gate{P(2^{1})} & \qw && \hdots &&& \gate{P(2^{1-t})} & \gate{X} & \gate{P(2^{2})} & \qw & \gate{P(2^{1})} & \qw && \hdots &&& \gate{P(2^{1-t})} & \gate{X} & \ghost{QMD^{\dagger}} & \qw\\
+}
+
+
+\end{document}
From 30f7e0069fe9bef1168d1347912df04ab61000e5 Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 11:22:59 +0100
Subject: [PATCH 08/22] Add bibliography
---
book.bib | 422 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 422 insertions(+)
diff --git a/book.bib b/book.bib
index 2d0a5db..72c0bde 100644
--- a/book.bib
+++ b/book.bib
@@ -15,6 +15,20 @@ @article{giurgica2022low
year={2022},
publisher={APS}
}
+@article{aharonov2018quantum,
+ title={Quantum circuit depth lower bounds for homological codes},
+ author={Aharonov, Dorit and Touati, Yonathan},
+ journal={arXiv preprint arXiv:1810.03912},
+ year={2018}
+}
+
+@article{mori2024efficient,
+ title={Efficient state preparation for multivariate Monte Carlo simulation},
+ author={Mori, Hitomi and Mitarai, Kosuke and Fujii, Keisuke},
+ journal={arXiv preprint arXiv:2409.07336},
+ year={2024}
+}
+
@article{callison2022improved,
title={Improved maximum-likelihood quantum amplitude estimation},
@@ -167,6 +181,24 @@ @article{markov1890ineq
year = 1890,
journal = {Zap. Imp. Akad. Nauk. St. Petersburg},
}
+@inproceedings{gleinig2021efficient,
+ title={An efficient algorithm for sparse quantum state preparation},
+ author={Gleinig, Niels and Hoefler, Torsten},
+ booktitle={2021 58th ACM/IEEE Design Automation Conference (DAC)},
+ pages={433--438},
+ year={2021},
+ organization={IEEE}
+}
+@article{shende2004minimal,
+ title={Minimal universal two-qubit controlled-NOT-based circuits},
+ author={Shende, Vivek V and Markov, Igor L and Bullock, Stephen S},
+ journal={Physical Review A},
+ volume={69},
+ number={6},
+ pages={062321},
+ year={2004},
+ publisher={APS}
+}
@inproceedings{paturi1992degbound,
title = {On the Degree of Polynomials That Approximate Symmetric Boolean Functions (Preliminary Version)},
author = {Paturi, Ramamohan},
@@ -837,6 +869,28 @@ @inproceedings{schmitt2021boolean
pages = {1044--1049},
organization = {IEEE},
}
+@inproceedings{krishnakumar2022aq,
+ title={A Q\# implementation of a quantum lookup table for quantum arithmetic functions},
+ author={Krishnakumar, Rajiv and Soeken, Mathias and Roetteler, Martin and Zeng, William},
+ booktitle={2022 IEEE/ACM Third International Workshop on Quantum Computing Software (QCS)},
+ pages={75--82},
+ year={2022},
+ organization={IEEE}
+}
+
+@article{gur2021sublinear,
+ title={Sublinear quantum algorithms for estimating von Neumann entropy},
+ author={Gur, Tom and Hsieh, Min-Hsiu and Subramanian, Sathyawageeswar},
+ journal={arXiv preprint arXiv:2111.11139},
+ year={2021}
+}
+
+@article{luongo2024measurement,
+ title={Measurement-based uncomputation of quantum circuits for modular arithmetic},
+ author={Luongo, Alessandro and Miti, Antonio Michele and Narasimhachar, Varun and Sireesh, Adithya},
+ journal={arXiv preprint arXiv:2407.20167},
+ year={2024}
+}
@inproceedings{kerenidis2019qmeans,
title = {q-means: A quantum algorithm for unsupervised machine learning},
author = {Kerenidis, Iordanis and Landman, Jonas and Luongo, Alessandro and Prakash, Anupam},
@@ -1009,6 +1063,129 @@ @article{harrow2009quantum
number = 15,
pages = 150502,
}
+@article{STY-asymptotically,
+ author={Sun, Xiaoming and Tian, Guojing and Yang, Shuai and Yuan, Pei and Zhang, Shengyu},
+ journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
+ title={Asymptotically Optimal Circuit Depth for Quantum State Preparation and General Unitary Synthesis},
+ year={2023},
+ volume={42},
+ number={10},
+ pages={3301--3314},
+ doi={10.1109/TCAD.2023.3244885}}
+
+@article{plesch2011quantum,
+ title = {Quantum-state preparation with universal gate decompositions},
+ author = {Plesch, Martin and Brukner, \v{C}aslav},
+ journal = {Phys. Rev. A},
+ volume = {83},
+ issue = {3},
+ pages = {032302},
+ numpages = {5},
+ year = {2011},
+ month = {Mar},
+ publisher = {American Physical Society},
+ doi = {10.1103/PhysRevA.83.032302}
+}
+@article{zhao2021smooth,
+ title={Smooth input preparation for quantum and quantum-inspired machine learning},
+ author={Zhao, Zhikuan and Fitzsimons, Jack K and Rebentrost, Patrick and Dunjko, Vedran and Fitzsimons, Joseph F},
+ journal={Quantum Machine Intelligence},
+ volume={3},
+ number={1},
+ pages={14},
+ year={2021},
+ publisher={Springer}
+}
+@article{zhu2024unified,
+ title={Unified architecture for a quantum lookup table},
+ author={Zhu, Shuchen and Sundaram, Aarthi and Low, Guang Hao},
+ journal={arXiv preprint arXiv:2406.18030},
+ year={2024}
+}
+@article{gidney2021factor,
+ title={How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits},
+ author={Gidney, Craig and Eker{\aa}, Martin},
+ journal={Quantum},
+ volume={5},
+ pages={433},
+ year={2021},
+ publisher={Verein zur F{\"o}rderung des Open Access Publizierens in den Quantenwissenschaften}
+}
+@article{gidney2018halving,
+ title={Halving the cost of quantum addition},
+ author={Gidney, Craig},
+ journal={Quantum},
+ volume={2},
+ pages={74},
+ year={2018},
+ publisher={Verein zur F{\"o}rderung des Open Access Publizierens in den Quantenwissenschaften}
+}
+@inproceedings{rosenthal2023efficient,
+ author = {Rosenthal, Gregory},
+ title = {Efficient Quantum State Synthesis with One Query},
+ booktitle = {Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)},
+ year = {2024},
+ pages = {2508--2534},
+ doi = {10.1137/1.9781611977912.89},
+}
+@article{cuccaro2004new,
+ title={A new quantum ripple-carry addition circuit},
+ author={Cuccaro, Steven A and Draper, Thomas G and Kutin, Samuel A and Moulton, David Petrie},
+ journal={arXiv preprint quant-ph/0410184},
+ year={2004}
+}
+@article{bouland2023state,
+ title={State preparation by shallow circuits using feed forward},
+ author={Buhrman, Harry and Folkertsma, Marten and Loff, Bruno and Neumann, Niels M. P.},
+ journal={arXiv preprint arXiv:2307.14840},
+ year={2023},
+ doi={10.48550/arXiv.2307.14840}
+}
+@article{de2022double,
+ title={Double sparse quantum state preparation},
+ author={de Veras, Tiago ML and da Silva, Leon D and da Silva, Adenilton J},
+ journal={Quantum Information Processing},
+ volume={21},
+ number={6},
+ pages={204},
+ year={2022},
+ publisher={Springer}
+}
+@article{doriguello2024practicality,
+ title={On the practicality of quantum sieving algorithms for the shortest vector problem},
+ author={Doriguello, Joao F and Giapitzakis, George and Luongo, Alessandro and Morolia, Aditya},
+ journal={arXiv preprint arXiv:2410.13759},
+ year={2024}
+}
+@article{camps2024explicit,
+ title={Explicit quantum circuits for block encodings of certain sparse matrices},
+ author={Camps, Daan and Lin, Lin and Van Beeumen, Roel and Yang, Chao},
+ journal={SIAM Journal on Matrix Analysis and Applications},
+ volume={45},
+ number={1},
+ pages={801--827},
+ year={2024},
+ publisher={SIAM}
+}
+
+@article{rosenthal2021query,
+ title={Query and depth upper bounds for quantum unitaries via {G}rover search},
+ author={Rosenthal, Gregory},
+ journal={arXiv preprint arXiv:2111.07992},
+ year={2021},
+ doi={10.48550/arXiv.2111.07992}
+}
+@article{babbush2018encoding,
+ title={Encoding electronic spectra in quantum circuits with linear T complexity},
+ author={Babbush, Ryan and Gidney, Craig and Berry, Dominic W and Wiebe, Nathan and McClean, Jarrod and Paler, Alexandru and Fowler, Austin and Neven, Hartmut},
+ journal={Physical Review X},
+ volume={8},
+ number={4},
+ pages={041015},
+ year={2018},
+ publisher={APS}
+}
@article{VBE96,
title = {Quantum networks for elementary arithmetic operations},
author = {Vedral, Vlatko and Barenco, Adriano and Ekert, Artur},
@@ -1442,6 +1619,12 @@ @article{Dikranjan2003
year = 2003,
pages = {1--77},
}
+@article{litinski2022active,
+ title={Active volume: An architecture for efficient fault-tolerant quantum computers with limited non-local connections},
+ author={Litinski, Daniel and Nickerson, Naomi},
+ journal={arXiv preprint arXiv:2211.15465},
+ year={2022}
+}
@book{Feynman,
title = {{Feynman lectures on Computation}},
author = {Feynman, Richard P.},
@@ -1694,6 +1877,16 @@ @article{Burges2002
doi = {MSR-TR-2002-83},
url = {http://research.microsoft.com/apps/pubs/default.aspx?id=67122},
}
+@article{moosa2023linear,
+ title={Linear-depth quantum circuits for loading Fourier approximations of arbitrary functions},
+ author={Moosa, Mudassir and Watts, Thomas W and Chen, Yiyou and Sarma, Abhijat and McMahon, Peter L},
+ journal={Quantum Science and Technology},
+ volume={9},
+ number={1},
+ pages={015002},
+ year={2023},
+ publisher={IOP Publishing}
+}
@article{Peng2008,
title = {{Quantum adiabatic algorithm for factorization and its experimental implementation}},
author = {Peng, Xinhua and Liao, Zeyang and Xu, Nanyang and Qin, Gan and Zhou, Xianyi and Suter, Dieter and Du, Jiangfeng},
@@ -1784,6 +1977,28 @@ @article{kannan2017randomized
volume = 26,
pages = {95--135},
}
+@inproceedings{metger2023stateqip,
+ title={stateQIP= statePSPACE},
+ author={Metger, Tony and Yuen, Henry},
+ booktitle={2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS)},
+ pages={1349--1356},
+ year={2023},
+ organization={IEEE}
+}
+@article{rosenthal2021interactive,
+ title={Interactive proofs for synthesizing quantum states and unitaries},
+ author={Rosenthal, Gregory and Yuen, Henry},
+ journal={arXiv preprint arXiv:2108.07192},
+ year={2021}
+}
+@inproceedings{holmes2020efficient,
+ title={Efficient quantum circuits for accurate state preparation of smooth, differentiable functions},
+ author={Holmes, Adam and Matsuura, Anne Y},
+ booktitle={2020 IEEE International Conference on Quantum Computing and Engineering (QCE)},
+ pages={169--179},
+ year={2020},
+ organization={IEEE}
+}
@article{bausch2019quantum,
title = {A Quantum Search Decoder for Natural Language Processing},
author = {Bausch, Johannes and Subramanian, Sathyawageeswar and Piddock, Stephen},
@@ -5474,6 +5689,50 @@ @incollection{buchi1990decision
publisher = {Springer},
pages = {425--435},
}
+@article{bergholm2005quantum,
+ title = {Quantum circuits with uniformly controlled one-qubit gates},
+ author = {Bergholm, Ville and Vartiainen, Juha J. and M\"ott\"onen, Mikko and Salomaa, Martti M.},
+ journal = {Phys. Rev. A},
+ volume = {71},
+ issue = {5},
+ pages = {052330},
+ numpages = {7},
+ year = {2005},
+ month = {May},
+ publisher = {American Physical Society},
+ doi = {10.1103/PhysRevA.71.052330}
+}
+@article{rattew2022preparing-arbitrary,
+ title={Preparing arbitrary continuous functions in quantum registers with logarithmic complexity},
+ author={Rattew, Arthur G. and Koczor, B{\'a}lint},
+ journal={arXiv preprint arXiv:2205.00519},
+ year={2022},
+ doi={10.48550/arXiv.2205.00519}
+}
+@article{mcardle2022quantum,
+ title={Quantum state preparation without coherent arithmetic},
+ author={McArdle, Sam and Gily{\'e}n, Andr{\'a}s and Berta, Mario},
+ journal={arXiv preprint arXiv:2210.14892},
+ year={2022},
+ doi={10.48550/arXiv.2210.14892}
+}
+
+@Article{araujo2021divide,
+author={Araujo, Israel F.
+and Park, Daniel K.
+and Petruccione, Francesco
+and da Silva, Adenilton J.},
+title={A divide-and-conquer algorithm for quantum state preparation},
+journal={Scientific Reports},
+year={2021},
+month={Mar},
+day={18},
+volume={11},
+number={1},
+pages={6329},
+issn={2045-2322},
+doi={10.1038/s41598-021-85474-1}
+}
@phdthesis{lenvco2015verification,
title = {Verification of Name Service Cache Daemon with DIVINE Model Checker},
author = {LEN{\v{C}}O, Milan},
@@ -5935,6 +6194,62 @@ @inproceedings{grover1996fast
pages = {212--219},
organization = {ACM},
}
+@article{grover2000synthesis,
+ title={Synthesis of quantum superpositions by quantum computation},
+ author={Grover, Lov K},
+ journal={Physical review letters},
+ volume={85},
+ number={6},
+ pages={1334},
+ year={2000},
+ publisher={APS}
+}
+@article{rosenkranz2024quantum,
+ title={Quantum state preparation for multivariate functions},
+ author={Rosenkranz, Matthias and Brunner, Eric and Marin-Sanchez, Gabriel and Fitzpatrick, Nathan and Dilkes, Silas and Tang, Yao and Kikuchi, Yuta and Benedetti, Marcello},
+ journal={arXiv preprint arXiv:2405.21058},
+ year={2024}
+}
+@article{bausch2022fast,
+ title={Fast black-box quantum state preparation},
+ author={Bausch, Johannes},
+ journal={Quantum},
+ volume={6},
+ pages={773},
+ year={2022},
+ publisher={Verein zur F{\"o}rderung des Open Access Publizierens in den Quantenwissenschaften}
+}
+@article{yuan2023optimal,
+ title={Optimal (controlled) quantum state preparation and improved unitary synthesis by quantum circuits with any number of ancillary qubits},
+ author={Yuan, Pei and Zhang, Shengyu},
+ journal={Quantum},
+ volume={7},
+ pages={956},
+ year={2023},
+ publisher={Verein zur F{\"o}rderung des Open Access Publizierens in den Quantenwissenschaften}
+}
+@article{zhang2024parallel,
+ title={Parallel quantum algorithm for {H}amiltonian simulation},
+ author={Zhang, Zhicheng and Wang, Qisheng and Ying, Mingsheng},
+ journal={Quantum},
+ volume={8},
+ pages={1228},
+ year={2024},
+ publisher={Verein zur F{\"o}rderung des Open Access Publizierens in den Quantenwissenschaften}
+}
+@article{zhang2021lowdepth,
+ title = {Low-depth quantum state preparation},
+ author = {Zhang, Xiao-Ming and Yung, Man-Hong and Yuan, Xiao},
+ journal = {Phys. Rev. Res.},
+ volume = {3},
+ issue = {4},
+ pages = {043200},
+ numpages = {14},
+ year = {2021},
+ month = {Dec},
+ publisher = {American Physical Society},
+ doi = {10.1103/PhysRevResearch.3.043200},
+}
@article{wiebe2018quantum,
title = {Quantum nearest-neighbor algorithms for machine learning},
author = {Wiebe, Nathan and Kapoor, Ashish and Svore, Krysta M},
@@ -5942,6 +6257,16 @@ @article{wiebe2018quantum
journal = {Quantum information and computation},
volume = 15,
}
+@article{sanders2019black,
+ title={Black-box quantum state preparation without arithmetic},
+ author={Sanders, Yuval R and Low, Guang Hao and Scherer, Artur and Berry, Dominic W},
+ journal={Physical review letters},
+ volume={122},
+ number={2},
+ pages={020502},
+ year={2019},
+ publisher={APS}
+}
@misc{sanderthesis,
title = {Applications of optimization to factorization ranks and quantum information theory},
author = {Gribling, Sander},
@@ -6068,3 +6393,100 @@ @book{Iske2018
publisher = {Springer},
url = {https://link.springer.com/book/10.1007/978-3-030-05228-7},
}
+
+@article{mottonen2004transformation,
+ title={Transformation of quantum states using uniformly controlled rotations},
+ author={Mikko Mottonen and Juha J. Vartiainen and Ville Bergholm and Martti M. Salomaa},
+ year={2004},
+ eprint={quant-ph/0407010},
+ archivePrefix={arXiv},
+ primaryClass={quant-ph}
+}
+
+@misc{mathur2022medical,
+ title={Medical image classification via quantum neural networks},
+ author={Natansh Mathur and Jonas Landman and Yun Yvonna Li and Martin Strahm and Skander Kazdaghli and Anupam Prakash and Iordanis Kerenidis},
+ year={2022},
+ eprint={2109.01831},
+ archivePrefix={arXiv},
+ primaryClass={quant-ph}
+}
+
+@article{graph_encoding,
+ title={Fast graph operations in quantum computation},
+ volume={93},
+ ISSN={2469-9934},
+ url={http://dx.doi.org/10.1103/PhysRevA.93.032314},
+ DOI={10.1103/physreva.93.032314},
+ number={3},
+ journal={Physical Review A},
+ publisher={American Physical Society (APS)},
+ author={Zhao, Liming and Pérez-Delgado, Carlos A. and Fitzsimons, Joseph F.},
+ year={2016},
+ month=mar
+}
+
+@inproceedings{optimalstoppingtime,
+ doi = {10.4230/LIPICS.TQC.2022.2},
+ url = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2022.2},
+ author = {Doriguello, João F. and Luongo, Alessandro and Bao, Jinge and Rebentrost, Patrick and Santha, Miklos},
+ keywords = {Quantum computation complexity, optimal stopping time, stochastic processes, American options, quantum finance, Mathematics of computing → Stochastic processes, Mathematics of computing → Markov-chain Monte Carlo methods, Theory of computation → Quantum computation theory},
+ language = {en},
+ title = {Quantum Algorithm for Stochastic Optimal Stopping Problems with Applications in Finance},
+ publisher = {Schloss Dagstuhl – Leibniz-Zentrum für Informatik},
+ year = {2022},
+ copyright = {Creative Commons Attribution 4.0 International license}
+}
+
+@article{alphatron,
+ title={Quantum Alphatron: quantum advantage for learning with kernels and noise},
+ volume={7},
+ ISSN={2521-327X},
+ url={http://dx.doi.org/10.22331/q-2023-11-08-1174},
+ DOI={10.22331/q-2023-11-08-1174},
+ journal={Quantum},
+ publisher={Verein zur F{\"o}rderung des Open Access Publizierens in den Quantenwissenschaften},
+ author={Yang, Siyi and Guo, Naixu and Santha, Miklos and Rebentrost, Patrick},
+ year={2023},
+ month=nov,
+ pages={1174}
+}
+
+@misc{allcock2023quantum,
+ title={Constant-depth circuits for Uniformly Controlled Gates and Boolean functions with application to quantum memory circuits},
+ author={Jonathan Allcock and Jinge Bao and João F. Doriguello and Alessandro Luongo and Miklos Santha},
+ year={2023},
+ eprint={2308.08539},
+ archivePrefix={arXiv},
+ primaryClass={quant-ph}
+}
+
+@book{schuld2021machine,
+ title={Machine Learning with Quantum Computers},
+ author={Schuld, M. and Petruccione, F.},
+ isbn={9783030830984},
+ series={Quantum Science and Technology},
+ url={https://books.google.jo/books?id=-N5IEAAAQBAJ},
+ year={2021},
+ publisher={Springer International Publishing}
+}
+
+@inproceedings{Tang_2019,
+ series={STOC ’19},
+ title={A quantum-inspired classical algorithm for recommendation systems},
+ url={http://dx.doi.org/10.1145/3313276.3316310},
+ DOI={10.1145/3313276.3316310},
+ booktitle={Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing},
+ publisher={ACM},
+ author={Tang, Ewin},
+ year={2019},
+ month=jun,
+ collection={STOC ’19}
+}
+
+@article{Beals_2013,
+ title={Efficient distributed quantum computing},
+ volume={469},
+ ISSN={1471-2946},
+ url={http://dx.doi.org/10.1098/rspa.2012.0686},
+ DOI={10.1098/rspa.2012.0686},
+ number={2153},
+ journal={Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences},
+ publisher={The Royal Society},
+ author={Beals, Robert and Brierley, Stephen and Gray, Oliver and Harrow, Aram W. and Kutin, Samuel and Linden, Noah and Shepherd, Dan and Stather, Mark},
+ year={2013},
+ month=may,
+ pages={20120686}
+}
\ No newline at end of file
From 974fbfe56fbe6f875a920a9717ec2ace278f4f09 Mon Sep 17 00:00:00 2001
From: Alessandro Luongo <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 18:29:18 +0800
Subject: [PATCH 09/22] Update index.Rmd with command poly
---
index.Rmd | 1 +
1 file changed, 1 insertion(+)
diff --git a/index.Rmd b/index.Rmd
index 3873ba2..34d989e 100644
--- a/index.Rmd
+++ b/index.Rmd
@@ -46,6 +46,7 @@ github-repo: "scinawa/quantumalgorithms.org"
\newcommand{\tOrd}[1]{\widetilde{\mathcal{O}}\left( #1 \right)}
+\newcommand{\poly}{\text{poly}}
From 309b55aa746f7800cc54210c02904504dbfce876 Mon Sep 17 00:00:00 2001
From: Alessandro Luongo <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 18:31:53 +0800
Subject: [PATCH 10/22] Update index.Rmd with definition of symbol for complex
numbers
---
index.Rmd | 2 ++
1 file changed, 2 insertions(+)
diff --git a/index.Rmd b/index.Rmd
index 34d989e..4436e06 100644
--- a/index.Rmd
+++ b/index.Rmd
@@ -33,6 +33,8 @@ github-repo: "scinawa/quantumalgorithms.org"
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\E}{\mathbb{E}}
+\newcommand{\C}{\mathbb{C}}
+
\newcommand{\ket}[1]{|#1\rangle}
\newcommand{\bra}[1]{\langle#1|}
From 747ee168c320fd8c180ace600c1aa19304d74695 Mon Sep 17 00:00:00 2001
From: Alessandro Luongo <2940017+Scinawa@users.noreply.github.com>
Date: Thu, 21 Nov 2024 02:44:57 +0800
Subject: [PATCH 11/22] Update data.Rmd with Alex's GRwork
---
data.Rmd | 40 ++++++++++++++++++----------------------
1 file changed, 18 insertions(+), 22 deletions(-)
diff --git a/data.Rmd b/data.Rmd
index 127299b..1ddb032 100644
--- a/data.Rmd
+++ b/data.Rmd
@@ -535,7 +535,7 @@ Let $V \in \mathbb{R}^{n \times d}$, there is an oracle that allows to perform t
The previous definition is also called *adjacency array* model. The emphasis is on the word *array*, contrary to the adjacency list model in classical algorithms (where we usually need to go through all the list of adjacency nodes for a given node, while here we can query the list as an array, and thus use superposition) [@Durr2004].
-It's important to recall that for Definition \@ref(def:oracle-access-adjacencymatrix) and \@ref(def:oracle-access-adjacencylist) we could use a $\mathsf{QRAM}$, but we also expect **not** to use a $\mathsf{QRAM}$, as there might be other efficient circuit for performing those mapping. For instance, when working with graphs (remember that a generic weighted and directed graph $G=(V,E)$ can be seen as its adjacency matrix $A\in \mathbb{R}^{|E| \times |E|}$), many algorithms call Definition \@ref(def:oracle-access-adjacencymatrix) **vertex-pair-query**, and the two mappings in Definition \@ref(def:oracle-access-adjacencylist) as **degree query** and **neighbor query**. When we have access to both queries, we call that **quantum general graph model** [@hamoudi2018quantum]. This is usually the case in all the literature for quantum algorithms for Hamiltonian simulation, graphs, or algorithms on sparse matrices.
+It is important to recall that for Definitions \@ref(def:oracle-access-adjacencymatrix) and \@ref(def:oracle-access-adjacencylist) we could use a $\mathsf{QRAM}$, but we also expect **not** to need one, as there might be other efficient circuits for performing those mappings. For instance, when working with graphs (recall that a generic weighted and directed graph $G=(V,E)$ can be seen as its adjacency matrix $A\in \mathbb{R}^{|V| \times |V|}$), many algorithms call Definition \@ref(def:oracle-access-adjacencymatrix) the **vertex-pair query**, and the two mappings in Definition \@ref(def:oracle-access-adjacencylist) the **degree query** and **neighbor query**. When we have access to both queries, we call this the **quantum general graph model** [@hamoudi2018quantum]. This is usually the case throughout the literature on quantum algorithms for Hamiltonian simulation, graphs, or sparse matrices.
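As a classical analogy (a toy sketch of ours, not from the source), the point of the adjacency *array* model is that both the degree and the $j$-th neighbor of a vertex are $O(1)$ lookups, unlike a linked adjacency list that must be traversed. The names `degree_query` and `neighbor_query` below mirror the two oracle mappings:

```python
# Hypothetical toy graph; in the quantum setting these lookups would be
# made in superposition over (v, j), which is what the *array* access allows.
adj = {0: [1, 2], 1: [0], 2: [0, 1]}

def degree_query(v):
    """Mirrors |v>|0> -> |v>|deg(v)>: an O(1) lookup of the degree."""
    return len(adj[v])

def neighbor_query(v, j):
    """Mirrors |v>|j>|0> -> |v>|j>|j-th neighbor of v>: an O(1) array read."""
    return adj[v][j]

print(degree_query(2), neighbor_query(2, 1))  # 2 1
```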
#### Bucket brigade circuits {#sec:implementation-bbrigade}
@@ -592,19 +592,16 @@ One bucket-brigade $\mathsf{QRAM}$ call of size $2^n$ and precision $\kappa$ req
We now move our attention to amplitude encoding, which was first introduced in Section \@ref(sec:amplitude-encoding). In amplitude encoding, we encode a vector of numbers in the amplitudes of a quantum state. Implementing a quantum circuit for amplitude encoding can be seen as preparing a specific quantum state whose amplitudes we know. In other words, this is a *state preparation problem* in disguise, and we can use standard state preparation methods to perform amplitude encoding. However, note that amplitude encoding is a specific instance of state preparation, where the amplitudes of the state are known classically or via an oracle. There are other state preparation problems that are not amplitude encoding, like ground state preparation, where the amplitudes of the quantum state are not known and only the Hamiltonian of the system is given. In the following, we briefly discuss the main techniques developed in the past decades for amplitude encoding.
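To make the target of amplitude encoding concrete, here is a minimal classical sketch (function name is our own) of the amplitudes that encode a data vector: pad it to a power-of-two length and $\ell_2$-normalize, so that the entries become valid amplitudes:

```python
import numpy as np

def amplitude_encode(x):
    """Return the 2^n amplitudes encoding the vector x: zero-pad up to
    the next power of two, then l2-normalize."""
    dim = 1 << int(np.ceil(np.log2(len(x))))
    padded = np.zeros(dim)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)

x = np.array([3.0, 1.0, 2.0])   # classical data of length 3, padded to 4
psi = amplitude_encode(x)
print(np.linalg.norm(psi))      # 1.0: a valid quantum state
print(psi**2)                   # measurement probabilities, proportional to x_i^2
```

The hard part, which the rest of this section addresses, is producing a *circuit* for these amplitudes; the normalization itself is the easy step.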
-
-
-
What are the lower bounds for the size and depth complexity of circuits performing amplitude encoding? Since amplitude encoding can be seen as quantum state preparation, without assuming any kind of oracle access we have a lower bound of $\Omega\left(2^n\right)$ on the size [@plesch2011quantum;@shende2004minimal]. For the depth, there is a long history of results. For example, there is a lower bound of $\Omega(\log n)$ that holds for some states (and hence puts a lower bound on algorithms performing generic state preparation), obtained using techniques from algebraic topology [@aharonov2018quantum]. Without ancilla qubits, [@plesch2011quantum] proposed a bound of $\Omega(\frac{2^n}{n})$. The bound on the depth has been refined to $\Omega(n)$, but only when arbitrarily many ancilla qubits are available [@zhang2021lowdepth]. The tightest bound is $\Omega\left( \max \{n ,\frac{4^n}{n+m} \} \right)$ (Theorem 3 of [@STY-asymptotically]), where $m$ is the number of ancilla qubits. The algorithm of [@yuan2023optimal], which we discuss later, saturates this bound.
We can also study the complexity of the problem in the oracle model. For example, if we assume oracle access to $f : \{0,1\}^n \mapsto [0,1]$, using amplitude amplification techniques on the state $\sum_x \ket{x}\left(f(x)\ket{0} + \sqrt{1-f(x)}\ket{1} \right)$, there is a quadratic improvement in the number of queries to the oracle, yielding $\widetilde{O}(\sqrt{N})$ complexity [@grover2000synthesis], where $N = 2^n$. This can be seen by imagining a vector with only one entry equal to $1$: the number of queries needed to amplify the subspace associated with the rightmost qubit scales with $\sqrt{N}$. A few years later, [@Grover2002], under mildly stronger assumptions, improved the complexity of the algorithm for a very broad class of states. This algorithm is discussed further in Section \@ref(sec:implementation-grover-rudolph).
-Alternatively, we can assume a direct oracle access to the amplitudes [@sanders2019black]. Under this assumption, we have access to an oracle storing the $i$th amplitude $\alpha_i$ with $n$ bits, (actually, they use a slightly different model, where the oracle for the amplitude $\alpha_i$ is $\ket{i}\ket{z}\mapsto \ket{i}\ket{z \oplus \alpha_i^{(n)}}$ where $\alpha_i^{(n)}=\lfloor 2^n\alpha_i \rfloor$). Ordinarily, the circuit involves the mapping $\ket{i}\ket{\alpha_i^{(n)}}\ket{0} \mapsto \ket{i}\ket{\alpha_i} \left(\sin(\theta_i)\ket{0} + \cos(\theta_i)\ket{1}\right)$, which requires control rotations and arithmetic circuits to compute the angles $\theta_i = \arcsin(\alpha_i/2^n)$. However, by substituting the arithmetic circuit by a comparator operator [@gidney2018halving;@cuccaro2004new;@luongo2024measurement], the circuit can be implemented either with $2n$ non-Clifford gates or $n$ non-Clifford gates and $n$ ancilla qubits. This scheme can even be extended to encode complex amplitudes in both Cartesian and polar forms, or apply to the root coefficient problem of real amplitudes, where we have an oracle access to the square of the amplitude $\alpha_i^2$ instead of $\alpha_i$. For positive or complex amplitudes, this algorithm involves $\frac{\pi}{4}\frac{\sqrt{N}}{\|\alpha\|_2}$ exact amplitude amplifications, so it has a runtime of $\frac{\pi}{4}\frac{t\sqrt{N}}{\|\alpha\|_2} + O(1)$ non-Clifford gates, where $t$ is the number of bits of precision used to specify an amplitude (the authors preferred to count the number of non-Clifford gates, as they are the most expesive one to implement in (most of) the error corrected architectures, and serves as a lower bound for the size complexity of a circuit. For the root coefficient problem, the runtime becomes $\frac{\pi}{4} \frac{n\sqrt{N}}{\|\alpha\|_1} + O\left(n \log \left(\frac{1}{\epsilon}\right)\right)$ non-Clifford gates. 
For certain sets of coefficients, this model can further be improved to reduce the number of ancilla qubits needed per bits of precision from a linear dependence [@sanders2019black] to a log dependence (Table 2 of [@bausch2022fast]). Also the work of [@mcardle2022quantum] doesn't use arithmetic, and uses $O(\frac{n d_\epsilon}{\mathcal{F}_{\widetilde{f}^{[N]}} })$ (where $\widetilde{f}^{[N]}$ is called "discretized $\ell_2$-norm filling-fraction", and $d_\epsilon$ is the degree of a polynomial approximation that depends on $\epsilon$, the approximation error in the quantum state ) and uses only $4$ ancilla qubits. define $f$ before here.
+Alternatively, we can assume direct oracle access to the amplitudes [@sanders2019black]. Under this assumption, we have access to an oracle storing the $i$th amplitude $\alpha_i$ with $n$ bits (more precisely, they use a slightly different model, where the oracle for the amplitude $\alpha_i$ is $\ket{i}\ket{z}\mapsto \ket{i}\ket{z \oplus \alpha_i^{(n)}}$ with $\alpha_i^{(n)}=\lfloor 2^n\alpha_i \rfloor$). Ordinarily, the circuit involves the mapping $\ket{i}\ket{\alpha_i^{(n)}}\ket{0} \mapsto \ket{i}\ket{\alpha_i^{(n)}} \left(\sin(\theta_i)\ket{0} + \cos(\theta_i)\ket{1}\right)$, which requires controlled rotations and arithmetic circuits to compute the angles $\theta_i = \arcsin(\alpha_i^{(n)}/2^n)$. However, by substituting the arithmetic circuit with a comparator operator [@gidney2018halving;@cuccaro2004new;@luongo2024measurement], the circuit can be implemented either with $2n$ non-Clifford gates or with $n$ non-Clifford gates and $n$ ancilla qubits. This scheme can even be extended to encode complex amplitudes in both Cartesian and polar forms, or applied to the root coefficient problem of real amplitudes, where we have oracle access to the square of the amplitude $\alpha_i^2$ instead of $\alpha_i$. For positive or complex amplitudes, this algorithm involves $\frac{\pi}{4}\frac{\sqrt{N}}{\|\alpha\|_2}$ rounds of exact amplitude amplification, so it has a runtime of $\frac{\pi}{4}\frac{t\sqrt{N}}{\|\alpha\|_2} + O(1)$ non-Clifford gates, where $t$ is the number of bits of precision used to specify an amplitude (the authors preferred to count the number of non-Clifford gates, as these are the most expensive to implement in most error-corrected architectures, and their count serves as a lower bound for the size complexity of a circuit). For the root coefficient problem, the runtime becomes $\frac{\pi}{4} \frac{n\sqrt{N}}{\|\alpha\|_1} + O\left(n \log \left(\frac{1}{\epsilon}\right)\right)$ non-Clifford gates.
For certain sets of coefficients, this model can be further improved, reducing the number of ancilla qubits needed per bit of precision from a linear dependence [@sanders2019black] to a logarithmic one (Table 2 of [@bausch2022fast]). The work of [@mcardle2022quantum] also assumes an oracle returning the amplitudes of the state we want to build, $\frac{1}{\mathcal{N}_f}\sum_{x=0}^{N-1}f(x)\ket{x}$ (where $\mathcal{N}_f$ is the usual normalization factor); it avoids coherent arithmetic, uses $O(\frac{n d_\epsilon}{\mathcal{F}_{\widetilde{f}^{[N]}} })$ gates (where $\mathcal{F}_{\widetilde{f}^{[N]}}$ is the "discretized $\ell_2$-norm filling-fraction" of $f$, and $d_\epsilon$ is the degree of a polynomial approximation that depends on $\epsilon$, the approximation error of the quantum state), and requires only $4$ ancilla qubits.
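The comparator idea can be sanity-checked classically (a toy sketch of ours): with an $n$-bit uniform register, the fraction of branches where the register value falls below $\alpha_i^{(n)} = \lfloor 2^n\alpha_i \rfloor$ is $\alpha_i^{(n)}/2^n \approx \alpha_i$, which is why a comparison circuit can stand in for the $\arcsin$ arithmetic:

```python
import numpy as np

def marked_fraction(alpha, n):
    """Fraction of n-bit values z with z < floor(2^n * alpha). A quantum
    comparator marks exactly these branches of a uniform superposition,
    imprinting ~alpha on a flag qubit without any arcsin arithmetic."""
    threshold = int(np.floor((1 << n) * alpha))   # alpha^{(n)}
    z = np.arange(1 << n)
    return np.count_nonzero(z < threshold) / (1 << n)

print(marked_fraction(0.3, 8))   # 0.296875, i.e. 0.3 up to 2^-8 discretization
```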
-
+
@@ -640,7 +637,7 @@ Meanwhile, if trade-offs are allowed for state preparation, we can further impro
-
+
In addition to the algorithms [@STY-asymptotically;@rosenthal2021query], trade-offs can introduce additional circuits that can achieve the lower bound of the depth complexity. For example, using $O \left(2^n\right)$ ancilla qubits, we can perform amplitude encoding with circuit depth $\Theta \left( n \right)$, which further relaxes the connectivity requirements for M-QSP [@zhang2022quantum]. This technique also improves upon sparse state preparation, with a circuit depth $\Theta \left( \log k N \right)$, where $k$ is the sparsity. This represents an exponential improvement in circuit depth over previous works [@gleinig2021efficient;@de2022double]. This leads to a deterministic algorithm that achieves the lower bounds in circuit depth if we allow $m$ ancilla qubits, which is summarized in the following theorem.
@@ -659,7 +656,7 @@ $$ \ket{i}\ket{0} \mapsto \ket{i}\ket{\psi_i}, \forall i \in \{0,1\}^k, $$
For any $m > 0$, any $n$-qubit quantum state $\ket{\psi_v}$ can be generated by a quantum circuit using single qubit gates and CNOT gates, of depth $O\left(n+ \frac{2^n}{n+m}\right)$ and size $O(2^n)$ with $m$ ancillary qubits. These bounds are optimal for any $m \geq 0$.
```
-
+
There are also other trade-off techniques that can be used, like probabilistic state preparation via measurements [@zhang2021lowdepth] or approximate state preparation problem [@zhang2024parallel]. However, these techniques are beyond the scope of this chapter and will not be discussed. Interested readers can refer to the respective articles.
@@ -708,7 +705,7 @@ Finally we note that in [@PrakashPhD] (Section 2.2.1), Prakash shows subroutines
-#### Grover-Rudolph{#sec:implementation-grover-rudolph}
+#### Grover-Rudolph state preparation, its problems, and the solutions{#sec:implementation-grover-rudolph}
In [@Grover2002] the authors discussed how to efficiently create quantum states proportional to functions satisfying a certain integrability condition, i.e., the function considered must be square-integrable. An example of functions with this property are [log-concave probability distributions](https://sites.stat.washington.edu/jaw/RESEARCH/TALKS/Toulouse1-Mar-p1-small.pdf). Let $p(x)$ be a probability distribution over $\mathbb{R}$. We denote by $x_i^{(n)}$ the points of the discretization of the domain, i.e., $x_i^{(n)} = -w + 2w \frac{i}{2^n}$ for $i=0,\dots,2^n$, where $[-w,w]$ is the window of discretization, for a constant $w\in\mathbb{R}_+$. In this case, $n$ acts as the parameter that controls how coarse or fine the discretization is. See the appendix for more information about measure theory and probability distributions. We want to create the quantum state
\begin{align}
@@ -745,12 +742,12 @@ After the rotation, we undo the mapping that gives us the $\theta_i$. These oper
Computing the mapping for the angles $\theta_i$ can be done efficiently only for square-integrable probability distributions, i.e., for probability distributions for which the integral in Equation \@ref(eq:grover-rudolph-rotation) can be approximated efficiently. Fortunately, this is the case for most of the probability distributions that we care about.
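The angle computation can be sketched classically (our own sketch, using the convention $\cos\theta = \sqrt{p_{\text{left}}/p_{\text{region}}}$ for each binary split; the Grover-Rudolph construction may differ up to convention). Level $m$ splits each of the $2^m$ regions in half, and applying the rotations level by level reproduces the amplitudes $\sqrt{p_i}$:

```python
import numpy as np

def grover_rudolph_angles(p):
    """Classically compute the rotation angles for a discretized
    distribution p of length 2^n: at level m, each of the 2^m regions is
    halved, and theta = arccos(sqrt(p_left / p_region))."""
    n = int(np.log2(len(p)))
    angles = []
    for m in range(n):
        size = len(p) >> m                     # points per region at level m
        level = []
        for i in range(1 << m):
            region = p[i * size:(i + 1) * size]
            tot = region.sum()
            f = region[: size // 2].sum() / tot if tot > 0 else 0.0
            level.append(np.arccos(np.sqrt(f)))
        angles.append(level)
    return angles

def state_from_angles(angles):
    """Apply the rotations level by level; returns the 2^n amplitudes."""
    amps = np.array([1.0])
    for level in angles:
        new = np.empty(2 * len(amps))
        for i, (a, th) in enumerate(zip(amps, level)):
            new[2 * i], new[2 * i + 1] = a * np.cos(th), a * np.sin(th)
        amps = new
    return amps

p = np.array([0.1, 0.2, 0.3, 0.4])
amps = state_from_angles(grover_rudolph_angles(p))
print(np.allclose(amps, np.sqrt(p)))   # True: amplitudes are sqrt(p_i)
```

The quantum circuit performs exactly this cascade, but with the rotation at level $m$ controlled on the first $m$ qubits.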
-##### The problem (and solutions) with Grover-Rudolph{#sec:implementation-problem-gr}
+
Creating quantum sample access to a probability distribution is a task often used to obtain quadratic speedups. A recent work [@herbert2021no] pointed out that in certain cases, the time needed to prepare the oracle used to create $\ket{\psi}$ might cancel the benefits of the speedup. This is the case when we don't have an analytical formulation for integrals of the form $\int_a^b p(x)dx$, and we need to resort to numerical methods.
Often in quantum algorithms we want to estimate expected values of integrals of the form $\mathbb{E}[x] := \int_x x p(x) dx$ (e.g., see Chapter \@ref(chap-montecarlo)). Following a garbage-in-garbage-out argument, [@herbert2021no] was able to show that if we require a precision $\epsilon$ in $\mathbb{E}[x]$, we also need to require the same kind of precision in the state preparation performed by our quantum computer. In particular, in our quantum Monte Carlo algorithms we have to create a state $\ket{\psi}$ encoding a (discretized) version of $p(x)$ as $\ket{\psi}=\sum_{i=0}^{2^n-1} \sqrt{p(i)}\ket{i}$.
-Let's define $\mu$ as the mean of a probability distribution $p(x)$ and $\widehat{\mu}=\mathbb{E(x)}$ be an estimate of $\mu$. The error of choice for this kind of problem (which comes from applications that we will see in Section \@ref(chap-montecarlo) ) is called the Root Mean Square Error (RMSE), i.e. $\widehat{\epsilon} = \sqrt{\mathbb{E}(\widehat{\mu}- \mu)}$.
+Let us define $\mu$ as the mean of a probability distribution $p(x)$ and $\widehat{\mu}$ as an estimate of $\mu$. The error of choice for this kind of problem (which comes from applications that we will see in Chapter \@ref(chap-montecarlo)) is the root mean square error (RMSE), i.e., $\widehat{\epsilon} = \sqrt{\mathbb{E}[(\widehat{\mu}- \mu)^2]}$.
The proof shows that an error of $\epsilon$ in the first rotation of the GR algorithm, due to an error in the computation of the first $f(i)$, propagates to the final error in the expected value $\mu$. To avoid this, we should compute $f(i)$ with accuracy at least $\epsilon$. The best classical algorithms allow us to perform this step at a cost of $O(\frac{1}{\epsilon^2})$, thus canceling the benefits of a quadratic speedup. Mitigating this problem is currently an active area of research.
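The $O(\frac{1}{\epsilon^2})$ classical cost can be illustrated numerically (a toy setup of ours): the RMSE of a sample-mean estimator scales as $1/\sqrt{M}$ in the number of samples $M$, so reaching RMSE $\epsilon$ requires $M = O(\frac{1}{\epsilon^2})$ samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse_of_mean(num_samples, trials=2000):
    """Empirical RMSE of the sample-mean estimator of a standard
    Gaussian (true mean 0), averaged over many independent trials."""
    estimates = rng.standard_normal((trials, num_samples)).mean(axis=1)
    return np.sqrt(np.mean(estimates**2))

# RMSE ~ 1/sqrt(M): quadrupling the samples roughly halves the error.
print(rmse_of_mean(100) / rmse_of_mean(400))   # close to 2
```

This is exactly the scaling that cancels against the quadratic quantum speedup when the oracle itself must be built with classical numerical integration.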
@@ -762,19 +759,23 @@ The proof shows that an error of $\epsilon$ in the first rotation of the GR algo
+
+
+
-If we resrict ourselves to considering loading probabilities from a Gaussian distributions then we can use the following approaches.
+
-##### The solution: Pre-computation
+However, if we restrict ourselves to loading probabilities from a Gaussian distribution, then we can retain the quadratic speedup of the GR algorithm. This is because, when we create quantum sample access to the Gaussian distribution, we must compute integrals of the form
-TODO SAY THAT WE DO GAUSSIAN THINGS.
-We must compute integrals of the form
+
\begin{align*}
- \int_{x_i^{(m)}}^{x_{i+1}^{(m)}}\frac{1}{\sigma\sqrt{\pi}}e^{-x^2/\sigma^2}\text{d}x = \int_{x_i^{(m)}/\sigma}^{x_{i+1}^{(m)}/\sigma}\frac{1}{\sqrt{\pi}}e^{-x^2}\text{d}x
+ I_{i,m} \left( \sigma \right) = \int_{x_i^{(m)}}^{x_{i+1}^{(m)}}\frac{1}{\sigma\sqrt{\pi}}e^{-x^2/\sigma^2}\text{d}x = \int_{x_i^{(m)}/\sigma}^{x_{i+1}^{(m)}/\sigma}\frac{1}{\sqrt{\pi}}e^{-x^2}\text{d}x \,,
\end{align*}
-for $x_i^{(m)} = -w\sigma + 2w\sigma\frac{i}{2^m}$ with $i=0,\dots,2^m$ and $m=1,\dots,n$. But this is equivalent to computing $\int_{x_i^{(m)}}^{x_{i+1}^{(m)}}\frac{1}{\sqrt{\pi}}e^{-x^2}\text{d}x$ for $x_i^{(m)} = -w + 2w\frac{i}{2^m}$, i.e., for $\sigma=1$, which can be done beforehand with high precision and classically stored. The above iterative construction is thus efficient.
+where the second equality is obtained through the substitution $x \mapsto \frac{x}{\sigma}$ in the integral, $m = 1,\, \dots,\, n$ determines the size $\frac{1}{2^m}$ of the interval partition, $i = 0,\, \dots,\, 2^m$ indexes the interval points, $\sigma$ is the standard deviation of the Gaussian distribution, and $w$ determines the end point of the integration, which is chosen such that the interval points $x_i^{(m)} = w\sigma \left( \frac{i}{2^{m-1}}-1\right)$ are linear in $\sigma$. By this choice of interval points, $I_{i,m} \left(\sigma \right) = I_{i,m} \left( 1 \right)$. Therefore, there is only one set of integrals to be evaluated for all values of $\sigma$, and we can store the integrals classically to high precision. This iterative construction is thus efficient, retaining the quadratic speedup of the GR algorithm.
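The $\sigma$-invariance $I_{i,m}(\sigma) = I_{i,m}(1)$ can be checked directly, since the Gaussian mass between grid points reduces to differences of the error function (a standard identity; the window $w=3$ below is an arbitrary choice of ours):

```python
from math import erf

def I(i, m, sigma, w=3.0):
    """I_{i,m}(sigma): integral of exp(-x^2/sigma^2)/(sigma*sqrt(pi))
    between consecutive grid points x_j = w*sigma*(j/2^(m-1) - 1),
    evaluated as erf(x/sigma)/2 differences."""
    x = lambda j: w * sigma * (j / 2 ** (m - 1) - 1)
    return 0.5 * (erf(x(i + 1) / sigma) - erf(x(i) / sigma))

# Because the grid scales with sigma, sigma cancels inside erf:
print(abs(I(3, 4, 2.5) - I(3, 4, 1.0)) < 1e-12)   # True
```

Hence one precomputed table of $\sigma=1$ integrals serves every Gaussian in the family.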
+
+
@@ -812,11 +813,6 @@ for $x_i^{(m)} = -w\sigma + 2w\sigma\frac{i}{2^m}$ with $i=0,\dots,2^m$ and $m=1
-
-
-
-
-
#### KP-Trees{#sec:implementation-KPtrees}
TODO need some introduction
From b7ada14e994d9b735677a4ded30f591c25431e83 Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Wed, 20 Nov 2024 19:49:13 +0100
Subject: [PATCH 12/22] Add decomposedlookups.tex
---
algpseudocode/decomposedlookups.tex | 174 ++++++++++++++++++++++++++++
1 file changed, 174 insertions(+)
create mode 100644 algpseudocode/decomposedlookups.tex
diff --git a/algpseudocode/decomposedlookups.tex b/algpseudocode/decomposedlookups.tex
new file mode 100644
index 0000000..3f3abd3
--- /dev/null
+++ b/algpseudocode/decomposedlookups.tex
@@ -0,0 +1,174 @@
+\documentclass[figure]{standalone}
+\usepackage{amsmath}
+\usepackage{amsthm}
+\usepackage{amsfonts}
+\usepackage{braket}
+\usepackage{quantikz}
+\usetikzlibrary{external}
+\usepackage{graphicx}
+\usepackage{hyperref}
+% \usepackage{tikz}
+\usetikzlibrary{patterns}
+\usepackage{calc}
+\usepackage[ruled,vlined]{algorithm2e}
+\usepackage[nopatch]{microtype}
+
+
+
+
+
+% \usepackage{xparse}
+\tikzset{
+ gateX/.style={
+ append after command={
+ \pgfextra {
+ \node at ([shift={(-0.01,-0.01)}] \tikzlastnode.north east) {?};
+ }
+ }
+ },
+ gateXbottom/.style={
+ append after command={
+ \pgfextra {
+ \node at ([shift={(-0.01,-0.01)}] \tikzlastnode.north east) {?};
+ % \node[anchor=north] at (\tikzlastnode.south ) {#1};
+ }
+ }
+ }
+}
+\newcommand{\Oplus}{\ensuremath{\vcenter{\hbox{\scalebox{1.5}{$\oplus$}}}}}
+
+\DeclareExpandableDocumentCommand{\gateX}{O{}m}{%
+ |[gateX,#1]| {#2} \qw
+}
+
+\DeclareExpandableDocumentCommand{\gateXbottom}{O{}m}{%
+ |[gateXbottom={#2},#1]| {#2} \qw
+}
+
+
+\usepackage{tikz}
+\usetikzlibrary{backgrounds}
+\usetikzlibrary{arrows}
+\usetikzlibrary{shapes,shapes.geometric,shapes.misc}
+
+% this style is applied by default to any tikzpicture included via \tikzfig
+\tikzstyle{tikzfig}=[baseline=-0.25em,scale=0.5]
+
+% these are dummy properties used by TikZiT, but ignored by LaTex
+\pgfkeys{/tikz/tikzit fill/.initial=0}
+\pgfkeys{/tikz/tikzit draw/.initial=0}
+\pgfkeys{/tikz/tikzit shape/.initial=0}
+\pgfkeys{/tikz/tikzit category/.initial=0}
+
+% standard layers used in .tikz files
+\pgfdeclarelayer{edgelayer}
+\pgfdeclarelayer{nodelayer}
+\pgfsetlayers{background,edgelayer,nodelayer,main}
+
+% style for blank nodes
+\tikzstyle{none}=[inner sep=0mm]
+
+% include a .tikz file
+\newcommand{\tikzfig}[1]{%
+{\tikzstyle{every picture}=[tikzfig]
+\IfFileExists{#1.tikz}
+ {\input{#1.tikz}}
+ {%
+ \IfFileExists{./figures/#1.tikz}
+ {\input{./figures/#1.tikz}}
+ {\tikz[baseline=-0.5em]{\node[draw=red,font=\color{red},fill=red!10!white] {\textit{#1}};}}%
+ }}%
+}
+
+% the same as \tikzfig, but in a {center} environment
+\newcommand{\ctikzfig}[1]{%
+\begin{center}\rm
+ \tikzfig{#1}
+\end{center}}
+
+% fix strange self-loops, which are PGF/TikZ default
+\tikzstyle{every loop}=[]
+
+% TiKZ style file generated by TikZiT. You may edit this file manually,
+% but some things (e.g. comments) may be overwritten. To be readable in
+% TikZiT, the only non-comment lines must be of the form:
+% \tikzstyle{NAME}=[PROPERTY LIST]
+
+% TiKZ style file generated by TikZiT. You may edit this file manually,
+% but some things (e.g. comments) may be overwritten. To be readable in
+% TikZiT, the only non-comment lines must be of the form:
+% \tikzstyle{NAME}=[PROPERTY LIST]
+
+% Node styles
+% TiKZ style file generated by TikZiT. You may edit this file manually,
+% but some things (e.g. comments) may be overwritten. To be readable in
+% TikZiT, the only non-comment lines must be of the form:
+% \tikzstyle{NAME}=[PROPERTY LIST]
+
+% Node styles
+\tikzstyle{rectangle black}=[fill={rgb,255: red,64; green,64; blue,64}, draw=black, shape=rectangle]
+\tikzstyle{new style 0}=[fill=black, draw=black, shape=circle]
+\tikzstyle{CNOT}=[fill=none, draw=black, shape=circle, tikzit draw=black, new atom]
+
+% Edge styles
+\tikzstyle{dashed edge}=[-, dashed, dash pattern=on 4mm off 2mm]
+\tikzstyle{thick edge}=[-, fill={rgb,255: red,64; green,64; blue,64}, thick]
+
+\begin{document}
+\begin{quantikz}[
+ font=\footnotesize,
+ row sep={0.6cm,between origins},
+ column sep=0.5cm,
+ classical gap=0.1cm,
+ wire types={q,q,n,q,n,n,q,q,q,q,q,q}
+]
+ \lstick{$a_2$}&\octrl{1}&\octrl{1}&\octrl{1} &\octrl{1}&\ctrl{1}&\ctrl{1} &\ctrl{1}&\ctrl{1}&\rstick{$a_2$}\\
+ \lstick{$a_1$}&\octrl{2}&\octrl{2}&\ctrl{2} &\ctrl{2}&\octrl{2}&\octrl{2} &\ctrl{2}&\ctrl{2}&\rstick{$a_1$}\\
+ &&& &&& &&&\\
+ \lstick{$a_0$}&\octrl{8}&\ctrl{8}&\octrl{8} &\ctrl{8}&\octrl{8}&\ctrl{8} &\octrl{8}&\ctrl{8}&\rstick{$a_0$}\\
+ &&& &&& &&&\\
+ &&& &&& &&&\\
+ \lstick[6]{$d$}&\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\\
+ &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\\
+ &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\\
+ &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\\
+ &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\\
+ % &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\gateX{\Oplus} &\gateX{\Oplus}&\gateX{\Oplus}&\\
+ &\gateXbottom{\Oplus}&\gateXbottom{\Oplus}&\gateXbottom{\Oplus}&\gateXbottom{\Oplus}&\gateXbottom{\Oplus}&\gateXbottom{\Oplus}&\gateXbottom{\Oplus}&\gateXbottom{\Oplus}&\\
+ \setwiretype{n}&\push{T_0}&\push{T_1}&\push{T_2}&\push{T_3}&\push{T_4}&\push{T_5}&\push{T_6}&\push{T_7}&\\
+\end{quantikz}=\begin{quantikz}[
+ font=\footnotesize,
+ row sep={0.6cm,between origins},
+ column sep=0.2cm,
+ classical gap=0.1cm,
+ wire types={q,q,n,q,n,n,q,q,q,q,q,q}
+]
+ \lstick{$a_2$}&\octrl{1}&& &&\octrl{1}& &\octrl{1}&& &&\octrl{1}&\push{\cdots} &\ctrl{1}&& &&\ctrl{1}& &\ctrl{1}&& &&\ctrl{1}&\\
+ \lstick{$a_1$}&\octrl{1}&& &&\octrl{1}& &\octrl{1}&& &&\octrl{1}&\push{\cdots} &\ctrl{1}&& &&\ctrl{1}& &\ctrl{1}&& &&\ctrl{1}&\\
+ &&\ctrl{1}\setwiretype{q}& &\ctrl{1}&&\setwiretype{n} &&\ctrl{1}\setwiretype{q}& &\ctrl{1}&&\setwiretype{n}\push{\cdots} &&\ctrl{1}\setwiretype{q}& &\ctrl{1}&&\setwiretype{n} &&\ctrl{1}\setwiretype{q}& &\ctrl{1}&&\setwiretype{n}\\
+ \lstick{$a_0$}&&\octrl{1}& &\octrl{1}&& &&\ctrl{1}& &\ctrl{1}&&\push{\cdots} &&\octrl{1}& &\octrl{1}&& &&\ctrl{1}& &\ctrl{1}&&\\
+ &&&\ctrl{7}\setwiretype{q} &&\setwiretype{n}& &&&\ctrl{7}\setwiretype{q} &&\setwiretype{n}&\push{\cdots} &&&\ctrl{7}\setwiretype{q} &&\setwiretype{n}& &&&\ctrl{7}\setwiretype{q} &&\setwiretype{n}&\\
+ &&& &&& &&& &&& &&& &&& &&& &&&\\
+ &&&\gateX{\Oplus} &&& &&&\gateX{\Oplus} &&&\push{\cdots} &&& \gateX{\Oplus}&&& &&&\gateX{\Oplus} &&&\\
+ &&&\gateX{\Oplus} &&& &&&\gateX{\Oplus} &&&\push{\cdots} &&& \gateX{\Oplus}&&& &&&\gateX{\Oplus} &&&\\
+ &&&\gateX{\Oplus} &&& &&&\gateX{\Oplus} &&&\push{\cdots} &&& \gateX{\Oplus}&&& &&&\gateX{\Oplus} &&&\\
+ &&&\gateX{\Oplus} &&& &&&\gateX{\Oplus} &&&\push{\cdots} &&& \gateX{\Oplus}&&& &&&\gateX{\Oplus} &&&\\
+ &&&\gateX{\Oplus} &&& &&&\gateX{\Oplus} &&&\push{\cdots} &&& \gateX{\Oplus}&&& &&&\gateX{\Oplus} &&&\\
+ &&&\gateX{\Oplus} &&& &&&\gateX{\Oplus} &&&\push{\cdots} &&& \gateX{\Oplus}&&& &&&\gateX{\Oplus} &&&\\
+ \setwiretype{n}&&&\push{T_0} &&& &&&\push{T_1} &&&\push{\cdots} &&& \push{T_6}&&& &&&\push{T_7} &&&
+\end{quantikz}
+\end{document}
+
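The quantikz circuit added above draws a QRAM-style table lookup: each column is selected by one pattern of the 3-bit address register $(a_2, a_1, a_0)$ and XORs a table entry $T_a$ into the data register (reading the `\gateX{\Oplus}` targets as multi-target XOR gates, which is our assumption here). A minimal classical sketch of that action, with an illustrative function name `lookup_xor` and made-up table entries:

```python
# Classical action of the multiplexed-XOR circuit: the 3-bit address
# register selects one table entry T_a, which is XORed into the data
# register. The 8 six-bit table entries below are illustrative only.

def lookup_xor(address: int, data: int, table: list[int]) -> int:
    """XOR the table entry selected by `address` into `data`."""
    return data ^ table[address]

# One 6-bit word per column T_0 .. T_7 of the circuit.
T = [0b000000, 0b000001, 0b000010, 0b000100,
     0b001000, 0b010000, 0b100000, 0b111111]

# Starting from data = 0, the lookup writes T_a into the data register.
assert all(lookup_xor(a, 0, T) == T[a] for a in range(8))

# Applying the same lookup twice undoes it (XOR is an involution),
# mirroring the fact that the circuit is its own inverse.
assert all(lookup_xor(a, lookup_xor(a, d, T), T) == d
           for a in range(8) for d in (0b010101, 0b101010))
```

The right-hand side of the drawn equality decomposes each fully-controlled column into a cascade of partial controls; classically both sides compute the same `lookup_xor` map.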
From 2079d134f0e856fbe9af336d4482359b06bb319d Mon Sep 17 00:00:00 2001
From: Scinawa <2940017+Scinawa@users.noreply.github.com>
Date: Sun, 24 Nov 2024 13:14:03 +1100
Subject: [PATCH 13/22] Add tex and png files for KP-trees and refactoring of
data.Rmd with minor changes on intro
---
algpseudocode/KP-trees-example.png | Bin 0 -> 25662 bytes
algpseudocode/KP-trees_RPhase.png | Bin 0 -> 26438 bytes
algpseudocode/KP-trees_RY.png | Bin 0 -> 25071 bytes
algpseudocode/oracle_models.png | Bin 0 -> 31786 bytes
algpseudocode/oracle_models.tex | 2 +-
algpseudocode/quantum_architecture.png | Bin 0 -> 31372 bytes
data.Rmd | 129 ++++++-------------------
intro.Rmd | 32 +++---
8 files changed, 47 insertions(+), 116 deletions(-)
create mode 100644 algpseudocode/KP-trees-example.png
create mode 100644 algpseudocode/KP-trees_RPhase.png
create mode 100644 algpseudocode/KP-trees_RY.png
create mode 100644 algpseudocode/oracle_models.png
create mode 100644 algpseudocode/quantum_architecture.png
diff --git a/algpseudocode/KP-trees-example.png b/algpseudocode/KP-trees-example.png
new file mode 100644
index 0000000000000000000000000000000000000000..0c2668d1718e6729e1a8f645a761e8ba6d3d50d6
GIT binary patch
literal 25662
[base85-encoded binary PNG data omitted]
diff --git a/algpseudocode/KP-trees_RPhase.png b/algpseudocode/KP-trees_RPhase.png
new file mode 100644
index 0000000000000000000000000000000000000000..de40966be2429cd64a6a52d2afff3a6facdb713e
GIT binary patch
literal 26438
[base85-encoded binary PNG data omitted]