18 changes: 9 additions & 9 deletions metalearning_qaoa/metalearning_qaoa.ipynb
@@ -72,7 +72,7 @@
"\n",
"Created : 2020-Feb-06\n",
"\n",
"Last updated : 2020-Mar-02"
"Last updated : 2020-Apr-09"
]
},
{
@@ -175,11 +175,11 @@
"source": [
"The QAOA ansatz consists of repeated applications of a mixer Hamiltonian $\\hat{H}_M$ and the cost Hamiltonian $\\hat{H}_C$. The total applied unitary is\n",
"$$\\hat{U}(\\eta,\\gamma) = \\prod_{j=1}^{p}e^{-i\\eta_{j}\\hat{H}_M}e^{-i\\gamma_{j} \\hat{H}_C},$$\n",
"where $p$ is the number of timesthe mixer and cost are applied; the parameters $\\eta_j, \\gamma_j$ are to be optimized to produce a bitstring of minimal energy with respect to $\\hat{H}_C$.\n",
"where $p$ is the number of times the mixer and cost are applied; the parameters $\\eta_j, \\gamma_j$ are to be optimized to produce a bitstring of minimal energy with respect to $\\hat{H}_C$.\n",
"\n",
"One traditional family of Hamiltonians used in QAOA are the Ising models. These are defined as\n",
"$$\\hat{H}_\\mathrm{P}=\\sum_i h_i \\hat{Z}^{(i)}+\\sum_{i,j} J_{ij} \\hat{Z}^{(i)}\\hat{Z}^{(j)}.$$\n",
"There is a one-to-one mapping between weighted graphs and Ising models: $h_i$ can be thought of as the weight of a graph node $i$ and $J_{ij}$ can be thought of as the weight of a graph edge between nodes $i$ and $j$. In applications such as [MaxCut](https://en.wikipedia.org/wiki/Maximum_cut), we have $h_i = 0$ and $J_{ij} = 1 \\forall i, j$. We this define a function that takes a graph and outputs the corresponding Ising model:"
"There is a one-to-one mapping between weighted graphs and Ising models: $h_i$ can be thought of as the weight of a graph node $i$ and $J_{ij}$ can be thought of as the weight of a graph edge between nodes $i$ and $j$. In applications such as [MaxCut](https://en.wikipedia.org/wiki/Maximum_cut), we have $h_i = 0$ and $J_{ij} = 1$ for all indices $i$ and $j$. The importance of this graph correspondence motivates us to define the following function, which takes a graph and returns the corresponding Ising model:"
]
},
{
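For readers skimming the diff, a minimal sketch of such a graph-to-Ising helper may be useful. This is not the notebook's own cell; the function name `graph_to_ising`, the use of `networkx` graphs, and the default unit edge weights (the MaxCut case) are assumptions for illustration only.

```python
# Minimal sketch (assumed helper, not the notebook's own function):
# map a weighted graph onto the Ising cost Hamiltonian
#   H = sum_i h_i Z_i + sum_{ij} J_ij Z_i Z_j
import cirq
import networkx as nx

def graph_to_ising(graph: nx.Graph) -> cirq.PauliSum:
    """Build the Ising cost Hamiltonian corresponding to `graph`."""
    qubits = {node: cirq.GridQubit(0, i) for i, node in enumerate(graph.nodes)}
    ham = cirq.PauliSum()
    # Node weights h_i (zero in the MaxCut setting).
    for node, data in graph.nodes(data=True):
        ham += data.get('weight', 0.0) * cirq.Z(qubits[node])
    # Edge weights J_ij (one in the MaxCut setting).
    for u, v, data in graph.edges(data=True):
        ham += data.get('weight', 1.0) * cirq.Z(qubits[u]) * cirq.Z(qubits[v])
    return ham

# Example: a 3-node ring graph with unit weights (MaxCut case).
print(graph_to_ising(nx.cycle_graph(3)))
```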
@@ -294,7 +294,7 @@
" ops = inputs[1]\n",
" state = inputs[2]\n",
" params = inputs[3]\n",
" prev_output = inputs[3]\n",
" prev_output = inputs[4]\n",
" joined = tf.keras.layers.concatenate([state, params, prev_output])\n",
" shared = self.shared(joined)\n",
" s_inp = self.state(shared)\n",
@@ -421,15 +421,15 @@
"\n",
"# Our model will output it's parameter guesses along with the loss value that is\n",
"# computed over them. This way we can use the model to guess parameters later on\n",
"model = tf.keras.Model(inputs=[state_inp, params_inp, exp_inp, op_inp, circuit_inp],\n",
"model = tf.keras.Model(inputs=[circuit_inp, op_inp, state_inp, params_inp, exp_inp],\n",
" outputs=[\n",
" output_0[3], output_1[3], output_2[3], output_3[3], output_4[3],\n",
" full_loss\n",
" ])\n",
"model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),\n",
" loss=value_loss, loss_weights=[0, 0, 0, 0, 0, 1])\n",
"\n",
"model.fit(x=[initial_state, initial_params, initial_exp, ops_tensor, circuit_tensor],\n",
"model.fit(x=[circuit_tensor, ops_tensor, initial_state, initial_params, initial_exp],\n",
" y=[\n",
" np.zeros((N_POINTS, 1)),\n",
" np.zeros((N_POINTS, 1)),\n",
@@ -473,7 +473,7 @@
"initial_exp = np.zeros((N_POINTS // 2, 25)).astype(np.float32)\n",
"\n",
"out1, out2, out3, out4, out5, _ = model(\n",
" [initial_state, initial_guesses, initial_exp, ops_tensor, circuit_tensor])\n",
" [circuit_tensor, ops_tensor, initial_state, initial_guesses, initial_exp])\n",
"\n",
"one_vals = tf.reduce_mean(tfq.layers.Expectation()(\n",
" circuit_tensor,\n",
@@ -564,11 +564,11 @@
"plt.imshow(output_vals)\n",
"\n",
"guess_0, guess_1, guess_2, guess_3, guess_4, _ = model([\n",
" tfq.convert_to_tensor([test_graph_circuit]),\n",
" tfq.convert_to_tensor([[test_graph_op]]),\n",
" np.zeros((1, 25)).astype(np.float32),\n",
" np.zeros((1, 2)).astype(np.float32),\n",
" np.zeros((1, 25)).astype(np.float32),\n",
" tfq.convert_to_tensor([[test_graph_op]]),\n",
" tfq.convert_to_tensor([test_graph_circuit]), \n",
"])\n",
"all_guesses = [guess_0, guess_1, guess_2, guess_3, guess_4]\n",
"all_guesses = [list(a.numpy()[0]) for a in all_guesses]\n",