This repository has been archived by the owner on Aug 18, 2023. It is now read-only.

QMHL refactor #74

Merged (30 commits) on Aug 25, 2021
Commits (30)
b2fd115
initial qmhl implementation
zaqqwerty Aug 16, 2021
b2051a8
added thetas grad
zaqqwerty Aug 16, 2021
9beec14
start implementing phis
zaqqwerty Aug 16, 2021
1ee0515
Merge branch 'main' into 18_qmhl_refactor
zaqqwerty Aug 18, 2021
603a0c7
removed old QMHL routines
zaqqwerty Aug 18, 2021
dc0f5e5
squashing bugs
zaqqwerty Aug 18, 2021
5514bf4
tracking down grad bug in qmhl
zaqqwerty Aug 18, 2021
f29a77e
Merge branch 'main' into 18_qmhl_refactor
zaqqwerty Aug 18, 2021
7348c80
Merge branch 'main' into 18_qmhl_refactor
zaqqwerty Aug 18, 2021
628ee7c
made operator an argument, still does not work
zaqqwerty Aug 18, 2021
09f5e5a
test
zaqqwerty Aug 18, 2021
42bfcc8
Merge branch 'main' into 18_qmhl_refactor
zaqqwerty Aug 20, 2021
b5a4eb4
QMHL zero gradient test passing
zaqqwerty Aug 20, 2021
8378d1e
format
zaqqwerty Aug 20, 2021
8f0a800
trying tf function
zaqqwerty Aug 23, 2021
d72579e
try non
zaqqwerty Aug 23, 2021
fb23dbe
Merge branch 'main' into 18_qmhl_refactor
zaqqwerty Aug 24, 2021
87857e1
works without decoration now
zaqqwerty Aug 24, 2021
2a6a6ee
use cached bitstrings
zaqqwerty Aug 24, 2021
f8dce25
format
zaqqwerty Aug 24, 2021
1e9ae92
comments
zaqqwerty Aug 24, 2021
cb1eac8
format
zaqqwerty Aug 24, 2021
cb74d15
decorated for now
zaqqwerty Aug 25, 2021
db10f6b
Merge branch 'main' into 18_qmhl_refactor
zaqqwerty Aug 25, 2021
9c56a69
Merge branch '18_qmhl_refactor' of https://github.com/zaqqwerty/qhbm-…
zaqqwerty Aug 25, 2021
3e857de
patch
zaqqwerty Aug 25, 2021
4db50a8
patch
zaqqwerty Aug 25, 2021
cc98a65
Merge branch 'main' into 18_qmhl_refactor
zaqqwerty Aug 25, 2021
e3da6ea
increase tolerance
zaqqwerty Aug 25, 2021
c03e16d
format
zaqqwerty Aug 25, 2021
12 changes: 9 additions & 3 deletions qhbmlib/ebm.py

@@ -516,9 +516,14 @@ def is_analytic(self):
     return self._is_analytic

   def copy(self):
-    energy_sampler = self._energy_sampler.copy()
+    if self._energy_sampler is not None:
+      energy_sampler = self._energy_sampler.copy()
+      energy_function = energy_sampler.energy_function
+    else:
+      energy_sampler = None
+      energy_function = self._energy_function.copy()
     return EBM(
-        energy_sampler.energy_function,
+        energy_function,
         energy_sampler,
         is_analytic=self.is_analytic,
         name=self.name)
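
The guard matters because an EBM can be built in analytic mode with an energy function but no energy sampler, in which case the old copy() crashed on None.copy(). A standalone sketch of the fixed pattern, using stand-in classes rather than qhbmlib types:

class FakeFunction:
  """Stand-in for an analytic energy function."""

  def copy(self):
    return FakeFunction()

class FakeSampler:
  """Stand-in for an energy sampler wrapping an energy function."""

  def __init__(self, energy_function):
    self.energy_function = energy_function

  def copy(self):
    return FakeSampler(self.energy_function.copy())

def copy_parts(sampler, function):
  # Mirrors the fixed EBM.copy(): only dereference the sampler when present.
  if sampler is not None:
    new_sampler = sampler.copy()
    new_function = new_sampler.energy_function
  else:
    new_sampler = None
    new_function = function.copy()
  return new_function, new_sampler

# Analytic case: no sampler; this path previously raised AttributeError.
assert copy_parts(None, FakeFunction())[1] is None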
@@ -598,7 +603,8 @@ def is_analytic(self):
     return self._is_analytic

   def copy(self):
-    bernoulli = Bernoulli(self.num_bits, name=self.name)
+    bernoulli = Bernoulli(
+        self.num_bits, is_analytic=self.is_analytic, name=self.name)
     bernoulli._variables.assign(self._variables)
     return bernoulli
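
The second hunk fixes copies silently dropping the is_analytic flag. A short sketch of the intended behavior; the import path is assumed from the diff header and the num_bits value is arbitrary:

from qhbmlib.ebm import Bernoulli  # assumed import path

original = Bernoulli(3, is_analytic=True)
duplicate = original.copy()
# Before this change, the copy fell back to the constructor default.
assert duplicate.is_analytic == original.is_analytic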
2 changes: 2 additions & 0 deletions qhbmlib/qhbm.py
@@ -31,7 +31,9 @@ class QHBM(tf.keras.Model):
   def __init__(self, ebm, qnn, name=None):
     super().__init__(name=name)
     self._ebm = ebm
+    self.thetas = ebm.trainable_variables
Contributor:
Would this be entirely necessary, as we can simply access the ebm trainable variables via self.ebm.trainable_variables, and likewise for the qnn trainable variables?

zaqqwerty (Contributor Author), Aug 23, 2021:
It is not entirely necessary, but it is a convenience that matches the notation we use in the theory. It also caches the result for later use, since each trainable_variables call searches recursively for variables.

Contributor:
See other comment regarding grads.

     self._qnn = qnn
+    self.phis = qnn.trainable_variables
     self._operator_shards = tfq.convert_to_tensor(
         ebm.operator_shards(qnn.raw_qubits))
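
A standalone sketch (not qhbmlib code) of the caching point from the thread above: tf.Module.trainable_variables is a property that re-walks the module tree on each access, so binding the result to an attribute once avoids repeated recursive searches. The classes below are stand-ins for the EBM/QNN children:

import tensorflow as tf

class Leaf(tf.Module):
  """Stand-in for an EBM or QNN holding trainable variables."""

  def __init__(self):
    super().__init__()
    self.kernel = tf.Variable(tf.zeros([2]), name="kernel")

class Wrapper(tf.Module):
  """Stand-in for QHBM: caches the child's variable tuple once."""

  def __init__(self, leaf):
    super().__init__()
    self._leaf = leaf
    # One recursive walk here; later reads of self.cached skip it.
    self.cached = leaf.trainable_variables

wrapper = Wrapper(Leaf())
# Both expressions yield the same Variable objects; only the second
# re-traverses the module tree on every access.
assert all(a is b for a, b in
           zip(wrapper.cached, wrapper._leaf.trainable_variables))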
