Conversation
@farice wondering if you have any ideas on the TODO I added in this PR? It's most of the reason the PR is so short: that error shows up when I try to decorate a lot of other tests in other modules.
I think it's because we wouldn't usually create a Keras model/layer within a `tf.function`. For example, if you do this with an optimizer you may get a similar issue: https://stackoverflow.com/questions/68382842/tensorflow-cannot-find-variables-after-values-change Hence, for example, with the
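For context, this failure mode is easy to reproduce outside the PR. A minimal sketch (not the PR's code; the function name is made up for illustration) of a `tf.function` that creates a fresh variable on every trace, which TensorFlow rejects:

```python
import tensorflow as tf


@tf.function
def makes_a_variable():
  # A fresh tf.Variable on every call: tf.function only permits variable
  # creation on the first trace, so this raises a ValueError.
  v = tf.Variable(1.0)
  return v + 1.0


creation_error = None
try:
  makes_a_variable()
except ValueError as e:
  creation_error = e

print(type(creation_error).__name__)
```

This is the same class of error as creating a Keras layer (and hence its variables) inside a traced function.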
After pulling the layer creation into
Not sure what it means yet, but the graph's `building_function` attribute being false means it doesn't represent a function? https://www.tensorflow.org/api_docs/python/tf/Graph#attributes
Figured it out: the kernel variable needs to be constructed following https://keras.io/guides/making_new_layers_and_models_via_subclassing/
Still can't get it to work when I initialize the variable in the `__init__`
When I got it to work I used `self.add_weight`, keeping things in `build`.
On Jan 27, 2022, at 7:17 PM, Antonio Martinez wrote:
> Still can't get it to work when I initialize the variable in the `__init__`
Here's what I had, for completeness:

```python
class VariableDot(tf.keras.layers.Layer):
  """Utility layer for dotting input with a same-sized variable."""

  def __init__(self,
               initializer: tf.keras.initializers.Initializer = tf.keras
               .initializers.RandomUniform()):
    """Initializes a VariableDot layer.

    Args:
      initializer: A `tf.keras.initializers.Initializer` which specifies how
        to initialize the values of the parameters.
    """
    super().__init__()
    self._initializer = initializer

  def build(self, input_shape):
    """Initializes the internal variables."""
    self.kernel = self.add_weight(
        shape=(input_shape[-1],),
        # Use the configured initializer; hard-coding "random_uniform" here
        # would silently ignore `self._initializer` and break the
        # Constant-initializer test below.
        initializer=self._initializer,
        trainable=True)

  def call(self, inputs):
    """Returns the dot product between the inputs and this layer's variables."""
    return tf.reduce_sum(inputs * self.kernel, -1)


@tfp_test_util.test_all_tf_execution_regimes
class VariableDotTest(tfp_test_util.TestCase):
  """Tests the VariableDot layer."""

  def test_layer(self):
    """Confirms the layer dots with inputs correctly."""
    constant = 2.5
    actual_layer_constant = energy_model_utils.VariableDot(
        tf.keras.initializers.Constant(constant))
    actual_layer_constant.build([None, 3])
    actual_layer_default = energy_model_utils.VariableDot()
    actual_layer_default.build([None, 3])
    inputs_list = [[1, 5, 9]]
    inputs = tf.constant(inputs_list, dtype=tf.float32)
    actual_outputs = actual_layer_constant(inputs)
    expected_outputs = tf.math.reduce_sum(inputs * constant, -1)
    self.assertAllEqual(actual_outputs, expected_outputs)
    actual_outputs = actual_layer_default(inputs)
    expected_outputs = []
    for inner in inputs_list:
      inner_sum = []
      for i, k in zip(inner, actual_layer_default.kernel.numpy()):
        inner_sum.append(i * k)
      expected_outputs.append(sum(inner_sum))
    self.assertAllClose(actual_outputs, expected_outputs)
```
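As a sanity check of why moving variable creation into `build` helps: a layer built this way can be called from inside a `tf.function` without tripping the variable-creation error, because `add_weight` creates the variable once, through Keras machinery that lifts it out of the traced graph. A standalone sketch (a simplified stand-in for `VariableDot`, not the PR's actual code):

```python
import tensorflow as tf


class Dotter(tf.keras.layers.Layer):
  """Simplified stand-in: dots inputs with a per-feature weight vector."""

  def build(self, input_shape):
    # add_weight defers variable creation to the first call, which Keras
    # handles safely even when that first call happens during tracing.
    self.kernel = self.add_weight(
        shape=(input_shape[-1],),
        initializer=tf.keras.initializers.Ones(),
        trainable=True)

  def call(self, inputs):
    return tf.reduce_sum(inputs * self.kernel, -1)


layer = Dotter()


@tf.function
def apply_layer(x):
  return layer(x)


out = apply_layer(tf.constant([[1.0, 2.0, 3.0]]))
print(out.numpy())  # with the Ones initializer: [6.]
```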
@farice wrote up the decorator we discussed; it finally seems to be working this way.
LGTM, huge relief
- Enable tracing of QuantumCircuit __add__ (#131)
- Weighted average utility (#134)
- Add new style QHBMs (#125)
- Replace custom code with TF op (#141)
- Enable EBM seed (#143)
- Add modular Hamiltonian expectation (#135)
- Energy expectation (#150)
- Update TF, TFP, and TFQ versions (#152)
- try updating
- update python versions
- Tests passing
- increase tolerance for flaky test
- update setup.py
- Utility function tests (#153)
- Eager toggle decoration completion (#155)
- circuits test passing
- expectation test failing
- parameter change
- more tests
- start refactor
- add parameters to circuits
- test passing
- format
- remove lint
- revert superfluous changes
- Restart CI
- Update hamiltonian_infer_test.py
- address review comments
- restart CI

Co-authored-by: Faris Sbahi <farissbahi@gmail.com>
- Enable tracing of QuantumCircuit __add__ (#131)
- Weighted average utility (#134)
- Add new style QHBMs (#125)
- Replace custom code with TF op (#141)
- Enable EBM seed (#143)
- Add modular Hamiltonian expectation (#135)
- start function
- function written
- start test fixture
- initial test completed
- test passing
- remove lint
- do not restrict to layers
- typo
- format
- Initial gradient function written
- Energy expectation (#150)
- Update TF, TFP, and TFQ versions (#152)
- try updating
- update python versions
- Tests passing
- increase tolerance for flaky test
- update setup.py
- tests passing again
- remove extra layers
- generalize derivative code
- Enable tracing of QuantumCircuit __add__ (#131)
- Weighted average utility (#134)
- Add new style QHBMs (#125)
- Replace custom code with TF op (#141)
- Enable EBM seed (#143)
- Add modular Hamiltonian expectation (#135)
- Energy expectation (#150)
- Update TF, TFP, and TFQ versions (#152)
- try updating
- update python versions
- Tests passing
- increase tolerance for flaky test
- update setup.py
- Utility function tests (#153)
- Eager toggle decoration completion (#155)
- basic test passing again
- initial analytic expectation test
- add broken utils test
- weighted average now uses einsum
- tests passing
- tests passing
- format
- remove lint
- update args to tensors
- format
- add tf function toggle to test
- remove lint
- extend linter timeout window
- remove tape argument
- check gradients are gapped away from zero
- change to atol
- change to atol
- tweak tolerances
- remove superfluous argument to inner function
- format
- update docstring
- add gradient graph test
- add todo
- format
- change docstring
- remove lint

Co-authored-by: Faris Sbahi <farissbahi@gmail.com>
- Enable tracing of QuantumCircuit __add__ (#131)
- Weighted average utility (#134)
- Add new style QHBMs (#125)
- Replace custom code with TF op (#141)
- Enable EBM seed (#143)
- Add modular Hamiltonian expectation (#135)
- Energy expectation (#150)
- Update TF, TFP, and TFQ versions (#152)
- try updating
- update python versions
- Tests passing
- increase tolerance for flaky test
- update setup.py
- Utility function tests (#153)
- Eager toggle decoration completion (#155)
- generalize QuantumInference expectation
- removed reduced tests
- inference change
- revert
- generalize expectation
- circuit infer tests passing
- update infer
- checkout energy files
- format
- circuit tests passing
- updated h infer test passing
- circuit tests passing
- inference tests passing
- format
- remove lint
- vqt tests passing
- format
- remove lint
- start updating circuit infer
- infer
- update infer
- energy tests passing
- update infer
- move arg
- hamiltonian inference tests passing
- format
- circuit infer test
- remove lint
- update vqt tests
- format
- tolerance update
- lint

Co-authored-by: Faris Sbahi <farissbahi@gmail.com>
Add tests that check both inside and outside eager mode.
Part of #136. Wrote a decorator for test functions that runs the function both inside and outside eager mode. Decorated all the utils tests with it.
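The decorator itself isn't reproduced in this thread. A minimal sketch of the idea (the name `eager_mode_toggle` and the exact mechanics are assumptions for illustration, not the PR's actual code): run the test body once eagerly, then once wrapped in `tf.function`, so graph-only bugs such as variable creation during tracing surface in the same test.

```python
import functools

import tensorflow as tf


def eager_mode_toggle(test_func):
  """Runs `test_func` twice: once eagerly and once inside a tf.function."""

  @functools.wraps(test_func)
  def wrapper(*args, **kwargs):
    # Plain eager call.
    test_func(*args, **kwargs)
    # Traced call: exercises the graph-mode code path.
    tf.function(test_func)(*args, **kwargs)

  return wrapper


@eager_mode_toggle
def check_sum():
  total = tf.reduce_sum(tf.constant([1.0, 2.0, 3.0]))
  tf.debugging.assert_near(total, 6.0)


check_sum()  # passes in both regimes
```

Keeping the toggle in a decorator means existing test methods only need one extra line, which matches how the utils tests were updated here.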