
Utility function tests #153

Merged
merged 18 commits into main on Jan 28, 2022

Conversation

@zaqqwerty (Contributor) commented Jan 26, 2022

Add tests that check behavior both inside and outside eager mode.

Part of #136. Wrote a decorator for test functions that runs the function both inside and outside eager mode. Decorated all the utils tests with it.
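
For reference, a minimal sketch of what such a decorator could look like (the name `eager_mode_toggle` and the exact mechanics are illustrative assumptions, not necessarily this PR's implementation):

```python
import functools

import tensorflow as tf


def eager_mode_toggle(test_func):
  """Runs the wrapped test both outside and inside a tf.function."""

  @functools.wraps(test_func)
  def wrapper(*args, **kwargs):
    # Pass 1: plain eager execution.
    test_func(*args, **kwargs)
    # Pass 2: trace the same body into a graph, so anything that only
    # works eagerly (e.g. illegal variable creation) is caught.
    tf.function(test_func)(*args, **kwargs)

  return wrapper
```

A test decorated this way fails exactly when its body does something graph-incompatible, which is what surfaces the errors discussed below.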

@zaqqwerty (Contributor, Author)

@farice, wondering if you have any ideas on the TODO I added in this PR? It's most of the reason the PR is so short; that error shows up when I try to decorate many other tests in other modules.

@zaqqwerty requested a review from farice January 26, 2022 19:53
@farice (Contributor) commented Jan 27, 2022

I think it's because we wouldn't usually create a Keras model/layer within a tf.function. For example, if you do this with an optimizer you may get a similar issue: https://stackoverflow.com/questions/68382842/tensorflow-cannot-find-variables-after-values-change

Hence, for example, with the energy_model_utils.VariableDot(..) construction, I suppose we would want to pull this out and only decorate a subroutine of the existing test case?
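
As a standalone illustration of the failure mode (a hypothetical example, not code from this repo): creating a Keras layer inside a tf.function means every trace tries to create fresh variables, which TF2 forbids after the first trace:

```python
import tensorflow as tf


@tf.function
def apply_fresh_layer(x):
  # The Dense layer creates its kernel and bias variables the first
  # time it is called, so this runs variable creation on every trace.
  layer = tf.keras.layers.Dense(4)
  return layer(x)


# Raises a ValueError along the lines of "tf.function-decorated function
# tried to create variables on non-first call."
apply_fresh_layer(tf.ones([1, 2]))
```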

@zaqqwerty (Contributor, Author)

After pulling the layer creation into setUp and localizing the decorator, I got a different error:

value = <tf.Tensor: shape=(1, 3), dtype=float32, numpy=array([[1., 5., 9.]], dtype=float32)>, dtype = None, name = None
as_ref = False, preferred_dtype = None, dtype_hint = None
ctx = <tensorflow.python.eager.context.Context object at 0x7f276fe30be0>
accepted_result_types = (<class 'tensorflow.python.framework.ops.Tensor'>,)

    @trace.trace_wrapper("convert_to_tensor")
    def convert_to_tensor(value,
                          dtype=None,
                          name=None,
                          as_ref=False,
                          preferred_dtype=None,
                          dtype_hint=None,
                          ctx=None,
                          accepted_result_types=(Tensor,)):
      """Implementation of the public convert_to_tensor."""
      # TODO(b/142518781): Fix all call-sites and remove redundant arg
      preferred_dtype = preferred_dtype or dtype_hint
      if isinstance(value, EagerTensor):
        if ctx is None:
          ctx = context.context()
        if not ctx.executing_eagerly():
          graph = get_default_graph()
          if not graph.building_function:
>           raise RuntimeError("Attempting to capture an EagerTensor without "
                               "building a function.")
E           RuntimeError: Attempting to capture an EagerTensor without building a function.

../../../.cache/pypoetry/virtualenvs/qhbmlib-yboeJX8V-py3.9/lib/python3.9/site-packages/tensorflow/python/framework/ops.py:1580: RuntimeError

Not sure what it means yet, but the building_function attribute being False means the default graph doesn't represent a function? https://www.tensorflow.org/api_docs/python/tf/Graph#attributes

@farice (Contributor) commented Jan 27, 2022

Figured it out: the kernel variable needs to be constructed using self.add_weight within the scope of build. Using tf.Variable is fine in __init__.

https://keras.io/guides/making_new_layers_and_models_via_subclassing/

@zaqqwerty (Contributor, Author)

Still can't get it to work when I initialize the variable in __init__.

@farice (Contributor) commented Jan 28, 2022 via email

@farice (Contributor) commented Jan 28, 2022

Here's what I had, for completeness:

class VariableDot(tf.keras.layers.Layer):
  """Utility layer for dotting input with a same-sized variable."""

  def __init__(self,
               initializer: tf.keras.initializers.Initializer = tf.keras
               .initializers.RandomUniform()):
    """Initializes a VariableDot layer.

    Args:
      initializer: A `tf.keras.initializers.Initializer` which specifies how to
        initialize the values of the parameters.
    """
    super().__init__()
    self._initializer = initializer

  def build(self, input_shape):
    """Initializes the internal variables."""
    self.kernel = self.add_weight(
        shape=(input_shape[-1],),
        initializer="random_uniform",
        trainable=True)

  def call(self, inputs):
    """Returns the dot product between the inputs and this layer's variables."""
    return tf.reduce_sum(inputs * self.kernel, -1)


@tfp_test_util.test_all_tf_execution_regimes
class VariableDotTest(tfp_test_util.TestCase):
  """Tests the VariableDot layer."""

  def test_layer(self):
    """Confirms the layer dots with inputs correctly."""

    constant = 2.5
    actual_layer_constant = energy_model_utils.VariableDot(
        tf.keras.initializers.Constant(constant))
    actual_layer_constant.build([None, 3])

    actual_layer_default = energy_model_utils.VariableDot()
    actual_layer_default.build([None, 3])

    inputs_list = [[1, 5, 9]]
    inputs = tf.constant(inputs_list, dtype=tf.float32)

    actual_outputs = actual_layer_constant(inputs)
    expected_outputs = tf.math.reduce_sum(inputs * constant, -1)
    self.assertAllEqual(actual_outputs, expected_outputs)

    actual_outputs = actual_layer_default(inputs)
    expected_outputs = []
    for inner in inputs_list:
      inner_sum = []
      for i, k in zip(inner, actual_layer_default.kernel.numpy()):
        inner_sum.append(i * k)
      expected_outputs.append(sum(inner_sum))
    self.assertAllClose(actual_outputs, expected_outputs)
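
(Note: tfp_test_util.test_all_tf_execution_regimes is TensorFlow Probability's internal test decorator; as far as I know it parameterizes each test to run in graph mode, eager mode, and under tf.function, which is the coverage the custom decorator above aims to reproduce.)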

@zaqqwerty (Contributor, Author)

@farice wrote up the decorator we discussed; it seems to finally be working this way.

@farice (Contributor) left a comment

LGTM, huge relief

@zaqqwerty merged commit a75df9d into google:main Jan 28, 2022
farice pushed a commit that referenced this pull request Feb 1, 2022
farice added a commit that referenced this pull request Feb 2, 2022
* Enable tracing of QuantumCircuit __add__ (#131)

* Weighted average utility (#134)

* Add new style QHBMs (#125)

* Replace custom code with TF op (#141)

* Enable EBM seed (#143)

* Add modular Hamiltonian expectation (#135)

* Energy expectation (#150)

* Update TF, TFP, and TFQ versions (#152)

* try updating

* update python versions

* Tests passing

* increase tolerance for flaky test

* update setup.py

* Utility function tests (#153)

* Eager toggle decoration completion (#155)

* circuits test passing

* expectation test failing

* parameter change

* more tests

* start refactor

* add parameters to circuits

* test passing

* format

* remove lint

* revert superfluous changes

* Restart CI

* Update hamiltonian_infer_test.py

* address review comments

* restart CI

Co-authored-by: Faris Sbahi <farissbahi@gmail.com>
zaqqwerty added a commit that referenced this pull request Feb 3, 2022
* Enable tracing of QuantumCircuit __add__ (#131)

* Weighted average utility (#134)

* Add new style QHBMs (#125)

* Replace custom code with TF op (#141)

* Enable EBM seed (#143)

* Add modular Hamiltonian expectation (#135)

* start function

* function written

* start test fixture

* initial test completed

* test passing

* remove lint

* do not restrict to layers

* typo

* format

* Initial gradient function written

* Energy expectation (#150)

* Update TF, TFP, and TFQ versions (#152)

* try updating

* update python versions

* Tests passing

* increase tolerance for flaky test

* update setup.py

* tests passing again

* remove extra layers

* generalize derivative code

* Enable tracing of QuantumCircuit __add__ (#131)

* Weighted average utility (#134)

* Add new style QHBMs (#125)

* Replace custom code with TF op (#141)

* Enable EBM seed (#143)

* Add modular Hamiltonian expectation (#135)

* Energy expectation (#150)

* Update TF, TFP, and TFQ versions (#152)

* try updating

* update python versions

* Tests passing

* increase tolerance for flaky test

* update setup.py

* Utility function tests (#153)

* Eager toggle decoration completion (#155)

* basic test passing again

* initial analytic expectation test

* add broken utils test

* weighted average now uses einsum

* tests passing

* tests passing

* format

* remove lint

* update args to tensors

* format

* add tf function toggle to test

* remove lint

* extend linter timeout window

* remove tape argument

* check gradients are gapped away from zero

* change to atol

* change to atol

* tweak tolerances

* remove superfluous argument to inner function

* format

* update docstring

* add gradient graph test

* add todo

* format

* change docstring

* remove lint

Co-authored-by: Faris Sbahi <farissbahi@gmail.com>
farice added a commit that referenced this pull request Feb 15, 2022
* Enable tracing of QuantumCircuit __add__ (#131)

* Weighted average utility (#134)

* Add new style QHBMs (#125)

* Replace custom code with TF op (#141)

* Enable EBM seed (#143)

* Add modular Hamiltonian expectation (#135)

* Energy expectation (#150)

* Update TF, TFP, and TFQ versions (#152)

* try updating

* update python versions

* Tests passing

* increase tolerance for flaky test

* update setup.py

* Utility function tests (#153)

* Eager toggle decoration completion (#155)

* generalize QuantumInference expectation

* removed reduced tests

* inference change

* revert

* generalize expectation

* circuit infer tests passing

* update infer

* checkout energy files

* format

* circuit tests passing

* updated h infer test passing

* circuit tests passing

* inference tests passing

* format

* remove lint

* vqt tests passing

* format

* remove lint

* start updating circuit infer

* infer

* update infer

* energy tests passing

* update infer

* move arg

* hamiltonian inference tests passing

* format

* circuit infer test

* remove lint

* update vqt tests

* format

* tolerance update

* lint

Co-authored-by: Faris Sbahi <farissbahi@gmail.com>