Adds support for TF 2.3 and Torch 1.6 #725

Merged: 12 commits merged into master from upgrade-tf-torch on Jul 29, 2020
Conversation

@josh146 (Member) commented on Jul 29, 2020

Context:

TensorFlow 2.3 and PyTorch 1.6 have just been released.

PyTorch 1.6 has the following interesting new features:

  • Mixed precision computations

  • Support for running ensemble models in parallel using torch.jit.fork and torch.jit.wait (see the sketch after this list)

  • RPC distributed training/batching

  • The ability to define asynchronous functions that return a torch.futures.Future object with the @rpc.functions.async_execution decorator
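
As an aside, here is a minimal eager-mode sketch of the torch.jit.fork/torch.jit.wait feature mentioned above; the toy models and shapes are placeholders chosen purely for illustration:

import torch

# Two independent toy models to evaluate in parallel (hypothetical examples).
model_a = torch.nn.Linear(4, 2)
model_b = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

def evaluate(model, inputs):
    return model(inputs)

# torch.jit.fork launches the call asynchronously and returns a Future;
# torch.jit.wait blocks until the corresponding task has completed.
fut_a = torch.jit.fork(evaluate, model_a, x)
fut_b = torch.jit.fork(evaluate, model_b, x)
out_a = torch.jit.wait(fut_a)
out_b = torch.jit.wait(fut_b)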

TF 2.3 has the following interesting new features:

  • tf.custom_gradient can now be applied to functions that accept nested structures of tensors as inputs (instead of just a list of tensors). Note that Python structures such as tuples and lists now won't be treated as tensors, so if you still want them to be treated that way, you need to wrap them with tf.convert_to_tensor.

This latter change is huge, as the previous limitation was the main reason behind our restrictive template signatures (the Torch and autograd interfaces have always supported nested tensor input).
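
To make this concrete, here is a rough sketch (not from the PR) of the kind of nested-input function that tf.custom_gradient now supports; the function name and dict structure are assumptions for illustration only:

import tensorflow as tf

@tf.custom_gradient
def scale_and_sum(inputs):
    # `inputs` is a nested structure (here, a dict of tensors), which
    # tf.custom_gradient only accepts from TF 2.3 onwards.
    y = 2.0 * inputs["a"] + inputs["b"]

    def grad(dy):
        # The returned gradients mirror the nested structure of `inputs`.
        return {"a": 2.0 * dy, "b": dy}

    return y, grad

a = tf.Variable([1.0, 2.0])
b = tf.Variable([3.0, 4.0])

with tf.GradientTape() as tape:
    out = scale_and_sum({"a": a, "b": b})

grads = tape.gradient(out, [a, b])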

Note that this introduces a breaking change: you can no longer pass non-differentiable data to a TF QNode using lists or tuples; if you do, TensorFlow will flatten the input to a sequence of floats, and you will receive an esoteric error of the form:

ValueError: ('custom_gradient function expected to return', len(flat_args),
             'gradients but returned', len(args), 'instead.')

From now on, all non-differentiable data passed to a TF QNode must also be a tensor or a NumPy array. This is the same restriction we have with the PyTorch interface, so the two interfaces are now closely aligned.
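
For reference, the circuit used in the snippet below is not shown in the PR; a hypothetical TF-interface QNode along these lines would reproduce the behaviour:

import numpy as np
import tensorflow as tf
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="tf")
def circuit(data, weights):
    # `data` is non-differentiable input, `weights` is a trainable tf.Variable
    qml.RY(data[0], wires=0)
    qml.RY(data[1], wires=1)
    qml.RX(weights[0], wires=0)
    qml.RX(weights[1], wires=1)
    return qml.expval(qml.PauliZ(0))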

>>> params = tf.Variable([0.1, 0.2])
>>> data = [0, 1]
>>> circuit(data, params)  # allowed only in TF < 2.3
>>> circuit(tf.convert_to_tensor([0, 1]), params)  # works in TF >= 2.3
>>> circuit(np.array([0, 1]), params)  # works in TF >= 2.3

Description of change:

  • Modifies TF tests to always pass non-differentiable data as a NumPy array or tensor

  • tf.python.tape.should_record_backprop([tensor]) now returns False for differentiable variables outside of a tape context; a test was modified to take this into account (a short sketch follows this list).

  • Updates CI to test latest PyTorch and TF versions.
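
For the second point, a rough sketch of the behaviour being tested (the import path below is an assumption; the PR refers to the function as tf.python.tape.should_record_backprop, a private TF API):

import tensorflow as tf
from tensorflow.python.eager.tape import should_record_backprop  # private TF API

x = tf.Variable([0.1, 0.2])

# In TF 2.3, this returns False outside of a tape context,
# even though the variable itself is trainable.
print(should_record_backprop([tf.convert_to_tensor(x)]))

with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.convert_to_tensor(x)
    # With an active tape watching x, backprop recording is expected to be True.
    print(should_record_backprop([y]))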

@codecov (bot) commented on Jul 29, 2020

Codecov Report

Merging #725 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master     #725   +/-   ##
=======================================
  Coverage   98.74%   98.74%           
=======================================
  Files         101      101           
  Lines        6373     6373           
=======================================
  Hits         6293     6293           
  Misses         80       80           


@josh146 changed the title from "Update CI to test against TF 2.3 and Torch 1.6" to "Adds support for TF 2.3 and Torch 1.6" on Jul 29, 2020
@josh146 added the review-ready 👌 label (PRs which are ready for review by someone from the core team) on Jul 29, 2020
Comment on lines +14 to +15
TF_VERSION: 2.3
TORCH_VERSION: 1.6
Contributor:

Do we ever want to test against earlier versions of TF or Torch, rather than just the latest?

@josh146 (Member, Author):

Ideally, it would be nice to test a much larger number of dependency permutations, but it quickly becomes prohibitive, especially since we are using the free tier on CI services, where we can have at most 5 jobs running in parallel.

With these restrictions in mind, I'd say it's more important to ensure that PL works with a variety of Python versions as well as the latest release versions of TensorFlow/PyTorch; users having issues can more easily upgrade TF/PyTorch than upgrade Python.

@josh146 josh146 merged commit ed68e1f into master Jul 29, 2020
@josh146 josh146 deleted the upgrade-tf-torch branch July 29, 2020 11:52