Merge pull request #28816 from tensorflow:jvishnuvardhan-patch-6
PiperOrigin-RevId: 249097006
tensorflower-gardener committed May 20, 2019
2 parents bd9f5d3 + 5436bd4 commit 8d5c652
Showing 1 changed file with 8 additions and 7 deletions.
15 changes: 8 additions & 7 deletions tensorflow/python/eager/backprop.py
@@ -993,7 +993,7 @@ def jacobian(self,
               experimental_use_pfor=True):
     """Computes the jacobian using operations recorded in context of this tape.
-    See http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant for the
+    See [wikipedia article](http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant) for the
     definition of a Jacobian.
     Example usage:
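As an aside on the definition the docstring points to: a Jacobian can be checked numerically with central finite differences. The sketch below is a plain-Python illustration under that definition, not part of this commit and not TensorFlow code; `numeric_jacobian` is a hypothetical helper name.

```python
# Hypothetical sketch: approximate the Jacobian of f: R^n -> R^m by
# central finite differences, matching J[i][j] = d f_i / d x_j.

def numeric_jacobian(f, x, eps=1e-6):
    """Return the m x n Jacobian of f at x as a list of lists."""
    m = len(f(x))
    n = len(x)
    jac = [[0.0] * n for _ in range(m)]
    for j in range(n):
        x_plus, x_minus = list(x), list(x)
        x_plus[j] += eps
        x_minus[j] -= eps
        f_plus, f_minus = f(x_plus), f(x_minus)
        for i in range(m):
            jac[i][j] = (f_plus[i] - f_minus[i]) / (2 * eps)
    return jac

# For f(x) = x * x elementwise, the Jacobian is diag(2 * x).
jac = numeric_jacobian(lambda x: [v * v for v in x], [1.0, 2.0])
```

For `x = [1.0, 2.0]` this yields approximately `[[2, 0], [0, 4]]`, the diagonal structure the `batch_jacobian` example below exploits.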
@@ -1094,15 +1094,16 @@ def batch_jacobian(self,
                     experimental_use_pfor=True):
     """Computes and stacks per-example jacobians.
-    See http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant for the
-    definition of a Jacobian. This function is essentially an efficient
+    See [wikipedia article](http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant) for the
+    definition of a Jacobian. This function is essentially an efficient
     implementation of the following:
     `tf.stack([self.jacobian(y[i], x[i]) for i in range(x.shape[0])])`.
     Note that compared to `GradientTape.jacobian` which computes gradient of
     each output value w.r.t each input value, this function is useful when
-    `target[i,...] is independent of `source[j,...]` for `j != i`. This
-    independence assumption allows more efficient computation as compared to
+    `target[i,...]` is independent of `source[j,...]` for `j != i`. This
+    assumption allows more efficient computation as compared to
     `GradientTape.jacobian`. The output, as well as intermediate activations,
     are lower dimensional and avoid a bunch of redundant zeros which would
     result in the jacobian computation given the independence assumption.
@@ -1111,10 +1112,10 @@ def batch_jacobian(self,
     ```python
     with tf.GradientTape() as g:
-      x = tf.constant([[1, 2], [3, 4]], dtype=tf.float32)
+      x = tf.constant([[1., 2.], [3., 4.]], dtype=tf.float32)
       g.watch(x)
       y = x * x
-      batch_jacobian = g.batch_jacobian(y, x)
+      batch_jacobian = g.batch_jacobian(y, x)
     # batch_jacobian is [[[2, 0], [0, 4]], [[6, 0], [0, 8]]]
     ```
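To see why the expected output in the docstring example is a stack of diagonal matrices, here is a plain-Python sketch of what `batch_jacobian` computes for elementwise `y = x * x`. This is an illustration only, not TensorFlow code; `per_example_jacobian` is a hypothetical helper name.

```python
# For y = x * x elementwise, the per-example Jacobian is
# J_i[p][q] = d y[i][p] / d x[i][q] = 2 * x[i][p] if p == q else 0.

x = [[1.0, 2.0], [3.0, 4.0]]

def per_example_jacobian(row):
    n = len(row)
    return [[2.0 * row[p] if p == q else 0.0 for q in range(n)]
            for p in range(n)]

# Stack the per-example Jacobians, mirroring the docstring's
# tf.stack([self.jacobian(y[i], x[i]) for i in range(x.shape[0])]).
batch_jac = [per_example_jacobian(row) for row in x]
# batch_jac == [[[2.0, 0.0], [0.0, 4.0]], [[6.0, 0.0], [0.0, 8.0]]]
```

Each example's Jacobian is diagonal precisely because `target[i,...]` is independent of `source[j,...]` for `j != i`, which is the independence assumption the rewritten docstring describes.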
