Fix missing tick in BatchNorm doc #40134

Merged: 2 commits, Jul 23, 2020
tensorflow/python/keras/layers/normalization_v2.py: 1 addition & 1 deletion
@@ -210,7 +210,7 @@ class BatchNormalization(normalization.BatchNormalizationBase):
__doc__ = normalization.replace_in_base_docstring([
('{{TRAINABLE_ATTRIBUTE_NOTE}}',
'''
- **About setting `layer.trainable = False` on a `BatchNormalization layer:**
+ **About setting `layer.trainable = False` on a `BatchNormalization` layer:**

The meaning of setting `layer.trainable = False` is to freeze the layer,
i.e. its internal state will not change during training:
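For context, a minimal sketch (not part of this diff) of the freezing behavior the docstring above describes. All names are standard `tf.keras` API; the feature size of 4 is illustrative.

```python
import tensorflow as tf

# Freezing a BatchNormalization layer: after layer.trainable = False,
# gamma/beta stop receiving gradient updates and the layer runs in
# inference mode, so moving_mean/moving_variance stop updating too.
layer = tf.keras.layers.BatchNormalization()
layer.build((None, 4))  # creates gamma, beta, moving_mean, moving_variance

layer.trainable = False
assert not layer.trainable_weights         # gamma and beta are frozen
assert len(layer.non_trainable_weights) == 4
```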
tensorflow/python/keras/layers/recurrent.py: 5 additions & 5 deletions
@@ -323,7 +323,7 @@ class RNN(Layer):
This is the expected shape of your inputs
*including the batch size*.
It should be a tuple of integers, e.g. `(32, 10, 100)`.
- - Specify `shuffle=False` when calling fit().
+ - Specify `shuffle=False` when calling `fit()`.

To reset the states of your model, call `.reset_states()` on either
a specific layer, or on your entire model.
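The statefulness recipe in this docstring can be wired together as follows; this is an illustrative sketch, reusing the hypothetical `(32, 10, 100)` shape from the docstring's own example.

```python
import numpy as np
import tensorflow as tf

# A stateful RNN: the batch size must be fixed, and batches must stay
# in order so carried-over states line up with the right sequences.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, stateful=True, batch_input_shape=(32, 10, 100)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

x = np.random.random((32, 10, 100)).astype('float32')
y = np.random.random((32, 1)).astype('float32')

model.fit(x, y, batch_size=32, epochs=1, shuffle=False)  # no shuffling
model.reset_states()  # clear carried-over states between sequences
```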
@@ -1114,7 +1114,7 @@ def _create_non_trackable_mask_cache(self):
is used every time.

Also the caches are created without tracking. Since they are not picklable
- by python when deepcopy, we don't want layer._obj_reference_counts_dict
+ by python when deepcopy, we don't want `layer._obj_reference_counts_dict`
to track it by default.
"""
self._dropout_mask_cache = K.ContextValueCache(self._create_dropout_mask)
@@ -1124,8 +1124,8 @@ def _create_non_trackable_mask_cache(self):
def reset_dropout_mask(self):
"""Reset the cached dropout masks if any.

- This is important for the RNN layer to invoke this in it call() method so
- that the cached mask is cleared before calling the cell.call(). The mask
+ This is important for the RNN layer to invoke this in it `call()` method so
+ that the cached mask is cleared before calling the `cell.call()`. The mask
should be cached across the timestep within the same batch, but shouldn't
be cached between batches. Otherwise it will introduce unreasonable bias
against certain index of data within the batch.
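To make the caching contract above concrete, here is a conceptual sketch. It is not TensorFlow's internal implementation (which uses `K.ContextValueCache`); `DropoutMaskCache` is a hypothetical stand-in showing one mask created per batch, reused across timesteps, and cleared by a reset analogous to `reset_dropout_mask()`.

```python
import tensorflow as tf

class DropoutMaskCache:
    """Conceptual stand-in for the cached dropout mask described above."""

    def __init__(self, rate):
        self.rate = rate
        self._mask = None

    def get_mask(self, inputs):
        # Within one batch, every timestep reuses the same mask so the
        # same units are dropped consistently across the sequence.
        if self._mask is None:
            self._mask = tf.nn.dropout(tf.ones_like(inputs), rate=self.rate)
        return self._mask

    def reset(self):
        # Called once per batch (mirroring reset_dropout_mask()) so a
        # fresh mask is drawn and no bias builds up across batches.
        self._mask = None
```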
@@ -2634,7 +2634,7 @@ class LSTM(RNN):
the `recurrent_kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
- the output of the layer (its "activation")..
+ the output of the layer (its "activation").
kernel_constraint: Constraint function applied to
the `kernel` weights matrix.
recurrent_constraint: Constraint function applied to
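For reference, an illustrative construction wiring the regularizers and constraints documented above into an `LSTM` layer; the specific penalty strengths and norm limits are arbitrary.

```python
import tensorflow as tf

# Each argument maps to a docstring entry above: regularizers add
# penalty terms to the loss, constraints project weights after updates.
layer = tf.keras.layers.LSTM(
    16,
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),
    recurrent_regularizer=tf.keras.regularizers.l2(1e-4),
    bias_regularizer=tf.keras.regularizers.l2(1e-4),
    activity_regularizer=tf.keras.regularizers.l1(1e-5),
    kernel_constraint=tf.keras.constraints.MaxNorm(3.0),
    recurrent_constraint=tf.keras.constraints.MaxNorm(3.0),
)
```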