Fix typos #10489

Merged · 1 commit · Mar 3, 2021
2 changes: 1 addition & 1 deletion src/transformers/data/processors/squad.py
@@ -779,7 +779,7 @@ class SquadFeatures:
token_to_orig_map: mapping between the tokens and the original text, needed in order to identify the answer.
start_position: start of the answer token index
end_position: end of the answer token index
- encoding: optionally store the BatchEncoding with the fast-tokenizer alignement methods.
+ encoding: optionally store the BatchEncoding with the fast-tokenizer alignment methods.
"""

def __init__(
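
For context on what the corrected docstring line refers to: the encoding field holds a BatchEncoding whose fast-tokenizer alignment methods (such as char_to_token) map character offsets in the original text to token indices. A minimal illustrative sketch, not part of this PR and assuming a standard fast tokenizer such as bert-base-uncased:

    from transformers import AutoTokenizer

    # Illustrative sketch only, not code from this PR. It shows how the
    # fast-tokenizer alignment methods on a BatchEncoding map a character-level
    # answer span to the start/end token indices that SquadFeatures stores.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)

    question = "Where is the Eiffel Tower?"
    context = "The Eiffel Tower is located in Paris."
    answer = "Paris"
    answer_start_char = context.index(answer)
    answer_end_char = answer_start_char + len(answer) - 1

    encoding = tokenizer(question, context)

    # sequence_index=1 selects the context (the second sequence in the pair).
    start_position = encoding.char_to_token(answer_start_char, sequence_index=1)
    end_position = encoding.char_to_token(answer_end_char, sequence_index=1)
    print(start_position, end_position)
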
4 changes: 2 additions & 2 deletions src/transformers/models/ibert/quant_modules.py
@@ -169,7 +169,7 @@ def forward(
):

x_act = x if identity is None else identity + x
- # collect runnng stats if traiing
+ # collect running stats if training
if self.training:
assert not self.percentile, "percentile mode is not currently supported for activation."
assert not self.per_channel, "per-channel mode is not currently supported for activation."
@@ -746,7 +746,7 @@ def batch_frexp(inputs, max_bit=31):

class FixedPointMul(Function):
"""
- Function to perform fixed-point arthmetic that can match integer arthmetic on hardware.
+ Function to perform fixed-point arithmetic that can match integer arithmetic on hardware.

Args:
pre_act (:obj:`torch.Tensor`):
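
For context on the corrected docstring: FixedPointMul performs fixed-point arithmetic that matches integer arithmetic on hardware. A rough pure-Python sketch of that general idea, not taken from this file, is to decompose a floating-point multiplier into an integer mantissa plus a power-of-two shift so the whole product can be computed with integers only:

    import math

    # Illustrative sketch only, not code from this PR. It shows the general
    # fixed-point idea behind FixedPointMul: approximate a floating-point
    # multiplier by an integer mantissa and a power-of-two right shift.
    def fixed_point_multiply(x_int, multiplier, max_bit=31):
        # multiplier = m * 2**e with 0.5 <= m < 1 (math.frexp); assumes
        # 0 < multiplier < 1 so the shift below stays positive.
        m, e = math.frexp(multiplier)
        m_int = round(m * (1 << max_bit))  # mantissa scaled to a max_bit-bit integer
        shift = max_bit - e                # right shift that undoes both scalings
        # Integer multiply, then a rounding right shift (add half before shifting).
        return [(x * m_int + (1 << (shift - 1))) >> shift for x in x_int]

    x_int = [100, -37, 250]
    print(fixed_point_multiply(x_int, 0.0123))   # integer-only result: [1, 0, 3]
    print([round(x * 0.0123) for x in x_int])    # floating-point reference: [1, 0, 3]
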
2 changes: 1 addition & 1 deletion src/transformers/trainer_pt_utils.py
@@ -44,7 +44,7 @@
if is_torch_tpu_available():
import torch_xla.core.xla_model as xm

- # this is used to supress an undesired warning emitted by pytorch versions 1.4.2-1.7.0
+ # this is used to suppress an undesired warning emitted by pytorch versions 1.4.2-1.7.0
try:
from torch.optim.lr_scheduler import SAVE_STATE_WARNING
except ImportError:
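
The hunk above is truncated right after the except line. As an illustration of the usual shape of this optional-import pattern (the fallback value is an assumption, not shown in the diff), the continuation might look like:

    try:
        from torch.optim.lr_scheduler import SAVE_STATE_WARNING
    except ImportError:
        # Assumed fallback, not visible in the truncated hunk: define a harmless
        # placeholder so later code can reference the symbol on pytorch versions
        # that do not expose this constant.
        SAVE_STATE_WARNING = ""
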