
Fix typos in docstrings, comments, and error messages#43949

Merged
stevhliu merged 5 commits into huggingface:main from
DimiChatzipavlis:fix-typos-src-comprehensive
Feb 12, 2026

Conversation

@DimiChatzipavlis

What does this PR do?

Fixes a collection of spelling errors found throughout src/transformers in docstrings, comments, and user-facing error messages.

Modifications

Corrected the following typos across multiple files in src/transformers:

  • subsquent -> subsequent (Standardized across 30+ occurrences)
  • garantees -> guarantees
  • alignmenet -> alignment
  • proces -> process
  • necesary -> necessary
  • interupted -> interrupted
  • registed -> registered
  • checkpoing -> checkpoint
  • arugment -> argument
  • separte -> separate
  • compileable -> compilable
  • continous -> continuous
  • deprection -> deprecation
  • avaialble -> available

Context-Sensitive Fixes:

  • Fixed ot -> to (e.g., "unable ot load")
  • Fixed ot -> not (e.g., "is ot None")
  • Fixed corrent -> current
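
Several of these misspellings are substrings of legitimate identifiers (e.g. "proces" inside "preprocess"), so whole-word matching matters when hunting them. A minimal sketch of that search — the typo subset and paths are illustrative, not the tooling actually used for this PR:

```shell
#!/bin/sh
# Hypothetical helper, not the PR's actual tooling: list whole-word
# occurrences of known misspellings under src/transformers, so that
# identifiers that merely contain the substring are never flagged.
for typo in subsquent garantees compileable continous avaialble; do
    # -r recurse, -n print line numbers, -w match whole words only
    grep -rnw --include='*.py' -e "$typo" src/transformers
done
```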

Verification

  • Manual Audit: Verified that no variable names, function signatures, or public API symbols were modified.
  • Scope: Changes are strictly limited to string literals (docstrings/messages) and comments.
  • Logic Check: git diff -G "ot" confirmed that no logic involving the substring "ot" was touched.
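
The `-G` flag is what makes this audit cheap: `git diff -G<regex>` shows only hunks whose added or removed lines match the regex, so each surviving hunk can be eyeballed to confirm it touches only a comment or string literal. A sketch of the idea, with illustrative ref names:

```shell
#!/bin/sh
# Sketch of the audit described above; the branch names are illustrative.
# Any hunk printed here contains a changed line matching the regex and
# should be inspected to confirm it only touches comments or strings.
git diff -G 'ot' main...fix-typos-src-comprehensive -- 'src/transformers/*.py'
```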

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@ArthurZucker
@amyeroberts


Copilot AI left a comment

Pull request overview

This PR cleans up spelling/grammar issues across src/transformers in docstrings, comments, and a few user-facing strings, improving readability and professionalism without changing runtime behavior.

Changes:

  • Corrects common typos (e.g., “subsquent” → “subsequent”, “compileable” → “compilable”, “continous” → “continuous”) across many modules.
  • Fixes several comment/docstring wording issues in core utilities (trainer, modeling/utils, loading, generation, and testing helpers).
  • Applies consistent typo fixes across many vision/multimodal model prepare_inputs_for_generation comments.

Reviewed changes

Copilot reviewed 45 out of 45 changed files in this pull request and generated 4 comments.

Show a summary per file
File Description
src/transformers/trainer_seq2seq.py Fix typo in generation-config comment.
src/transformers/testing_utils.py Fix docstring/comment typos in test utilities.
src/transformers/processing_utils.py Fix typo in a comment related to a test checkpoint.
src/transformers/models/vipllava/modeling_vipllava.py Fix “subsquent” typo in generation comment.
src/transformers/models/video_llava/modeling_video_llava.py Fix “subsquent” typo in generation comment.
src/transformers/models/perception_lm/modular_perception_lm.py Fix “subsquent” typo in generation comment (modular source).
src/transformers/models/perception_lm/modeling_perception_lm.py Same typo fix in auto-generated modeling file.
src/transformers/models/paligemma/modeling_paligemma.py Fix “subsquent” typo in generation comment.
src/transformers/models/ovis2/modeling_ovis2.py Fix “subsquent” typo in generation comment.
src/transformers/models/mistral3/modeling_mistral3.py Fix “subsquent” typo in generation comment.
src/transformers/models/llava_onevision/modular_llava_onevision.py Fix “subsquent” typo in generation comment (modular source).
src/transformers/models/llava_onevision/modeling_llava_onevision.py Same typo fix in auto-generated modeling file.
src/transformers/models/llava_next_video/modular_llava_next_video.py Fix “subsquent” typo in generation comment (modular source).
src/transformers/models/llava_next_video/modeling_llava_next_video.py Same typo fix in auto-generated modeling file.
src/transformers/models/llava_next/modeling_llava_next.py Fix “subsquent” typo in generation comment.
src/transformers/models/llava/modeling_llava.py Fix “subsquent” typo in generation comment.
src/transformers/models/llama4/modeling_llama4.py Fix “subsquent” typo in generation comment.
src/transformers/models/lighton_ocr/modeling_lighton_ocr.py Fix “subsquent” typo in generation comment.
src/transformers/models/lfm2_vl/modeling_lfm2_vl.py Fix “subsquent” typo in generation comment.
src/transformers/models/kosmos2_5/modeling_kosmos2_5.py Fix “subsquent” typo in caching comment.
src/transformers/models/kosmos2/modeling_kosmos2.py Fix “subsquent” typo in generation comment.
src/transformers/models/janus/modular_janus.py Fix “subsquent” typo in generation comment (modular source).
src/transformers/models/janus/modeling_janus.py Same typo fix in auto-generated modeling file.
src/transformers/models/internvl/modeling_internvl.py Fix “subsquent” typo in generation comment.
src/transformers/models/got_ocr2/modeling_got_ocr2.py Fix “subsquent” typo in generation comment.
src/transformers/models/gemma3/modular_gemma3.py Fix “subsquent” typo in generation comment (modular source).
src/transformers/models/gemma3/modeling_gemma3.py Same typo fix in auto-generated modeling file.
src/transformers/models/florence2/modeling_florence2.py Fix “subsquent” typo in generation comment.
src/transformers/models/fast_vlm/modeling_fast_vlm.py Fix “subsquent” typo in generation comment.
src/transformers/models/deepseek_vl_hybrid/modular_deepseek_vl_hybrid.py Fix “subsquent” typo in generation comment (modular source).
src/transformers/models/deepseek_vl_hybrid/modeling_deepseek_vl_hybrid.py Same typo fix in auto-generated modeling file.
src/transformers/models/deepseek_vl/modeling_deepseek_vl.py Fix “subsquent” typo in generation comment.
src/transformers/models/cohere2_vision/modeling_cohere2_vision.py Fix “subsquent” typo in generation comment.
src/transformers/models/chameleon/modeling_chameleon.py Fix “subsquent” typo in generation comment.
src/transformers/models/aya_vision/modeling_aya_vision.py Fix “subsquent” typo in generation comment.
src/transformers/models/auto/auto_factory.py Fix typo in a public API docstring (“Whether ot not” → “Whether or not”).
src/transformers/models/aria/modular_aria.py Fix “subsquent” typo in generation comment (modular source).
src/transformers/models/aria/modeling_aria.py Same typo fix in auto-generated modeling file.
src/transformers/modeling_utils.py Fix typos in comments/docstrings for loading/saving and tied-weights logic.
src/transformers/modeling_rope_utils.py Fix typo in RoPE docstring (“avaialble” → “available”).
src/transformers/modeling_layers.py Fix typo in comment (“deprection” → “deprecation”).
src/transformers/modeling_flash_attention_utils.py Fix typo in warning string (“continous” → “continuous”).
src/transformers/generation/configuration_utils.py Fix typo in CompileConfig docstring (“compileable” → “compilable”).
src/transformers/core_model_loading.py Fix multiple typos in comments/docstrings related to conversion/loading.
src/transformers/cache_utils.py Fix typo in property docstring (“compileable” → “compilable”).

Comment on lines 3777 to 3778
# Otherwise, we separate the representations of every elements along an outer dimension by new lines (after a `,`).
# The representatioin each element is obtained by calling this function recursively with current `indent_level`.

Copilot AI Feb 12, 2026

There is still a typo in this comment (“representatioin”), and the sentence reads a bit ungrammatical (“every elements”). Consider fixing to “representation” and adjusting wording (e.g., “all elements” / “each element”).

if (
"tokenizer" in sub_processor_type
): # This is only necessary for the checkpoing in test_procesing_mistral3.py which has no config.json and
): # This is only necessary for the checkpoint in test_procesing_mistral3.py which has no config.json and

Copilot AI Feb 12, 2026

This comment references test_procesing_mistral3.py, but there is no such test file in the repository (the Mistral3 tests appear to live in tests/models/mistral3/test_modeling_mistral3.py). Please update the referenced filename so the comment stays accurate.

raising unneeded warnings/errors. This is performed in-place.
"""
# Old checkpoints may have keys for rotary_emb.inv_freq forach layer, however we moved this buffer to the main model
# Old checkpoints may have keys for rotary_emb.inv_freq foreach layer, however we moved this buffer to the main model

Copilot AI Feb 12, 2026

Minor wording/grammar: “foreach layer” is likely intended to be “for each layer”.

Suggested change
# Old checkpoints may have keys for rotary_emb.inv_freq foreach layer, however we moved this buffer to the main model
# Old checkpoints may have keys for rotary_emb.inv_freq for each layer, however we moved this buffer to the main model

t = torch.tensor(t)

# Simply make the processing below simpler (not to hande both case)
# Simply make the processing below simpler (not to handle both case)

Copilot AI Feb 12, 2026

The comment still has a small grammar issue: “both case” should be plural (“both cases”).

Copilot AI left a comment

Pull request overview

Copilot reviewed 45 out of 45 changed files in this pull request and generated 1 comment.

if (
"tokenizer" in sub_processor_type
): # This is only necessary for the checkpoing in test_procesing_mistral3.py which has no config.json and
): # This is only necessary for the checkpoint in test_modeling_mistral3.py which has no config.json and

Copilot AI Feb 12, 2026

The comment originally said "test_procesing_mistral3.py" (with a typo), but changed it to "test_modeling_mistral3.py". However, the actual test file is named "test_processing_mistral3.py" (with correct spelling). While the original typo "procesing" was fixed, the reference appears to have been changed to the wrong test file. The comment should likely reference "test_processing_mistral3.py" instead of "test_modeling_mistral3.py".

Copilot AI left a comment

Pull request overview

Copilot reviewed 45 out of 45 changed files in this pull request and generated no new comments.

Member

@stevhliu stevhliu left a comment

thanks for the fixes!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@github-actions
Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: aria, auto, aya_vision, chameleon, cohere2_vision, deepseek_vl, deepseek_vl_hybrid, fast_vlm, florence2, gemma3, got_ocr2, internvl, janus, kosmos2, kosmos2_5, lfm2_vl

@stevhliu stevhliu merged commit c8f112d into huggingface:main Feb 12, 2026
25 checks passed