Fix typos in docstrings, comments, and error messages #43949
stevhliu merged 5 commits into huggingface:main
Conversation
Pull request overview
This PR cleans up spelling/grammar issues across src/transformers in docstrings, comments, and a few user-facing strings, improving readability and professionalism without changing runtime behavior.
Changes:
- Corrects common typos (e.g., “subsquent” → “subsequent”, “compileable” → “compilable”, “continous” → “continuous”) across many modules.
- Fixes several comment/docstring wording issues in core utilities (trainer, modeling/utils, loading, generation, and testing helpers).
- Applies consistent typo fixes across many vision/multimodal model prepare_inputs_for_generation comments.
Reviewed changes
Copilot reviewed 45 out of 45 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| src/transformers/trainer_seq2seq.py | Fix typo in generation-config comment. |
| src/transformers/testing_utils.py | Fix docstring/comment typos in test utilities. |
| src/transformers/processing_utils.py | Fix typo in a comment related to a test checkpoint. |
| src/transformers/models/vipllava/modeling_vipllava.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/video_llava/modeling_video_llava.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/perception_lm/modular_perception_lm.py | Fix “subsquent” typo in generation comment (modular source). |
| src/transformers/models/perception_lm/modeling_perception_lm.py | Same typo fix in auto-generated modeling file. |
| src/transformers/models/paligemma/modeling_paligemma.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/ovis2/modeling_ovis2.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/mistral3/modeling_mistral3.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/llava_onevision/modular_llava_onevision.py | Fix “subsquent” typo in generation comment (modular source). |
| src/transformers/models/llava_onevision/modeling_llava_onevision.py | Same typo fix in auto-generated modeling file. |
| src/transformers/models/llava_next_video/modular_llava_next_video.py | Fix “subsquent” typo in generation comment (modular source). |
| src/transformers/models/llava_next_video/modeling_llava_next_video.py | Same typo fix in auto-generated modeling file. |
| src/transformers/models/llava_next/modeling_llava_next.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/llava/modeling_llava.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/llama4/modeling_llama4.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/lighton_ocr/modeling_lighton_ocr.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/lfm2_vl/modeling_lfm2_vl.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/kosmos2_5/modeling_kosmos2_5.py | Fix “subsquent” typo in caching comment. |
| src/transformers/models/kosmos2/modeling_kosmos2.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/janus/modular_janus.py | Fix “subsquent” typo in generation comment (modular source). |
| src/transformers/models/janus/modeling_janus.py | Same typo fix in auto-generated modeling file. |
| src/transformers/models/internvl/modeling_internvl.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/got_ocr2/modeling_got_ocr2.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/gemma3/modular_gemma3.py | Fix “subsquent” typo in generation comment (modular source). |
| src/transformers/models/gemma3/modeling_gemma3.py | Same typo fix in auto-generated modeling file. |
| src/transformers/models/florence2/modeling_florence2.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/fast_vlm/modeling_fast_vlm.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/deepseek_vl_hybrid/modular_deepseek_vl_hybrid.py | Fix “subsquent” typo in generation comment (modular source). |
| src/transformers/models/deepseek_vl_hybrid/modeling_deepseek_vl_hybrid.py | Same typo fix in auto-generated modeling file. |
| src/transformers/models/deepseek_vl/modeling_deepseek_vl.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/cohere2_vision/modeling_cohere2_vision.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/chameleon/modeling_chameleon.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/aya_vision/modeling_aya_vision.py | Fix “subsquent” typo in generation comment. |
| src/transformers/models/auto/auto_factory.py | Fix typo in a public API docstring (“Whether ot not” → “Whether or not”). |
| src/transformers/models/aria/modular_aria.py | Fix “subsquent” typo in generation comment (modular source). |
| src/transformers/models/aria/modeling_aria.py | Same typo fix in auto-generated modeling file. |
| src/transformers/modeling_utils.py | Fix typos in comments/docstrings for loading/saving and tied-weights logic. |
| src/transformers/modeling_rope_utils.py | Fix typo in RoPE docstring (“avaialble” → “available”). |
| src/transformers/modeling_layers.py | Fix typo in comment (“deprection” → “deprecation”). |
| src/transformers/modeling_flash_attention_utils.py | Fix typo in warning string (“continous” → “continuous”). |
| src/transformers/generation/configuration_utils.py | Fix typo in CompileConfig docstring (“compileable” → “compilable”). |
| src/transformers/core_model_loading.py | Fix multiple typos in comments/docstrings related to conversion/loading. |
| src/transformers/cache_utils.py | Fix typo in property docstring (“compileable” → “compilable”). |
src/transformers/testing_utils.py
Outdated
# Otherwise, we separate the representations of every elements along an outer dimension by new lines (after a `,`).
# The representatioin each element is obtained by calling this function recursively with current `indent_level`.
There is still a typo in this comment (“representatioin”), and the sentence reads a bit ungrammatically (“every elements”). Consider fixing to “representation” and adjusting the wording (e.g., “all elements” / “each element”).
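For context, the comment under discussion describes a recursive pretty-printer: elements along the outer dimension are separated by newlines after a `,`, and each element's representation comes from a recursive call with the current indent level. A hypothetical minimal sketch of that idea (illustrative only, not the actual transformers implementation):

```python
# Hypothetical sketch of the formatting logic the comment describes
# (function name and details are illustrative, not transformers code).
def repr_nested(value, indent_level=0):
    indent = "    " * indent_level
    if not isinstance(value, list):
        # Leaf: just indent the element's repr.
        return f"{indent}{value!r}"
    # Each element's representation is obtained recursively; elements
    # along the outer dimension are separated by newlines after a ",".
    inner = ",\n".join(repr_nested(v, indent_level + 1) for v in value)
    return f"{indent}[\n{inner}\n{indent}]"

print(repr_nested([[1, 2], [3, 4]]))
```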
src/transformers/processing_utils.py
Outdated
if (
    "tokenizer" in sub_processor_type
): # This is only necessary for the checkpoing in test_procesing_mistral3.py which has no config.json and
): # This is only necessary for the checkpoint in test_procesing_mistral3.py which has no config.json and
This comment references test_procesing_mistral3.py, but there is no such test file in the repository (the Mistral3 tests appear to live in tests/models/mistral3/test_modeling_mistral3.py). Please update the referenced filename so the comment stays accurate.
src/transformers/modeling_utils.py
Outdated
raising unneeded warnings/errors. This is performed in-place.
"""
# Old checkpoints may have keys for rotary_emb.inv_freq forach layer, however we moved this buffer to the main model
# Old checkpoints may have keys for rotary_emb.inv_freq foreach layer, however we moved this buffer to the main model
Minor wording/grammar: “foreach layer” is likely intended to be “for each layer”.
# Old checkpoints may have keys for rotary_emb.inv_freq foreach layer, however we moved this buffer to the main model
# Old checkpoints may have keys for rotary_emb.inv_freq for each layer, however we moved this buffer to the main model
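The comment being fixed describes a real migration: old checkpoints stored a `rotary_emb.inv_freq` buffer per layer, while newer models keep it once on the main model, so stale per-layer keys are dropped before loading. A hedged sketch of that cleanup (function and key names are illustrative, not the actual transformers code):

```python
# Illustrative sketch only: drop legacy per-layer rotary inv_freq keys
# from a checkpoint state dict so they don't trigger loading warnings.
def drop_legacy_inv_freq(state_dict):
    return {
        key: value
        for key, value in state_dict.items()
        if not key.endswith("rotary_emb.inv_freq")
    }

old_checkpoint = {
    "model.layers.0.rotary_emb.inv_freq": [0.1],
    "model.layers.1.rotary_emb.inv_freq": [0.1],
    "model.layers.0.mlp.weight": [1.0],
}
# Only the non-legacy key survives the filter.
print(sorted(drop_legacy_inv_freq(old_checkpoint)))
```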
src/transformers/testing_utils.py
Outdated
t = torch.tensor(t)

# Simply make the processing below simpler (not to hande both case)
# Simply make the processing below simpler (not to handle both case)
The comment still has a small grammar issue: “both case” should be plural (“both cases”).
src/transformers/processing_utils.py
Outdated
if (
    "tokenizer" in sub_processor_type
): # This is only necessary for the checkpoing in test_procesing_mistral3.py which has no config.json and
): # This is only necessary for the checkpoint in test_modeling_mistral3.py which has no config.json and
The comment originally said "test_procesing_mistral3.py" (with a typo), but changed it to "test_modeling_mistral3.py". However, the actual test file is named "test_processing_mistral3.py" (with correct spelling). While the original typo "procesing" was fixed, the reference appears to have been changed to the wrong test file. The comment should likely reference "test_processing_mistral3.py" instead of "test_modeling_mistral3.py".
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
[For maintainers] Suggested jobs to run (before merge) run-slow: aria, auto, aya_vision, chameleon, cohere2_vision, deepseek_vl, deepseek_vl_hybrid, fast_vlm, florence2, gemma3, got_ocr2, internvl, janus, kosmos2, kosmos2_5, lfm2_vl
What does this PR do?

Fixes a collection of spelling errors found throughout src/transformers in docstrings, comments, and user-facing error messages.

Modifications
Corrected the following typos across multiple files in src/transformers:

- subsquent -> subsequent (standardized across 30+ occurrences)
- garantees -> guarantees
- alignmenet -> alignment
- proces -> process
- necesary -> necessary
- interupted -> interrupted
- registed -> registered
- checkpoing -> checkpoint
- arugment -> argument
- separte -> separate
- compileable -> compilable
- continous -> continuous
- deprection -> deprecation
- avaialble -> available

Context-sensitive fixes:

- ot -> to (e.g., "unable ot load")
- ot -> not (e.g., "is ot None")
- corrent -> current

Verification
`git diff -G "ot"` confirmed that no logic involving the substring "ot" was touched.

Before submitting
Pull Request section?
to it if that's the case.
documentation guidelines, and
here are tips on formatting docstrings.
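The `git diff -G` check used in the Verification section can be demonstrated in miniature: `-G<regex>` restricts the diff to files whose added or removed lines match the regex, which is why an inspection of its output can confirm a typo sweep touched only comment text. A self-contained sketch using a hypothetical throwaway repository (file and commit names are illustrative):

```python
import os
import subprocess
import tempfile

# Create a scratch repo (hypothetical, just for the demonstration).
repo = tempfile.mkdtemp()

def git(*args):
    # Run git in the scratch repo with a throwaway identity.
    return subprocess.run(
        ["git", "-C", repo, "-c", "user.email=a@b", "-c", "user.name=a", *args],
        check=True, capture_output=True, text=True,
    ).stdout

git("init")
with open(os.path.join(repo, "a.py"), "w") as f:
    f.write("# subsquent step\nx = 1\n")  # intentional typo, as in old code
git("add", "a.py")
git("commit", "-m", "add file with typo")

# Apply the typo fix in the working tree.
with open(os.path.join(repo, "a.py"), "w") as f:
    f.write("# subsequent step\nx = 1\n")

# Only files whose changed lines match the pattern appear in the diff:
print(git("diff", "-G", "subsequent", "--stat"))
# A pattern matching no changed line yields an empty diff:
print(repr(git("diff", "-G", "zzz_nomatch", "--stat")))
```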
Who can review?
@ArthurZucker
@amyeroberts