Issues: huggingface/transformers
Whisper pipeline returns empty segment for each processed audio chunk [bug] #36602, opened Mar 7, 2025 by as-suvorov (1 of 4 tasks)
lm_head parameters missing from named_parameters() in Qwen2.5-VL-3B-Instruct model [bug] #36598, opened Mar 7, 2025 by Buhua-Liu (2 of 4 tasks)
The number of safetensors files is different when using CPU and CUDA [bug] #36595, opened Mar 6, 2025 by makcedward (2 of 4 tasks)
Error when changing vocab size when fine-tuning llama-vision [bug] #36590, opened Mar 6, 2025 by Ssukriti (4 tasks done)
Inconsistent Outputs When Using Flash Attention 2 and SDPA Attention with Attention Mask [bug] #36585, opened Mar 6, 2025 by tartarleft (2 of 4 tasks)
Significant Increase in Computation Time When Using Attention Mask in SDPA Attention [bug] #36584, opened Mar 6, 2025 by tartarleft (3 of 4 tasks)
paligemma2-3B-mix does not use the GPU in version 4.49.0 and is broken in 4.50.0.dev [bug, Cache] #36575, opened Mar 6, 2025 by hanggun (4 tasks)
After the tokenizers upgrade, the length of the token does not correspond to the length of the model [bug] #36574, opened Mar 6, 2025 by CurtainRight (2 of 4 tasks)
In the latest version of transformers (4.49.0), a matrix transformation error is encountered [bug] #36571, opened Mar 6, 2025 by idebroy (2 of 4 tasks)
Add support for StableAdamW optimizer in Trainer [Feature request] #36564, opened Mar 5, 2025 by capemox
Allow video objects (np.ndarray, etc.) in apply_chat_template, not just paths or URLs [Chat Template, Feature request, VLM] #36560, opened Mar 5, 2025 by FredrikNoren
Error during processing: MllamaForCausalLM does not support Flash Attention 2.0 yet [Feature request] #36557, opened Mar 5, 2025 by sangramddreg
disable_compile not honored as a kwarg in generate [bug] #36544, opened Mar 4, 2025 by pcuenca (1 of 4 tasks)
Bug when computing positional IDs from embeddings [bug] #36537, opened Mar 4, 2025 by SabrinaRichter (4 tasks)
Model.generate with use_cache=True generates different results than use_cache=False [bug] #36536, opened Mar 4, 2025 by edenlum (2 of 4 tasks)
Warning related to torch.tensor() usage in transformers.models.encodec.modeling_encodec.py (Version 4.47.0) [bug] #36533, opened Mar 4, 2025 by GiorgiaAuroraAdorni (4 tasks)