
Issues: pytorch/pytorch

[v.2.4.0] Release Tracker
#128436 opened Jun 11, 2024 by atalman



Issues list

Outdated ncclResult code
#128756 opened Jun 14, 2024 by myungjin
DSD for TorchTune LoRA
Labels: triaged
#128745 opened Jun 14, 2024 by weifengpy
Flaky test page should include retry runs
#128735 opened Jun 14, 2024 by zou3519
partitioner doesn't appear to respect SAC region
Labels: module: aotdispatch, module: pt2-dispatcher, oncall: pt2, triaged
#128730 opened Jun 14, 2024 by bdhirsh
[Profiler][inductor] put kwinputs in chrome traces
Labels: oncall: profiler
#128728 opened Jun 14, 2024 by davidberard98
Add RMS Norm layer
#128713 opened Jun 14, 2024 by PraNavKumAr01
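
The entry above requests a built-in RMS Norm layer. As a hedged illustration of what such a layer computes (Zhang & Sennrich, 2019), and not the implementation that would ultimately land in PyTorch, a minimal sketch looks like this:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Illustrative RMS normalization: x / rms(x) * weight; no mean subtraction, no bias."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learnable per-feature scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 1 / sqrt(mean(x^2) + eps) over the last dimension
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

x = torch.randn(2, 4, 8)
print(RMSNorm(8)(x).shape)  # torch.Size([2, 4, 8])
```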
Add Swiglu activation function
#128712 opened Jun 14, 2024 by PraNavKumAr01
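
Likewise, a sketch of what the requested SwiGLU activation computes, assuming the gated feed-forward form SiLU(xW) * (xV) from Shazeer (2020); the projection names below are illustrative only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """Illustrative SwiGLU gate: SiLU(x W) elementwise-multiplied with (x V)."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w = nn.Linear(dim, hidden_dim, bias=False)  # gated branch
        self.v = nn.Linear(dim, hidden_dim, bias=False)  # linear branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU (a.k.a. Swish) gates the linear projection elementwise.
        return F.silu(self.w(x)) * self.v(x)

x = torch.randn(2, 16)
print(SwiGLU(16, 64)(x).shape)  # torch.Size([2, 64])
```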
does FSDP support AMSP (a new DP shard strategy)
Labels: oncall: distributed
#128706 opened Jun 14, 2024 by guoyejun
NotImplementedError: Operator aten.native_layer_norm_backward.default does not have a sharding strategy registered.
Labels: oncall: distributed, triaged
#128699 opened Jun 14, 2024 by Xingzhi107
ONNX docs missing info about how to remove custom domains
Labels: module: docs, module: onnx, triaged
#128698 opened Jun 14, 2024 by Jerry-Master
[DDP] DDP bucket memory release during fwd step
Labels: oncall: distributed, triaged
#128696 opened Jun 14, 2024 by lichenlu
Segmentation fault (core dumped) in torch._weight_norm_interface
Labels: module: crash, module: edge cases, module: empty tensor, triaged
#128695 opened Jun 14, 2024 by LongZE666
Segmentation fault (core dumped) in torch._fused_moving_avg_obs_fq_helper
Labels: module: crash, module: edge cases, triaged
#128694 opened Jun 14, 2024 by LongZE666
Segmentation fault (core dumped) in torch.fused_moving_avg_obs_fake_quant
Labels: module: crash, module: edge cases, oncall: quantization
#128693 opened Jun 14, 2024 by LongZE666
Segmentation fault (core dumped) in torch._weight_int4pack_mm
Labels: module: crash, module: edge cases, module: error checking, triaged
#128692 opened Jun 14, 2024 by LongZE666
Segmentation fault (core dumped) in torch._remove_batch_dim
Labels: module: crash, module: edge cases, triaged
#128691 opened Jun 14, 2024 by LongZE666
torch._transform_bias_rescale_qkv:FPE
Labels: module: crash, module: edge cases, oncall: transformer/mha, triaged
#128690 opened Jun 14, 2024 by LongZE666
Torchscript Error : Unsupported TypeMeta in ATen: float*
Labels: oncall: jit
#128689 opened Jun 14, 2024 by XiaoTongDeng
heartbeatMonitor error after run script multiple times
Labels: oncall: distributed, triaged
#128680 opened Jun 14, 2024 by garfield1997
[RFC][pipelining] PipelineStage should let user control send/recv endpoints
Labels: module: pipelining, oncall: distributed
#128665 opened Jun 14, 2024 by wconstab
2.6.0 Released a second time on the same version breaking production customers
Labels: oncall: releng
#128653 opened Jun 13, 2024 by skier233
Dynamo: contextlib.contextmanager doesn't work
Labels: module: dynamo, oncall: pt2, triaged
#128651 opened Jun 13, 2024 by zou3519
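
The Dynamo entry above concerns generator-based context managers under torch.compile. A minimal, hypothetical sketch of the pattern the title refers to (not the reported repro; the helper name is made up), with no claim about which versions trace it successfully:

```python
import contextlib
import torch

@contextlib.contextmanager
def scaling(factor):
    # Hypothetical helper: yields a scale factor for use inside the compiled region.
    yield factor

@torch.compile
def fn(x):
    # Whether Dynamo can trace through a contextlib.contextmanager like this
    # is what the issue is about; eager execution of the same code works.
    with scaling(2.0) as s:
        return x * s

print(fn(torch.ones(3)))
```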