Issues: pytorch/xla
#7215: Incomplete Checkpoints for Non-Sharded Parameters During SPMD Training in PyTorch XLA (opened Jun 7, 2024 by huzama)
#7198: In-place operations on a DLPack-aliased XLA tensor do not propagate [xla:gpu] (opened Jun 5, 2024 by ysiraichi)
#7191: How do I know which PyTorch parameter corresponds to which parameter in the HLO IR? (opened Jun 4, 2024 by yao-jz)
#7190: Select a model to train and run on TPUs [advanced, docathon-h1-2024] (opened Jun 4, 2024 by duncantech)
#7185: Try running inference on an ARM CPU [advanced, docathon-h1-2024] (opened Jun 4, 2024 by duncantech)
#7183: Create a distributed and single-device example [advanced, docathon-h1-2024] (opened Jun 4, 2024 by duncantech)
#7178: Run and suggest improvements for GPU setup [medium, docathon-h1-2024] (opened Jun 4, 2024 by duncantech)
#7177: Why not register low-precision autocast for scaled dot product attention? (opened Jun 4, 2024 by lingzhi98)
#7169: Persistent cache does not recompile when XLA_IR_DEBUG and XLA_HLO_DEBUG change (opened Jun 3, 2024 by JackCaoG)
#7161: A large number of tensors (>8000) in the graph triggers an SPMD sharding error (opened May 31, 2024 by mars1248)
#7160: torch.matmul output buffer dtype is not respected when the output dtype differs from the input dtype (opened May 30, 2024 by HahTK)
#7139: Setting frontend attributes for CC-op replica groups in the HLO (opened May 29, 2024 by amithrm)