Direction for extending MXGemmKernelChoice/MXLinearRecipeName for XPU support #3457
Replies: 4 comments
Hi, thanks for your interest in this! A few questions for you:
cc @vkuzo @drisspg, who may be interested in this topic as well.
@slabhs-aws we actually just deprecated MXGemmKernelChoice. KernelPreference should be device agnostic, so if this level of control is enough for you, then you could make XPU work for KernelPreference.AUTO|TORCH|EMULATED.
Let us know if this works for what you have in mind.
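As a rough illustration of the device-agnostic direction described above, here is a minimal sketch of a kernel preference enum with a per-device registry. All names here (`resolve_kernel`, the kernel strings, the registry) are hypothetical, not the actual torchao API; only the AUTO | TORCH | EMULATED option names come from the comment above.

```python
from enum import Enum, auto

class KernelPreference(Enum):
    """Hypothetical device-agnostic preference (mirrors AUTO | TORCH |
    EMULATED from the discussion; not the actual torchao definition)."""
    AUTO = auto()
    TORCH = auto()
    EMULATED = auto()

# Hypothetical per-device kernel registry: AUTO resolves to the best
# kernel registered for a device type, so onboarding XPU becomes a
# matter of registering kernels rather than adding XPU-specific enums.
_BEST_KERNEL = {
    "cuda": "cutlass_mx_gemm",
    "xpu": "xpu_mx_gemm",
}

def resolve_kernel(pref: KernelPreference, device_type: str) -> str:
    """Map a device-agnostic preference to a concrete kernel name."""
    if pref is KernelPreference.TORCH:
        return "torch_mx_gemm"       # reference PyTorch implementation
    if pref is KernelPreference.EMULATED:
        return "emulated_mx_gemm"    # numerics-only emulation path
    # AUTO: use the best registered kernel, fall back to emulation
    return _BEST_KERNEL.get(device_type, "emulated_mx_gemm")

print(resolve_kernel(KernelPreference.AUTO, "xpu"))  # -> xpu_mx_gemm
```

The point of the sketch is that the user-facing enum stays backend-neutral; only the registry grows when a new device type is added.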
**Edit:** Technically not an Intel XPU, but backend extensions could be generic.
@vkuzo: Yes, we are trying to avoid polluting the namespace with XPU-specific names. Do you have an example of how mapping an XPU into
@HahTK, would tensors on your hardware have something like
Hi team,
TorchAO today provides 3 MXGemm kernel paths:
For users running MX workloads on XPUs, it’s unclear how future MX kernel selection is intended to evolve.
Questions:
1. Should we extend MXGemmKernelChoice, or is there a different path to onboard new XPU architectures?
2. Should MXLinearRecipeName be extended similarly?

This context affects how upstream frameworks (e.g., torchtitan) gate MX execution.
Cross-referencing with the related torchtitan discussion: pytorch/torchtitan#2120
Thank you!