@mgouicem, I am going through the document at https://github.com/mgouicem/oneDNN/tree/mgouicem/rfcs/brgemm/rfcs/20240326-block-level-api to understand more. It contains a note that says: "For LLM optimizations and "pure" matrix multiplication, batch-reduction is typically not necessary unless A and B are large and require K-blocking. This can be observed in current IPEX implementations using libxsmm for FlashAttention, Multi-Head Attention and Weights-only-Quantization Matmul."
I wanted to understand:
1. By default, PyTorch uses MKL for matmul computation. Does IPEX use a heuristic to switch between MKL and libxsmm, or does it use only libxsmm for matmuls?
2. Since oneDNN is not used for matmuls in PyTorch on Ice Lake, does brgemm here refer to an implementation inside MKL?
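For context on what the quoted note means by batch-reduction and K-blocking: a batch-reduced GEMM (brgemm) accumulates the products of several (A_i, B_i) tile pairs into a single C tile, which is exactly what K-blocking produces when a large K dimension is split into smaller blocks. Below is a minimal, illustrative pure-Python sketch of that accumulation pattern; it is not the oneDNN or libxsmm API, and the function name `brgemm_ref` is hypothetical.

```python
def brgemm_ref(a_blocks, b_blocks, m, n, k_blk):
    """Reference batch-reduced GEMM: C = sum_i A_i @ B_i.

    Each A_i is an m x k_blk tile and each B_i is a k_blk x n tile,
    e.g. the K-blocks of one large matmul. Partial products from every
    block are accumulated into the same C tile (hypothetical sketch,
    not the oneDNN ukernel interface).
    """
    c = [[0.0] * n for _ in range(m)]
    for a, b in zip(a_blocks, b_blocks):
        for i in range(m):
            for j in range(n):
                acc = 0.0
                for p in range(k_blk):
                    acc += a[i][p] * b[p][j]
                c[i][j] += acc  # reduce over the batch into C
    return c
```

When A and B are small enough that K fits in one block (the note's "pure" matmul case), the batch has a single entry and the reduction step buys nothing, which is why batch-reduction is "typically not necessary" there.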
Adding @Xia-Weiwen for the IPEX part. I believe libxsmm ukernels are used only for some complex fused patterns. There is ongoing work with the IPEX team to migrate to the new oneDNN brgemm APIs.
Not sure I understand the question. @jgong can better address the details of PyTorch internals.