Shark FE : Support bfloat16/int8 opt/llama2-7b Fx and ONNX model
ONNX to LinAlg Lowering
PyTorch Issues:
[#2832] Unable to export int8/int4 ONNX model for OPT/LLaMA2 (upstream PyTorch issue 119621)
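A minimal sketch of the path #2832 concerns, using a tiny stand-in model rather than OPT/LLaMA2: dynamic int8 quantization of the Linear layers succeeds eagerly, and the subsequent `torch.onnx.export` call on the quantized module is the step reported to fail.

```python
import torch

# Tiny stand-in model (assumption: not the actual OPT/LLaMA2 repro from the issue).
class TinyLM(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(16, 16)

    def forward(self, x):
        return self.proj(x)

model = TinyLM().eval()

# Dynamic int8 quantization of the Linear layers.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The quantized model itself runs fine in eager mode...
out = qmodel(torch.randn(2, 16))
print(tuple(out.shape))

# ...it is the ONNX export of this quantized module that #2832 reports failing:
# torch.onnx.export(qmodel, torch.randn(2, 16), "tiny_int8.onnx")
```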
Issues with lowering to LinAlg:
[#2838] bfloat16 MLP model IREE issue (@PhaneeshB)
Mismatch:
[#430] int8 ResNet50 mismatch (@rsuderman) -- Rob investigated it and found that the actual classification is fine
[#431] bfloat16 conv2d/ResNet50 mismatch (@PhaneeshB)
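The idea behind the #430 finding can be sketched as follows: raw logits may differ numerically between the fp32 and quantized runs, yet the top-1 class is unchanged. Here a small perturbation stands in for int8 quantization error (an assumption for illustration; the real check would compare actual fp32 vs int8 model outputs).

```python
import torch

torch.manual_seed(0)

# Reference fp32 logits for two samples over three classes.
logits_fp32 = torch.tensor([[0.2, 3.1, 0.7],
                            [2.5, 0.1, 0.4]])

# Small perturbation simulating quantization error (assumed magnitude).
noise = 0.05 * torch.randn_like(logits_fp32)
logits_int8 = logits_fp32 + noise

# The raw values mismatch, but the argmax (predicted class) agrees.
max_abs_diff = (logits_int8 - logits_fp32).abs().max()
same_top1 = torch.equal(logits_fp32.argmax(dim=1), logits_int8.argmax(dim=1))
print(float(max_abs_diff) > 0, same_top1)  # logits differ, classes agree
```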
PyTorch to LinAlg Lowering
Fx Importer:
[#2843] Fx Importer does not support bfloat16 (@dan-garvey)