
float8 ops on cpu broken in inductor #117119

@vkuzo

Description


🐛 Describe the bug

If we compile code that uses float8 with device=cpu, such as the snippet below, Inductor codegen breaks:

import torch

def foo(x):
    x = x.to(torch.float8_e4m3fn)
    return x

foo = torch.compile(foo)
x = torch.randn(2, 2)
x = foo(x)
print(x)

logs: https://gist.github.com/vkuzo/ae434bfe8a48c083e74377c313263a90
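
For comparison, the same cast without torch.compile can confirm that the failure is specific to Inductor codegen. This is a minimal sketch, assuming the eager CPU path supports the float8_e4m3fn cast (which the fallback question below implies):

import torch

# Same cast with no torch.compile: if this succeeds, the breakage is in
# Inductor codegen rather than in eager float8 support on CPU.
x = torch.randn(2, 2)
y = x.to(torch.float8_e4m3fn)
print(y.dtype)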

Can we fall back to eager mode on CPU?
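
Until Inductor handles these dtypes on CPU, one possible stopgap (a sketch, not the fix this issue asks for) is to keep the float8 cast out of the compiled graph with torch.compiler.disable so that only that step runs in eager mode. The helper name cast_to_float8 below is illustrative, and whether this sidesteps all of the broken codegen may depend on what else in the compiled region touches the float8 tensor:

import torch

# Illustrative helper: torch.compiler.disable forces a graph break here,
# so the float8 cast runs in eager mode rather than through Inductor.
@torch.compiler.disable
def cast_to_float8(x):
    return x.to(torch.float8_e4m3fn)

@torch.compile
def foo(x):
    return cast_to_float8(x)

x = torch.randn(2, 2)
print(foo(x))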

Versions

>>> torch.__version__
'2.2.0a0+git967863d'

Metadata

Labels

oncall: cpu inductor (CPU Inductor issues for Intel team to triage)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Status

Done
