cuSPARSELt FP8 on H100 not working #214
Closed
I'm interested in running FP8 x FP8 sparse matmul on H100 GPUs. The example in cuSPARSELt/matmul is exactly what I'm looking for, but it does not seem to work for FP8. I am using CUDA 12.4 and cuSPARSELt version 6.2.
Running with `AB_DTYPE` set to FP16 or INT8 works fine: both produce correct results and have great performance. However, when I set `AB_DTYPE` to FP8, I get this error: `CUSPARSE API failed at line 207 with error: operation not supported (10)`.
Based on the cuSPARSELt release notes, this combination should be supported. Is it possible the check in the cuSPARSELt API hasn't been updated? Can anyone help resolve this issue?
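For context, the FP8 configuration I'm attempting looks roughly like this: a sketch based on the cuSPARSELt matmul sample, not the sample verbatim. The problem size, the FP16 output type for C, and the row-major/non-transpose layout are my own choices; FP8 may well have layout or alignment restrictions I'm missing.

```cpp
#include <cusparseLt.h>
#include <cstdio>

// Minimal status check, mirroring the style of the sample's error message.
#define CHECK_CUSPARSE(call)                                               \
    do {                                                                   \
        cusparseStatus_t s_ = (call);                                      \
        if (s_ != CUSPARSE_STATUS_SUCCESS) {                               \
            std::printf("CUSPARSE API failed at line %d with error: %d\n", \
                        __LINE__, static_cast<int>(s_));                   \
        }                                                                  \
    } while (0)

int main() {
    // Placeholder problem size; the real sample also allocates and
    // prunes/compresses device buffers, which is omitted here.
    int m = 128, n = 128, k = 128;
    unsigned alignment = 16;

    cusparseLtHandle_t handle;
    CHECK_CUSPARSE(cusparseLtInit(&handle));

    // A is the 2:4 structured-sparse operand, FP8 (E4M3).
    cusparseLtMatDescriptor_t matA, matB, matC;
    CHECK_CUSPARSE(cusparseLtStructuredDescriptorInit(
        &handle, &matA, m, k, /*ld=*/k, alignment,
        CUDA_R_8F_E4M3, CUSPARSE_ORDER_ROW,
        CUSPARSELT_SPARSITY_50_PERCENT));

    // B is dense FP8; C/D use FP16 output (an assumption on my part
    // about a valid output type for FP8 inputs).
    CHECK_CUSPARSE(cusparseLtDenseDescriptorInit(
        &handle, &matB, k, n, /*ld=*/n, alignment,
        CUDA_R_8F_E4M3, CUSPARSE_ORDER_ROW));
    CHECK_CUSPARSE(cusparseLtDenseDescriptorInit(
        &handle, &matC, m, n, /*ld=*/n, alignment,
        CUDA_R_16F, CUSPARSE_ORDER_ROW));

    // This descriptor init is where the "operation not supported (10)"
    // error appears for me when the input types above are FP8.
    cusparseLtMatmulDescriptor_t matmul;
    CHECK_CUSPARSE(cusparseLtMatmulDescriptorInit(
        &handle, &matmul,
        CUSPARSE_OPERATION_NON_TRANSPOSE, CUSPARSE_OPERATION_NON_TRANSPOSE,
        &matA, &matB, &matC, &matC, CUSPARSE_COMPUTE_32F));

    CHECK_CUSPARSE(cusparseLtDestroy(&handle));
    return 0;
}
```

Swapping `CUDA_R_8F_E4M3` for `CUDA_R_16F` (with the matching compute type) in the same setup runs without error, which is why I suspect the FP8 support check rather than my configuration.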