
cuSPARSELt FP8 on H100 not working #214

@jwfromm

Description


I'm interested in running FP8 x FP8 sparse matmul on H100 GPUs. The example in cuSPARSELt/matmul is exactly what I'm looking for, but it does not seem to work for FP8. I am using CUDA 12.4 and cuSPARSELt version 6.2.

Running with AB_DTYPE FP16 or AB_DTYPE INT8 works fine, produces correct results, and has great performance. However, when I use AB_DTYPE FP8, I get this error: CUSPARSE API failed at line 207 with error: operation not supported (10).

Based on the cuSPARSELt release notes, this should be supported. Is it possible the check in the cuSPARSELt API hasn't been updated? Can anyone help resolve this issue?
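For reference, here is a sketch of the descriptor setup I believe the FP8 path needs. This is not a verified reproduction: the type names follow the cuSPARSELt headers, but the specific E4M3 input / FP16 output / FP32 compute combination (and the N/T operation pair) is my reading of the documented FP8 constraints, so any of these could be the mismatch triggering error 10, which generally indicates an unsupported data-type, compute-type, or layout combination.

```cpp
// Hedged sketch of the FP8 descriptor configuration, adapted from the
// cuSPARSELt/matmul sample. Handle, dimensions, leading dimensions,
// alignment, and the CHECK_CUSPARSE macro are as in that sample.
#include <cusparseLt.h>
#include <cuda_fp8.h>

// A (structured/sparse) and B (dense) as FP8 E4M3 inputs
CHECK_CUSPARSE( cusparseLtStructuredDescriptorInit(
    &handle, &matA, m, k, lda, alignment,
    CUDA_R_8F_E4M3, CUSPARSE_ORDER_ROW,
    CUSPARSELT_SPARSITY_50_PERCENT) )
CHECK_CUSPARSE( cusparseLtDenseDescriptorInit(
    &handle, &matB, k, n, ldb, alignment,
    CUDA_R_8F_E4M3, CUSPARSE_ORDER_ROW) )

// C/D as FP16 output; my assumption is that FP8 output is more
// restricted than FP8 input, so FP16 is the safer first test
CHECK_CUSPARSE( cusparseLtDenseDescriptorInit(
    &handle, &matC, m, n, ldc, alignment,
    CUDA_R_16F, CUSPARSE_ORDER_ROW) )

// FP8 inputs accumulate in FP32; the docs also suggest the FP8 path
// may require a specific opA/opB pair (assumed here to be N and T)
CHECK_CUSPARSE( cusparseLtMatmulDescriptorInit(
    &handle, &matmul,
    CUSPARSE_OPERATION_NON_TRANSPOSE,
    CUSPARSE_OPERATION_TRANSPOSE,
    &matA, &matB, &matC, &matC,
    CUSPARSE_COMPUTE_32F) )
```

If someone can confirm which of these constraints the 0.6-series library actually enforces for FP8, that would help narrow down whether the failure is a library-side support check or a misconfiguration on my end.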
