
No speedup using semi-structured sparsity #111339

@BDHU

Description


When I convert linear layer weights with `to_sparse_semi_structured(module.weight)`, I see no speedup for a 4096x4096 linear layer with an input of size 1x4096. PyTorch is compiled with cuSPARSELt. Is there a reason for the lack of speedup? In addition, the sparse weight consumes more memory than the dense one. Any hint would be helpful!
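For context on what `to_sparse_semi_structured` prunes: NVIDIA's semi-structured (2:4) sparsity requires that in every contiguous group of 4 weights, at most 2 are nonzero. The sketch below illustrates that constraint in plain Python by keeping the 2 largest-magnitude values per group of 4; `prune_2_4` is a hypothetical helper name for illustration, not a PyTorch API.

```python
def prune_2_4(row):
    """Illustrative 2:4 pruning: in each group of 4 values, keep the
    2 with the largest magnitude and zero out the other 2."""
    out = []
    for i in range(0, len(row), 4):
        group = row[i:i + 4]
        # Indices of the 2 largest-magnitude entries in this group.
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]),
                      reverse=True)[:2]
        out.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return out
```

For example, `prune_2_4([3.0, -1.0, 0.5, 2.0])` returns `[3.0, 0.0, 0.0, 2.0]`. A weight that does not already satisfy this pattern must be pruned like this before conversion; the converted tensor stores the compressed nonzeros plus index metadata, which is one place extra memory can come from.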

cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer

Metadata



Labels

module: sparse — Related to torch.sparse
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
