Closed
Labels
module: sparse — Related to torch.sparse
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description
When I convert linear layer weights with to_sparse_semi_structured(module.weight), there doesn't appear to be any speedup when the linear layer is 4096x4096 and the input is 1x4096. PyTorch is compiled with cuSPARSELt. Is there any reason why there is no speedup? In addition, the sparse alternative consumes more memory than the dense one. Any hint would be helpful!
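For context on the setup being described: to_sparse_semi_structured requires the weight to already follow the 2:4 sparsity pattern (two zeros in every contiguous group of four values) before conversion, and the accelerated kernels only pay off when the dense operand is large enough — with a 1x4096 input the matmul is effectively a memory-bound GEMV, so little speedup is expected. Below is a minimal, hedged sketch of preparing a weight in the 2:4 pattern; the prune_2_4 helper is my own illustrative function, not a PyTorch API, and the actual conversion call (commented out) assumes a CUDA device with fp16 weights and cuSPARSELt support.

```python
import torch

def prune_2_4(weight: torch.Tensor) -> torch.Tensor:
    """Zero the 2 smallest-magnitude values in each group of 4,
    producing the 2:4 pattern that to_sparse_semi_structured expects.
    (Illustrative helper, not part of the PyTorch API.)"""
    out = weight.clone()
    groups = out.view(-1, 4)                     # view shares storage with out
    idx = groups.abs().argsort(dim=1)[:, :2]     # 2 smallest |values| per group
    groups.scatter_(1, idx, 0.0)                 # zero them in place
    return out

w = torch.randn(4096, 4096)
pruned = prune_2_4(w)

# Exactly 2 of every 4 consecutive entries are now zero.
print((pruned.view(-1, 4) == 0).sum(dim=1).eq(2).all().item())

# On a CUDA build with cuSPARSELt, the pruned fp16 weight could then be
# converted (sketch only, requires a supported GPU):
# from torch.sparse import to_sparse_semi_structured
# w_sparse = to_sparse_semi_structured(pruned.half().cuda())
```

Even with a correctly pruned weight, the sparse kernel's advantage shows up when multiplying against a large activation matrix (many rows), not a single 1x4096 vector.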
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer