Standalone sparse-dense matrix multiplication benchmark #390
Hi, I am wondering if FBGEMM supports standalone sparse matrix–dense matrix multiplication using the unrolling approach to get register blocking, as mentioned in the new release notes. The test seems to involve the operation fused with another matrix multiplication. Does an API similar to, say, MKL's SpMM exist in FBGEMM? Thank you!

@marsupialtail Such an API doesn't exist in FBGEMM at the moment.

So is fusing the SpMM with a quantized matmul the easiest way to test it at the moment?

@marsupialtail: Please take a look at https://github.com/pytorch/FBGEMM/blob/master/bench/I8SpmdmBenchmark.cc

So FBGEMM currently only supports int8 SpMM? Does it support fp32?

Currently it's int8 only.

fp32 and int8 sparse-dense benchmarks exist at https://github.com/pytorch/FBGEMM/blob/master/bench/SparseDenseMMFP32Benchmark.cc and https://github.com/pytorch/FBGEMM/blob/master/bench/SparseDenseMMInt8Benchmark.cc

Regarding the SparseDenseMMInt8Benchmark.cc example, it seems that both the input matrix and the output matrix must be transposed in order to use the API. Namely, if I have an input matrix A and am interested in the output matrix C, I must first transpose A to call the API and then transpose the output C^T to get C. Since these transposes require memory copies that can be quite expensive, I've looked at using the CSC matrix format; for example, the function
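For readers unfamiliar with the operation the fp32 benchmark measures, here is a minimal reference sketch of sparse × dense multiplication with the sparse operand in CSR format. This is not FBGEMM's API (FBGEMM uses its own packed/block-sparse representations internally); the `CsrMatrix` struct and `spmmFp32` function are hypothetical names for illustration only. The innermost loop over dense columns is the one that unrolling/register-blocking schemes target.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical CSR representation for illustration; FBGEMM's internal
// sparse format differs.
struct CsrMatrix {
  int rows, cols;
  std::vector<int> rowPtr;   // size rows + 1; nonzero range per row
  std::vector<int> colIdx;   // column index of each nonzero
  std::vector<float> values; // nonzero values
};

// C (rows x n) = A_sparse (rows x cols) * B_dense (cols x n), all row-major.
void spmmFp32(const CsrMatrix& a, const std::vector<float>& b,
              std::vector<float>& c, int n) {
  std::fill(c.begin(), c.end(), 0.0f);
  for (int i = 0; i < a.rows; ++i) {
    for (int p = a.rowPtr[i]; p < a.rowPtr[i + 1]; ++p) {
      const float v = a.values[p];
      const float* bRow = &b[static_cast<size_t>(a.colIdx[p]) * n];
      float* cRow = &c[static_cast<size_t>(i) * n];
      for (int j = 0; j < n; ++j)  // candidate loop for unrolling /
        cRow[j] += v * bRow[j];    // register blocking
    }
  }
}
```

Note that each output row stays hot across the nonzeros of the corresponding sparse row, which is what makes keeping a strip of `cRow` in registers profitable when `n` is blocked.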