
performance of hipblasHgemm #534

Closed · mathbird opened this issue Oct 6, 2022 · 9 comments


mathbird commented Oct 6, 2022

I measured the performance of hipblasHgemm using _Float16 on an MI250. Here is what I got:

N=K=M=8192: MI250: ~38 TFLOPs. Is that right, or did I miss something?

https://www.amd.com/en/products/server-accelerators/instinct-mi250x says that "Peak Half Precision (FP16) Performance is 383 TFLOPs".


daineAMD commented Oct 7, 2022

Hi @mathbird,

You're correct in reading that the theoretical peak FP16 performance for the MI250X is ~383 TFLOPs. This comes from the following calculation:

theoretical perf = (frequency) * (# CUs) * (flops/cycle/CU)

For the MI250X, the peak frequency is 1700 MHz, and each compute die (GCD) has 110 compute units. For FP16, each CU delivers 1024 flops/cycle. This gives:

theoretical perf = 1700 MHz * 110 CUs * 1024 flops/cycle/CU ≈ 191.5 TFLOPs per GCD.

Each MI250X has 2 GCDs, giving ~383 TFLOPs theoretical peak for the card. Note that a hipBLAS call will only use a single GCD, so the relevant theoretical peak here is more accurately ~191 TFLOPs. This number is theoretical and can be limited by other factors such as clock throttling.
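
As a quick sanity check of that arithmetic, here is a small standalone program using only the figures quoted above:

#include <cstdio>

int main()
{
    // MI250X figures from the calculation above (per GCD).
    const double freq_hz     = 1.7e9;   // 1700 MHz peak engine clock
    const double cus_per_gcd = 110.0;   // compute units per GCD
    const double fp16_flops  = 1024.0;  // FP16 flops per cycle per CU

    const double per_gcd  = freq_hz * cus_per_gcd * fp16_flops; // ~191.5e12
    const double per_card = 2.0 * per_gcd;                      // 2 GCDs -> ~383e12

    printf("per GCD : %.1f TFLOPs\n", per_gcd / 1e12);   // 191.5
    printf("per card: %.1f TFLOPs\n", per_card / 1e12);  // 383.0
    return 0;
}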

hipblasHgemm() will not be able to reach this performance, but you should be able to get substantially better performance if you use the mixed-precision hipblasGemmEx() function. With FP16 input/output and FP32 compute, you should see performance more in line with what is expected. An example call would be as follows:

hipblasDatatype_t fp16_type = HIPBLAS_R_16F;
hipblasDatatype_t fp32_type = HIPBLAS_R_32F;
status = hipblasGemmEx(handle, transA, transB, m, n, k, alpha,
                       dA, fp16_type, lda,
                       dB, fp16_type, ldb, beta,
                       dC, fp16_type, ldc,
                       fp32_type, HIPBLAS_GEMM_DEFAULT);
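
One detail worth stating explicitly, since it is easy to trip over: the alpha and beta scalars are read according to the compute type, so with fp32_type compute they should point to float values. A sketch of the same call with the scalars spelled out (reusing the names from the snippet above):

float h_alpha = 1.0f, h_beta = 0.0f;  // scalar types follow the compute type (FP32 here)
status = hipblasGemmEx(handle, transA, transB, m, n, k, &h_alpha,
                       dA, fp16_type, lda,
                       dB, fp16_type, ldb, &h_beta,
                       dC, fp16_type, ldc,
                       fp32_type, HIPBLAS_GEMM_DEFAULT);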

You can take a look at the gemmEx documentation in hipblas.h, or feel free to ask any questions you have and I'll be happy to help.

Thanks,
Daine


mathbird commented Oct 7, 2022

Thanks, Daine. I did observe a big performance improvement with GemmEx! But why is the computeType "fp32_type" used for hgemm? If I change it back to fp16_type, the performance drops to 38 TFLOPs again.

Can SGEMM and CGEMM do mixed-precision calculation, too? Do you have a complete and detailed table explaining which type combinations can be used together for A/B/C, like cublasGemmEx?

babakpst commented

Thanks for the question.

But why is the computeType "fp32_type" used for hgemm? If I change it back to fp16_type, the performance drops to 38 TFLOPs again.

Following Daine's comments, there are two categories of gemm functions: high-precision accumulate (HPA) functions, where the compute type is different from (and more precise than) the data type, and non-HPA functions, where all the types are the same. For the fp16 data type, i.e. hgemm, where the input and output data types are fp16, you have two options (a sketch contrasting the two calls follows this list):
1- Non-HPA functions, where the input/output/compute types are all fp16. This path is slower and less accurate, due to the underlying instructions used for the operation. This is what you are using now, and we generally don't recommend it.
2- HPA functions (gemm_ex), where the input/output data types are fp16 and the compute type is fp32. This path is fast and more accurate. From the user's perspective, the two options look the same, since the input/output data types are fp16 in both.
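
To make the two options concrete, here is a sketch of both calls (names like d_A, alpha_h are placeholders; leading dimensions assume column-major, non-transposed matrices):

// Option 1 (non-HPA): everything fp16 -- slower and less accurate.
hipblasHalf alpha_h, beta_h;  // fp16 scalars set to 1 and 0 (bit-pattern conversion omitted)
hipblasHgemm(handle, HIPBLAS_OP_N, HIPBLAS_OP_N, M, N, K,
             &alpha_h, d_A, M, d_B, K, &beta_h, d_C, M);

// Option 2 (HPA, via gemm_ex): fp16 in/out, fp32 accumulate -- fast and more accurate.
float alpha = 1.0f, beta = 0.0f;  // scalars follow the fp32 compute type
hipblasGemmEx(handle, HIPBLAS_OP_N, HIPBLAS_OP_N, M, N, K, &alpha,
              d_A, HIPBLAS_R_16F, M,
              d_B, HIPBLAS_R_16F, K, &beta,
              d_C, HIPBLAS_R_16F, M,
              HIPBLAS_R_32F, HIPBLAS_GEMM_DEFAULT);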

Can SGEMM and CGEMM do mixed-precision calculation, too?

No. The HPA functions are available only when the input data type is fp16/bf16/int8; the sketch below shows an int8 example.
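
An int8 HPA call would look like this (an illustrative sketch; note that int8 gemm can carry extra packing/size restrictions depending on the rocBLAS version):

int32_t alpha_i = 1, beta_i = 0;  // scalars follow the int32 compute type
hipblasGemmEx(handle, HIPBLAS_OP_N, HIPBLAS_OP_N, M, N, K, &alpha_i,
              d_A, HIPBLAS_R_8I, M,   // int8 inputs
              d_B, HIPBLAS_R_8I, K, &beta_i,
              d_C, HIPBLAS_R_32I, M,  // int32 output
              HIPBLAS_R_32I, HIPBLAS_GEMM_DEFAULT);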

Do you have a complete and detailed table explaining which type combinations can be used together for A/B/C, like cublasGemmEx?

Yes, please refer to the "3.7.1 rocblas-bench" section of the updated rocBLAS user guide; I recently added a table with this information.


mathbird commented Oct 11, 2022

Thanks for the useful info. If d_A, d_B, and d_C are all allocated as float (32-bit) using hipMalloc, will the following GemmEx call convert d_A and d_B correctly to bf16 numbers and produce the right d_C?

      CHECK_HIPBLAS_ERROR(hipblasGemmEx(handle, transa, transb, M, N, K, (const hipblasHalf*)&alpha,
                                        d_A, HIPBLAS_R_16B, M,
                                        d_B, HIPBLAS_R_16B, K, (const hipblasHalf*)&beta,
                                        d_C, HIPBLAS_R_32F, N,
                                        HIPBLAS_R_32F, HIPBLAS_GEMM_DEFAULT));

daineAMD commented

No; you should ensure that each datatype provided matches the data actually stored in the corresponding matrix. The pointers are simply cast to the specified datatype (see rocblas_gemm_ex_template() and gemm_ex_typecasting() if you're interested in the code for the rocBLAS backend).
You can also take a look at the cuBLAS documentation and the rocBLAS documentation for the supported datatypes and other info regarding this function.
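
For illustration, a corrected sketch of the call above: the buffers themselves hold bf16 values, and the output/compute types are set to a supported combination (bf16 in/out with fp32 accumulate); leading dimensions assume no transpose:

hipblasBfloat16 *d_A, *d_B, *d_C;
// ... hipMalloc each buffer and fill it with bf16 data ...
float alpha = 1.0f, beta = 0.0f;  // scalars follow the fp32 compute type
CHECK_HIPBLAS_ERROR(hipblasGemmEx(handle, transa, transb, M, N, K, &alpha,
                                  d_A, HIPBLAS_R_16B, M,
                                  d_B, HIPBLAS_R_16B, K, &beta,
                                  d_C, HIPBLAS_R_16B, M,  // ldc = M, and C really holds bf16
                                  HIPBLAS_R_32F, HIPBLAS_GEMM_DEFAULT));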

daineAMD commented

@mathbird Do you have any further questions or should we go ahead and close this issue?


mathbird commented Oct 14, 2022

I tried the following type combination. It compiles, but at runtime the test fails with the error "rocBLAS error: HIPBLAS_STATUS_INVALID_ENUM". Did I miss anything, or can complex single-precision gemm only use HIPBLAS_C_32F input on the MI250?

      CHECK_HIPBLAS_ERROR(hipblasGemmEx(handle, transa, transb, M, N, K, (const hipblasComplex*)&alpha,
                                        d_A, HIPBLAS_C_16B, M,
                                        d_B, HIPBLAS_C_16B, K, (const hipblasComplex*)&beta,
                                        d_C, HIPBLAS_C_32F, N,
                                        HIPBLAS_C_32F, HIPBLAS_GEMM_DEFAULT));

daineAMD commented

That error code is incorrect; it looks like we are missing some type conversions in hipBLAS. I'll make a PR for that today.

However, that type combination isn't supported in rocBLAS or cuBLAS, so once the above is fixed, it will just return HIPBLAS_STATUS_NOT_SUPPORTED. What you have is "bfloat16 complex" for A/B and "float complex" for C/compute. For the rocBLAS backend, our only support for HIPBLAS_C_32F in GemmEx is essentially the same as hipblasCgemm(), i.e. a_type = b_type = c_type = compute_type = HIPBLAS_C_32F. A supported call would look like the sketch below.
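
For reference, the supported complex single-precision path through GemmEx (a sketch; hipblasComplex construction may vary slightly by hipBLAS version, and leading dimensions assume no transpose):

hipblasComplex alpha(1.0f, 0.0f), beta(0.0f, 0.0f);  // scalars follow the complex fp32 compute type
CHECK_HIPBLAS_ERROR(hipblasGemmEx(handle, transa, transb, M, N, K, &alpha,
                                  d_A, HIPBLAS_C_32F, M,
                                  d_B, HIPBLAS_C_32F, K, &beta,
                                  d_C, HIPBLAS_C_32F, M,
                                  HIPBLAS_C_32F, HIPBLAS_GEMM_DEFAULT));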

mathbird commented

Thanks. No other questions.
