[FLINK-14153][ml] Add to BLAS a method that performs DenseMatrix and SparseVector multiplication. #9732
Conversation
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review your pull request.
Automated Checks
Last check on commit c247440 (Wed Dec 04 14:48:48 UTC 2019)
Warnings:
Mention the bot in a comment to re-run the automated checks.
Review Progress
Please see the Pull Request Review Guide for a full explanation of the review process. The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
Bot commands
The @flinkbot bot supports the following commands:
Thanks for the contribution @xuyang1706. Looks good to me overall. I left a few comments; please kindly take a look and see if they make sense.
     */
    public static void gemv(double alpha, DenseMatrix matA, boolean transA,
                            SparseVector x, double beta, DenseVector y) {
        if (transA) {
Can we create a transposePreconditionChecker? I feel like this would be used in multiple places; the same checking code is otherwise duplicated in methods such as gemv, gemm, etc.
Yes, you are right. We have created a method "gemvDimensionCheck". However, the dimension checks in "gemm" and "gemv" are totally different, so "gemm" is left untouched.
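For illustration, such a dimension check might look roughly like the following sketch (the exact signature and message are assumptions, not necessarily the PR's actual code):

```java
// Illustrative sketch: verify that the shapes of op(A), x and y are
// compatible for y = alpha * op(A) * x + beta * y.
private static void gemvDimensionCheck(DenseMatrix matA, boolean transA, int xSize, int ySize) {
    // op(A) is A^T when transA is true, otherwise A itself.
    int rows = transA ? matA.numCols() : matA.numRows();
    int cols = transA ? matA.numRows() : matA.numCols();
    if (cols != xSize || rows != ySize) {
        throw new IllegalArgumentException("Matrix and vector sizes mismatched.");
    }
}
```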
    private SparseVector spv2 = new SparseVector(3, new int[]{0, 2}, new double[]{1, 3});

    @Test
    public void testGemvDense() throws Exception {
why is the dense version not tested until now?
We just added more test cases.
    private SparseVector spv2 = new SparseVector(3, new int[]{0, 2}, new double[]{1, 3});

    @Test
    public void testGemvDense() throws Exception {
missing validator exception test cases:
- invalid dimension
- invalid dimension after transpose
Thanks, we just added the test cases.
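For illustration, such exception tests might look roughly like this (constructor shapes, test names and the expected exception type are assumptions, not the PR's actual code):

```java
@Test(expected = IllegalArgumentException.class)
public void testGemvInvalidDimension() throws Exception {
    // A is 2x3, so A * x has size 2; a y of size 3 should be rejected.
    DenseMatrix matA = new DenseMatrix(2, 3);
    SparseVector x = new SparseVector(3, new int[]{0, 2}, new double[]{1, 3});
    DenseVector y = new DenseVector(3);
    BLAS.gemv(1.0, matA, false, x, 0.0, y);
}

@Test(expected = IllegalArgumentException.class)
public void testGemvInvalidDimensionAfterTranspose() throws Exception {
    // With transA = true, A^T is 3x2, so x must have size 2 and y size 3;
    // the shapes below no longer match and should be rejected.
    DenseMatrix matA = new DenseMatrix(2, 3);
    SparseVector x = new SparseVector(3, new int[]{0, 2}, new double[]{1, 3});
    DenseVector y = new DenseVector(2);
    BLAS.gemv(1.0, matA, true, x, 0.0, y);
}
```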
        for (int i = 0; i < x.indices.length; i++) {
            int index = x.indices[i];
            double value = alpha * x.values[i];
            F2J_BLAS.daxpy(m, value, matA.data, index * m, 1, y.data, 0, 1);
I am not familiar with BLAS internals performance-wise. Is this faster than coding it up directly? The reason I am asking is that there are two steps involved (scal and daxpy), which may cause duplicate memory accesses.
There are two reasons I use BLAS here.
- BLAS is a mature linear algebra library which pays a lot of attention to data locality, so it is more cache friendly than a naive implementation. We usually gain a lot of performance on level-2/level-3 BLAS routines by calling native (JNI) BLAS, while F2J BLAS is better for level-1 routines.
- In this case, y is first scaled by beta, then the columns of A, scaled by the corresponding non-zero entries of x, are added to y one by one, so it is inevitable that y is visited more than once. A sketch of this decomposition is shown below.
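For illustration, the decomposition described above corresponds roughly to the following sketch (assuming column-major storage of matA.data; not a verbatim copy of the PR's code):

```java
// y = beta * y + alpha * A * x, with x sparse and A stored column-major.
// Step 1: scale y once (level-1 scal).
F2J_BLAS.dscal(m, beta, y.data, 0, 1);
// Step 2: for every non-zero x[j], add alpha * x[j] times column j of A to y (level-1 axpy).
for (int i = 0; i < x.indices.length; i++) {
    int index = x.indices[i];            // column index j of the non-zero entry
    double value = alpha * x.values[i];  // alpha * x[j]
    F2J_BLAS.daxpy(m, value, matA.data, index * m, 1, y.data, 0, 1);
}
```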
That makes sense. I am thinking the assumption is, for example, that BLAS can do some sort of SIMD/MIMD optimization based on data locality, saving register/cache loads and invalidations.
It would be good to call out the rationale with an inline comment:
// relying on the native implementation of BLAS for performance.
If there is any performance issue later, we can always avoid the duplicate register loading by doing the multiplication, the addition, and the variable assignment in the same statement, similar to
y.data[i] = beta * y.data[i] + alpha * s;
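For reference, the hand-coded alternative alluded to would look roughly like this for the dense, non-transposed case (m rows, n columns, column-major matA.data; the field names here are assumptions):

```java
// Fused alternative: accumulate the dot product of row i of A with x, then
// combine scaling, multiplication and assignment in one statement, so that
// y.data[i] is read and written only once.
for (int i = 0; i < m; i++) {
    double s = 0.0;
    for (int j = 0; j < n; j++) {
        s += matA.data[j * m + i] * x.data[j]; // column-major access to A(i, j)
    }
    y.data[i] = beta * y.data[i] + alpha * s;
}
```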
Thanks @walterddr for your comments and advice. I have added more test cases and precondition checkers.
Hi @xuyang1706, thanks for the prompt update. I think overall the patch looks good.
I only have two higher-level questions regarding the usage of BLAS (which, on some level, relate not only to this PR but also to previously committed code). Please kindly take a look. Thanks -Rong
     * y[yOffset:yOffset+n] += a * x[xOffset:xOffset+n].
     */
    public static void axpy(int n, double a, double[] x, int xOffset, double[] y, int yOffset) {
        F2J_BLAS.daxpy(n, a, x, xOffset, 1, y, yOffset, 1);
A higher-level question: maybe it is good to clarify when to use F2J_BLAS vs NATIVE_BLAS.
I found a very interesting read on Spark's mailing list, and it seems there are some considerations regarding this.
Thanks, I have clarified in the inline doc that we should use F2J_BLAS for level-1 routines and NATIVE_BLAS for level-2 and level-3 routines. This is also the practice adopted by Spark ML.
The thread from Spark's mailing list you linked indeed shows the pitfalls of using NATIVE_BLAS. It makes clear that the underlying native BLAS library should not use multithreading. Fortunately, the default library uses the BLAS version provided by http://www.netlib.org, which is single-threaded.
        final int n = matA.numCols();
        final int lda = matA.numRows();
        final String ta = transA ? "T" : "N";
        NATIVE_BLAS.dgemv(ta, m, n, alpha, matA.getData(), lda, x.getData(), 1, beta, y.getData(), 1);
Any reason why the DenseVector path uses NATIVE_BLAS while the SparseVector path uses F2J_BLAS? I think, at least within a specific level (0, 1, 2, 3 or up), we should probably use only one specific BLAS implementation unless a specific reason comes up (IMO it should require very strong justification).
FYI: I am not sure whether this is related, but some suggestions on Stack Overflow indicate there are performance considerations coming from recent developments in the JIT compiler.
We consistently use F2J_BLAS for level-1 routines such as scal/axpy/asum, and NATIVE_BLAS for level-2/level-3 routines such as gemv and gemm. For the gemv case here, we use NATIVE_BLAS in the dense case. In the sparse case, however, the BLAS library is not directly applicable, because it is a library for dense linear algebra. So we implement gemv for SparseVector by hand, using F2J_BLAS for the axpy (level-1 routine) along the way.
Thanks for your questions and discussion, @walterddr. It is better to declare our rules for using BLAS explicitly.
Thanks @xuyang1706 for the explanation. Overall it looks good to me. I think we just need to fix the comments and make sure that the explanation also goes into the code.
    // For level-1 routines, we use Java implementation.
    private static final com.github.fommil.netlib.BLAS NATIVE_BLAS = com.github.fommil.netlib.BLAS.getInstance();

    // For level-2 and level-3 routines, we use the native BLAS.
    private static final com.github.fommil.netlib.BLAS F2J_BLAS = com.github.fommil.netlib.F2jBLAS.getInstance();
Comment and code don't align:
- the field named NATIVE_BLAS carries the level-1 comment,
- the field named F2J_BLAS carries the level-2/3 comment.
Any naming convention problem?
Thanks, we have corrected the java doc.
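After that correction, the fields and their comments would presumably align along these lines (a sketch, not the exact committed code):

```java
// For level-1 routines (e.g. scal, axpy, asum), we use the pure-Java F2J implementation.
private static final com.github.fommil.netlib.BLAS F2J_BLAS =
    com.github.fommil.netlib.F2jBLAS.getInstance();

// For level-2 and level-3 routines (e.g. gemv, gemm), we use the native (JNI) BLAS.
private static final com.github.fommil.netlib.BLAS NATIVE_BLAS =
    com.github.fommil.netlib.BLAS.getInstance();
```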
            double value = alpha * x.values[i];
            F2J_BLAS.daxpy(m, value, matA.data, index * m, 1, y.data, 0, 1);
Could you add the explanation you gave in this PR comment into the actual code comments? I think it will help others understand this code in the future.
Thanks, we did it.
Thanks for your careful review, @walterddr. We have refined the JavaDoc.
@flinkbot run travis
Thanks for the quick reply/update @xuyang1706. I think the patch looks good overall.
I will make some minor refinements and merge it soon.
    // For level-1 routines, we use Java implementation.
    private static final com.github.fommil.netlib.BLAS NATIVE_BLAS = com.github.fommil.netlib.BLAS.getInstance();

    // For level-2 and level-3 routines, we use the native BLAS.
nit: use /* */
Yes, please refine it when merging. Thanks.
Thanks for your kind help @walterddr.
@flinkbot run travis
What is the purpose of the change
Previously there was a "gemv" method in BLAS that performs multiplication between a DenseMatrix and a DenseVector. Here we add another one that performs multiplication between a DenseMatrix and a SparseVector.
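A hypothetical usage of the new method (the DenseMatrix/DenseVector constructors and the values shown are illustrative assumptions):

```java
// Compute y = 1.0 * A * x + 0.0 * y, where A is a dense 2x3 matrix
// (column-major data) and x is sparse with non-zeros at positions 0 and 2.
DenseMatrix matA = new DenseMatrix(2, 3, new double[]{1, 2, 3, 4, 5, 6});
SparseVector x = new SparseVector(3, new int[]{0, 2}, new double[]{1, 3});
DenseVector y = new DenseVector(2);
BLAS.gemv(1.0, matA, false, x, 0.0, y);
// y now holds A * x.
```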
Brief change log
Verifying this change
This change added tests and can be verified as follows:
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (no)
Documentation