
Conversation

xuyang1706
Contributor

What is the purpose of the change

Previously, BLAS provided a "gemv" method that performs multiplication between a DenseMatrix and a DenseVector. Here we add another overload that performs multiplication between a DenseMatrix and a SparseVector.

Brief change log

  • Add gemv method to BLAS.
  • Add BLASTest that provides test cases for BLAS.

Verifying this change

This change added tests and can be verified as follows:

  • Run the added test cases in BLASTest and verify that they pass.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (yes)
  • If yes, how is the feature documented? (JavaDocs)

@flinkbot
Collaborator

flinkbot commented Sep 20, 2019

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit c247440 (Wed Dec 04 14:48:48 UTC 2019)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!
  • This pull request references an unassigned Jira ticket. According to the code contribution guide, tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Sep 20, 2019

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build

Contributor

@walterddr walterddr left a comment


Thanks for the contribution @xuyang1706. Looks good to me overall. I left a few comments; please take a look and see if they make sense.

*/
public static void gemv(double alpha, DenseMatrix matA, boolean transA,
SparseVector x, double beta, DenseVector y) {
if (transA) {
Contributor


Can we create a transposePreconditionChecker? It feels like this would be used in multiple places, with duplicated code in methods such as gemv, gemm, etc.

Contributor Author


Yes, you are right. We have created a method "gemvDimensionCheck". However, the dimension checks in "gemm" and "gemv" are quite different, so "gemm" is left untouched.
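Such a dimension check could look roughly like the following plain-Java sketch (the method name matches the one mentioned above, but the signature and message are illustrative assumptions; the real method works on Flink's DenseMatrix and vector types):

```java
// Hypothetical sketch of a gemv dimension check; the signature and error
// message are illustrative, not the actual Flink code.
public class GemvChecks {
    static void gemvDimensionCheck(int numRows, int numCols, boolean transA,
                                   int xSize, int ySize) {
        // After an optional transpose, A is (m x n); gemv needs x of size n
        // and y of size m.
        int m = transA ? numCols : numRows;
        int n = transA ? numRows : numCols;
        if (xSize != n || ySize != m) {
            throw new IllegalArgumentException(
                "Matrix and vector sizes mismatched: A is " + m + " x " + n
                    + ", x has size " + xSize + ", y has size " + ySize);
        }
    }
}
```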

private SparseVector spv2 = new SparseVector(3, new int[]{0, 2}, new double[]{1, 3});

@Test
public void testGemvDense() throws Exception {
Contributor


Why was the dense version not tested until now?

Contributor Author


We just added more test cases.

private SparseVector spv2 = new SparseVector(3, new int[]{0, 2}, new double[]{1, 3});

@Test
public void testGemvDense() throws Exception {
Contributor


Missing validator exception test cases:

  1. invalid dimension
  2. invalid dimension after transpose

Contributor Author


Thanks, we just added the test cases.

for (int i = 0; i < x.indices.length; i++) {
int index = x.indices[i];
double value = alpha * x.values[i];
F2J_BLAS.daxpy(m, value, matA.data, index * m, 1, y.data, 0, 1);
Contributor


I am not familiar with BLAS internals' performance. Is this faster than coding it up directly?
The reason I am asking: there are two steps involved (scal and daxpy), which may cause duplicate memory accesses.

Contributor Author


There are two reasons I use BLAS here.

  1. BLAS is a mature linear algebra library which pays a lot of attention to data locality, so it is more cache friendly than a naive implementation. Usually we gain a lot of performance improvement on level-2/level-3 BLAS routines by calling native (JNI) BLAS, while F2J BLAS is better for level-1 BLAS routines.
  2. In the case here, y is first scaled by beta, and then a scaled column of A is added to y for each nonzero of x. It is inevitable that y is visited more than once.
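The two-step approach described above can be sketched with plain arrays (an illustrative sketch only; the actual patch stores A column-major in a DenseMatrix and delegates the inner loop to F2J_BLAS.daxpy):

```java
// Plain-array sketch of y = alpha * A * x + beta * y for a sparse x.
// A is m x n, stored column-major in matA (as in DenseMatrix); x is given
// by its nonzero indices and values (as in SparseVector).
public class SparseGemvSketch {
    static void sparseGemv(double alpha, double[] matA, int m,
                           int[] xIndices, double[] xValues,
                           double beta, double[] y) {
        // Step 1 (the "scal" step): scale y by beta.
        for (int i = 0; i < y.length; i++) {
            y[i] *= beta;
        }
        // Step 2 (the "axpy" step): for each nonzero x[j], add
        // alpha * x[j] * (column j of A) to y.
        for (int k = 0; k < xIndices.length; k++) {
            int j = xIndices[k];
            double value = alpha * xValues[k];
            for (int i = 0; i < m; i++) {
                y[i] += value * matA[j * m + i];
            }
        }
    }
}
```

In the patch, the daxpy call replaces the innermost loop, with `index * m` as the offset of column `index` in the column-major data array.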

Contributor

@walterddr walterddr Oct 18, 2019


That makes sense. I am thinking the assumption is that, for example, BLAS can do some sort of SIMD/MIMD optimization based on data locality, saving register/cache loads and invalidations.
It would be good to call out the rationale here with an inline comment:

// relying on the native implementation of BLAS for performance.

If there's any performance issue later, we can always avoid duplicate register loading by doing the multiplication, the addition, and the variable assignment on the same line, similar to

y.data[i] = beta * y.data[i] + alpha * s;
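The fused alternative suggested here could look like the following sketch (illustrative only, assuming the same column-major m x n matrix and sparse x as above; not the committed code):

```java
// Sketch of the fused alternative: fold the beta-scaling, the alpha-scaled
// product, and the assignment into a single line per element, so each y[i]
// is read and written exactly once.
public class FusedGemvSketch {
    static void fusedSparseGemv(double alpha, double[] matA, int m,
                                int[] xIndices, double[] xValues,
                                double beta, double[] y) {
        for (int i = 0; i < m; i++) {
            // s = (A * x)[i], accumulated over the nonzeros of x
            double s = 0.0;
            for (int k = 0; k < xIndices.length; k++) {
                s += matA[xIndices[k] * m + i] * xValues[k];
            }
            y[i] = beta * y[i] + alpha * s;
        }
    }
}
```

The trade-off: this row-by-row traversal strides across columns of a column-major array, which is less cache-friendly than the column-wise axpy formulation.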

@xuyang1706
Contributor Author

Thanks for the contribution @xuyang1706. Looks good to me overall. I left a few comments; please take a look and see if they make sense.

Thanks @walterddr for your comments and advice. I have added more test cases and precondition checkers.

Contributor

@walterddr walterddr left a comment


Hi @xuyang1706 thanks for the prompt update. I think overall the patch looks good.

I only have 2 higher-level questions regarding the usage of BLAS (which to some extent relates not only to this PR but also to previously committed code). Please kindly take a look. Thanks -Rong

* y[yOffset:yOffset+n] += a * x[xOffset:xOffset+n] .
*/
public static void axpy(int n, double a, double[] x, int xOffset, double[] y, int yOffset) {
F2J_BLAS.daxpy(n, a, x, xOffset, 1, y, yOffset, 1);
Contributor


A higher-level question:
Maybe it would be good to clarify when to use F2J_BLAS vs. NATIVE_BLAS.

I found a very interesting read from Spark's mailing list, and it seems there are some considerations regarding this.

Contributor Author


Thanks, I have clarified in the inline doc that we should use F2J_BLAS for level-1 routines and NATIVE_BLAS for level-2 and level-3 routines. This is also the practice adopted by SparkML.

The Spark mailing-list thread you linked indeed shows the pitfalls of using NATIVE_BLAS. It makes clear that the underlying native BLAS library should not use multithreading. Fortunately, the default library uses the BLAS reference implementation provided by http://www.netlib.org, which is single-threaded.

final int n = matA.numCols();
final int lda = matA.numRows();
final String ta = transA ? "T" : "N";
NATIVE_BLAS.dgemv(ta, m, n, alpha, matA.getData(), lda, x.getData(), 1, beta, y.getData(), 1);
Contributor


Any reason why DenseVector uses NATIVE_BLAS while SparseVector uses F2J_BLAS?

I think at a given level (0, 1, 2, 3 or up) we should probably use only one specific BLAS implementation unless a specific reason comes up (IMO it should be a very strong justification).

FYI: I am not sure whether this is related, but some suggestions on Stack Overflow show that there are some performance considerations coming from the latest developments in the JIT compiler.

Contributor Author


We consistently use F2J_BLAS for level-1 routines such as scal/axpy/asum, and NATIVE_BLAS for level-2/level-3 routines such as gemv and gemm. For the gemv case here, we use NATIVE_BLAS for the dense case. But for the sparse case, the BLAS library is not directly applicable, because it is a library for dense linear algebra. So we implement gemv for SparseVector by hand, using F2J_BLAS for the axpy (level-1 routine) steps along the way.

@xuyang1706
Contributor Author

Hi @xuyang1706 thanks for the prompt update. I think overall the patch looks good.

I only have 2 higher-level questions regarding the usage of BLAS (which to some extent relates not only to this PR but also to previously committed code). Please kindly take a look. Thanks -Rong

Thanks for your questions and discussion, @walterddr. It is better to declare our rules for using F2J_BLAS and NATIVE_BLAS, so we have stated them clearly in the JavaDoc: use F2J_BLAS for level-1 routines and NATIVE_BLAS for level-2 and level-3 routines.
Regards -Xu

Contributor

@walterddr walterddr left a comment


Thanks @xuyang1706 for the explanation. Overall it looks good to me. I think we just need to fix the comments and make sure the explanation also goes into the code.

Comment on lines 29 to 33
// For level-1 routines, we use Java implementation.
private static final com.github.fommil.netlib.BLAS NATIVE_BLAS = com.github.fommil.netlib.BLAS.getInstance();

// For level-2 and level-3 routines, we use the native BLAS.
private static final com.github.fommil.netlib.BLAS F2J_BLAS = com.github.fommil.netlib.F2jBLAS.getInstance();
Contributor


The comment and the code don't align:

  • the field name is NATIVE_BLAS for level-1,
  • the field name is F2J_BLAS for level-2/3.
    Any naming convention problem?

Contributor Author


Thanks, we have corrected the JavaDoc.

Comment on lines +222 to +223
double value = alpha * x.values[i];
F2J_BLAS.daxpy(m, value, matA.data, index * m, 1, y.data, 0, 1);
Contributor


Could you add the explanation from this PR comment into the actual code comments? I think it will help others understand this code in the future.

Contributor Author


Thanks, we did it.

@xuyang1706
Contributor Author

Thanks @xuyang1706 for the explanation. Overall it looks good to me. I think we just need to fix the comments and make sure the explanation also goes into the code.

Thanks for your careful review, @walterddr. We have refined the JavaDoc.

@walterddr
Contributor

@flinkbot run travis

Contributor

@walterddr walterddr left a comment


Thanks for the quick reply/update @xuyang1706. I think the patch looks good overall.
I will make some minor refinements and merge it soon.

// For level-1 routines, we use Java implementation.
private static final com.github.fommil.netlib.BLAS NATIVE_BLAS = com.github.fommil.netlib.BLAS.getInstance();

// For level-2 and level-3 routines, we use the native BLAS.
Contributor


nit: use /* */

Contributor Author


Yes, please refine it when merging. Thanks.

@xuyang1706
Contributor Author

Thanks for the quick reply/update @xuyang1706. I think the patch looks good overall.
I will make some minor refinements and merge it soon.

Thanks for your kind help @walterddr.

@walterddr
Contributor

@flinkbot run travis

@walterddr walterddr closed this in 23a34b8 Oct 25, 2019
4 participants