
Conversation

Collaborator

@jinjingforever jinjingforever commented Sep 15, 2020

  • Modularize Sin, Tan, Atan2, BatchNorm, and multiply
  • Export a KernelFunc from each of them (plus Cos)
  • Slightly refactor some methods in backend_webgl.ts to make them work better with kernels.


@jinjingforever jinjingforever marked this pull request as ready for review September 16, 2020 00:01
Collaborator

@lina128 lina128 left a comment



tfjs-backend-webgl/src/backend_webgl.ts, line 874 at r1 (raw file):

  // TODO(jingjin): delete this method when batchMatMul (which uses multiply
  // here) is modularized.
  multiply(a: Tensor, b: Tensor): Tensor {

Multiply may need to be modularized along with Complex, because this.complex also needs to be modularized in this method. For context, see this PR: #3868
In a nutshell, the underlying data structure for Complex will change, so kernels affected by that change need to be modularized together. Beyond that, we should not leave implementations in backend_webgl once they are modularized; batchMatMul can call tf.mul(...) so that we decouple this file from any dependency on multiply.
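
For illustration, the decoupling could look something like this (scaleProduct is a made-up caller, just to show the shape; tf.mul dispatches to whatever Multiply kernel is registered):

    import * as tf from '@tensorflow/tfjs-core';

    // A caller that previously went through the backend's private
    // multiply() routes through the public op instead.
    function scaleProduct(a: tf.Tensor, b: tf.Tensor): tf.Tensor {
      return tf.mul(a, b);
    }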


tfjs-backend-webgl/src/kernels/Atan2.ts, line 28 at r1 (raw file):

    (params: {inputs: Atan2Inputs, backend: MathBackendWebGL}) =>
        TensorInfo | TensorInfo[] = ({inputs, backend}) => {
          const {a, b} = inputs;

Just some thoughts: I wonder if it's possible to abstract this kernelFunc into a method createBinaryKernelFunc(a, b, program, packedProgram) to be shared by all binary ops, similar to CPU. There may be more nuance in WebGL than CPU, so I'll leave it to @annxingyuan and @tafsiri to shed some light on this. Or we can keep modularizing and, if we see a pattern, refactor later. Either way is fine by me.


tfjs-backend-webgl/src/kernels/BatchNorm.ts, line 29 at r1 (raw file):

  attrs: FusedBatchNormAttrs
}) => TensorInfo | TensorInfo[] = ({inputs, backend, attrs}) => {
  const {x, mean, variance, offset, scale} = inputs;

There is some transform code at the op level that we need to duplicate here, because the inputs and attributes are passed in raw form (without the transforms). For example, varianceEpsilon may be null, in which case it should default to 0.001. Take a look at the transform code in this part: https://github.com/tensorflow/tfjs/blob/master/tfjs-core/src/ops/batchnorm.ts#L63-L89 After modularization, the transform code at the op level should be removed.
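
For example, the kernel func would need to absorb something like this (resolveVarianceEpsilon is just an illustrative helper name; the 0.001 default comes from the op-level transform linked above):

    import {FusedBatchNormAttrs} from '@tensorflow/tfjs-core';

    // The op layer used to default a null varianceEpsilon to 0.001;
    // after modularization the kernel has to do this itself.
    function resolveVarianceEpsilon(attrs: FusedBatchNormAttrs): number {
      return attrs.varianceEpsilon == null ? 0.001 : attrs.varianceEpsilon;
    }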


tfjs-backend-webgl/src/kernels/Multiply_impl.ts, line 54 at r1 (raw file):

  }

  if (backend.shouldExecuteOnCPU([a, b]) && a instanceof Tensor &&

CPU has changed Complex.complexTensor to complexTensorInfo; I wonder if it's better to migrate WebGL in the same order, i.e. Complex along with Multiply.


tfjs-backend-webgl/src/kernels/Sin.ts, line 26 at r1 (raw file):

    (params: {inputs: SinInputs, backend: MathBackendWebGL}) =>
        TensorInfo | TensorInfo[] = ({inputs, backend}) => {
          const {x} = inputs;

Should we make a createUnaryKernelFunc(...) for all the unary ops?
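
Rough sketch of what I have in mind (names and wiring are illustrative, not a final API; UnaryOpProgram and runWebGLProgram are the existing WebGL plumbing):

    import {KernelFunc, TensorInfo} from '@tensorflow/tfjs-core';
    import {MathBackendWebGL} from '../backend_webgl';
    import * as unaryop_gpu from '../unaryop_gpu';

    // Builds a kernel func from a GLSL op snippet, so each unary kernel
    // collapses to a one-line definition.
    export function createUnaryKernelFunc(opSnippet: string): KernelFunc {
      return ({inputs, backend}) => {
        const {x} = inputs as {x: TensorInfo};
        const webglBackend = backend as MathBackendWebGL;
        const program = new unaryop_gpu.UnaryOpProgram(x.shape, opSnippet);
        return webglBackend.runWebGLProgram(program, [x], x.dtype);
      };
    }

    // e.g.:
    export const sinKernelFunc = createUnaryKernelFunc(unaryop_gpu.SIN);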

Contributor

@annxingyuan annxingyuan left a comment



tfjs-backend-webgl/src/kernels/Atan2.ts, line 28 at r1 (raw file):

Previously, lina128 (Na Li) wrote…

Just some thoughts: I wonder if it's possible to abstract this kernelFunc into a method createBinaryKernelFunc(a, b, program, packedProgram) to be shared by all binary ops, similar to CPU. There may be more nuance in WebGL than CPU, so I'll leave it to @annxingyuan and @tafsiri to shed some light on this. Or we can keep modularizing and, if we see a pattern, refactor later. Either way is fine by me.

Creating an abstracted helper method sounds good to me! The only nuance in WebGL is that some of the binary kernels have packed implementations (backend_webgl.ts has a packedBinaryOp helper to deal with this specifically), which we would need to account for. Up to you whether to create that helper method in this PR or a follow-up PR.
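
For reference, a rough sketch of that helper with the packed nuance folded in (the name createBinaryKernelFunc and its signature are illustrative; BinaryOpProgram, BinaryOpPackedProgram, runWebGLProgram, and the WEBGL_PACK_BINARY_OPERATIONS flag are the existing pieces it would lean on):

    import {env, KernelFunc, TensorInfo} from '@tensorflow/tfjs-core';
    import {MathBackendWebGL} from '../backend_webgl';
    import {BinaryOpProgram} from '../binaryop_gpu';
    import {BinaryOpPackedProgram} from '../binaryop_packed_gpu';

    export function createBinaryKernelFunc(
        opSnippet: string, packedOpSnippet?: string): KernelFunc {
      return ({inputs, backend}) => {
        const {a, b} = inputs as {a: TensorInfo, b: TensorInfo};
        const webglBackend = backend as MathBackendWebGL;
        // Prefer the packed shader when packing is enabled and a packed
        // snippet was supplied.
        const program =
            (env().getBool('WEBGL_PACK_BINARY_OPERATIONS') &&
             packedOpSnippet != null) ?
            new BinaryOpPackedProgram(packedOpSnippet, a.shape, b.shape) :
            new BinaryOpProgram(opSnippet, a.shape, b.shape);
        return webglBackend.runWebGLProgram(program, [a, b], a.dtype);
      };
    }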


tfjs-backend-webgl/src/kernels/Cos.ts, line 24 at r1 (raw file):

export const cosKernelFunc:
    (params: {inputs: CosInputs, backend: MathBackendWebGL}) =>

nit: we've been calling this 'args' rather than 'params' elsewhere (e.g. WASM kernels)


tfjs-backend-webgl/src/kernels/Multiply_impl.ts, line 54 at r1 (raw file):

  }

  if (backend.shouldExecuteOnCPU([a, b]) && a instanceof Tensor &&

Why introduce the instanceof Tensor checks here?


tfjs-backend-webgl/src/kernels/Sin.ts, line 26 at r1 (raw file):

Previously, lina128 (Na Li) wrote…

Should we make a createUnaryKernelFunc(...) for all the unary ops?

sgtm

@googlebot

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment "@googlebot I fixed it.". If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@googlebot

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

Collaborator Author

@jinjingforever jinjingforever left a comment



tfjs-backend-webgl/src/backend_webgl.ts, line 874 at r1 (raw file):

Previously, lina128 (Na Li) wrote…

Multiply may need to be modularized along with Complex, because this.complex also needs to be modularized in this method. For context, see this PR: #3868
In a nutshell, the underlying data structure for Complex will change, so kernels affected by that change need to be modularized together. Beyond that, we should not leave implementations in backend_webgl once they are modularized; batchMatMul can call tf.mul(...) so that we decouple this file from any dependency on multiply.

Thanks for the info! Just realized that this is a more involved change than I thought :) Will do multiply+complex in another PR after I learn more about this topic. Thanks!


tfjs-backend-webgl/src/kernels/Atan2.ts, line 28 at r1 (raw file):

Previously, annxingyuan (Ann Yuan) wrote…

Creating an abstracted helper method sounds good to me! The only nuance in WebGL is that some of the binary kernels have packed implementations (backend_webgl.ts has a packedBinaryOp helper to deal with this specifically), which we would need to account for. Up to you whether to create that helper method in this PR or a follow-up PR.

Added unaryKernelFunc and binaryKernelFunc in kernel_funcs_utils.ts (see above; similar to #3919 from Na). Please see if those would work. Will also add the logic to handle the complex64 data type in upcoming PRs.


tfjs-backend-webgl/src/kernels/BatchNorm.ts, line 29 at r1 (raw file):

Previously, lina128 (Na Li) wrote…

There is some transform code at the op level that we need to duplicate here, because the inputs and attributes are passed in raw form (without the transforms). For example, varianceEpsilon may be null, in which case it should default to 0.001. Take a look at the transform code in this part: https://github.com/tensorflow/tfjs/blob/master/tfjs-core/src/ops/batchnorm.ts#L63-L89 After modularization, the transform code at the op level should be removed.

Thanks for the discussion offline! Looks like this will also need reshape to be modularized, and Ann has a PR (#3910) for that already. I will modularize batchNorm after that PR is merged. Removed from this PR for now.


tfjs-backend-webgl/src/kernels/Cos.ts, line 24 at r1 (raw file):

Previously, annxingyuan (Ann Yuan) wrote…

nit: we've been calling this 'args' rather than 'params' elsewhere (e.g. WASM kernels)

Ack.


tfjs-backend-webgl/src/kernels/Multiply_impl.ts, line 54 at r1 (raw file):

Previously, lina128 (Na Li) wrote…

CPU has changed Complex.complexTensor to complexTensorInfo; I wonder if it's better to migrate WebGL in the same order, i.e. Complex along with Multiply.

Agree. Will do.


tfjs-backend-webgl/src/kernels/Multiply_impl.ts, line 54 at r1 (raw file):

Previously, annxingyuan (Ann Yuan) wrote…

Why introduce the instanceof Tensor checks here?

Ack. Will clean this up when I modularize multiply+complex in another PR.


tfjs-backend-webgl/src/kernels/Sin.ts, line 26 at r1 (raw file):

Previously, annxingyuan (Ann Yuan) wrote…

sgtm

Done. See kernel_funcs_utils.ts.

Contributor

@tafsiri tafsiri left a comment


Reviewed 9 of 57 files at r2, 3 of 5 files at r3.


tfjs-backend-webgl/src/kernels/Div.ts, line 25 at r3 (raw file):

export const divKernelFunc =
    binaryKernelFunc(binaryop_gpu.DIV, binaryop_packed_gpu.DIV, true);

Use a named constant for true, or add an inline /* name of arg */ comment, for readability.
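
e.g. something like this (checkOutOfBounds is a stand-in; use whatever the parameter is actually named):

    import * as binaryop_gpu from '../binaryop_gpu';
    import * as binaryop_packed_gpu from '../binaryop_packed_gpu';
    import {binaryKernelFunc} from '../kernel_utils/kernel_funcs_utils';

    export const divKernelFunc = binaryKernelFunc(
        binaryop_gpu.DIV, binaryop_packed_gpu.DIV, /* checkOutOfBounds= */ true);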


tfjs-backend-webgl/src/kernels/SquaredDifference.ts, line 22 at r3 (raw file):

import {binaryKernelFunc} from '../kernel_utils/kernel_funcs_utils';

const SQUARED_DIFFERENCE = 'return (a - b) * (a - b);';

Question about conventions for these snippets: other files, like square above, define the snippet in unaryop_gpu so it can be reused, whereas this one is not reused elsewhere. Is this an opportunity for unifying the pattern? I have a slight preference for this style, where the snippet lives with the kernel definition. I could go either way, but I think they should be consistent if possible.

cc @annxingyuan
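
To make the comparison concrete, the co-located style end to end would be roughly (a sketch assuming binaryKernelFunc's positional signature from this PR, and that the packed program can reuse the same snippet):

    import {binaryKernelFunc} from '../kernel_utils/kernel_funcs_utils';

    // The GLSL snippet lives next to the kernel definition that uses it.
    const SQUARED_DIFFERENCE = 'return (a - b) * (a - b);';

    export const squaredDifferenceKernelFunc =
        binaryKernelFunc(SQUARED_DIFFERENCE, SQUARED_DIFFERENCE);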

Collaborator Author

@jinjingforever jinjingforever left a comment


Thanks Yannick! Will wait for Ann's opinion on where to put op snippets.



tfjs-backend-webgl/src/kernels/Div.ts, line 25 at r3 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

Use a named constant for true, or add an inline /* name of arg */ comment, for readability.

Done

Contributor

@annxingyuan annxingyuan left a comment



tfjs-backend-webgl/src/kernels/SquaredDifference.ts, line 22 at r3 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

Question about conventions for these snippets: other files, like square above, define the snippet in unaryop_gpu so it can be reused, whereas this one is not reused elsewhere. Is this an opportunity for unifying the pattern? I have a slight preference for this style, where the snippet lives with the kernel definition. I could go either way, but I think they should be consistent if possible.

cc @annxingyuan

Putting the snippets in the file SGTM!

Collaborator Author

@jinjingforever jinjingforever left a comment



tfjs-backend-webgl/src/kernels/SquaredDifference.ts, line 22 at r3 (raw file):

Previously, annxingyuan (Ann Yuan) wrote…

Putting the snippets in the file SGTM!

Done. Thanks!

Contributor

@tafsiri tafsiri left a comment


LGTM!

Reviewed 10 of 10 files at r4.
Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @annxingyuan, @lina128, and @tafsiri)

@jinjingforever jinjingforever merged commit 47a4348 into master Sep 17, 2020
@jinjingforever jinjingforever deleted the modularize branch September 17, 2020 16:27