Conversation

Collaborator

@pyu10055 pyu10055 commented Jun 30, 2020

Moved handling of the -1 split size value to core.
Added support for 3D tensor irfft.
Updated the where op to allow broadcasting of the a and b inputs.

To see the logs from the Cloud Build CI, please join either our discussion or announcement mailing list.

@pyu10055 pyu10055 requested a review from lina128 June 30, 2020 22:48
Collaborator

@lina128 lina128 left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128 and @pyu10055)


tfjs-core/src/ops/split.ts, line 76 at r1 (raw file):

        new Array(numOrSizeSplits).fill($x.shape[$axis] / numOrSizeSplits);
  } else {
    // Allow the last number of split array to be -1, which indicates the rest

Add a TODO to move the transformation to kernels. Eventually ops should only contain validation checks.
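
As a rough illustration, the -1 handling amounts to replacing the -1 entry with whatever remains along the axis; resolveSplitSizes below is a hypothetical helper for this sketch, not the PR's actual code:

// Minimal sketch, assuming a -1 entry means "the rest of the axis".
function resolveSplitSizes(sizes: number[], axisSize: number): number[] {
  const negIndex = sizes.indexOf(-1);
  if (negIndex === -1) {
    return sizes;
  }
  // Sum of the explicitly specified sizes.
  const known = sizes.reduce((sum, s) => s === -1 ? sum : sum + s, 0);
  const resolved = sizes.slice();
  resolved[negIndex] = axisSize - known;
  return resolved;
}

// resolveSplitSizes([2, 3, -1], 7) -> [2, 3, 2]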


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

  const $condition = convertToTensor(condition, 'condition', 'where', 'bool');

  // find the broadcastable shape for $a and $b

The C++ implementation broadcasts inside the kernel; can we align with that?
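
For reference, a minimal sketch of doing the broadcast in the op for now, assuming tfjs-core's broadcast_util.assertAndGetBroadcastShape and the broadcastTo op (import paths assumed, and not necessarily the code this PR ends up with):

import {broadcastTo} from './broadcast_to';
import * as broadcast_util from './broadcast_util';

// Broadcast $a and $b to a common shape before handing them to the kernel.
const broadcastShape =
    broadcast_util.assertAndGetBroadcastShape($a.shape, $b.shape);
const $broadcastedA = broadcastTo($a, broadcastShape);
const $broadcastedB = broadcastTo($b, broadcastShape);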

Collaborator Author

@pyu10055 pyu10055 left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128)


tfjs-core/src/ops/split.ts, line 76 at r1 (raw file):

Previously, lina128 (Na Li) wrote…

Add a TODO to move the transformation to kernels. Eventually ops should only contain validation checks.

Done.


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

Previously, lina128 (Na Li) wrote…

The C++ implementation broadcasts inside the kernel; can we align with that?

Added TODO

Contributor

@tafsiri tafsiri left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128 and @pyu10055)


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

Previously, pyu10055 (Ping Yu) wrote…

Added TODO

Small note about temporary logic that is going to move to the kernels: it should be within the forwardFunc (the first argument to runKernelFunc), and the original inputs ($a, $b) should be kept as-is in the inputs map below (the initial input validation would also be on $a and $b). This allows non-modular backends to run but ensures that any new implementations conform to the target interface.

See ops/max.ts for an example of a forwardFunc that does a fair amount of preprocessing.
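
In other words, something along these lines (a sketch only; the kernel name, the backend.select call, and the broadcast helpers are assumptions here; see ops/max.ts for the real pattern):

// Preprocessing (e.g. broadcasting) lives in the forwardFunc, which only
// non-modular backends execute; the inputs map keeps the original tensors.
const forward: ForwardFunc<Tensor> = (backend) => {
  const broadcastShape =
      broadcast_util.assertAndGetBroadcastShape($a.shape, $b.shape);
  return backend.select(
      $condition, broadcastTo($a, broadcastShape),
      broadcastTo($b, broadcastShape));
};

const inputs: SelectV2Inputs = {condition: $condition, t: $a, e: $b};
return ENGINE.runKernelFunc(
    forward, inputs as {} as NamedTensorMap, null /* grad */, 'SelectV2');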


tfjs-core/src/ops/where.ts, line 47 at r2 (raw file):

 * @param b A tensor with the same dtype as `a` and with shape that is
 *     compatible with `a`.
 * @return A tensor with same detype as `a` and `b`, and shape that is

typo: detype

Collaborator Author

@pyu10055 pyu10055 left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128 and @tafsiri)


tfjs-core/src/ops/split.ts, line 76 at r1 (raw file):

Previously, pyu10055 (Ping Yu) wrote…

Done.

Moved the logic to forwardFunc.


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

Small note about temporary logic that is going to move to the kernels: it should be within the forwardFunc (the first argument to runKernelFunc), and the original inputs ($a, $b) should be kept as-is in the inputs map below (the initial input validation would also be on $a and $b). This allows non-modular backends to run but ensures that any new implementations conform to the target interface.

See ops/max.ts for an example of a forwardFunc that does a fair amount of preprocessing.

Moved the logic to forwardFunc.


tfjs-core/src/ops/where.ts, line 47 at r2 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

typo: detype

Done.

Collaborator

@lina128 lina128 left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128, @pyu10055, and @tafsiri)


tfjs-converter/src/operations/executors/spectral_executor.ts, line 44 at r3 (raw file):

        case 'IRFFT': {
          const x = getParamValue('x', node, tensorMap, context) as tfc.Tensor;
          let result = tfc.irfft(

Hi Ping, should this post-processing be moved to the irfft op too?
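
If it did move, a rough sketch of handling the 3D case inside irfft could be to flatten the batch dims, run the existing 2D path, and restore the shape afterwards (irfft2d here is a hypothetical name for the current 2D implementation, not an actual tfjs symbol):

// Hypothetical sketch: flatten leading dims, run 2D irfft, reshape back.
const innerDim = input.shape[input.shape.length - 1];
const batchShape = input.shape.slice(0, -1);
const input2d = reshape(input, [-1, innerDim]);
const result2d = irfft2d(input2d);           // existing 2D path (assumed)
const outputInnerDim = result2d.shape[1];    // (innerDim - 1) * 2 for a real output
return reshape(result2d, [...batchShape, outputInnerDim]);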


tfjs-core/src/ops/split.ts, line 54 at r3 (raw file):

 *
 * @param x The input tensor to split.
 * @param numOrSizeSplits Either an integer indicating the number of

Please add a sentence about -1. Something similar to this: https://www.tensorflow.org/api_docs/python/tf/raw_ops/SplitV
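
For example, a usage note along these lines could work (illustrative only, not the PR's final doc text):

// A -1 in the sizes array means "use the remainder of the axis".
const x = tf.ones([2, 7]);
const [a, b, c] = tf.split(x, [2, 3, -1], 1);
// a.shape = [2, 2], b.shape = [2, 3], c.shape = [2, 2]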


tfjs-core/src/ops/split.ts, line 73 at r3 (raw file):

        $x.shape[$axis] % numOrSizeSplits === 0,
        () => 'Number of splits must evenly divide the axis.');
    splitSizes =

Move this to forwardFunc too.

Contributor

@tafsiri tafsiri left a comment

Reviewed 1 of 2 files at r3.
Reviewable status: 0 of 1 approvals obtained (waiting on @lina128 and @pyu10055)


tfjs-core/src/ops/where.ts, line 67 at r3 (raw file):

  const inputs: SelectV2Inputs = {condition: $condition, t: $a, e: $b};
  return ENGINE.runKernelFunc((backend, save) => {
    // find the broadcastable shape for $a and $b

nit: since this is longer than a single line, we have been pulling these functions up and assigning them to a const.
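
i.e. roughly this structure (illustrative only; the forward body is elided to the select call, which is an assumption here):

// Pull the multi-line forward function into a named const instead of
// passing an inline arrow function as the first argument.
const forward: ForwardFunc<Tensor> = (backend) =>
    // find the broadcastable shape for $a and $b, then select
    backend.select($condition, $a, $b);

return ENGINE.runKernelFunc(forward, inputs as {} as NamedTensorMap);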

Collaborator Author

@pyu10055 pyu10055 left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128 and @pyu10055)


tfjs-converter/src/operations/executors/spectral_executor.ts, line 44 at r3 (raw file):

Previously, lina128 (Na Li) wrote…

Hi Ping, should this post-processing be moved to the irfft op too?

It is not consistent with our other FFT-related APIs; we should probably leave it in the converter until we modularize everything.


tfjs-core/src/ops/split.ts, line 73 at r3 (raw file):

Previously, lina128 (Na Li) wrote…

Move this to forwardFunc too.

Done.

Contributor

@tafsiri tafsiri left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128 and @pyu10055)


tfjs-converter/src/operations/executors/spectral_executor.ts, line 44 at r3 (raw file):

Previously, pyu10055 (Ping Yu) wrote…

It is not consistent with our other FFT-related APIs; we should probably leave it in the converter until we modularize everything.

Could you elaborate on this a bit more? Looking at the comment here, it looks like this is just a bug in irfft and we should fix it there (orthogonal to modularization). Is that correct? I tried running the following in Colab (Python) and in tfjs and got differently shaped results.

import tensorflow as tf

real_part = tf.random.uniform([3, 4, 5])
input_tensor = tf.complex(real=real_part, imag=tf.zeros_like(real_part))
irfft_res = tf.signal.irfft(input_tensor)
irfft_res.shape

Produces: TensorShape([3, 4, 8])

const real = tf.randomUniform([3, 4, 5]);
const imag = tf.zerosLike(real);
const x = tf.complex(real, imag);

console.log(x.irfft().shape)

Produces: 12,8

Is that a good comparison? If so, I'd also say this fix should just move into irfft.

Collaborator Author

@pyu10055 pyu10055 left a comment

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128)


tfjs-converter/src/operations/executors/spectral_executor.ts, line 44 at r3 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

Could you elaborate on this a bit more? Looking at the comment here, it looks like this is just a bug in irfft and we should fix it there (orthogonal to modularization). Is that correct? I tried running the following in Colab (Python) and in tfjs and got differently shaped results.

import tensorflow as tf

real_part = tf.random.uniform([3, 4, 5])
input_tensor = tf.complex(real=real_part, imag=tf.zeros_like(real_part))
irfft_res = tf.signal.irfft(input_tensor)
irfft_res.shape

Produces: TensorShape([3, 4, 8])

const real = tf.randomUniform([3, 4, 5]);
const imag = tf.zerosLike(real);
const x = tf.complex(real, imag);

console.log(x.irfft().shape)

Produces: 12,8

Is that a good comparison? If so, I'd also say this fix should just move into irfft.

Moved the logic to core.


tfjs-core/src/ops/split.ts, line 54 at r3 (raw file):

Previously, lina128 (Na Li) wrote…

Please add a sentence about -1. Something similar to this: https://www.tensorflow.org/api_docs/python/tf/raw_ops/SplitV

Done.


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

Previously, pyu10055 (Ping Yu) wrote…

Moved the logic to forwardFunc.

Done.

Collaborator Author

@pyu10055 pyu10055 left a comment

Thanks for the review, PTAL.

Reviewable status: 0 of 1 approvals obtained (waiting on @lina128)


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

Previously, pyu10055 (Ping Yu) wrote…

Done.

Moved the assertion and broadcast logic out to the core API to keep the kernel interface consistent. The issue is that the broadcast uses a different op, while a utility method cannot perform tensor manipulation.

Collaborator

@lina128 lina128 left a comment

This is great. Thank you, Ping!

Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @lina128)

Contributor

@tafsiri tafsiri left a comment

Thanks Ping! Just one change request and a TODO to add in the where op.

Reviewed 3 of 8 files at r1, 9 of 10 files at r4.
Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @lina128 and @pyu10055)


tfjs-core/src/ops/split.ts, line 75 at r4 (raw file):

  const inputs: SplitVInputs = {x: $x};
  const attr: SplitVAttrs = {numOrSizeSplits, axis: $axis};

This should be the original axis param; modular kernels should do the parseAxisParam call as well.
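
A sketch of the suggested change, assuming tfjs-core's util.parseAxisParam; the names follow the excerpt above and this is not necessarily the final code:

// Op side: pass the raw axis through, untouched.
const inputs: SplitVInputs = {x: $x};
const attr: SplitVAttrs = {numOrSizeSplits, axis};

// Kernel side (modular backend): normalize the axis there instead.
const $axis = util.parseAxisParam(axis, x.shape)[0];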


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

Previously, pyu10055 (Ping Yu) wrote…

Moved the assertion and broadcast logic out to the core API to keep the kernel interface consistent. The issue is that the broadcast uses a different op, while a utility method cannot perform tensor manipulation.

Yes, in this case the backend should implement broadcastTo as a kernel and use that functionality internally. Could you file an issue to implement broadcastTo in wasm and add a TODO here to move this code into forwardFunc once that is done?

Collaborator Author

@pyu10055 pyu10055 left a comment

Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @lina128 and @tafsiri)


tfjs-core/src/ops/split.ts, line 75 at r4 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

This should be the original axis param; modular kernels should do the parseAxisParam call as well.

Done.

Collaborator Author

@pyu10055 pyu10055 left a comment

Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @lina128 and @tafsiri)


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

Yes, in this case the backend should implement broadcastTo as a kernel and use that functionality internally. Could you file an issue to implement broadcastTo in wasm and add a TODO here to move this code into forwardFunc once that is done?

I think broadcastTo is part of the modularization op list.

Contributor

@tafsiri tafsiri left a comment

Thanks!

Reviewed 2 of 2 files at r5.
Reviewable status: :shipit: complete! 2 of 1 approvals obtained (waiting on @lina128)


tfjs-core/src/ops/where.ts, line 58 at r1 (raw file):

Previously, pyu10055 (Ping Yu) wrote…

I think broadcastTo is part of the modularization op list.

Yep, but this issue would be to get the kernel done in wasm sooner rather than later, so that we can eventually remove runKernelFunc from where.

Since all of wasm's kernels are modular, that is the reason you could just put this in the forward pass (note that the broadcastTo op can run in wasm since its implementation is a composition of various ops).

@pyu10055 pyu10055 merged commit c98faee into master Jul 8, 2020
@pyu10055 pyu10055 deleted the op_fixes branch July 8, 2020 20:22