This repository has been archived by the owner on Sep 17, 2022. It is now read-only.

Implement ops and add functionality to bindings #70

Merged
merged 7 commits into from Apr 16, 2018

Conversation

dsmilkov
Contributor

@dsmilkov dsmilkov commented Apr 15, 2018

NOTE: Depends on tensorflow/tfjs-core#956

  • Implement: argMin, argMax, logicalXor, log1p, eluDer, clip, mod, round, sign, rsqrt, reciprocal, asinh, acosh, atanh, squaredDifference, expm1, atan2, conv2d, conv2dDerInput, conv2dDerFilter, maxPool, maxPoolBackprop, avgPool, avgPoolBackprop, tile, resizeBilinear, batchNorm, LRN, multinomial, softplus.
  • Implement depthwiseConv2D for dilations=1. dilations>1 requires spaceToBatch to be added to core.
  • Implement the backend methods: time, memory.
  • Add binding functionality for attribute types: float and list(int)
  • Test the ops using tfjs-core tests
  • Throw error if
    • padding type is a number, since the TF backend doesn't have kernels for that - only valid and same are supported. We support numeric padding types in the webgl and cpu backends.
    • multinomial gets called with normalized probabilities. The TF backend only supports logits (and you can't undo softmax), while we support both logits and probabilities in the webgl and cpu backends.


@dsmilkov dsmilkov changed the title [WIP] Fix failing ops, align core API, add functionality to bindings [WIP] Implement ops, add functionality to bindings Apr 15, 2018
@dsmilkov dsmilkov requested a review from nkreeger April 16, 2018 03:16
@dsmilkov dsmilkov changed the title [WIP] Implement ops, add functionality to bindings Implement ops and adds functionality to bindings Apr 16, 2018
@dsmilkov dsmilkov changed the title Implement ops and adds functionality to bindings Implement ops and add functionality to bindings Apr 16, 2018
@dsmilkov dsmilkov requested a review from nsthorat April 16, 2018 03:18
@nkreeger
Contributor

This is excellent - thank you so much! A few comments on the C++; ping me when they're updated and I'll rubber-stamp.


Review status: 0 of 4 files reviewed at latest revision, all discussions resolved.


binding/tfe_utils.cc, line 244 at r1 (raw file):

nullptr

Sorry these macros are funny - nullptr should probably be false or is_array. The 3rd param is the early return value. Honestly the only reason to return early is to bubble exceptions. Further up the call chain, calls into AssignOpAttr() have an exception-pending check. To make this easier, I'd do something like this:

inline bool IsArray(napi_env, napi_status& nstatus, napi_value* val) {
   // Use in-param nstatus
}

This way if an exception happens - it gets bubbled up through the nstatus passed in.


binding/tfe_utils.cc, line 300 at r1 (raw file):

if (IsArray(env, &js_value)) {

I wonder if we'll have to support this for TF_ATTR_FLOAT as well. Mind cleaning up this TODO: https://github.com/tensorflow/tfjs-node/blob/master/binding/tfe_utils.cc#L332


binding/tfe_utils.cc, line 302 at r1 (raw file):

auto 

There is already a nstatus variable in scope on this method.


binding/tfe_utils.cc, line 304 at r1 (raw file):

auto data = new int64_t[length];

This actually allocates on the heap and you're on the hook to cleanup. I'd use std::unique_ptr like the python bindings do:

std::unique_ptr<int64_t[]> values(new int64_t[num_values]);

When values goes out-of-scope it automatically deletes the wrapped pointer.


binding/tfe_utils.cc, line 327 at r1 (raw file):

nstatus

Ditto here with the nstatus reference.


Comments from Reviewable

@nsthorat

Review status: 0 of 4 files reviewed at latest revision, 5 unresolved discussions.


src/nodejs_kernel_backend.ts, line 325 at r1 (raw file):

  logicalXor(a: Tensor, b: Tensor): Tensor {
    // x ^ y = (x | y) & ~(x & y)
    return logicalOr(a, b).logicalAnd(logicalAnd(a, b).logicalNot());

I think we should do this at the op level and remove the kernel entirely, WDYT?

Same for other places you implement the kernel as a combination of other ops.


src/nodejs_kernel_backend.ts, line 660 at r1 (raw file):

      },
    ];
    return this.executeSingleOutput('MaxPoolGrad', opAttrs, [x, y, dy]) as

thoughts on renaming our kernel methods to align with TF?

So maxPoolBackprop => maxPoolGrad


src/run_tests.ts, line 32 at r1 (raw file):

const IGNORE_LIST: string[] = [
  'depthwiseConv2D',  // Requires space_to_batch() for dilation > 1.

You can refer to the issue

tensorflow/tfjs#161



@dsmilkov
Contributor Author

PTAL. All comments addressed.


Review status: 0 of 4 files reviewed at latest revision, 8 unresolved discussions.


binding/tfe_utils.cc, line 244 at r1 (raw file):

Previously, nkreeger (Nick Kreeger) wrote…
nullptr

Sorry these macros are funny - nullptr should probably be false or is_array. The 3rd param is the early return value. Honestly the only reason to return early is to bubble exceptions. Further up the call chain, calls into AssignOpAttr() have an exception-pending check. To make this easier, I'd do something like this:

inline bool IsArray(napi_env, napi_status& nstatus, napi_value* val) {
   // Use in-param nstatus
}

This way if an exception happens - it gets bubbled up through the nstatus passed in.

Done.


binding/tfe_utils.cc, line 300 at r1 (raw file):

Previously, nkreeger (Nick Kreeger) wrote…
if (IsArray(env, &js_value)) {

I wonder if we'll have to support this for TF_ATTR_FLOAT as well. Mind cleaning up this TODO: https://github.com/tensorflow/tfjs-node/blob/master/binding/tfe_utils.cc#L332

Added code for list of floats and list of bools and removed the TODO. Didn't add code for list of type and list of strings - we can add those as needed, since there is no way to test their behavior right now, unless you call an op that actually uses them.


binding/tfe_utils.cc, line 302 at r1 (raw file):

Previously, nkreeger (Nick Kreeger) wrote…
auto 

There is already a nstatus variable in scope on this method.

Done.


binding/tfe_utils.cc, line 304 at r1 (raw file):

Previously, nkreeger (Nick Kreeger) wrote…
auto data = new int64_t[length];

This actually allocates on the heap and you're on the hook to cleanup. I'd use std::unique_ptr like the python bindings do:

std::unique_ptr<int64_t[]> values(new int64_t[num_values]);

When values goes out-of-scope it automatically deletes the wrapped pointer.

Done. Thanks! I have no idea what I'm doing, but eager to learn - thanks for the guidance :)


binding/tfe_utils.cc, line 327 at r1 (raw file):

Previously, nkreeger (Nick Kreeger) wrote…
nstatus

Ditto here with the nstatus reference.

Done.


src/nodejs_kernel_backend.ts, line 325 at r1 (raw file):

Previously, nsthorat (Nikhil Thorat) wrote…

I think we should do this at the op level and remove the kernel entirely, WDYT?

Same for other places you implement the kernel as a combination of other ops.

Done.


src/nodejs_kernel_backend.ts, line 660 at r1 (raw file):

Previously, nsthorat (Nikhil Thorat) wrote…

thoughts on renaming our kernel methods to align with TF?

So maxPoolBackprop => maxPoolGrad

let's do it in another PR (potentially)


src/run_tests.ts, line 32 at r1 (raw file):

Previously, nsthorat (Nikhil Thorat) wrote…

You can refer to the issue

tensorflow/tfjs#161

Done.



@nkreeger
Contributor

:lgtm_strong:


Review status: 0 of 4 files reviewed at latest revision, 8 unresolved discussions.



@dsmilkov dsmilkov merged commit d99d108 into master Apr 16, 2018
@dsmilkov dsmilkov deleted the align branch April 16, 2018 18:56