
Conversation


@nsthorat nsthorat commented Oct 28, 2019

Adds Conv2D.

  • Calls into XNNPACK, caching the xnn operators keyed by the arguments to the xnn operator constructor.
  • Since the same filter can be used with multiple argument sets, we keep 1) a map from filter to the array of argument sets it has been used with and 2) a map from argument set to the xnn operator (see the sketch below).
  • When a filter is disposed, we delete all of the xnn operators associated with it.
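
A rough sketch of that two-map cache, assembled from the Conv2D.cc snippets quoted later in this review (simplified; error handling and operator counting omitted):

    #include <array>
    #include <map>
    #include <vector>

    #include <xnnpack.h>

    // Key built from the arguments passed to the xnn operator constructor.
    // std::array instead of a vanilla array as it implements the compare
    // operator needed for std::map.
    typedef std::array<int, 15> OperatorCacheKey;

    // 2) argument set -> xnn operator.
    std::map<OperatorCacheKey, xnn_operator_t> operator_cache;
    // 1) filter id -> every argument set that filter has been used with.
    std::map<int, std::vector<OperatorCacheKey>> filter_operator_cache_key_map;

    // When a filter is disposed, delete every xnn operator created with it.
    void delete_xnn_operators(const int filter_id) {
      for (auto& cache_key : filter_operator_cache_key_map[filter_id]) {
        xnn_delete_operator(operator_cache[cache_key]);
        operator_cache.erase(cache_key);
      }
      filter_operator_cache_key_map.erase(filter_id);
    }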

Also throws better errors when kernels aren't defined.



@nsthorat nsthorat left a comment


Reviewable status: 0 of 1 approvals obtained


tfjs-core/src/kernel_registry.ts, line 26 at r1 (raw file):

/** These are extra non-tensor/primitive params passed to kernel functions. */
export type Attribute =
    number|number[]|boolean|boolean[]|string|string[]|NamedAttrMap;

this is so Conv2DInfo can be used

@nsthorat nsthorat changed the title (NOT READY) [tfjs-backend-wasm] Add Conv2D. [WASM] Add Conv2D that calls into xnn pack. Nov 4, 2019
tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 108 at r2 (raw file):

util::warn(
"XNN status for xnn_create_convolution2d_nhwc_f32 is not successful. "
"Got status %d. Use -c dbg to see XNN logs.",
status);
@Maratyszcza Maratyszcza (Collaborator) commented:


Does util::warn throw an exception, or otherwise exit the function? If operation creation failed, it makes no sense to continue.

tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 95 at r2 (raw file):

float output_min = -std::numeric_limits<float>::infinity();
float output_max = std::numeric_limits<float>::infinity();

const float* bias_buf = nullptr;
@Maratyszcza Maratyszcza (Collaborator) commented:


Does TF.js support fused Conv2D+Bias? Adding bias separately is inefficient


@nsthorat nsthorat left a comment


Reviewable status: 0 of 1 approvals obtained (waiting on @annxingyuan, @dsmilkov, and @Maratyszcza)


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 95 at r2 (raw file):

Previously, Maratyszcza (Marat Dukhan) wrote…

Does TF.js support fused Conv2D+Bias? Adding bias separately is inefficient

We have a separate kernel for that (because we're eager only). In a follow-up PR I'll add support for fused bias and activations!


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 108 at r2 (raw file):

Previously, Maratyszcza (Marat Dukhan) wrote…

Does util::warn throw an exception, or otherwise exit the function? If operation creation failed, it makes no sense to continue.

Currently we just console.warn in the browser -- normal exceptions aren't propagated: https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md

We might at some point have an explicit call back into JavaScript to inform it that an error is thrown but we haven't done anything like that yet. Is there something you were doing in xnnpack + wasm that's simpler?


@Maratyszcza Maratyszcza left a comment


Reviewable status: 0 of 1 approvals obtained (waiting on @annxingyuan, @dsmilkov, and @nsthorat)


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 97 at r2 (raw file):

    const float* bias_buf = nullptr;
    xnn_status status = xnn_create_convolution2d_nhwc_f32(
        pad_top, pad_right, pad_bottom, pad_left, filter_height, filter_width,

Note: to get a convolution with "SAME" padding, use pad_top = pad_right = pad_left = pad_bottom = 0 and flags = XNN_FLAG_TENSORFLOW_SAME_PADDING. This will ensure that the padding is adjusted depending on input size specified in xnn_setup_convolution2d_nhwc_f32. Otherwise, you'd risk creating multiple XNNPACK Convolution operators for a single TF.js Convolution operator, just because changing input size also causes changes in padding.
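
A minimal sketch of that setup (the middle arguments of the create call are elided; is_same_pad is the flag already passed across the JS/WASM bridge in this PR):

    // With XNN_FLAG_TENSORFLOW_SAME_PADDING set, the explicit paddings are
    // passed as zero and XNNPACK recomputes them from the input size given to
    // xnn_setup_convolution2d_nhwc_f32.
    const uint32_t flags =
        is_same_pad ? XNN_FLAG_TENSORFLOW_SAME_PADDING : 0;
    xnn_status status = xnn_create_convolution2d_nhwc_f32(
        is_same_pad ? 0 : pad_top, is_same_pad ? 0 : pad_right,
        is_same_pad ? 0 : pad_bottom, is_same_pad ? 0 : pad_left,
        filter_height, filter_width,
        /* ...strides, dilations, groups, channels, weights, bias,
           output_min, output_max, */
        flags, &conv2d_op);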


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 108 at r2 (raw file):

Previously, nsthorat (Nikhil Thorat) wrote…

Currently we just console.warn in the browser -- normal exceptions aren't propagated: https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md

We might at some point have an explicit call back into JavaScript to inform it that an error is thrown but we haven't done anything like that yet. Is there something you were doing in xnnpack + wasm that's simpler?

In case of failure XNNPACK returns one of the error codes and leaves the output conv2d_op variable unchanged (i.e. nullptr in this case). However, xnn_setup_* and xnn_run_operator assume non-null operator pointer, and will crash if you pass a null pointer. So TF.js wrapper should return early in case of error.
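
i.e., roughly (a sketch of the early-return pattern, using the existing util::warn helper; the create arguments are elided):

    xnn_operator_t conv2d_op = nullptr;
    xnn_status status = xnn_create_convolution2d_nhwc_f32(
        /* ...arguments as above, */ &conv2d_op);
    if (status != xnn_status_success) {
      util::warn(
          "XNN status for xnn_create_convolution2d_nhwc_f32 is not successful. "
          "Got status %d. Use -c dbg to see XNN logs.",
          status);
      // conv2d_op is still nullptr here; xnn_setup_* / xnn_run_operator would
      // crash on it, so return early instead of continuing.
      return;
    }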

Nikhil Thorat added 2 commits November 5, 2019 07:44

@nsthorat nsthorat left a comment


Reviewable status: 0 of 1 approvals obtained (waiting on @annxingyuan, @dsmilkov, and @Maratyszcza)


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 97 at r2 (raw file):

Previously, Maratyszcza (Marat Dukhan) wrote…

Note: to get a convolution with "SAME" padding, use pad_top = pad_right = pad_left = pad_bottom = 0 and flags = XNN_FLAG_TENSORFLOW_SAME_PADDING. This will ensure that the padding is adjusted depending on input size specified in xnn_setup_convolution2d_nhwc_f32. Otherwise, you'd risk creating multiple XNNPACK Convolution operators for a single TF.js Convolution operator, just because changing input size also causes changes in padding.

Done, thanks for the tip!


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 108 at r2 (raw file):

Previously, Maratyszcza (Marat Dukhan) wrote…

In case of failure XNNPACK returns one of the error codes and leaves the output conv2d_op variable unchanged (i.e. nullptr in this case). However, xnn_setup_* and xnn_run_operator assume non-null operator pointer, and will crash if you pass a null pointer. So TF.js wrapper should return early in case of error.

Okay, great to know! Since this affects Prelu and Conv2D I'll add this in a follow-up: #2328

tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 55 at r4 (raw file):

xnn_delete_operator(conv2d_op);
tfjs::backend::xnn_operator_count--;
operator_cache.erase(operator_cache_key);
}
@annxingyuan annxingyuan (Contributor) commented:


Erase filter operator cache key


@nsthorat nsthorat left a comment


Reviewable status: 0 of 1 approvals obtained (waiting on @annxingyuan, @dsmilkov, and @Maratyszcza)


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 55 at r4 (raw file):

Previously, annxingyuan (Ann Yuan) wrote…

Erase filter operator cache key

Done.


@dsmilkov dsmilkov left a comment


Great unit tests! Nice work!!

Reviewed 7 of 15 files at r1, 2 of 6 files at r2, 4 of 5 files at r3, 1 of 1 files at r5.
Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @annxingyuan, @dsmilkov, @Maratyszcza, and @nsthorat)


tfjs-backend-wasm/src/setup_test.ts, line 54 at r3 (raw file):

    excludes: [
      'gradient',      // Gradients not defined yet.
      'fused conv2d',  // Fused conv2d not yet implemented.

fyi, we will probably need this one when we run the newest converter on the model


tfjs-backend-wasm/src/cc/kernels/Conv2D.h, line 25 at r3 (raw file):

            const int input_width, const int filter_id, const int filter_height,
            const int filter_width, int pad_top, int pad_right, int pad_bottom,
            int pad_left, const int is_same_pad, const int dilation_height,

Why not const bool is_same_pad?


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 36 at r5 (raw file):

// std::array instead of a vanilla array as it implements the compare operator
// needed for std::map.
typedef std::array<int, 15> operator_cache_key;

typenames must be camel-cased with starting capital letter: https://google.github.io/styleguide/cppguide.html#Type_Names


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 49 at r5 (raw file):

void delete_xnn_operators(int filter_id) {
  std::vector<operator_cache_key> operator_cache_keys =
      filter_operator_cache_key_map.at(filter_id);

s/at()/bracket notation/ here and below


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 51 at r5 (raw file):

      filter_operator_cache_key_map.at(filter_id);
  for (auto& operator_cache_key : operator_cache_keys) {
    auto conv2d_op = operator_cache.at(operator_cache_key);

auto& conv2d_op just so we don't think about it


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 79 at r5 (raw file):

  auto& out_info = backend::get_tensor_info(out_id);

  const float* x_buf = reinterpret_cast<float*>(x_info.memory_offset);

const float* here and below


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 83 at r5 (raw file):

  float* out_buf = reinterpret_cast<float*>(out_info.memory_offset);

  xnn_operator_t conv2d_op = nullptr;

unique_ptr<xnn_operator_t, xnn_delete_operator> and then you don't have to call xnn_delete_operator yourself, since map.erase() will do that for you
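
One way that could look (a sketch, not the PR's exact code; since xnn_operator_t is itself a pointer typedef, the deleter needs to be a callable type and the unique_ptr element type is the underlying struct):

    #include <memory>

    #include <xnnpack.h>

    // Deleter wrapping xnn_delete_operator so that erasing a map entry
    // releases the operator automatically.
    struct XnnOperatorDeleter {
      void operator()(xnn_operator_t op) const {
        if (op != nullptr) {
          xnn_delete_operator(op);
        }
      }
    };
    using XnnOperatorPtr = std::unique_ptr<xnn_operator, XnnOperatorDeleter>;

    // e.g. std::map<OperatorCacheKey, XnnOperatorPtr> operator_cache;
    // operator_cache.erase(cache_key) then deletes the xnn operator for you.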


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 120 at r5 (raw file):

    }

    operator_cache.insert({cache_key, conv2d_op});

when conv2d_op becomes unique_ptr, you must do std::move(conv2d_op) here


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 126 at r5 (raw file):

      std::vector<operator_cache_key> cache_keys = {std::move(cache_key)};
      // We do a move here to avoid a copy.
      filter_operator_cache_key_map.insert({filter_id, std::move(cache_keys)});

s/std::move(cache_keys)/{cache_key}/. Also s/insert/emplace/ ?

Nikhil Thorat added 2 commits November 5, 2019 15:06

@nsthorat nsthorat left a comment


Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @annxingyuan, @dsmilkov, @Maratyszcza, and @nsthorat)


tfjs-backend-wasm/src/setup_test.ts, line 54 at r3 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

fyi, we will probably need this one when we run the newest converter on the model

Yep, I'll do this in a follow-up.


tfjs-backend-wasm/src/cc/kernels/Conv2D.h, line 25 at r3 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

why not const bool is_same_pad ?

The three types allowed across this bridge are "number", "string", and "array", and I wanted to be consistent: https://emscripten.org/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 36 at r5 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

typenames must be camel-cased with starting capital letter: https://google.github.io/styleguide/cppguide.html#Type_Names

Done


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 49 at r5 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

s/at()/bracket notation/ here and below

Done


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 51 at r5 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

auto& conv2d_op just so we don't think about it

Done


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 83 at r5 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

unique_ptr<xnn_operator_t, xnn_delete_operator> and then you don't have to call xnn_delete_operator yourself, since map.erase() will do that for you

Done


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 120 at r5 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

when conv2d_op becomes unique_ptr, you must do std::move(conv2d_op) here

I'll do this in a follow-up because there are strange complexities with this type not matching the map (and I'll also do it for Prelu, merging the xnn_operator_count into a shared util).


tfjs-backend-wasm/src/cc/kernels/Conv2D.cc, line 126 at r5 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

s/std::move(cache_keys)/{cache_key}/. Also s/insert/emplace/ ?

I can't just do {cache_key}; the compiler doesn't recognize it as a std::vector. So I created a variable and removed std::move on cache_key.
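
So the resolved form is presumably along these lines (a sketch; whether insert became emplace isn't stated in the reply):

    std::vector<OperatorCacheKey> cache_keys = {cache_key};
    filter_operator_cache_key_map.insert({filter_id, std::move(cache_keys)});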

@nsthorat nsthorat merged commit ea83eee into master Nov 5, 2019
@nsthorat nsthorat deleted the wasm-dconv branch November 5, 2019 23:11