Closed
111 commits
2b69289
initial
annxingyuan Mar 23, 2020
b159a1f
webgl
annxingyuan Mar 23, 2020
86c7451
add divnonan
annxingyuan Mar 23, 2020
90cb0e6
add divnonan
annxingyuan Mar 23, 2020
9a985a7
test
annxingyuan Mar 23, 2020
422bb98
merge
annxingyuan Mar 23, 2020
ec37e57
chagne import
annxingyuan Mar 23, 2020
d2b40bb
binary inputs
annxingyuan Mar 23, 2020
d32569f
avoid public api
annxingyuan Mar 23, 2020
5a410dd
rename
annxingyuan Mar 23, 2020
d0f5edd
rename
annxingyuan Mar 23, 2020
66ac656
pr comments
annxingyuan Mar 23, 2020
12c457e
remove out tensor
annxingyuan Mar 23, 2020
0720c82
simplify
annxingyuan Mar 23, 2020
32c89e0
pr comments
annxingyuan Mar 23, 2020
83d9c7f
clean
annxingyuan Mar 23, 2020
4960a94
merge
annxingyuan Mar 23, 2020
15e6c72
lint
annxingyuan Mar 23, 2020
6385e9e
save
annxingyuan Mar 24, 2020
f1997b4
merge
annxingyuan Mar 24, 2020
4af9bbc
separate out
annxingyuan Mar 25, 2020
3076d8a
modify
annxingyuan Mar 25, 2020
3cb6611
unchain
annxingyuan Mar 25, 2020
78ed40f
remove der
annxingyuan Mar 25, 2020
cd37732
divnonan
annxingyuan Mar 25, 2020
f18f996
Merge branch 'master' into modularize_div
annxingyuan Mar 25, 2020
24e6ecf
save
annxingyuan Mar 25, 2020
6fcecd3
update workspace
annxingyuan Mar 25, 2020
40ad5b1
Merge branch 'master' into update_xnn
annxingyuan Mar 25, 2020
3cc70fd
update
annxingyuan Mar 26, 2020
55a08e4
merge
annxingyuan Apr 1, 2020
fb76ed3
add build
annxingyuan Apr 1, 2020
1ee6e91
update
annxingyuan Apr 2, 2020
59a1041
del
annxingyuan Apr 2, 2020
d045a69
Merge branch 'master' into update_xnn
annxingyuan Apr 2, 2020
fb4f6cb
del
annxingyuan Apr 2, 2020
b5d9d96
update
annxingyuan Apr 2, 2020
fba46b9
Merge branch 'master' into update_xnn
annxingyuan Apr 3, 2020
f4f9a76
use latest
annxingyuan Apr 3, 2020
beeb5e1
Merge branch 'master' into update_xnn
annxingyuan Apr 6, 2020
c64fb35
add build options
annxingyuan Apr 6, 2020
d4d35c9
update
annxingyuan Apr 6, 2020
00e1079
add opts
annxingyuan Apr 6, 2020
35025ab
update
annxingyuan Apr 7, 2020
d7bcefb
inc max mem
annxingyuan Apr 7, 2020
0aa9e66
Merge branch 'master' into update_xnn
annxingyuan Apr 8, 2020
6b6c385
update
annxingyuan Apr 8, 2020
f3d1bb5
config
annxingyuan Apr 8, 2020
b2200f1
Merge branch 'master' into update_xnn
annxingyuan Apr 14, 2020
38de385
it compiles
annxingyuan Apr 14, 2020
8dab94d
Merge branch 'master' into update_xnn
annxingyuan Apr 14, 2020
a3ed77f
Merge branch 'master' into update_xnn
annxingyuan Apr 20, 2020
296930a
clean
annxingyuan Apr 20, 2020
0c5b182
add
annxingyuan Apr 20, 2020
211d333
fix
annxingyuan Apr 20, 2020
8d5b167
bugfix
annxingyuan Apr 20, 2020
65b438e
Merge branch 'upgrade_emscripten' into update_xnn
annxingyuan Apr 20, 2020
51fb318
Merge branch 'master' into update_xnn
annxingyuan Apr 21, 2020
a3a0788
add cpy
annxingyuan Apr 21, 2020
922c6fa
add opts
annxingyuan Apr 21, 2020
5c0368e
add
annxingyuan Apr 21, 2020
5260b23
add cpy
annxingyuan Apr 21, 2020
d99cc65
add
annxingyuan Apr 21, 2020
22953b0
Merge branch 'master' into update_xnn
annxingyuan Apr 21, 2020
3a937f4
Merge branch 'master' into update_xnn
annxingyuan Apr 23, 2020
163dbcb
fix
annxingyuan Apr 24, 2020
d006737
add to bench
annxingyuan Apr 24, 2020
132f204
update build
annxingyuan Apr 24, 2020
9b6ab35
beautify
annxingyuan Apr 24, 2020
80d6e5a
update
annxingyuan Apr 25, 2020
9797eac
Merge branch 'master' into update_xnn
annxingyuan Apr 29, 2020
74948f5
works
annxingyuan Apr 29, 2020
17f2467
add beautified worker
annxingyuan Apr 29, 2020
93c93ec
add standalone script
annxingyuan Apr 29, 2020
3bcf7d3
register kernel
annxingyuan Apr 30, 2020
164a1aa
add
annxingyuan Apr 30, 2020
e174848
add sqrt
annxingyuan Apr 30, 2020
a28eb08
Merge branch 'master' into wasm_split
annxingyuan May 1, 2020
ac5c447
fix
annxingyuan May 1, 2020
c05db95
Merge branch 'master' into wasm_split
annxingyuan May 4, 2020
641a2fa
revive
annxingyuan May 4, 2020
6ca0544
fix
annxingyuan May 4, 2020
a160974
add sqrt
annxingyuan May 4, 2020
87254ad
fix
annxingyuan May 4, 2020
e7acbaf
Merge branch 'master' into wasm_split
annxingyuan May 4, 2020
6940be1
Merge branch 'master' into update_xnn
annxingyuan May 4, 2020
cbffd80
initialization works?
annxingyuan May 5, 2020
d55fe0c
create 2 threads
annxingyuan May 5, 2020
e85cf91
example
annxingyuan May 5, 2020
43fa15e
Merge branch 'wasm_split' into update_xnn
annxingyuan May 5, 2020
b62bb4f
add ref
annxingyuan May 5, 2020
81b0243
update build
annxingyuan May 5, 2020
45ea48e
more builds
annxingyuan May 6, 2020
a2cecb1
merge
annxingyuan May 11, 2020
471dfe8
fix
annxingyuan May 11, 2020
1174ca1
rm
annxingyuan May 12, 2020
68499dd
bench
annxingyuan May 12, 2020
073969b
add
annxingyuan May 12, 2020
091e5cb
update build
annxingyuan May 12, 2020
b42f281
build
annxingyuan May 12, 2020
b5f8528
beautify
annxingyuan May 12, 2020
c310312
Merge branch 'master' into update_xnn
annxingyuan May 13, 2020
ee28254
everything back to normal
annxingyuan May 13, 2020
c0e406b
inline worker
annxingyuan May 15, 2020
5783fd0
clean
annxingyuan May 18, 2020
7343992
delete worker
annxingyuan May 19, 2020
c620223
sigh
annxingyuan May 19, 2020
3f18a31
Merge branch 'master' into update_xnn
annxingyuan May 26, 2020
b0470a4
prepping for multiple builds
annxingyuan May 27, 2020
1a51c5c
setwasmpath
annxingyuan Jun 3, 2020
03e6caf
add
annxingyuan Jun 3, 2020
27 changes: 22 additions & 5 deletions tfjs-backend-wasm/WORKSPACE
@@ -13,6 +13,13 @@ git_repository(
shallow_since = "1582560423 -0800",
)

git_repository(
name = "xnnpack-threaded",
commit = "1841b1a240544214c279b9d3b8f91bacc69a206c",
remote = "https://github.com/google/XNNPACK.git",
shallow_since = "1586291019 -0700",
)

# The libraries below are transitive dependencies of XNNPACK that we need to
# explicitly enumerate here. See https://docs.bazel.build/versions/master/external.html#transitive-dependencies

@@ -40,7 +47,7 @@ http_archive(

# pthreadpool library, used for parallelization
http_archive(
name = "pthreadpool",
name = "pthreadpool-unthreaded",
build_file = "@xnnpack//third_party:pthreadpool.BUILD",
sha256 = "c2328fdf9e48ac9b928953bcbc442eb14402d393e4cfae0541581a3d39efca9d",
strip_prefix = "pthreadpool-0e275fe56094626349c55a524ea8b71a85daa64b",
@@ -49,6 +56,17 @@ http_archive(
],
)

# pthreadpool library, used for parallelization
http_archive(
name = "pthreadpool",
build_file = "@xnnpack-threaded//third_party:pthreadpool.BUILD",
sha256 = "91c7b00c16c60c96f23d1966d524879c0f6044caf4bc5e9fc06518dda643e07e",
strip_prefix = "pthreadpool-76042155a8b1e189c8f141429fd72219472c32e1",
urls = [
"https://github.com/Maratyszcza/pthreadpool/archive/76042155a8b1e189c8f141429fd72219472c32e1.tar.gz",
],
)

# clog library, used for logging
http_archive(
name = "clog",
@@ -64,11 +82,10 @@ http_archive(
http_archive(
name = "cpuinfo",
build_file = "@xnnpack//third_party:cpuinfo.BUILD",
patches = ["@xnnpack//third_party:cpuinfo.patch"],
sha256 = "3f2dc1970f397a0e59db72f9fca6ff144b216895c1d606f6c94a507c1e53a025",
strip_prefix = "cpuinfo-d5e37adf1406cf899d7d9ec1d317c47506ccb970",
sha256 = "80625d0b69a3d69b70c2236f30db2c542d0922ccf9bb51a61bc39c49fac91a35",
strip_prefix = "cpuinfo-0cc563acb9baac39f2c1349bc42098c4a1da59e3",
urls = [
"https://github.com/pytorch/cpuinfo/archive/d5e37adf1406cf899d7d9ec1d317c47506ccb970.tar.gz",
"https://github.com/pytorch/cpuinfo/archive/0cc563acb9baac39f2c1349bc42098c4a1da59e3.tar.gz",
],
)

6 changes: 3 additions & 3 deletions tfjs-backend-wasm/karma.conf.js
@@ -22,7 +22,7 @@ const karmaTypescriptConfig = {
sourceMap: true,
// Ignore the import of the `worker_threads` package used in a core test
// meant to run in node.
exclude: ['worker_threads'],
exclude: ['worker_threads', 'perf_hooks'],
// worker_node_test in tfjs-core contains a conditional require statement
// that confuses the bundler of karma-typescript.
ignore: ['./worker_node_test'],
@@ -39,7 +39,7 @@ const karmaTypescriptConfig = {
// Disable coverage reports and instrumentation by default for tests
coverageOptions: {instrumentation: false},
reports: {},
include: ['src/', 'wasm-out/']
include: ['src/']
};

module.exports = function(config) {
@@ -66,7 +66,7 @@ module.exports = function(config) {
],
exclude: ['src/test_node.ts'],
preprocessors: {
'wasm-out/**/*.js': ['karma-typescript'],
// 'wasm-out/tfjs-backend-wasm.js': ['karma-typescript'],
'**/*.ts': ['karma-typescript']
},
karmaTypescriptConfig,
3 changes: 2 additions & 1 deletion tfjs-backend-wasm/package.json
@@ -26,7 +26,8 @@
"test-node": "ts-node --skip-ignore -P tsconfig.test.json src/test_node.ts",
"test-bundle-size": "./scripts/test-bundle-size.js",
"test-cc": "bazel test //src/cc:cc_tests --test_output=all",
"test-browser-ci": "karma start --singleRun --browsers=bs_chrome_mac"
"test-browser-ci": "karma start --singleRun --browsers=bs_chrome_mac",
"test-standalone": "./scripts/test-standalone.sh"
},
"browser": {
"fs": false,
2 changes: 1 addition & 1 deletion tfjs-backend-wasm/rollup.config.js
@@ -58,7 +58,7 @@ function config({plugins = [], output = {}}) {
globals: {'@tensorflow/tfjs-core': 'tf', 'fs': 'fs', 'path': 'path'},
...output,
},
external: ['crypto', '@tensorflow/tfjs-core', 'fs', 'path'],
external: ['crypto', '@tensorflow/tfjs-core', 'fs', 'path', '../wasm-out/tfjs-backend-wasm.js'],
onwarn: warning => {
let {code} = warning;
if (code === 'CIRCULAR_DEPENDENCY' || code === 'CIRCULAR' ||
9 changes: 6 additions & 3 deletions tfjs-backend-wasm/scripts/build-wasm.sh
@@ -16,12 +16,15 @@

set -e

yarn bazel build -c opt //src/cc:tfjs-backend-wasm.js --config=wasm
# yarn bazel build -c opt //src/cc:tfjs-backend-wasm.js --config=wasm
yarn bazel build -c opt //src/cc:tfjs-backend-wasm-threaded.js --config=wasm --copt="-pthread"
# The typescript code and karma config expect the output of emscripten to be in
# wasm-out/ so we copy the bazel output there.
cp -f bazel-bin/src/cc/tfjs-backend-wasm.js \
bazel-bin/src/cc/tfjs-backend-wasm.wasm \
cp -f bazel-bin/src/cc/tfjs-backend-wasm-threaded.js \
bazel-bin/src/cc/tfjs-backend-wasm-threaded.worker.js \
bazel-bin/src/cc/tfjs-backend-wasm-threaded.wasm \
wasm-out/

mkdir -p dist
cp wasm-out/*.wasm dist/
cp wasm-out/*.worker.js dist/
9 changes: 9 additions & 0 deletions tfjs-backend-wasm/scripts/inline-worker.js
@@ -0,0 +1,9 @@
const fs = require('fs');

const workerContents = fs.readFileSync('./wasm-out/tfjs-backend-wasm-threaded.worker.js', "utf8");

const fileContents = `export const wasmWorkerContents = '${workerContents.trim()}';`;

fs.writeFile('./wasm-out/tfjs-backend-wasm-threaded.worker.ts', fileContents, function(err) {
console.log("dobne");
});
11 changes: 11 additions & 0 deletions tfjs-backend-wasm/scripts/test-standalone.sh
@@ -0,0 +1,11 @@
# build, then copy to benchmarks

yarn build # this creates wasm-out directory

node ./scripts/inline-worker.js

rollup -c # this creates tf-backend-wasm.js

cp dist/tf-backend-wasm.js ../tfjs-core/benchmarks/
cp wasm-out/tfjs-backend-wasm-threaded.js ../tfjs-core/benchmarks/tfjs-backend-wasm.js
cp wasm-out/tfjs-backend-wasm-threaded.wasm ../tfjs-core/benchmarks/
34 changes: 29 additions & 5 deletions tfjs-backend-wasm/src/backend_wasm.ts
@@ -15,10 +15,14 @@
* =============================================================================
*/

import {backend_util, BackendTimingInfo, DataStorage, DataType, engine, KernelBackend, registerBackend, TensorInfo, util} from '@tensorflow/tfjs-core';
import {backend_util, BackendTimingInfo, DataStorage, DataType, engine, env, KernelBackend, registerBackend, TensorInfo, util} from '@tensorflow/tfjs-core';

import {BackendWasmModule, WasmFactoryConfig} from '../wasm-out/tfjs-backend-wasm';
import wasmFactory from '../wasm-out/tfjs-backend-wasm.js';
import {BackendWasmModule, WasmFactoryConfig} from '../wasm-out/tfjs-backend-wasm.js';

declare const WasmBackendModule: Function;

// @ts-ignore
import {wasmWorkerContents} from '../wasm-out/tfjs-backend-wasm.worker.js';

const WASM_PRIORITY = 2;

@@ -192,6 +196,10 @@ function createInstantiateWasmFunc(path: string) {
};
}

function fetchText(path: string) {
return fetch(path).then(response => response.text());
}

/**
* Initializes the wasm module and creates the js <--> wasm bridge.
*
@@ -200,14 +208,28 @@ function createInstantiateWasmFunc(path: string) {
* in Chrome 76).
*/
export async function init(): Promise<{wasm: BackendWasmModule}> {
const emscriptenContents = await fetchText('./tfjs-backend-wasm.js');
env().global.eval(emscriptenContents);
return new Promise((resolve, reject) => {
const factoryConfig: WasmFactoryConfig = {};

const locateFile = (path: string, prefix: string) => {
if (path.endsWith('.worker.js')) {
const response = wasmWorkerContents;
const blob = new Blob([response], {type: 'application/javascript'});
return URL.createObjectURL(blob);
}
return prefix + path;
};

factoryConfig.locateFile = locateFile;

if (wasmPath != null) {
factoryConfig.locateFile = (path, prefix) => {
if (path.endsWith('.wasm')) {
return wasmPath;
}
return prefix + path;
return locateFile(path, prefix);
};
// use wasm instantiateWasm override when system fetch is not available.
// For detail references
@@ -216,9 +238,11 @@ export async function init(): Promise<{wasm: BackendWasmModule}> {
factoryConfig.instantiateWasm = createInstantiateWasmFunc(wasmPath);
}
}
const wasm = wasmFactory(factoryConfig);
const wasm = WasmBackendModule(factoryConfig);
const voidReturnType: string = null;
// Using the tfjs namespace to avoid conflict with emscripten's API.
wasm.mainScriptUrlOrBlob =
new Blob([emscriptenContents], {type: 'text/javascript'});
wasm.tfjs = {
init: wasm.cwrap('init', null, []),
registerTensor: wasm.cwrap(
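The `locateFile` override in this diff serves the pthread worker from the inlined string via a Blob URL rather than fetching a separate `.worker.js` file, while still honoring a user-supplied `setWasmPath()` for the binary. The pattern can be sketched standalone — `makeLocateFile` is an illustrative name, not from the PR:

```javascript
// Illustrative sketch of Emscripten's locateFile hook resolving the pthread
// worker from an in-memory string; makeLocateFile is a hypothetical name.
function makeLocateFile(workerContents, wasmPath) {
  return (path, prefix) => {
    if (path.endsWith('.worker.js')) {
      // Serve the inlined worker source through a same-origin Blob URL,
      // so no separate .worker.js file has to be deployed alongside the bundle.
      const blob = new Blob([workerContents], {type: 'application/javascript'});
      return URL.createObjectURL(blob);
    }
    if (wasmPath != null && path.endsWith('.wasm')) {
      // Honor a user-provided wasm path override for the binary.
      return wasmPath;
    }
    // Default Emscripten behavior: resolve relative to the main script.
    return prefix + path;
  };
}
```

The same-origin Blob URL matters because spawning a `Worker` from a cross-origin URL is disallowed; inlining sidesteps that for CDN-hosted bundles.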
38 changes: 38 additions & 0 deletions tfjs-backend-wasm/src/cc/BUILD
@@ -28,6 +28,33 @@ cc_binary(
],
)

cc_binary(
name = "tfjs-backend-wasm-threaded.js",
srcs = ["backend.cc"] + KERNELS_WITH_KEEPALIVE,
linkopts = [
"-s ALLOW_MEMORY_GROWTH=1",
"-s DEFAULT_LIBRARY_FUNCS_TO_INCLUDE=[]",
"-s DISABLE_EXCEPTION_CATCHING=1",
"-s FILESYSTEM=0",
"-s EXIT_RUNTIME=0",
"-s EXPORTED_FUNCTIONS='[\"_malloc\", \"_free\"]'",
"-s EXTRA_EXPORTED_RUNTIME_METHODS='[\"cwrap\"]'",
"-s MODULARIZE=1",
"-s EXPORT_NAME=WasmBackendModule",
"-s MALLOC=emmalloc",
"-s USE_PTHREADS=1",
"-s PTHREAD_POOL_SIZE=4",
"-s ASSERTIONS=1",
"-s INITIAL_MEMORY=1Gb",
"-s MAXIMUM_MEMORY=1Gb",
"-s PROXY_TO_PTHREAD=1",
],
deps = [
":all_kernels",
":backend_threaded",
],
)

test_suite(
name = "cc_tests",
)
@@ -45,6 +72,17 @@ tfjs_cc_library(
],
)

tfjs_cc_library(
name = "backend_threaded",
srcs = ["backend.cc"],
hdrs = ["backend.h"],
deps = [
":check_macros",
":util",
"@xnnpack-threaded//:xnnpack_operators_nhwc_f32",
],
)

tfjs_unit_test(
name = "backend_tests",
srcs = glob(["*_test.cc"]),
2 changes: 2 additions & 0 deletions tfjs-backend-wasm/src/cc/backend.cc
@@ -49,6 +49,8 @@ TensorInfo &get_tensor_info_out(const size_t tensor_id) {

size_t xnn_operator_count = 0;

pthreadpool *threadpool = pthreadpool_create(4);

// Registers a disposal callback for a tensor id with a given callback function.
void register_disposal_callback(const size_t tensor_id,
const DisposeFunction dispose_fn) {
3 changes: 3 additions & 0 deletions tfjs-backend-wasm/src/cc/backend.h
@@ -15,6 +15,7 @@
#ifndef BACKEND_H_
#define BACKEND_H_

#include <xnnpack.h>
#include <cstddef>
#include <cstdint>

@@ -81,6 +82,8 @@ const size_t num_tensors();

// The number of instantiated XNN operators.
extern size_t xnn_operator_count;

extern pthreadpool *threadpool;
} // namespace backend

namespace wasm {
4 changes: 2 additions & 2 deletions tfjs-backend-wasm/src/cc/batch_mat_mul_impl.cc
@@ -150,7 +150,7 @@ void xnn_matmul(const size_t a_id, const size_t* a_shape_ptr,
const size_t batch_size = a_shape_ptr[1];
xnn_status status =
xnn_setup_fully_connected_nc_f32(fully_connected_op, batch_size, a_buf,
out_buf, nullptr /* thread pool */);
out_buf, tfjs::backend::threadpool);
if (status != xnn_status_success) {
tfjs::util::warn(
"XNN status for xnn_setup_fully_connected_nc_f32 is not successful. "
Expand All @@ -159,7 +159,7 @@ void xnn_matmul(const size_t a_id, const size_t* a_shape_ptr,
return;
}

xnn_run_operator(fully_connected_op, nullptr /* thread pool */);
xnn_run_operator(fully_connected_op, tfjs::backend::threadpool);
}

void slow_batch_matmul(const size_t a_id, const size_t* a_shape_ptr,
4 changes: 2 additions & 2 deletions tfjs-backend-wasm/src/cc/binary.cc
@@ -64,7 +64,7 @@ void binary_xnn_f32(const size_t a_id, const size_t* a_shape_ptr,
const size_t batch_size = out_info.size;
xnn_status status =
setup_op(binary_op, a_shape_len, a_shape_ptr, b_shape_len, b_shape_ptr,
a_buf, b_buf, out_buf, nullptr /* thread pool */);
a_buf, b_buf, out_buf, tfjs::backend::threadpool);
if (status != xnn_status_success) {
util::warn(
"XNN status for xnn_setup_*_nd_f32 is not successful. Got "
Expand All @@ -73,7 +73,7 @@ void binary_xnn_f32(const size_t a_id, const size_t* a_shape_ptr,
return;
}

xnn_run_operator(binary_op, nullptr /* thread pool */);
xnn_run_operator(binary_op, tfjs::backend::threadpool);
}

} // namespace wasm
4 changes: 2 additions & 2 deletions tfjs-backend-wasm/src/cc/conv2d_impl.cc
@@ -259,15 +259,15 @@ void conv2d(const size_t x_id, const size_t batch_size,

xnn_status status = xnn_setup_convolution2d_nhwc_f32(
conv2d_op, batch_size, input_height, input_width, x_buf, out_buf,
nullptr /* thread pool */);
tfjs::backend::threadpool);
if (status != xnn_status_success) {
util::warn(
"XNN status for xnn_setup_convolution2d_nhwc_f32 is not successful. "
"Got status %d. Use -c dbg to see XNN logs.",
status);
}

xnn_run_operator(conv2d_op, nullptr /* thread pool */);
xnn_run_operator(conv2d_op, tfjs::backend::threadpool);

if (activation == FusableActivation::PRELU) {
prelu(out_buf, out_info.size, prelu_weights_id, out_id);
4 changes: 2 additions & 2 deletions tfjs-backend-wasm/src/cc/kernels/ClipByValue.cc
@@ -79,15 +79,15 @@ void ClipByValue(const size_t x_id, const float min, const float max,

const size_t batch_size = x_info.size;
xnn_status status = xnn_setup_clamp_nc_f32(
clamp_op, batch_size, x_buf, out_buf, nullptr /* thread pool */);
clamp_op, batch_size, x_buf, out_buf, tfjs::backend::threadpool);
if (status != xnn_status_success) {
util::warn(
"XNN status for xnn_setup_clamp_nc_f32 is not successful. Got "
"status %d. Use -c dbg to see XNN logs.",
status);
}

xnn_run_operator(clamp_op, nullptr /* thread pool */);
xnn_run_operator(clamp_op, tfjs::backend::threadpool);
}

} // extern "C"
16 changes: 14 additions & 2 deletions tfjs-backend-wasm/src/index_test.ts
@@ -58,8 +58,8 @@ describeWithFlags('wasm init', BROWSER_ENVS, () => {
}, 100);

// Silences backend registration warnings.
spyOn(console, 'warn');
spyOn(console, 'log');
// spyOn(console, 'warn');
// spyOn(console, 'log');
});

afterEach(() => {
@@ -121,4 +121,16 @@
expect(() => setWasmPath('too/late'))
.toThrowError(/The WASM backend was already initialized. Make sure/);
});

fit('A x B', async () => {
const a = tf.tensor2d([1, 2, 3, 4, 5, 6], [2, 3]);
const b = tf.tensor2d([0, 1, -3, 2, 2, 1], [3, 2]);

const c = tf.matMul(a, b);

expect(c.shape).toEqual([2, 2]);
const d = await c.data();
console.log(Array.from(d));
// expectArraysClose(await c.data(), [0, 8, -3, 20]);
});
});
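The expected values in the commented-out `expectArraysClose` of the focused test above can be verified by hand with a plain row-major matrix product, independent of tfjs:

```javascript
// Plain-JS check of the product used in the focused test: A (2x3) times
// B (3x2), both row-major, matching tf.tensor2d's data layout.
function matmul(a, b, n, k, m) {
  const out = new Array(n * m).fill(0);
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < m; j++) {
      for (let p = 0; p < k; p++) {
        out[i * m + j] += a[i * k + p] * b[p * m + j];
      }
    }
  }
  return out;
}

// Same inputs as the test; yields the commented expectation [0, 8, -3, 20].
const c = matmul([1, 2, 3, 4, 5, 6], [0, 1, -3, 2, 2, 1], 2, 3, 2);
```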